# #CC Job Script Directives

CloudyCluster job scripts support script directives that begin with `#CC`. The options below serve as both command-line options to `ccqsub` and `#CC` directives in the job script.
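For example, the same request can be made either way. This is a sketch: the job script path, option values, and the `grep`/`sed` extraction are illustrative and not part of `ccqsub` itself.

```shell
# On the command line (requires a CloudyCluster environment; shown for reference):
#   ccqsub -ni 2 -cpu 4 myjob.sh

# Or embedded in the job script itself as #CC directives:
cat > /tmp/myjob.sh <<'EOF'
#!/bin/sh
#CC -ni 2
#CC -cpu 4
echo "job body"
EOF

# ccqsub reads such lines back out of the script; roughly:
grep '^#CC ' /tmp/myjob.sh | sed 's/^#CC //'
# prints:
#   -ni 2
#   -cpu 4
```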
- `-nt low | moderate | high | 10GB`: Specifies the amount of network capacity needed for the job. If not specified, defaults to `default`, which means network capacity is not factored into the calculation of the instance type needed for the job.
- `-ni {number_instances}`: The number of instances that you want the job to run on. The default is one instance.*
- `-cpu {cpu_count}`: The number of CPUs you want per instance that your job is running on. The default is one CPU per instance.
- `-mem {mem_size_in_MB}`: The amount of memory in MB per instance. The default is 1000 MB (1 GB) per instance.*
- `-s {name_of_scheduler_to_use}`: Specifies the name of the Scheduler/Target that you want to use. By default, the default Scheduler/Target for the Scheduler/Target type you have requested is used. This default can be set in the ccq.config file with the variable `defaultScheduler={schedulerName}`.
- `-st Torque | Slurm | default`: Specifies the type of Scheduler/Target that you want to use. The accepted values are Torque, Condor, SGE, and Slurm. If the Scheduler/Target type is not specified with a job script, `ccqsub` will attempt to determine from the job script what type of Scheduler/Target the job should run on. If no job script is submitted, the value defaults to the Cluster's default Scheduler/Target.
- `-up yes | no` (Use Preemptible): Use preemptible instances instead of on-demand instances. This parameter is optional; if a preemptible price is set, it will automatically be set to yes for you. The default is `no`.
- `-gcpit {instance_type}`: Specifies the GCP instance type that the job is to be run on. If no instance type is specified, the requested amount of RAM and CPUs is used to determine an appropriate GCP instance. A default instance type can be set using the `defaultInstanceType` directive in the CCQ config file.
- `-gcput1`: Uses Tier 1 networking for the job; requires an instance type, specified with the `-gcpit` directive, that supports Tier 1 networking.
- `-op cost | performance`: Specifies whether to use the instance type that is most cost-effective or one that gives better performance regardless of cost. The default is `cost`.
- `-p mcn | mnc | cmn | cnm | ncm | nmc`: Specifies the priority order considered when calculating the appropriate instance type for the job, where m = memory, n = network, and c = CPU. For example, `-p ncm` means that network requirements are considered first, then the number of CPUs, then the amount of memory. The default is `mcn`: memory, then CPUs, then network.
- `-gcpvt ssd`: Specifies the type of volume to launch with the Compute Engine instances for the job. The default is `ssd`. This value can also be set using the `volumeType={volumeType}` variable in the ccq.config file.
- `-vt ssd` (Deprecated): Specifies the type of volume to launch with the Compute Engine instances for the job. The default is `ssd`. This value can also be set using the `volumeType={volumeType}` variable in the ccq.config file.
- `-cl {days_for_login_cert_to_be_valid_for}`: Specifies the number of days that the generated CCQ login certificate is valid. This certificate is used so that you do not have to enter your username/password combination each time you submit a job. The default is 1 day, and the value must be an integer greater than or equal to 0; setting the certificate valid length to 0 disables the generation of login certificates. If the `certLength` variable is set in the ccq.config file, the value in the ccq.config file overrides the value entered via the command line.
- `-pr`: Specifies that CCQ should print the estimated price for a specific job script but not run the job. No resources are launched; the estimate includes only the per-hour instance costs.
- `-o {stdout_file_location}`: The path to the file where you want the standard output from your job to be written. The default location is the directory where `ccqsub` was invoked; the file name is the job name combined with the job ID on the machine the job was submitted from.*
- `-e {stderr_file_location}`: The path to the file where you want the standard error from your job to be written. The default location is the directory where `ccqsub` was invoked; the file name is the job name combined with the job ID on the machine the job was submitted from.*
- `-ti`: Specifies that CCQ should terminate the instances created by the CCQ job as soon as the job has completed, rather than waiting to see if they can be used for other jobs. This argument applies only if the job creates a new compute group; if the job re-uses existing instances, they are not terminated upon job completion.
- `-ps`: Specifies that CCQ should skip the provisioning stage, where it checks that the job's user is on the Compute Nodes before continuing. This may be desired if the users are already baked into the image. If this option is given and the users are not on the image, the job could fail.
- `-si true | false`: Specifies whether the Compute Instances should enter the Ready state without waiting for the other instances in their group to enter the Ready state. This is used for HTC (High Throughput Computing) mode, where many smaller jobs are submitted by the CCQ job and utilize the other compute instances as they come up. The default is false.
- `-tl {days}:{hours}:{minutes}`: Specifies the amount of time that the job is allowed to run before CCQ automatically terminates all of its instances. If the job completes successfully within the time limit, the instances are deleted via the CCQ auto-delete process. By default there is no time limit and the job runs for as long as it needs. The time limit is measured from the initial processing of the job. You may also specify `unlimited` if you do not want the instances to terminate until you delete them.
- `-cp`: Specifies that this CCQ job should only create placeholder/parent instances and not actually submit a job to the HPC scheduler. This allows the compute instances to be created dynamically and remain running for as long as the specified time limit. This argument requires the `-tl` argument as well. The default is false.
- `-gcpgi {image_id}`: Specifies the image ID of the Google image that CCQ should use to launch the Compute Instances for the job. This MUST be an image that contains the CloudyCluster software or IT WILL NOT WORK. If no image is specified, the CloudyCluster image the Scheduler instance is using will be used.
- `-mi {maximum_idle_time}`: Specifies the maximum amount of time, in minutes, that the instances created by the job should remain running if no jobs are running on them. The default is 5.
- `-pj {project_billing_label}`: Specifies that launched instances are given the requested label under the key `ccbilling`. The label cannot contain the dash character. This is intended for tracking the amount spent on a per-job or per-project basis. Instances cannot be reused across billing labels.
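Putting several of the directives above together, a job script might look like the following sketch. The resource values, scheduler type, and job body are illustrative only.

```shell
#!/bin/sh
# Example CCQ job script. The #CC lines are ordinary comments to the
# shell; only ccqsub interprets them.

#CC -ni 2            # run on two instances
#CC -cpu 4           # four CPUs per instance
#CC -mem 8000        # 8000 MB of memory per instance
#CC -st Slurm        # target a Slurm Scheduler/Target
#CC -op cost         # pick the most cost-effective instance type
#CC -tl 0:2:0        # terminate the instances after 2 hours
#CC -ti              # terminate instances as soon as the job completes

JOB_MESSAGE="job body runs on the launched instances"
echo "$JOB_MESSAGE"
```

The script would be submitted with `ccqsub` (for example, `ccqsub myjob.sh`); because the directives are comments, the same script also runs unchanged under a plain shell.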
- Note: we recommend using only a single version of the `-e` directive: either `-e` in Torque or `-e` in `#CC`.
- Note: we recommend using only a single version of the `-o` directive: either `-o` in Torque or `-o` in `#CC`.
- Note: we recommend using only a single version of the instance-count directive: either `-N` in Slurm or `-ni` in `#CC`.
- Note: we recommend using only a single version of the `-e` directive: either `-e` in Slurm or `-e` in `#CC`.
- Note: we recommend using only a single version of the `-o` directive: either `-o` in Slurm or `-o` in `#CC`.
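For instance, in a Slurm job script the stdout path would be set exactly once, via either `#SBATCH -o` or `#CC -o`, never both. This is a sketch; the paths and program name are illustrative.

```shell
# Recommended layout: each option appears in only one directive style.
cat > /tmp/slurm_job.sh <<'EOF'
#!/bin/sh
#SBATCH -N 2
#CC -o /home/user/jobs/myjob.out
#CC -e /home/user/jobs/myjob.err
srun ./my_program
EOF

# Sanity check: -o is set once across both directive styles, not twice.
grep -Ec '^#(CC|SBATCH) -o ' /tmp/slurm_job.sh   # prints 1
```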