RealityServer Features

Submitting Jobs

There are two main ways to submit jobs for use with the RealityServer Queue Manager.

  • Using the queue_manager_submit_job command
  • Manually inserting a job into the queue

Submit Using RealityServer

If you always have at least one RealityServer instance running, then the queue_manager_submit_job command is by far the easiest way to submit jobs and to ensure they are valid and accepted by the queue. Here is an example of what a call to queue_manager_submit_job might look like:

{"jsonrpc": "2.0", "method": "queue_manager_submit_job", "params": {
    "queue_name": "example_queue_1"
    "command": "example_batch_command",
    "parameters": {
        "example_param1": true,
        "example_param2": 5.0,
        "example_param3": {
            "x": 1.0, "y": 3.0, "z": 2.0
        }
    },
    "tasks": [
        {
            "name": "s3_upload",
            "config": {
                "bucket": "example_bucket",
                "key": "folder/render-${message_id}.${mime_ext}"
            }
        },
        {
            "name": "http_post",
            "config": {
                "postback_uri": "https://example.com/store/render-${message_id}.${mime_ext}"
            }
        }
    ]
}, "id": 1}

Refer to the queue_manager_submit_job command documentation for further details on the parameters shown here; they are summarised briefly below.

  • queue_name - The name of your configured queue. Your configuration can define multiple queues potentially using different services. Each will have a name which you use here to determine which queue you want to submit to.
  • command - The RealityServer command that will be executed by the job. This can be any valid command that has been configured with the allow_command directive in the queue manager configuration.
  • parameters - The parameters to pass to the command, exactly as though the command was called in the normal way.
  • tasks - A list of tasks to run following successful completion of the job. This is a list of Queue_task_data objects which each contain a name string and a config map. See below for details.

The queue_manager_submit_job command will return the message id of the queued job as its result. The format of this identifier depends on the specific queuing system being used. Since tasks (see below) can include the message id in their configuration, it can be used to match submitted jobs with the results of their tasks.
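
If you are calling RealityServer over HTTP, the submission above can be sketched in Python roughly as follows. The server URL is an assumption to adjust for your deployment, and build_submit_request and submit_job are illustrative helper names, not part of the RealityServer API:

```python
import json
import urllib.request

def build_submit_request(queue_name, command, parameters, tasks, request_id=1):
    """Build the JSON-RPC 2.0 body for a queue_manager_submit_job call."""
    return {
        "jsonrpc": "2.0",
        "method": "queue_manager_submit_job",
        "params": {
            "queue_name": queue_name,
            "command": command,
            "parameters": parameters,
            "tasks": tasks,
        },
        "id": request_id,
    }

def submit_job(server_url, body):
    """POST the JSON-RPC body to RealityServer and return the decoded response."""
    req = urllib.request.Request(
        server_url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

body = build_submit_request(
    "example_queue_1",
    "example_batch_command",
    {"example_param1": True, "example_param2": 5.0},
    [{"name": "http_post",
      "config": {"postback_uri": "https://example.com/store/render-${message_id}.${mime_ext}"}}],
)
# submit_job("http://localhost:8080/", body)  # uncomment against a live server
```

The returned result would then carry the message id discussed above.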

Tasks

Tasks run when a job has successfully completed and are responsible for actually doing something with the results of the job. RealityServer currently ships with two task types:

  • http_post
  • s3_upload

Additional task types may be added in the future. The following two sections will cover the use of these tasks in more detail.

HTTP Postback Task

The HTTP postback task takes the result of the job and sends it to the given postback_uri as an HTTP POST request where the body of the request is the result data. The MIME type of the request will be set to that of the result returned by RealityServer. Here is an example Queue_task_data object which configures an http_post task:

{
    "name": "http_post",
    "config": {
        "postback_uri": "https://example.com/store/render-${message_id}.${mime_ext}",
        "timeout": 10,
        "connect_timeout": 2
    }
}

The name of http_post tells RealityServer to use the HTTP postback task type. The config object must contain at least one key, postback_uri, which specifies the URL to which the POST request will be made.

The postback_uri string supports variable substitution as shown above. The following variables are available:

  • message_id - The message identifier that was returned by the queuing system when the job was submitted. This allows results to be matched with jobs if required and helps ensure unique naming (as message identifiers are typically unique).
  • mime_ext - The file extension implied by the MIME type of the result (the part after the forward slash). Useful for setting the extension when generating filenames for stored files.
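
RealityServer performs this substitution itself when the task runs; purely as an illustration of how the variables expand, Python's string.Template happens to share the ${name} syntax:

```python
from string import Template

# Illustrative only: the example message id and extension are made up.
uri_template = Template("https://example.com/store/render-${message_id}.${mime_ext}")
uri = uri_template.substitute(message_id="abc-123", mime_ext="png")
print(uri)  # https://example.com/store/render-abc-123.png
```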

There are two additional optional keys, timeout and connect_timeout, which set a custom timeout for the request as a whole and for the connection phase respectively. Both are specified in seconds. By default the request timeout has no limit and the connection phase timeout is limited to 300 seconds.

The HTTP postback task has several configuration options which are documented in the Queue Manager Directives section of the RealityServer Configuration Guide. These allow you to control whether or not the postback will follow redirects and whether or not to validate the SSL certificates of the server that the request is being sent to.
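
On the receiving side, a minimal sketch of a postback endpoint might look like the following, assuming the task POSTs the raw result bytes with the result's MIME type as the Content-Type. A real receiver would persist the body somewhere; this one only keeps it in memory:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PostbackHandler(BaseHTTPRequestHandler):
    received = {}  # request path -> body bytes, for illustration only

    def do_POST(self):
        # The request path carries the substituted name,
        # e.g. /store/render-<message_id>.<mime_ext>
        length = int(self.headers.get("Content-Length", 0))
        PostbackHandler.received[self.path] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def log_message(self, format, *args):
        pass  # keep the example quiet

# HTTPServer(("", 8000), PostbackHandler).serve_forever()  # uncomment to run
```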

Amazon S3 Upload Task

The Amazon S3 upload task takes the result of the job and uses the AWS API to upload the result data to an Amazon S3 bucket. This has a number of advantages as it does not require you to set up a separate server to handle a postback request and the results can be immediately served and used by a CDN if desired. Here is an example Queue_task_data object which configures a s3_upload task:

{
    "name": "s3_upload",
    "config": {
        "bucket": "example_bucket",
        "key": "folder/render-${message_id}.${mime_ext}",
        "acl": "public-read",
        "cache_control": "public",
        "expires": "2019-11-15T23:13:00Z",
        "content_disposition": "attachment",
        "storage_class": "STANDARD",
        "metadata": {
            "mig-metadata-example": "meta1"
        },
        "tags": {
            "mig-tag-example": "tag1"
        }
    }
}

The name of s3_upload tells RealityServer to use the S3 upload task type. The config object must contain at least two keys: bucket, which specifies the name of the configured bucket to upload to, and key, which determines the key under which the data is stored in the bucket. In Amazon S3 terms, the key uniquely identifies the object within the bucket; please refer to the Amazon S3 documentation for more details. The key may contain forward slash characters, which most S3 clients will display as directories when browsing.

The key string supports variable substitution as shown above. The following variables are available:

  • message_id - The message identifier that was returned by the queuing system when the job was submitted. This allows results to be matched with jobs if required and helps ensure unique naming (as message identifiers are typically unique).
  • mime_ext - The file extension implied by the MIME type of the result (the part after the forward slash). Useful for setting the extension when generating filenames for stored files.

Optionally, the config object may also contain an acl key, which sets the AWS S3 Access Control List for the uploaded object and so determines who can access it. The following string values are supported:

  • private
  • public-read
  • public-read-write
  • authenticated-read
  • aws-exec-read
  • bucket-owner-read
  • bucket-owner-full-control

Please note that S3 allows bucket policies to be defined which may disallow certain ACL options for objects within the bucket. For example it is possible to set up a bucket to disallow any public ACL to be set. Ensure you set up your bucket to allow you to set the desired ACL.

The desired S3 storage class may be optionally specified with the storage_class key. The following string values are supported:

  • STANDARD
  • REDUCED_REDUNDANCY
  • STANDARD_IA
  • ONEZONE_IA
  • INTELLIGENT_TIERING
  • GLACIER
  • DEEP_ARCHIVE

The HTTP content disposition header can be optionally set with the content_disposition key. This must be a string value and will set the Content-Disposition header stored in S3.

User metadata may be added by specifying the metadata key with a map as the value. The map must use string keys and values, and these will be set on the S3 object as user metadata. S3 automatically prefixes user metadata keys with x-amz-meta- to distinguish user metadata from system metadata; if your key already contains this prefix, it will be stripped to avoid adding it twice.
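
The prefix handling can be illustrated with a small sketch; the real normalisation happens inside RealityServer, not in your configuration, and normalise_user_metadata_key is a hypothetical name:

```python
PREFIX = "x-amz-meta-"

def normalise_user_metadata_key(key):
    """Strip the S3 user-metadata prefix if present, so it is not added twice."""
    return key[len(PREFIX):] if key.lower().startswith(PREFIX) else key

print(normalise_user_metadata_key("x-amz-meta-mig-metadata-example"))  # mig-metadata-example
print(normalise_user_metadata_key("mig-metadata-example"))             # mig-metadata-example
```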

Tags may be added by specifying the tags key with a map as the value. The map must use string keys and values, and these will be set on the S3 object as tags.

You may also optionally specify caching configuration with the cache_control and expires keys, which set the Cache-Control and Expires headers respectively. Please refer to RFC 7234, Hypertext Transfer Protocol (HTTP/1.1): Caching, for valid values of the Cache-Control header. The expires key must be a string representing the desired expiry date and time in either RFC 822 or ISO 8601 format. This will be automatically converted to the format specified by RFC 7234 when setting the Expires header. Both headers are stored as part of the S3 object metadata.
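
RealityServer performs the Expires conversion itself; to illustrate the relationship between the formats, the ISO 8601 value from the example above converts to an RFC 7234 HTTP-date like so:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# Parse the ISO 8601 value used in the example configuration above.
expires = datetime.strptime("2019-11-15T23:13:00Z", "%Y-%m-%dT%H:%M:%SZ")
expires = expires.replace(tzinfo=timezone.utc)

# Render it as an RFC 7234 / RFC 5322 HTTP-date.
http_date = format_datetime(expires, usegmt=True)
print(http_date)  # Fri, 15 Nov 2019 23:13:00 GMT
```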

The S3 upload task has several configuration options which are documented in the Amazon Web Services Directives section of the RealityServer Configuration Guide. These allow you to set up the named bucket used in the above configuration and the region needed by AWS to find the bucket.

Submitting Directly to the Queue

In cases where you will not have a running RealityServer available to accept jobs (for example, when your RealityServer cluster can scale down to zero instances), you can create your own tools to insert jobs into the queue manually. This means crafting the correct JSON payload for the job and inserting it into the queue using the API provided by the queuing service. The payload is closely related to the parameters of the queue_manager_submit_job command: the parameters are the same, except that queue_name is omitted. Here is an example:

{
    "command": "example_batch_command",
    "parameters": {
        "example_param1": true,
        "example_param2": 5.0,
        "example_param3": {
            "x": 1.0, "y": 3.0, "z": 2.0
        }
    },
    "tasks": [
        {
            "name": "s3_upload",
            "config": {
                "bucket": "example_bucket",
                "key": "folder/render-${message_id}.${mime_ext}"
            }
        },
        {
            "name": "http_post",
            "config": {
                "postback_uri": "https://example.com/store/render-${message_id}.${mime_ext}"
            }
        }
    ]
}

To submit this job to a queue, use whatever means your queue system provides to enqueue a message with the given payload. There are many ways this can be achieved; a full description is beyond the scope of this documentation.
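
As one possible sketch, assuming your queue is an AWS SQS queue, the payload above could be built and enqueued with Python and boto3 roughly as follows. The queue URL is a placeholder, and other queuing services will use different client APIs:

```python
import json

# Build the job payload exactly as shown above: the same shape as the
# queue_manager_submit_job parameters, with queue_name omitted.
payload = {
    "command": "example_batch_command",
    "parameters": {
        "example_param1": True,
        "example_param2": 5.0,
    },
    "tasks": [
        {"name": "s3_upload",
         "config": {"bucket": "example_bucket",
                    "key": "folder/render-${message_id}.${mime_ext}"}},
    ],
}
message_body = json.dumps(payload)

# import boto3  # requires AWS credentials; uncomment to actually send
# sqs = boto3.client("sqs")
# response = sqs.send_message(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/example_queue_1",
#     MessageBody=message_body,
# )
# print(response["MessageId"])  # the message id the tasks can substitute
```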