This library is required for all custom workers for the Adobe Asset Compute Service. It provides an easy-to-use framework and takes care of common concerns such as asset and rendition access, validation and type checks, event notification, error handling, and more.
- Adobe Asset Compute Worker SDK
- Installation
- Overview
- Examples
- API details
- Contribution guidelines
- Available resources and libraries
- Licensing
npm install @adobe/asset-compute-sdk
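Once installed, the SDK's entry points (all shown in the examples below) can be required in your worker code:

```javascript
// Entry points exported by the SDK, as used in the examples below
const { worker, batchWorker, shellScriptWorker } = require('@adobe/asset-compute-sdk');
```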
These are the high-level steps done by the Adobe Asset Compute Worker SDK:

- Setup
  - Initiates the metrics agent and Adobe IO Events handler (see asset-compute-commons for more information)
  - Sets up the proper directories for local access to source and rendition
- Download the source file from the `url` in the `source` object
- Run the `renditionCallback` function for each rendition (worker) or for all the renditions at once (batch worker)
  - The rendition callback is where you put your worker logic. At the minimum, this function needs to convert the local source file into a local rendition file
- Notify the client via Adobe IO Events after each rendition
  - It sends a `rendition_created` or `rendition_failed` event depending on the outcome (see Asset Compute API asynchronous events for more information)
  - If the worker is part of a chain of workers, it will only send successful rendition events after the last worker in the chain
Calls the rendition function (`renditionCallback`) for each rendition.
const { worker } = require('@adobe/asset-compute-sdk');
async function renditionCallback(source, rendition, params) {
    // ... worker logic
}
const main = worker(renditionCallback, options);
await main(params);
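Put together, a minimal worker action could look like the sketch below. It assumes the action is deployed to Adobe I/O Runtime and exposes the wrapped function as its `main` entry point; the copy step is only placeholder logic.

```javascript
'use strict';

const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

// Placeholder logic: produce the rendition by copying the source file
async function renditionCallback(source, rendition, params) {
    await fs.copyFile(source.path, rendition.path);
}

// Expose the wrapped callback as the action entry point
exports.main = worker(renditionCallback);
```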
Calls the rendition function once with all the renditions.
const { batchWorker } = require('@adobe/asset-compute-sdk');
async function batchRenditionCallback(source, renditions, outdir, params) {
    // ... worker logic
}
const main = batchWorker(batchRenditionCallback, options);
await main(params);
Processes renditions using a worker written in a shell script.
const { shellScriptWorker } = require('@adobe/asset-compute-sdk');
const main = shellScriptWorker(); // assumes script is in `worker.sh`
await main(params);
Shell script worker with a custom script name:
const { shellScriptWorker } = require('@adobe/asset-compute-sdk');
const main = shellScriptWorker('custom-worker-name.sh'); // assumes script is in `custom-worker-name.sh`
await main(params);
If a variable is over 128kb in size (which can happen for some XMP metadata), it cannot be passed as an environment variable to the shell script. Instead, the variable is written to a file under `./vars` and the path to that file is stored in the environment variable. An additional environment variable, `FILE_PARAMS`, contains the list of all variables that required this substitution (if any). An easy way to check whether a variable has been stored in a file is a pattern match, for example:
# Example of passing the variable as STDIN to a command, regardless of whether
# the value is in a file or in the environment variable itself
if [[ "$rendition_myvariable" == "./vars/"* ]]
then
    # Value was stored in a file, pass the file contents to the command
    cat "$rendition_myvariable" | somecommand
else
    # The value is in the environment variable $rendition_myvariable
    echo "$rendition_myvariable" | somecommand
fi
The `worker` and `batchWorker` functions take two parameters, `renditionCallback` and `options`, as described below.
The `renditionCallback` function is where you put your custom worker code. The basic expectation of this function is to look at the parameters in `rendition.instructions`, convert the source into a rendition, and write that rendition to `rendition.path`.

Producing the rendition may involve external libraries or APIs. These steps should also be accomplished inside your `renditionCallback` function.
The parameters of the rendition callback function are `source`, `rendition`, and `params`.
`source`: Object containing the following attributes:

Name | Type | Description | Example |
---|---|---|---|
`url` | `string` | URL pointing to the source binary. | `"http://example.com/image.jpg"` |
`path` | `string` | Absolute path to the local copy of the source file. | `"/tmp/image.jpg"` |
`name` | `string` | File name. The file extension in the name might be used if no mime type can be detected. Takes precedence over the filename in the URL path or in the `content-disposition` header of the binary resource. Defaults to `"file"`. | `"image.jpg"` |
`headers` | `object` | Object containing additional headers to use when making an HTTP(S) request to the `url`. | `headers: { 'Authorization': 'auth-headers' }` |
`rendition`: Object containing the following attributes:

Name | Type | Description |
---|---|---|
`instructions` | `object` | Rendition parameters from the worker params (e.g. quality, dpi, format, height; see the full list here). |
`directory` | `string` | Directory in which to put the renditions. |
`name` | `string` | File name of the rendition to create. |
`path` | `string` | Absolute path at which to store the rendition locally (the rendition must be written here in order to be uploaded to cloud storage). |
`index` | `number` | Number used to identify a rendition. |
`params`: Original parameters passed into the worker (see the full Asset Compute processing API documentation).

Note: This argument is usually not needed, as a callback should take its information from `rendition.instructions`, which are the specific rendition parameters from the request.
At the bare minimum, the rendition callback function must write something to `rendition.path`.
Simplest example (copying the source file):
const fs = require('fs').promises;
// error classes are provided by @adobe/asset-compute-commons
const { SourceUnsupportedError } = require('@adobe/asset-compute-commons');

async function renditionCallback(source, rendition) {
    // Check for unsupported file
    const stats = await fs.stat(source.path);
    if (stats.size === 0) {
        throw new SourceUnsupportedError('source file is unsupported');
    }
    // process infile and write to outfile
    await fs.copyFile(source.path, rendition.path);
}
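A slightly richer sketch that reads `rendition.instructions` and delegates the conversion to an external command-line tool. The ImageMagick `convert` invocation and the defaults chosen here are illustrative assumptions; substitute whatever library or tool your worker actually relies on.

```javascript
const { worker } = require('@adobe/asset-compute-sdk');
const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileAsync = promisify(execFile);

async function renditionCallback(source, rendition, params) {
    // Pick up rendition parameters from the request (fallback values are assumptions)
    const { fmt = 'png', width = 1024 } = rendition.instructions;

    // Hypothetical conversion step: resize with ImageMagick and let the
    // explicit format prefix drive the output written to rendition.path
    await execFileAsync('convert', [
        source.path,
        '-resize', `${width}x${width}`,
        `${fmt}:${rendition.path}`
    ]);
}

exports.main = worker(renditionCallback);
```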
The `renditionCallback` function in `batchWorker` is where you put your custom worker code. It works similarly to the `renditionCallback` function in `worker`, with slightly different parameters. The main difference is that it gets called only once per worker (instead of once for each rendition).

The basic expectation of this function is to go through each of the `renditions` and, using the rendition's `instructions`, convert the source into a rendition, then write that rendition to its corresponding `rendition.path`.
The parameters of the rendition callback function are `source`, `renditions`, `outdir`, and `params`.
- `source`: exactly the same as for `renditionCallback` in `worker`.
- `renditions`: an array of `rendition` objects. Each `rendition` object has the same structure as for `renditionCallback` in `worker`.
- `outdir`: directory in which to put the renditions produced in batch workers.
- `params`: exactly the same as for `renditionCallback` in `worker`.
At the bare minimum, the rendition callback function must write something to each `rendition.path`.
Simplest example (copying the source file):
const fs = require('fs').promises;
// error classes are provided by @adobe/asset-compute-commons
const { SourceUnsupportedError } = require('@adobe/asset-compute-commons');

async function renditionCallback(source, renditions, outdir, params) {
    // Check for unsupported file
    const stats = await fs.stat(source.path);
    if (stats.size === 0) {
        throw new SourceUnsupportedError('source file is unsupported');
    }
    // process the source and write each rendition to its rendition.path
    for (const rendition of renditions) {
        await fs.copyFile(source.path, rendition.path);
    }
}
Optional parameters to pass into workers:

- `disableSourceDownload`: Boolean used to disable the source download (defaults to `false`).
- `disableRenditionUpload`: Boolean used to disable the rendition upload (defaults to `false`). WARNING: Use this flag only if no rendition should be uploaded. It will make the worker activation fail, since the Asset Compute SDK expects a rendition output.
Disable source download example:
const { worker } = require('@adobe/asset-compute-sdk');

async function renditionCallback(source, rendition, params) {
    // the callback downloads the source itself, so the asset-compute-sdk
    // does not need to download the source file
    await fetch(source.url);
}

const options = {
    disableSourceDownload: true
};
const main = worker(renditionCallback, options);
await main(params);
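When the SDK download is disabled, the callback is responsible for fetching the binary itself, typically from `source.url` with the headers from `source.headers`. A minimal sketch, assuming a Node.js runtime with a global `fetch` (Node 18+):

```javascript
const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

async function renditionCallback(source, rendition, params) {
    // Download the source binary manually, passing along any additional headers
    const response = await fetch(source.url, { headers: source.headers });
    if (!response.ok) {
        throw new Error(`unexpected response while fetching source: ${response.status}`);
    }
    const data = Buffer.from(await response.arrayBuffer());

    // Placeholder conversion: write the downloaded bytes as the rendition
    await fs.writeFile(rendition.path, data);
}

const main = worker(renditionCallback, { disableSourceDownload: true });
await main(params);
```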
Disable rendition upload example:
const { worker } = require('@adobe/asset-compute-sdk');

// renditionCallback defined as in the examples above
const options = {
    disableRenditionUpload: true
};
const main = worker(renditionCallback, options);
await main(params);
Note: this feature is not available for custom workers of the Adobe Asset Compute service.
Image post processing is available since version 2.4.0 and must be enabled by the worker by setting `rendition.postProcess = true;` in the processing callback.
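For example, a worker that produces an intermediate rendition and leaves the final resizing and format conversion to the service might look like this sketch (the plain copy is placeholder logic):

```javascript
const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

async function renditionCallback(source, rendition, params) {
    // Produce an intermediate rendition (placeholder: plain copy)
    await fs.copyFile(source.path, rendition.path);

    // Ask the SDK to apply the supported post-processing instructions
    // (fmt, width, height, quality, ...) to this intermediate result
    rendition.postProcess = true;
}

exports.main = worker(renditionCallback);
```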
Shell script workers can enable it by creating a JSON file at the path given to the script in the `optionsfile` environment variable, with this content:
{
"postProcess": true
}
These instructions are supported:

- `fmt` with png, jpg/jpeg, tif/tiff and gif
- `width` and `height`
- `quality` for jpeg and gif
- `interlace` for png
- `jpegSize` for jpeg
- `dpi`
- `convertToDpi`
- `crop`
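For illustration, the `rendition.instructions` seen by a post-processing worker might look like the following (hypothetical values, limited to the supported instruction names above):

```javascript
// Hypothetical rendition.instructions for a post-processed rendition
const instructions = {
    fmt: 'png',        // target format: png, jpg/jpeg, tif/tiff or gif
    width: 200,        // target width in pixels
    height: 200,       // target height in pixels
    interlace: true    // png only: write an interlaced png
};
```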
Asset Compute Service has repository modularity and naming guidelines. It is modular to the extent possible, as fostered by the serverless concept and the OpenWhisk framework. This means having small, focused GitHub repositories that support decoupled development and deployment lifecycles. One repository for one action is fine if it represents its own small service, such as a worker. If you want to create a separate repository, log an issue in the Asset Compute SDK repository.
For detailed guidelines, see the contribution guidelines. Also, follow these Git commit message guidelines.
The open-sourced libraries of Asset Compute Service are:
- Asset Compute SDK: the worker SDK and main framework for third-party custom workers.
- Asset Compute Commons: Common utilities needed by all Asset Compute serverless actions.
- Asset Compute Client: JavaScript client for the Adobe Asset Compute Service.
- Asset Compute example workers: Sample third-party Asset Compute workers.
- ESLint configuration: Shared ESLint configuration for Node.js projects related to the Adobe Asset Compute service.
- Asset Compute Development Tool: Library for the developer tool to explore and to test the Adobe Asset Compute Service.
- aio-cli-plugin-asset-compute: Asset Compute plug-in for Adobe I/O Command Line Interface.
- Adobe Asset Compute integration tests: Integration tests for the Asset Compute developer experience.
This project is licensed under the Apache V2 License. See LICENSE for more information.