Workers are Docker containers that execute costly computations. Workers communicate with the Optimization Server master through messaging (AMQP, with RabbitMQ) to receive jobs to execute, and to send back logs, KPIs, and the progress of the job they are executing. Workers declare the tasks (kinds of jobs) they can execute, along with their inputs and outputs, in a YAML file. At most one job is executed at a time on a given worker. If k jobs of the same task must be executable in parallel, one has to spawn k workers, each declaring this task and connected to the same Optimization Server master.
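As an illustration, a task declaration could look like the fragment below. The field names (tasks, name, inputs, outputs) are illustrative assumptions, not the exact schema expected by Optimization Server:

```yaml
# Hypothetical task declaration for a worker.
# Field names are illustrative only; refer to the actual
# schema shipped with Optimization Server.
tasks:
  - name: sum-integers        # task identifier clients submit jobs against
    inputs:
      - name: data            # input payload the job must provide
    outputs:
      - name: result          # output payload produced by the task
```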
Two pre-packaged workers are provided to solve optimization problems with Optimization Server out of the box. The first worker supports IBM ILOG CPLEX ((integer) linear programming) and CP Optimizer (constraint programming) tasks. The second worker supports OPL (modeling language) tasks. They are described in Sub-section 1.
Alternatively, one can easily create a custom worker to support any kind of custom task. For this purpose, a library that manages communication with the Optimization Server master is provided, along with code samples. Details about how to implement a custom worker are given in Sub-section 2.
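To give a feel for what a custom task looks like in Python, here is a minimal sketch. The class and method names (CustomTask, execute) are hypothetical: the real base class, registration mechanism, and input/output types are defined by the provided library and code samples.

```python
# Hypothetical sketch of a custom task. The actual interface is
# defined by the Optimization Server worker library; the names
# below are illustrative only.

class CustomTask:
    """A task that sums the integers found in its input."""

    # Must match the task name declared in the worker's YAML file.
    name = "sum-integers"

    def execute(self, inputs: dict) -> dict:
        # 'inputs' maps input names to their (already un-zipped) content.
        numbers = [int(token) for token in inputs["data"].split()]
        # The returned dict maps output names to their content.
        return {"result": str(sum(numbers))}


task = CustomTask()
print(task.execute({"data": "1 2 3 4"}))  # -> {'result': '10'}
```

The worker library would call `execute` whenever a matching job is received from the master, and forward the returned outputs back over the messaging channel.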
Workers (both pre-packaged and custom) are automatically discovered by the master as soon as they are deployed. Out-of-the-box integration with the web console (see Section Web console) is provided, even for custom workers.
If a job input is sent zipped by a client (with the isZipped flag set to true), the input content is automatically un-zipped before being passed to the “execute” method of the task on the worker.
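For instance, a client could zip an input in memory before submitting it with the isZipped flag. The snippet below only shows the zipping step (the submission API itself is part of the client interface and is not shown); the file name and content are made up for the example:

```python
import io
import zipfile


def zip_input(name: str, content: bytes) -> bytes:
    """Zip a single input file in memory, as a client might do
    before submitting a job with the isZipped flag set to true."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        archive.writestr(name, content)
    return buffer.getvalue()


# Hypothetical input file for an optimization job.
payload = zip_input("model.lp", b"maximize x subject to x <= 10")
```

On the worker side, the content is un-zipped automatically before the task's execute method is called, so the task code only ever sees the raw input content.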
A worker can be entrusted with any kind of costly computation. If the computation to be processed remotely consists of solving an optimization problem described in a standard format, then the packaged workers should be used. Otherwise, one can easily write a custom worker that handles the relevant computation. Any language can be used to implement a task, but Optimization Server provides tooling only for Java and Python.
When DBOS is deployed on Kubernetes, workers can be deployed as on-demand workers. This means a worker is started only when a task needs to be executed. Any worker can be deployed this way, as long as it is packaged in a Docker image.
Read the ‘on-demand workers’ section for more details.