Introduction

The Job Service facilitates the scheduled execution of tasks in a cloud environment. These tasks are implemented by independent services and can be started by using any of the interaction modes supported by the Job Service, based on HTTP calls or Knative Events delivery.

To schedule the execution of a task you must create a Job, which is configured with the following information (an example request follows the list):

  • Schedule: defines when and how often the job must be triggered.

  • Recipient: the entity that is called on the job execution for the given interaction mode, and that receives the execution parameters.
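
For illustration purposes, the following is a minimal sketch of a job creation request through the HTTP interaction mode. The /v2/jobs endpoint path, the payload fields, and the recipient URL are assumptions for this example, not an authoritative API reference:

Job creation request example
curl -X POST http://localhost:8080/v2/jobs \
  -H 'Content-Type: application/json' \
  -d '{
        "id": "example-job",
        "schedule": { "type": "timer", "startTime": "2024-01-01T12:00:00Z" },
        "recipient": { "type": "http", "url": "http://my-service:8080/execute", "method": "POST" }
      }'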

Job Service Generic Diagram

Integration with the Workflows

In the context of Kogito Serverless Workflows, the Job Service is responsible for controlling the execution of time-triggered actions. Thus, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job Service.

For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job Service. When the timeout is met, an HTTP callback is executed to notify the workflow.

Time Based States And Job Service Interaction
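
For instance, a workflow state with a configured timeout might look like the following fragment. This is a minimal sketch based on the Serverless Workflow specification; the state, function, and event names are assumptions:

Workflow state with a configured timeout example
# When the execution reaches this state, a corresponding job is registered in
# the Job Service; if no event arrives within 30 seconds, the Job Service
# notifies the workflow through an HTTP callback.
- name: WaitForConfirmation
  type: callback
  action:
    functionRef: sendConfirmationRequest   # assumed function defined elsewhere
  eventRef: ConfirmationReceivedEvent      # assumed event defined elsewhere
  timeouts:
    eventTimeout: PT30S                    # ISO 8601 duration
  transition: Confirmed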

To set up this integration, you can use different communication alternatives, which must be configured by combining the Job Service and the Quarkus Workflow Project configurations.

If the project is not configured to use the Job Service, all time-based actions use an in-memory implementation of that service. However, this setup must not be used in production, since all the timers are lost every time the application is restarted. This makes it particularly unsuitable for serverless architectures, where the applications might scale to zero at any time.

Jobs life-span

The main goal of the Job Service is to work with the active jobs, such as the scheduled jobs that need to be executed, so when a job reaches a final state, it is removed from the Job Service. However, if you want to keep the information about the jobs in a permanent repository, you can configure the Job Service to produce status change events, which can be collected by the Data Index Service, where they are indexed and made available through GraphQL queries.

Executing

To execute the Job Service in your docker or Kubernetes environment, you must use the following image:

quay.io/kiegroup/kogito-jobs-service-allinone:latest

In the following topics you can see the different configuration parameters that you must use, for example, to configure the persistence mechanism, the eventing system, etc. More information on this image can be found here.

We recommend that you follow this procedure:

  1. Identify the persistence mechanism to use and see the required configuration parameters.

  2. Identify if the Eventing API is required for your needs and see the required configuration parameters.

  3. Identify if the project containing your workflows is configured with the appropriate Job Service Quarkus Extension.

Finally, to run the image, you must use the environment variables exposed by the image, as well as other configurations that you can set using additional environment variables or system properties with Java-like names.

Exposed environment variables

  • SCRIPT_DEBUG: Enables the debug level of the image and its operations.

  • JOBS_SERVICE_PERSISTENCE: Any of the following values: postgresql, ephemeral, or infinispan, to select the persistence mechanism to use. See the Persistence section.

If used, these values must always be passed as environment variables.

Using environment variables

To configure the image by using environment variables you must pass one environment variable for each parameter.

Job Service image configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=postgresql -e VARIABLE_NAME=value quay.io/kiegroup/kogito-jobs-service-allinone:latest
Job Service image configuration for Kubernetes execution example
spec:
  containers:
    - name: jobs-service-postgresql
      image: quay.io/kiegroup/kogito-jobs-service-allinone-nightly:latest
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      env:
        # Set the image parameters as environment variables in the container definition.
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: JOBS_SERVICE_PERSISTENCE
          value: "postgresql"
        - name: QUARKUS_DATASOURCE_USERNAME
          value: postgres
        - name: QUARKUS_DATASOURCE_PASSWORD
          value: pass
        - name: QUARKUS_DATASOURCE_JDBC_URL
          value: jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service
        - name: QUARKUS_DATASOURCE_REACTIVE_URL
          value: postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service

This is the recommended approach when you execute the Job Service in Kubernetes. The timeouts showcase example Quarkus Workflow Project with standalone services contains an example of this configuration.

Using system properties with Java-like names

To configure the image by using system properties you must pass one property per parameter; however, in this case, all these properties are passed as part of a single environment variable named JAVA_OPTIONS.

Job Service image configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=postgresql -e JAVA_OPTIONS='-Dmy.sys.prop1=value1 -Dmy.sys.prop2=value2' \
quay.io/kiegroup/kogito-jobs-service-allinone:latest

In case you need to convert a Java-like property name to the corresponding environment variable name, to use the environment variables configuration alternative, you must apply the naming convention defined in the Quarkus Configuration Reference. For example, the name quarkus.datasource.jdbc.url must be converted to QUARKUS_DATASOURCE_JDBC_URL.

Global configurations

Global configurations that affect the job execution retries, startup procedure, etc.

Using environment variables:

  • KOGITO_JOBS_SERVICE_BACKOFFRETRYMILLIS: A long value that defines the retry back-off time in milliseconds between job execution attempts, in case the execution fails. Default: 1000.

  • KOGITO_JOBS_SERVICE_MAXINTERVALLIMITTORETRYMILLIS: A long value that defines the maximum interval in milliseconds when retrying to execute jobs, in case the execution fails. Default: 60000.

Using system properties with Java-like names:

  • kogito.jobs-service.backoffRetryMillis: A long value that defines the retry back-off time in milliseconds between job execution attempts, in case the execution fails. Default: 1000.

  • kogito.jobs-service.maxIntervalLimitToRetryMillis: A long value that defines the maximum interval in milliseconds when retrying to execute jobs, in case the execution fails. Default: 60000.
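
For example, the following docker sketch overrides both retry parameters using environment variables; the values are illustrative:

Global configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=ephemeral \
  -e KOGITO_JOBS_SERVICE_BACKOFFRETRYMILLIS=2000 \
  -e KOGITO_JOBS_SERVICE_MAXINTERVALLIMITTORETRYMILLIS=120000 \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest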

Persistence

An important configuration aspect of the Job Service is the persistence mechanism: it is where all the job information is stored, and it guarantees that no information is lost upon service restarts.

The Job Service image is shipped with the PostgreSQL, Ephemeral, and Infinispan persistence mechanisms, which can be selected by setting the JOBS_SERVICE_PERSISTENCE environment variable to one of the values postgresql, ephemeral, or infinispan. If not set, it defaults to the ephemeral option.

The kogito-jobs-service-allinone image is a composite packaging that includes a different image for each persistence mechanism, making it considerably larger than the individual images. If that size is an issue in your installation, you can use the individual images instead. If you use this alternative, the JOBS_SERVICE_PERSISTENCE variable must not be used, since the persistence mechanism is determined by the image itself.

PostgreSQL

PostgreSQL is the recommended database to use with the Job Service. Additionally, it provides an initialization procedure that integrates Flyway to control the database schema: the tables are automatically created or updated by the service when required.

If you need to externally control the database schema, you can check and apply the DDL scripts for the Job Service in the same way as described in the Manually executing scripts guide.

To configure the PostgreSQL persistence you must provide these configurations:

Using environment variables:

  • JOBS_SERVICE_PERSISTENCE: Configures the persistence mechanism that must be used. Example value: postgresql.

  • QUARKUS_DATASOURCE_USERNAME: Username to connect to the database. Example value: postgres.

  • QUARKUS_DATASOURCE_PASSWORD: Password to connect to the database. Example value: pass.

  • QUARKUS_DATASOURCE_JDBC_URL: JDBC datasource URL used by Flyway to connect to the database. Example value: jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service

  • QUARKUS_DATASOURCE_REACTIVE_URL: Reactive datasource URL used by the Job Service to connect to the database. Example value: postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service

Using system properties with Java-like names:

  • JOBS_SERVICE_PERSISTENCE: Always passed as an environment variable. Example value: postgresql.

  • quarkus.datasource.username: Username to connect to the database. Example value: postgres.

  • quarkus.datasource.password: Password to connect to the database. Example value: pass.

  • quarkus.datasource.jdbc.url: JDBC datasource URL used by Flyway to connect to the database. Example value: jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service

  • quarkus.datasource.reactive.url: Reactive datasource URL used by the Job Service to connect to the database. Example value: postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service

The timeouts showcase example Quarkus Workflow Project with standalone services shows how to run a PostgreSQL-based Job Service as a Kubernetes deployment. In your local environment you might have to change some of these values to point to your own PostgreSQL database.
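
For a quick local test, the same parameters can be passed to a docker execution. The following sketch assumes a PostgreSQL instance reachable at localhost:5432 with the credentials shown; adjust these values to your own database:

PostgreSQL persistence configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=postgresql \
  -e QUARKUS_DATASOURCE_USERNAME=postgres \
  -e QUARKUS_DATASOURCE_PASSWORD=pass \
  -e QUARKUS_DATASOURCE_JDBC_URL='jdbc:postgresql://localhost:5432/postgres?currentSchema=jobs-service' \
  -e QUARKUS_DATASOURCE_REACTIVE_URL='postgresql://localhost:5432/postgres?search_path=jobs-service' \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest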

Ephemeral

The Ephemeral persistence mechanism is based on an embedded PostgreSQL database and does not require any external configuration. However, the database is recreated on each service restart, and thus, it must be used only for testing purposes.

  • JOBS_SERVICE_PERSISTENCE: Configures the persistence mechanism that must be used. Example value: ephemeral.

If the image is started without configuring any persistence mechanism, the Ephemeral option is used by default.
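
Since no external configuration is required, a local test run needs nothing more than the persistence selector, for example:

Ephemeral persistence configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=ephemeral quay.io/kiegroup/kogito-jobs-service-allinone:latest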

Infinispan

To configure the Infinispan persistence you must provide these configurations:

Using environment variables:

  • JOBS_SERVICE_PERSISTENCE: Configures the persistence mechanism that must be used. Example value: infinispan.

  • QUARKUS_INFINISPAN_CLIENT_HOSTS: Sets the host name/port to connect to. Each one is separated by a semicolon. Example value: host1:11222;host2:11222

  • QUARKUS_INFINISPAN_CLIENT_USE_AUTH: Enables or disables authentication. Set it to false when connecting to an Infinispan server without authentication. Whether you need this parameter depends on your local Infinispan installation. If not set, the default value is true.

  • QUARKUS_INFINISPAN_CLIENT_SASL_MECHANISM: Sets the SASL mechanism used by authentication. For more information about this parameter, see the Quarkus Infinispan Client Reference. When authentication is enabled, the default value is DIGEST-MD5.

  • QUARKUS_INFINISPAN_CLIENT_AUTH_REALM: Sets the realm used by authentication. When authentication is enabled, the default value is default.

  • QUARKUS_INFINISPAN_CLIENT_USERNAME: Sets the username used by authentication. Use this property if authentication is enabled.

  • QUARKUS_INFINISPAN_CLIENT_PASSWORD: Sets the password used by authentication. Use this property if authentication is enabled.

Using system properties with Java-like names:

  • JOBS_SERVICE_PERSISTENCE: Always passed as an environment variable. Example value: infinispan.

  • quarkus.infinispan-client.hosts: Sets the host name/port to connect to. Each one is separated by a semicolon. Example value: host1:11222;host2:11222

  • quarkus.infinispan-client.use-auth: Enables or disables authentication. Set it to false when connecting to an Infinispan server without authentication. Whether you need this parameter depends on your local Infinispan installation. If not set, the default value is true.

  • quarkus.infinispan-client.sasl-mechanism: Sets the SASL mechanism used by authentication. For more information about this parameter, see the Quarkus Infinispan Client Reference. When authentication is enabled, the default value is DIGEST-MD5.

  • quarkus.infinispan-client.auth-realm: Sets the realm used by authentication. When authentication is enabled, the default value is default.

  • quarkus.infinispan-client.username: Sets the username used by authentication. Use this property if authentication is enabled.

  • quarkus.infinispan-client.password: Sets the password used by authentication. Use this property if authentication is enabled.

The Infinispan client configuration parameters that you must set depend on your local Infinispan service. Thus, the list above shows only a subset of all the available options. To see the full list of options supported by the Quarkus Infinispan client, see the Quarkus Infinispan Client Reference.
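
As an illustration, the following docker sketch connects to a local Infinispan server with authentication enabled; the host, username, and password are assumptions that must match your own installation:

Infinispan persistence configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=infinispan \
  -e QUARKUS_INFINISPAN_CLIENT_HOSTS=localhost:11222 \
  -e QUARKUS_INFINISPAN_CLIENT_USE_AUTH=true \
  -e QUARKUS_INFINISPAN_CLIENT_USERNAME=admin \
  -e QUARKUS_INFINISPAN_CLIENT_PASSWORD=admin \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest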

Eventing API

The Job Service provides a Cloud Event based API that can be used to create and delete jobs. This API is useful in deployment scenarios where you want to use event based communication from the workflow runtime to the Job Service. For the transport of these events you can use the Knative eventing system or the Kafka messaging system.

Knative eventing

By default, the Job Service Eventing API is prepared to work with a Knative eventing system. This means that, without any additional configuration parameters, it is able to receive cloud events via the Knative eventing system to manage the jobs. However, you must still prepare your Knative eventing environment to ensure these events are properly delivered to the Job Service, see the Knative eventing supporting resources section.

Finally, the only configuration parameter that you must set, when needed, is the one that enables the propagation of the Job Status Change events, for example, if you want to register these events in the Data Index Service.

Using environment variables:

  • KOGITO_JOBS_SERVICE_HTTP_JOB_STATUS_CHANGE_EVENTS: Set to true to establish whether the Job Status Change events must be propagated. If you set this value to true, you must be sure that the sink binding was created. Default value: false.

Using system properties with Java-like names:

  • kogito.jobs-service.http.job-status-change-events: Set to true to establish whether the Job Status Change events must be propagated. If you set this value to true, you must be sure that the sink binding was created. Default value: false.
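
For example, in the Kubernetes deployment shown earlier, the propagation can be enabled by adding the variable to the container definition. The following is a hypothetical fragment of that container spec; it also requires the sink binding described in the next section:

Job Status Change events propagation configuration example
env:
  # Propagate Job Status Change events to the configured sink.
  - name: KOGITO_JOBS_SERVICE_HTTP_JOB_STATUS_CHANGE_EVENTS
    value: "true"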

Knative eventing supporting resources

To ensure the Job Service receives the Knative events to manage the jobs, you must create the create job events and delete job events triggers shown in the diagram below. Additionally, if you have enabled the Job Status Change events propagation, you must create the sink binding.

Figure 1. Knative eventing supporting resources

The following snippets show examples of how you can configure these resources. Note that these configurations might need to be adjusted to your local Kubernetes cluster.

We recommend that you visit this example Quarkus Workflow Project with standalone services to see a full setup of all these configurations.

Create Job event trigger configuration example
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: jobs-service-postgresql-create-job-trigger
spec:
  broker: default
  filter:
    attributes:
      type: job.create
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: jobs-service-postgresql
    uri: /v2/jobs/events
Delete Job event trigger configuration example
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: jobs-service-postgresql-delete-job-trigger
spec:
  broker: default
  filter:
    attributes:
      type: job.delete
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: jobs-service-postgresql
    uri: /v2/jobs/events

For more information about triggers, see Knative Triggers.

Job Service sink binding configuration example
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: jobs-service-postgresql-sb
spec:
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  subject:
    apiVersion: apps/v1
    kind: Deployment
    selector:
      matchLabels:
        app.kubernetes.io/name: jobs-service-postgresql
        app.kubernetes.io/version: 2.0.0-SNAPSHOT

For more information about sink bindings, see Knative Sink Bindings.

Kafka messaging

To enable the Job Service Eventing API via the Kafka messaging system you must provide these configurations:

Using environment variables:

  • QUARKUS_PROFILE: Set the Quarkus profile to the value kafka-events-support to enable the Kafka messaging based Job Service Eventing API. By default, the Kafka eventing API is disabled.

  • KOGITO_JOBS_SERVICE_KAFKA_JOB_STATUS_CHANGE_EVENTS: Set to true to establish whether the Job Status Change events must be propagated. Default value: true when the kafka-events-support profile is set.

  • KAFKA_BOOTSTRAP_SERVERS: A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster. Default value: localhost:9092 when the kafka-events-support profile is set.

  • MP_MESSAGING_INCOMING_KOGITO_JOB_SERVICE_JOB_REQUEST_EVENTS_V2_TOPIC: Kafka topic for events API incoming events. In general, you do not need to change this value. Default value: kogito-job-service-job-request-events-v2 when the kafka-events-support profile is set.

  • MP_MESSAGING_OUTGOING_KOGITO_JOB_SERVICE_JOB_STATUS_EVENTS_TOPIC: Kafka topic for job status change outgoing events. In general, you do not need to change this value. Default value: kogito-jobs-events when the kafka-events-support profile is set.

Using system properties with Java-like names:

  • quarkus.profile: Set the Quarkus profile to the value kafka-events-support to enable the Kafka messaging based Job Service Eventing API. By default, the Kafka eventing API is disabled.

  • kogito.jobs-service.kafka.job-status-change-events: Set to true to establish whether the Job Status Change events must be propagated. Default value: true when the kafka-events-support profile is set.

  • kafka.bootstrap.servers: A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster. Default value: localhost:9092 when the kafka-events-support profile is set.

  • mp.messaging.incoming.kogito-job-service-job-request-events-v2.topic: Kafka topic for events API incoming events. In general, you do not need to change this value. Default value: kogito-job-service-job-request-events-v2 when the kafka-events-support profile is set.

  • mp.messaging.outgoing.kogito-job-service-job-status-events.topic: Kafka topic for job status change outgoing events. In general, you do not need to change this value. Default value: kogito-jobs-events when the kafka-events-support profile is set.
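
For example, the following docker sketch enables the Kafka based Eventing API; the broker address my-kafka-host:9092 is an assumption that must match your Kafka cluster:

Kafka Eventing API configuration for docker execution example
docker run -it -e JOBS_SERVICE_PERSISTENCE=ephemeral \
  -e QUARKUS_PROFILE=kafka-events-support \
  -e KAFKA_BOOTSTRAP_SERVERS=my-kafka-host:9092 \
  quay.io/kiegroup/kogito-jobs-service-allinone:latest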

Depending on your Kafka messaging system configuration you might need to apply additional Kafka configurations to connect to the Kafka broker, etc. To see the list of all the supported configurations you must read the Quarkus Apache Kafka Reference Guide.

Leader election

Currently, the Job Service is a singleton service, and thus, only one active instance of the service can schedule and execute the jobs.

To avoid issues when it is deployed in the cloud, where it is common to eventually have more than one instance deployed, the Job Service supports a leader instance election process. Only the instance that becomes the leader activates the external communication to receive and schedule jobs.

All the instances that are not the leader stay inactive in a wait state and continuously try to become the leader.

When a new instance of the service is started, it is not set as the leader at startup time; instead, it starts the process to become one.

When the leader instance becomes unresponsive or is shut down for any reason, one of the other running instances becomes the leader.

Figure 2. Job Service leader election

This leader election mechanism uses the underlying persistence backend, which currently is only supported in the PostgreSQL implementation.

There is no need for any configuration to enable this feature; the only requirement is to have the supported database with the data schema up-to-date, as described in the PostgreSQL section.

In case the underlying persistence does not support this feature, you must guarantee that only a single instance of the Job Service is running at any time.

Found an issue?

If you find an issue or any misleading information, please feel free to report it here. We really appreciate it!