Security

Automatiko comes with two levels of security

  • service level

  • instance level

Service level security

Security on the service level is based on RBAC (Role Based Access Control), which is applied to every service API endpoint. Service level security is taken directly from the workflow definition’s custom attributes. The attribute used for defining service level security is called securityRoles and accepts a comma separated list of roles.

For example, setting securityRoles to managers,admin will in turn annotate every endpoint of the service for that workflow (and its sub workflows) with @RolesAllowed("managers", "admin").
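
For illustration, a generated endpoint could then look roughly like the sketch below. The resource path, class and method names are made up for this example and the actual generated code is an Automatiko implementation detail, but it shows where the @RolesAllowed annotation ends up.

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Hypothetical sketch of a generated REST endpoint for a workflow with
// securityRoles set to "managers,admin"; names and payload are illustrative only.
@Path("/vacations")
public class VacationsResource {

    @POST
    @RolesAllowed({"managers", "admin"})
    public Response create() {
        // the real generated code would create and return a new workflow instance here
        return Response.accepted().build();
    }
}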

For this to be enforced, one of the possible service authentication configurations is required.

An example that uses property based security is Vacation requests
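
As a minimal sketch of such a setup (assuming the quarkus-elytron-security-properties-file extension is on the classpath and basic authentication is acceptable; the user name, password and role below are examples only), the application.properties file could contain:

quarkus.http.auth.basic=true
quarkus.security.users.embedded.enabled=true
quarkus.security.users.embedded.plain-text=true
# example user "mary" with password "mary123" that belongs to the managers role
quarkus.security.users.embedded.users.mary=mary123
quarkus.security.users.embedded.roles.mary=managers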

That configuration will secure the service API and allow access to it for authenticated users that are members of the defined roles.

This level of security is considered the baseline and should be enabled for most (if not all) services that run in production.

In addition to securing the service API, enabling service level security will also automatically set the so-called initiator of the instance. That means the user who issued the request that resulted in creating a new instance will be associated with it as the initiator/owner. This is a prerequisite for the next level of security - instance level security.

Instance level security

Security on the instance level allows access to be authorized based on the context of the instance. A common access policy is based on the initiator, giving the initiator either exclusive rights on a given instance or some kind of special rights.

An example of such an access policy is that only the initiator/owner can abort a running instance.

Instance level security is implemented by Access Policy
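
For reference, the interface looks roughly like the sketch below, reconstructed from the methods implemented in the custom policy example later in this section (the exact declaration in the library may differ slightly).

import java.util.Set;

import io.automatiko.engine.api.auth.IdentityProvider;
import io.automatiko.engine.api.workflow.ProcessInstance;

// Sketch of io.automatiko.engine.api.auth.AccessPolicy, inferred from the
// custom policy example below; treat it as an approximation, not the exact source.
public interface AccessPolicy<T> {

    boolean canCreateInstance(IdentityProvider identityProvider);

    boolean canReadInstance(IdentityProvider identityProvider, T instance);

    boolean canUpdateInstance(IdentityProvider identityProvider, T instance);

    boolean canDeleteInstance(IdentityProvider identityProvider, T instance);

    boolean canSignalInstance(IdentityProvider identityProvider, T instance);

    Set<String> visibleTo(ProcessInstance<?> instance);
}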

As can be noticed in the interface above, an access policy restricts access to the various operation types that can be executed on a given instance:

  • create instance check - who can create new instances

  • read instance check - who can see the instance

  • update instance check - who can update data of the instance

  • delete instance check - who can delete (abort) the instance

  • signal instance check - who can signal the instance

This allows fine-grained access restrictions that can take into account the entire context of the instance, including the actual data of the instance at a given point in time.

Instance level security is taken directly from the workflow definition’s custom attributes. The attribute used for defining instance level security is called accessPolicy and accepts a single policy identifier from the list below.

Automatiko comes with a few out of the box access policies:

  • allow all

  • participants

  • initiator

Allow all access policy

allow all access policy is the default access policy if none is defined. It essentially does not perform any checks.

Participants access policy

participants access policy is one of the most common as it restricts access for every check to either

  • the initiator

  • users that have currently assigned user tasks

Initiator access policy

initiator access policy is an extension to the participants access policy that restricts access to the delete operation to the initiator only. That means users who have currently assigned user tasks can still access the instance and work on tasks but won’t be able to delete such an instance.

The initiator is assigned based on the authenticated user, but it can also be assigned based on a data object that is tagged with the initiator tag. This allows instances to be created on behalf of someone else, or when user information is not directly available, e.g. messaging.

An example that uses instance level security is Vacation requests

Composite access policy

Composite access policy allows multiple access policies to be combined, where all of them must be satisfied for access to be considered valid.

A composite policy is configured by using the composite:policy1,policy2,policy3 syntax. The composite part is the identifier of the access policy and the part after the colon lists the options, in this case the identifiers of the policies that are part of the composite access control.
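
As an illustrative (not prescribed) combination, accessPolicy could for instance be set to composite:participants,localHostOnly to require both the built-in participants policy and the custom localHostOnly policy shown later in this section to grant access.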

Custom access policy

Users can create their own access policies by implementing the io.automatiko.engine.api.auth.NamedAccessPolicy interface. This interface is an extension to the regular AccessPolicy interface that enforces the identifier that will be used to register the policy.

The reference from the workflow perspective stays exactly the same, meaning the policy is referenced by its identifier.

Custom access policies are expected to be CDI beans so they can be discovered and registered. This is both an advantage (they can reference other beans) and a disadvantage (they are bound to their declared scope - usually application scope - and can therefore be shared across workflows), so keep the consequences in mind.

Below is a sample custom access policy that takes advantage of HTTP request information to restrict access to localhost only.

import java.util.Set;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import io.automatiko.engine.api.auth.IdentityProvider;
import io.automatiko.engine.api.auth.NamedAccessPolicy;
import io.automatiko.engine.api.workflow.ProcessInstance;
import io.automatiko.engine.workflow.AbstractProcessInstance;
import io.vertx.core.http.HttpServerRequest;

@ApplicationScoped
public class LocalHostOnlyAccessPolicy<T> implements NamedAccessPolicy<ProcessInstance<T>> {

    @Inject
    HttpServerRequest injectedHttpServletRequest;

    // allow unless an HTTP request is available and its Host header does not start with "localhost"
    private boolean allowFromLocalhostOnly() {

        if (injectedHttpServletRequest != null && injectedHttpServletRequest.host() != null
                && !injectedHttpServletRequest.host().toLowerCase().startsWith("localhost")) {
            return false;
        }
        return true;
    }

    @Override
    public boolean canCreateInstance(IdentityProvider identityProvider) {
        return allowFromLocalhostOnly();
    }

    @Override
    public boolean canReadInstance(IdentityProvider identityProvider, ProcessInstance<T> instance) {
        return allowFromLocalhostOnly();
    }

    @Override
    public boolean canUpdateInstance(IdentityProvider identityProvider, ProcessInstance<T> instance) {
        return allowFromLocalhostOnly();
    }

    @Override
    public boolean canDeleteInstance(IdentityProvider identityProvider, ProcessInstance<T> instance) {
        return allowFromLocalhostOnly();
    }

    @Override
    public boolean canSignalInstance(IdentityProvider identityProvider, ProcessInstance<T> instance) {
        return allowFromLocalhostOnly();
    }

    @Override
    public Set<String> visibleTo(ProcessInstance<?> instance) {
        return ((AbstractProcessInstance<?>) instance).visibleTo();
    }

    @Override
    public String identifier() {
        return "localHostOnly";
    }

}

Encryption of data at rest

Another important part of security is that the stored data (workflow instance data), in whatever data store is used, might require additional encryption at rest. This means that in case someone gains access to the raw stored data, they won’t be able to easily read it and extract information out of it.

Encryption at rest is available for any kind of data store supported by Automatiko. It requires enabling the codec that will intercept the data to be stored and encode it based on internal implementation details.

Automatiko comes with two out of the box codec implementations:

  • AES

  • Base64

Base64 is intended only for test purposes, to verify that data is being encoded, and should not be used in production.

AES

To use the out of the box AES codec implementation, the following properties need to be set inside the application.properties file

quarkus.automatiko.persistence.encryption=aes

automatiko.encryption.aes.key=XXXXXXXXXXXXX

where XXXXXXXXXXXXX is your AES key.

Base64

To use the out of the box Base64 codec implementation, the following property needs to be set inside the application.properties file

quarkus.automatiko.persistence.encryption=base64

Implement custom codec

Codecs are implemented in a pluggable way, so you can provide your own implementation with whatever algorithm is needed.

To do so, create a class that implements io.automatiko.engine.api.workflow.encrypt.StoredDataCodec and implement both methods

  • encode

  • decode

Any exception thrown that prevents successful encoding or decoding should be propagated up to the caller.

The codec implementation must be a CDI bean so that it is discovered and made available to the persistence layer.

Only one codec implementation is allowed for a given project.
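
Below is a minimal sketch of what such a custom codec could look like, using AES/GCM via the standard javax.crypto API. The byte[] based method signatures are an assumption about the StoredDataCodec interface, and the in-memory random key is for illustration only - a real implementation should load the key from secure configuration.

import java.security.SecureRandom;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.enterprise.context.ApplicationScoped;

import io.automatiko.engine.api.workflow.encrypt.StoredDataCodec;

// Sketch of a custom codec; the byte[] signatures are assumed, not confirmed.
@ApplicationScoped
public class AesGcmStoredDataCodec implements StoredDataCodec {

    private static final int IV_LENGTH = 12;        // recommended IV size for GCM
    private static final int TAG_LENGTH_BITS = 128; // authentication tag size

    private final SecretKeySpec key;
    private final SecureRandom random = new SecureRandom();

    public AesGcmStoredDataCodec() {
        // illustration only - a real codec should load the key from secure configuration
        byte[] rawKey = new byte[16];
        new SecureRandom().nextBytes(rawKey);
        this.key = new SecretKeySpec(rawKey, "AES");
    }

    @Override
    public byte[] encode(byte[] data) {
        try {
            byte[] iv = new byte[IV_LENGTH];
            random.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_LENGTH_BITS, iv));
            byte[] encrypted = cipher.doFinal(data);
            // prepend the IV to the ciphertext so decode can read it back
            byte[] result = new byte[IV_LENGTH + encrypted.length];
            System.arraycopy(iv, 0, result, 0, IV_LENGTH);
            System.arraycopy(encrypted, 0, result, IV_LENGTH, encrypted.length);
            return result;
        } catch (Exception e) {
            // propagate failures up to the caller
            throw new RuntimeException("Unable to encode stored data", e);
        }
    }

    @Override
    public byte[] decode(byte[] data) {
        try {
            byte[] iv = Arrays.copyOfRange(data, 0, IV_LENGTH);
            byte[] encrypted = Arrays.copyOfRange(data, IV_LENGTH, data.length);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_LENGTH_BITS, iv));
            return cipher.doFinal(encrypted);
        } catch (Exception e) {
            throw new RuntimeException("Unable to decode stored data", e);
        }
    }
}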