Usage
After installation, the CRD for this operator must be created:
kubectl apply -f /etc/stackable/opa-operator/crd/openpolicyagent.crd.yaml
To create a single-node OPA (v0.51.0) cluster with Prometheus metrics exposed on port 8081:
apiVersion: opa.stackable.tech/v1alpha1
kind: OpaCluster
metadata:
  name: simple-opa
spec:
  image:
    productVersion: "0.51.0"
    stackableVersion: "0.0.0-dev"
  servers:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
Please note that the version you need to specify is not only the version of OPA you want to roll out; it must be amended with a Stackable version as shown. The Stackable version is the version of the underlying container image used to execute the processes. For a list of available versions, please check our image registry. It is generally safe to simply use the latest available image version.
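Assuming the manifest above has been saved as simple-opa.yaml (a placeholder file name), it can be deployed and checked as follows; the label selector is an assumption based on common Kubernetes labeling conventions:
kubectl apply -f simple-opa.yaml
kubectl get pods -l app.kubernetes.io/instance=simple-opa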
Policy Language
Users can define policies using Rego, OPA's policy language.
Policy definitions are deployed as ConfigMap resources as described in the implementation notes.
Here is an example:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    opa.stackable.tech/bundle: "true" (1)
data:
  test.rego: | (2)
    package test
    hello {
      true
    }
    world {
      false
    }
| 1 | Mark this ConfigMap as a bundle source. |
| 2 | test.rego is the file name to use inside the bundle for these rules. |
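Once the bundle is loaded, the rules can be evaluated via OPA's REST data API. A minimal sketch, assuming the example cluster from above is reachable as simple-opa on port 8081 (service name and port taken from the earlier example):
curl http://simple-opa:8081/v1/data/test/hello
This should return {"result": true}. Querying test/world instead returns an empty result, because that rule's body is false and the rule is therefore undefined.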
Monitoring
The managed OPA instances are automatically configured to export Prometheus metrics. See Monitoring for more details.
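For a quick check, OPA's /metrics endpoint can be queried directly; the service name is an assumption and the port matches the example above:
curl http://simple-opa:8081/metrics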
Log aggregation
The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:
spec:
  clusterConfig:
    vectorAggregatorConfigMapName: vector-aggregator-discovery
  servers:
    config:
      logging:
        enableVectorAgent: true
        containers:
          opa:
            console:
              level: NONE
            file:
              level: INFO
The Stackable operator for OPA only supports automatic log configuration due to the lack of customization options in OPA logging.
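The referenced discovery ConfigMap has to exist and point at the aggregator. A minimal sketch, assuming the aggregator is reachable at vector-aggregator:6000 (the name, address, and ADDRESS key are assumptions following the usual Stackable discovery pattern):
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vector-aggregator-discovery
data:
  ADDRESS: vector-aggregator:6000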
Furthermore, the only console log level that can be set for the opa and bundle-builder containers is NONE, which deactivates console logging. Any other console log level in these containers will be overwritten by the file log level, as in the sketch below.
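For example, deactivating console logging for the bundle-builder container follows the same structure as the opa container above (a sketch):
servers:
  config:
    logging:
      enableVectorAgent: true
      containers:
        bundle-builder:
          console:
            level: NONE
          file:
            level: INFO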
Further information on how to configure logging can be found in Logging.
Configuration & Environment Overrides
The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).
| Do not override port numbers. This will lead to faulty installations. | 
Environment Variables
Environment variables can be (over)written by adding the envOverrides property.
For example per role group:
servers:
  roleGroups:
    default:
      config: {}
      envOverrides:
        MY_ENV_VAR: "MY_VALUE"or per role:
servers:
  envOverrides:
    MY_ENV_VAR: "MY_VALUE"
  roleGroups:
    default:
      config: {}
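To verify that an override took effect, the environment of a running server Pod can be inspected; the Pod name below is a placeholder following the usual <cluster>-<role>-<role group> naming scheme:
kubectl exec simple-opa-server-default-0 -- env | grep MY_ENV_VAR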
Storage for data volumes
The OPA Operator currently does not support using PersistentVolumeClaims for internal storage.
Resource requests
Stackable operators handle resource requests in a slightly different manner than Kubernetes. Resource requests are defined on role or role group level. See Roles and role groups for details on these concepts. On a role level this means that, for example, all workers will use the same resource requests and limits. This can be further specified on role group level (which takes priority over the role level) to apply different resources.
This is an example on how to specify CPU and memory resources using the Stackable Custom Resources:
---
apiVersion: example.stackable.tech/v1alpha1
kind: ExampleCluster
metadata:
  name: example
spec:
  workers: # role-level
    config:
      resources:
        cpu:
          min: 300m
          max: 600m
        memory:
          limit: 3Gi
    roleGroups: # role-group-level
      resources-from-role: # role-group 1
        replicas: 1
      resources-from-role-group: # role-group 2
        replicas: 1
        config:
          resources:
            cpu:
              min: 400m
              max: 800m
            memory:
            limit: 4Gi
In this case, the role group resources-from-role will inherit the resources specified on the role level, resulting in a maximum of 3Gi memory and 600m CPU resources.
The role group resources-from-role-group has a maximum of 4Gi memory and 800m CPU resources (which overrides the role CPU resources).
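Applied to OPA, the same mechanism could look as follows; this is a sketch with illustrative values, not recommendations:
---
apiVersion: opa.stackable.tech/v1alpha1
kind: OpaCluster
metadata:
  name: simple-opa
spec:
  image:
    productVersion: "0.51.0"
    stackableVersion: "0.0.0-dev"
  servers:
    config: # role-level resources, inherited by all role groups
      resources:
        cpu:
          min: 250m
          max: 500m
        memory:
          limit: 512Mi
    roleGroups:
      default: {}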
| For Java products, the actual Heap memory used is lower than the specified memory limit because other processes in the Container require memory to run as well. Currently, 80% of the specified memory limit is passed to the JVM. |
For memory, only a limit can be specified, which will be set as both memory request and limit in the Container. This guarantees the Container the full amount of memory during Kubernetes scheduling.
A minimal OPA setup consists of 1 Pod per Node (DaemonSet) and has the following resource requirements per scheduled Pod:
- 600m CPU request
- 1200m CPU limit
- 512Mi memory request and limit
Of course, additional services require additional resources. For Stackable components, see the corresponding documentation on further resource requirements.
Corresponding to the values above, the operator uses the following resource defaults for the main app container:
servers:
  roleGroups:
    default:
      config:
        resources:
          cpu:
            min: 250m
            max: 500m
          memory:
            limit: 256Mi
| The default values are most likely not sufficient to run a proper cluster in production. Please adapt according to your requirements. |