Configure Database Services for KOBIL Shift

This page describes how to configure the database services required by KOBIL Shift. Always check the chart package "README" file for specific advice.

General

Database-related settings are all configured in the KOBIL Shift metaconfiguration file "values.yaml". This covers the credential parameters for database access, the DB-service hostname, the port, and the database names per KOBIL Shift service. As KOBIL Shift consists of many services, each of which requires its own database, you will find many "database" sections within the KOBIL Shift service sections of the metaconfiguration file. In addition to the native DBMS services, KOBIL Shift also makes use of Kafka and Redis services.

Main database settings are configured in the "common" section of the metaconfiguration "values.yaml", plus specific parameters in the KOBIL Shift services sections.

Overview of database services:

Always refer to the KOBIL Shift charts README for the most current information.

  • Postgres (scram-sha-256 password hashing is NOT supported) - main DB-service for all KOBIL Shift services (except SCP services)
  • Redis
  • Kafka queues/topics
  • MongoDB (SCP services only)
  • Elasticsearch

For the supported versions in detail, see System Requirements.

General DBMS settings

KOBIL Shift charts can take the DB-service credentials either as plain text in the metaconfiguration file or from a dedicated Secret holding the credential sets. The configuration parameters for DB name, host, and port are always plain-text parameters in the metaconfiguration file.

Sample IDP-section

# -- Configuration for idp-core
idpCore:
  enabled: true
  replicaCount: 1
  database:
    host: postgres
    port: 5432
    name: "idp_core"
    auth:
      username: user
      password: "password"

Using Kubernetes Secrets

DB/Service login credentials

An important feature of the KOBIL Shift deployment is the use of Kubernetes Secrets (https://kubernetes.io/docs/concepts/configuration/secret/) instead of plain-text parameters in the metaconfiguration file "values.yaml". A Secret created in the namespace holds the DB credentials (see the metaconfiguration parameters common.existingSecretDatastoreCredentials and common.existingSecretAdminCredentials). This allows running a KOBIL Shift deployment without any credentials configured in the metaconfiguration "values.yaml" file. Find more details on using these Secrets to configure credentials, as well as which other parameters can be configured via Secrets, in the README (available via the charts repository).

When the parameter common.existingSecretDatastoreCredentials (and/or common.existingSecretAdminCredentials) is configured, it overrides the plain-text metaconfiguration parameters (i.e. idpCore.database.auth.username|password), which then become obsolete. You may still find the plain-text credential parameters in the deployment output, but they are not honored at Pod startup and runtime.
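
As a minimal sketch, such a Secret could be created with kubectl. The Secret name and the key names below are placeholders - the required structure is defined in the chart README:

# Create a Secret holding datastore credentials in the KOBIL Shift namespace
# (key names are placeholders; see the README for the expected structure)
kubectl create secret generic shift-datastore-credentials \
  --namespace kobil-shift \
  --from-literal=idp-core-username=user \
  --from-literal=idp-core-password='<strong-password>'

The Secret is then referenced in "values.yaml" via common.existingSecretDatastoreCredentials: "shift-datastore-credentials".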

DB/Service Communication Encryption Keys

The KOBIL Shift AST services use application-level encryption keys for encryption at the data content (DB) level and for service communication.

Keys can be configured individually per AST database or with one common configuration. KOBIL strongly recommends using a common configuration for the encryption keys. Find more information on this in the KOBIL Shift README.

  • the main parameter when using Kubernetes Secrets is existingSecretEncryptionKeys, which references a Secret holding the encryption key per service

  • the main parameter when not using Kubernetes Secrets is databaseEncryptionMasterKey.

IMPORTANT: You must ensure that your production installation is configured with its own unique encryption key; do not use "default" values, as they are unsafe. See the README for the encryption key structure.
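
One illustrative way to generate such a random 64-character alphanumeric key (any cryptographically sound generator will do; this one-liner is an assumption, not a KOBIL-prescribed method):

# Print a random 64-character alphanumeric string, e.g. for databaseEncryptionMasterKey
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64; echo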

Encryption keys are set in the "common" section of the metaconfiguration file "values.yaml":

metaconfiguration "common" section

# -- Section for configuration parameters that are common to more than one service.
common:
  # -- The name of an existing secret with datastore credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the datastore credentials configured in this file
  # are ignored.
  existingSecretDatastoreCredentials: ""
  # -- The name of an existing secret with admin credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the admin credentials configured in this file
  # are ignored.
  existingSecretAdminCredentials: ""

  datastores:
    # -- Global configuration of Redis used by all services
    redis:
      host: redis
      port: 6379

    database:
      # -- Optional TLS configuration for database connection.
      # Currently used by smartscreen and ast services.
      tls:
        # -- TLS mode. Supported values are
        # `PREFER`: This mode tries to establish the database connection using TLS.
        # If that fails, it tries a non-TLS connection. No server certificate validation is performed.
        # `VERIFY_CA`: This mode requires a TLS connection, i.e. there is no fallback
        # to non-TLS. This mode performs server certificate validation against the provided trust store.
        # `VERIFY_FULL`: This mode acts like VERIFY_CA with additional hostname verification of the server certificate.
        mode: PREFER
        trustStore:
          # -- Type of the truststore. Supported types are `JKS` and `PKCS12`.
          type: JKS
          # -- Truststore of the selected type in BASE64 encoding.
          # This setting is required for TLS modes `VERIFY_CA` and `VERIFY_FULL`.
          store: ""
          # -- Password to open the truststore. This setting is required when a truststore is provided.
          password: ""

    ast:
      # -- The name of an existing secret with encryption keys.
      # See README.md for details and the required structure.
      # NOTE: When it's set, the encryption keys configured in this file
      # are ignored.
      existingSecretEncryptionKeys: ""

      # -- Encryption master key for ast sessions.
      # Must be randomly generated and unique for each Shift deployment.
      # Must be set to an alphanumeric (UTF-8) string of length 64.
      # Changing it invalidates all current ast sessions.
      sessionEncryptionMasterKey: ""

      # -- Encryption master key for sensitive data stored in the database.
      # Must be randomly generated and unique for each Shift deployment.
      # Must be set to an alphanumeric (UTF-8) string of length 64.
      # This value cannot be changed after installation.
      databaseEncryptionMasterKey: ""

      # -- Redis credentials used by ast services.
      redis:
        user: default
        password: password

TLS configuration

To configure TLS-secured database communication in the "common" section, you have to add a truststore to the KOBIL Shift namespace.

Configuration process:

  • ensure you have a JKS truststore containing the appropriate certificates
  • save the truststore to the KOBIL Shift namespace in a Secret (see the README for the structure, and the sketch after this list)
  • validate that the Secret is loaded at Pod runtime
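
A sketch of the first two steps, assuming the database CA certificate is available as ca.crt; the file, alias, and Secret names are placeholders, and the exact Secret structure the charts expect is described in the README:

# 1. Build a JKS truststore containing the database CA certificate
keytool -importcert -alias db-ca -file ca.crt \
  -keystore truststore.jks -storepass '<truststore-password>' -noprompt

# 2. Store it in a Secret in the KOBIL Shift namespace
kubectl create secret generic shift-db-truststore \
  --namespace kobil-shift --from-file=truststore.jks

# Alternatively, embed it BASE64-encoded in "values.yaml" under
# common.datastores.database.tls.trustStore.store:
base64 -w0 truststore.jks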

For the DB-side configuration, please refer to the respective DB manuals.

Redis service configuration

Redis services can be installed with or without cluster mode. KOBIL Shift can make use of both Redis service configurations, but with different parameter specifications in the metaconfiguration.

Once the Redis service (or the Redis cluster service) is properly set up, configure the Redis host setting in the metaconfiguration file "values.yaml" in the format shown below.

redis service host configuration

common:
  datastores:
    # -- Common configuration of Redis used by all services
    redis:
      host: redis
      # Host: {redis svc name}.{env namespace}.svc.cluster.local:<port>
      # e.g. redis-cluster.kobil-shift.svc.cluster.local:6379
      port: 6379
    ..
    ast:
      redis:
        user: default
        password: password

Note: Both the endpoints array format and the new host/port format shown above will work as expected.

During KOBIL Shift migrations it may be necessary to refresh and clean up the Redis data structures (keys). This can be done with the redis-cli FLUSHALL command, which drops the existing data and data objects/structures.

A sample procedure for this is as follows: identify the Redis master (role) Pod, then use that Pod to execute the FLUSHALL command.
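
As a sketch (the namespace, pod label, and pod name are illustrative; adjust them to your Redis installation):

# Find the Redis pods and identify the one with the master role
kubectl get pods -n kobil-shift -l app.kubernetes.io/name=redis

# Execute FLUSHALL on the master pod ("redis-master-0" is a placeholder)
kubectl exec -n kobil-shift redis-master-0 -- \
  redis-cli -a "$REDIS_PASSWORD" FLUSHALL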

Kafka services

KOBIL Shift can make use of the Strimzi Kafka operator to create the required Kafka services (a running Kafka cluster setup with ZooKeeper management and control logic). See the configuration in the KOBIL Shift metaconfiguration, where it is enabled by default.

# -- Configuration of the Kafka custom resources. Requires [Strimzi Kafka operator](https://strimzi.io/)
strimzi:
  # -- Enable/disable deployment of Kafka custom resources [true|false]
  enabled: true

It is also possible to configure access to existing Kafka services in your environment. Find details here: External Kafka

When multiple deployments access one centralized Kafka service, the Kafka topic names need to be customized to be unique per deployment. Please contact the KOBIL project team for assistance if this customization is required. By design, KOBIL Shift exclusively uses KafkaTopics with pre-defined names.

Important note: Once the Kafka services are initially deployed, they should not be modified at runtime. Reconfiguring the Kafka services is not supported during KOBIL Shift "uptime".

If the initially created Kafka cluster service requires tuning to scale up, this is possible by removing the Kafka-cluster-related StatefulSets, then dropping the persistent volumes, and after that running a new KOBIL Shift deployment.

Key configuration details:

  • configuration sections in the KOBIL Shift metaconfiguration file: "strimzi" and "kafka"
  • typical configuration setup for Kafka using the available sizing profiles "basic", "tuned", and "custom" ("custom" requires the specification section strimzi.sizing.custom, covering subsections for kafka and zookeeper; see the sketch after this list)
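
A hedged sketch of a "custom" sizing setup in "values.yaml". Only strimzi.enabled and the strimzi.sizing.custom section with its kafka and zookeeper subsections are taken from this page; the profile selector and the resource keys/values below are assumptions - check the chart README for the authoritative structure:

strimzi:
  enabled: true
  sizing:
    # one of "basic", "tuned", "custom"; the exact parameter name for
    # selecting the profile is an assumption, see the chart README
    profile: custom
    custom:
      kafka:
        # illustrative resource values only
        replicas: 3
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
      zookeeper:
        replicas: 3
        resources:
          requests:
            cpu: 500m
            memory: 1Gi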

Recreate the Kafka services

The intention behind recreating the Kafka services is tuning, in case the initially configured Kafka services do not meet the required performance.

  • scale down the KOBIL Shift deployments (the AST and IDP deployments, also smart-screen and smart-dashboard), as well as the KOBIL Shift operator deployment
  • verify and find the kafkas.kafka.strimzi.io and kafkatopics.kafka.strimzi.io objects from Strimzi
  • verify and find the PVCs and PVs used by the Kafka services
  • verify and find the KOBIL servicegroup CR for Kafka (see the sketch after this list)
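
These verification steps could look like this (the servicegroup resource name is a placeholder; use the CRD name from your installation):

# Strimzi Kafka objects in the KOBIL Shift namespace
kubectl get kafkas.kafka.strimzi.io -n kobil-shift
kubectl get kafkatopics.kafka.strimzi.io -n kobil-shift

# PVCs and PVs used by the Kafka services
kubectl get pvc -n kobil-shift | grep -i -e kafka -e zookeeper
kubectl get pv | grep kobil-shift

# KOBIL servicegroup CR for kafka ("servicegroups" is a placeholder name)
kubectl get servicegroups -n kobil-shift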

Proceed as follows:

  • scale down the deployments, e.g.
    for DEP in $(kubectl get deployments | awk '{print $1}' | grep <pattern>); do echo $DEP; kubectl scale --replicas=0 deployment/$DEP; done
  • delete the kafkas for this namespace, e.g. kubectl delete kafkas.kafka.strimzi.io <pattern>
  • delete the kafkatopics for this namespace, e.g.
    for TOPIC in $(kubectl get kafkatopics.kafka.strimzi.io | awk '{print $1}' | grep -v NAME); do echo $TOPIC; kubectl delete kafkatopics.kafka.strimzi.io $TOPIC; done
  • kubectl delete pvc <zookeeper_PVC> <kafka-cluster_PVC>
  • reconfigure the "strimzi" setup in the KOBIL Shift metaconfiguration to cover the new Kafka configuration
  • run the helm upgrade with the new metaconfiguration file, which implicitly reverses the initial down-scaling of the deployments (see the sketch below)
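
The final step might look like this (release name and chart reference are placeholders):

# Re-deploy with the new Kafka configuration; the upgrade restores the
# replica counts that were scaled down earlier
helm upgrade <release-name> <shift-chart> \
  --namespace kobil-shift \
  -f values.yaml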