KOBIL SHIFT
KOBIL SHIFT helm chart
Prerequisites
- Kubernetes version 1.21, 1.22, 1.23, 1.24, 1.25.
- Helm version 3.10.2.
- Postgres version 13.4 for all services except SCP. Note: scram-sha-256 password hashing is NOT supported.
- MongoDB version 4.4 for SCP.
- Redis 6.2.
- Istio. Currently tested with 1.17.2 and 1.18.2.
- Strimzi Kafka Operator with support for Kafka 3.4.0 (0.33.2 - 0.37.0). Currently tested with 0.33.2.
helm install -n kafka strimzi-kafka-operator strimzi-kafka-operator --version x.y.z --repo https://strimzi.io/charts/ --set watchAnyNamespace=true
- Access to KOBIL chart museum
helm repo add kobil https://charts.kobil.com --username {chart_username} --password {chart_password}
- An imagePullSecret providing access to the relevant repositories at Azure (kobilsystems.azurecr.io)
kubectl create secret docker-registry registry-azure \
  --docker-server=kobilsystems.azurecr.io \
  --docker-username=your_user_token_name \
  --docker-password=your_password
- KOBIL Shift operator
Deploy KOBIL Shift-Operator
KOBIL Shift Operator charts are available at https://charts.kobil.com.
Before deployment, configure the image pull secret, Docker image registry, and Helm chart repository credentials in the configuration file shift-operator-values.yaml:
global:
  imagePullSecrets:
    - registry-azure
  registry: kobilsystems.azurecr.io
  helmRepo:
    url: https://charts.kobil.com
    username: ""
    password: ""
helm install shift-operator -f shift-operator-values.yaml -n shift kobil/shift-operator --version x.y.z
Verify that KOBIL Shift-Operator is running by executing:
kubectl -n shift get deployments
Also verify that the custom resource definition servicegroups.shift.kobil.com is available by executing kubectl get crd.
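For example, the presence of the CRD can be checked directly with:
kubectl get crd servicegroups.shift.kobil.com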
Deploy KOBIL Shift
The next step is to deploy KOBIL Shift in the same namespace where the Shift Operator is running.
Set the appropriate configuration for the KOBIL Shift services in the configuration file shift-values.yaml. The included values.yaml can be used as a template. See sections Values and Issuer CA for additional information.
helm install shift -f shift-values.yaml -n shift kobil/shift --version x.y.z
Deploying the Shift chart creates multiple servicegroups.shift.kobil.com objects, which are managed by the Shift Operator. Use
kubectl -n shift get servicegroups.shift.kobil.com
to obtain an overview of the deployed servicegroups. The READY column shows the status of the servicegroup. The status changes to true once the Shift Operator has successfully deployed all services in the servicegroup and the corresponding workloads are in a ready state.
If a servicegroup fails to become ready, use command
kubectl -n shift describe servicegroups.shift.kobil.com <name-of-servicegroup>
to obtain information about the deployment error.
In addition, the Shift chart creates resources which are managed by the Strimzi Kafka Operator and Istio. The status of the Kafka cluster can be observed using the command
kubectl -n shift get kafkas.kafka.strimzi.io
The deployed Istio resources can be viewed using commands
kubectl -n shift get gateways.networking.istio.io
kubectl -n shift get virtualservices.networking.istio.io
kubectl -n shift get destinationrules.networking.istio.io
Post deployment configuration
The following settings need to be performed in the IDP UI. Open https://idp.{{global.routing.domain}}/auth/admin
and log in using the IDP master admin credentials:
username: {{ global.idp.adminUser.username }}
password: {{ global.idp.adminUser.password }}
Configure SMTP settings under 'Realm Settings -> Email'.
Create a confidential OIDC client workspacemanagement and configure its client_id and client_secret in the file shift-values.yaml.
smartdashboardKongConfigurationBackend:
  # -- client_id and client_secret of an OIDC client in IDP master realm. Must be manually created.
  config:
    masterClientId: "workspacemanagement"
    masterClientSecret: "client_secret"
Then execute command
helm upgrade shift -f shift-values.yaml -n shift kobil/shift --version x.y.z
Open the URL https://smartdashboard.{{global.routing.domain}}/dashboard/master/workspace-management to access the KOBIL Portal.
Issuer CA
Every SHIFT deployment includes multiple identities acting as Certificate Authorities (CAs).
- A CA must have a key pair (CA Key Pair).
- A CA must have a digital certificate (CA Certificate) signed either by its own private key, as a self-signed root certificate, or be signed by another CA's private key, as an intermediate certificate.
- The main CA identity of a SHIFT deployment will be named Issuer CA.
- The main CA identity's certificate will be named Issuer CA Certificate.
- The main CA identity's key pair will be named Issuer CA Key Pair.
- A sub CA Certificate of a SHIFT deployment, which is signed by the Issuer CA's private key, will be named Tenant Signer CA Certificate.
- Sub CA key pairs will be named Tenant Signer CA Key Pair.
An Issuer CA Certificate and Issuer CA Key Pair must be generated for each SHIFT deployment. The Issuer CA Key Pair cannot be changed afterwards. Generate the Issuer CA Certificate according to the following instructions.
- The Issuer CA Key Pair must be in PKCS#8 format. Both the Issuer CA Certificate and Issuer CA Key Pair must be DER-encoded.
- The Issuer CA Key Pair must use one of the supported algorithms:
  - RSA with >= 2048 bit keys
  - ECDSA with one of the supported curves: secp256r1 (or P-256), secp384r1 (or P-384), secp521r1 (or P-521)
  - Ed25519
- Set the key algorithm used by the Issuer CA Key Pair using ca.signers.key_generation.algorithm. If using an intermediate certificate, it is recommended to use the same key algorithm that is configured for the root certificate. The curve parameter (for ECDSA), or the strength parameter (for RSA), can be set to a more secure configuration, compared to the root certificate, to provide additional security.
- The Issuer CA Certificate must have the Basic Constraints Certificate Extension with CA=True and pathLen unset or >= 1. The Certificate Extension must be marked as critical.
- The Issuer CA Certificate must have the Key Usage Certificate Extension with at least the bits for keyCertSign and cRLSign set. The Certificate Extension must be marked as critical. Other usage bits should not be set.
- The Issuer CA Certificate must have specific Certificate Extension policies set. These Certificate Extensions depend on the feature sets that are required. Alternatively, the Certificate Extensions may specify anyPolicy. These Certificate Extension policies should not be marked critical. The following Certificate Extension policies are supported:
  - Base policy with OID 1.3.6.1.4.1.14481.109.4.1. This policy must always be added.
  - SCP policy with OID 1.3.6.1.4.1.14481.109.4.2. This policy is needed when scp services are deployed and messaging features are used.
  - mTLS policy with OID 1.3.6.1.4.1.14481.109.4.3. This policy is needed when ast services are configured to enforce mutual TLS communication with clients.
- The above policies are umbrella policies, combining multiple single policies, required by the associated feature set. It is recommended to use these umbrella policies. They combine the following single policies:
  - Base policy contains 1.3.6.1.4.1.14481.109.1.0 (profile LEAF_CA) and 1.3.6.1.4.1.14481.109.1.4 (profile AST_DEVICE)
  - SCP policy contains 1.3.6.1.4.1.14481.109.1.1 (profile SIGNATURE), 1.3.6.1.4.1.14481.109.1.2 (profile AUTHENTICATION), 1.3.6.1.4.1.14481.109.1.3 (profile ENCRYPTION), 1.3.6.1.4.1.14481.109.1.6 (profile SIGNATURE_GATEWAY), 1.3.6.1.4.1.14481.109.1.7 (profile AUTHENTICATION_GATEWAY), 1.3.6.1.4.1.14481.109.1.8 (profile ENCRYPTION_GATEWAY)
  - mTLS policy contains 1.3.6.1.4.1.14481.109.1.9 (profile TLS_CLIENT) and 1.3.6.1.4.1.14481.109.1.10 (profile TLS_CLIENT_AND_KEY)
- The Issuer CA Certificate may have the Extended Key Usage Certificate Extension with the id_kp_OCSPSigning key purpose set. Other key purpose IDs should not be set.
- Other Certificate Extensions should not be present.
Below is a simple example of how to generate an Issuer CA Certificate as a self-signed root certificate using OpenSSL. The example generates an Issuer CA Certificate for all three umbrella policies described above.
- Create file openssl.cnf with the following content:
[req]
default_bits = 4096
encrypt_key = no
default_md = sha512
prompt = no
utf8 = yes
x509_extensions = v3_req
distinguished_name = req_distinguished_name
# Adjust below values as required
[req_distinguished_name]
C = DE
ST = Rheinland-Pfalz
L = Worms
O = KOBIL GmbH
CN = KOBIL Shift Issuer CA
[v3_req]
basicConstraints = critical, CA:TRUE, pathlen:1
keyUsage = critical, keyCertSign, cRLSign
# explicit policies
certificatePolicies = 1.3.6.1.4.1.14481.109.4.1, 1.3.6.1.4.1.14481.109.4.2, 1.3.6.1.4.1.14481.109.4.3
# or alternatively anyPolicy
# certificatePolicies = 2.5.29.32.0
- Create an ECDSA key-pair for curve P-521, convert it to PKCS#8 format, and store it in file key.der.
openssl ecparam -name P-521 -genkey -noout -outform DER | openssl pkcs8 -inform DER -topk8 -nocrypt -outform DER -out key.der
- Create a self-signed certificate with a validity of 10 years for the public key generated in the previous step and store it in file cert.der.
openssl req -nodes -x509 -days 3650 -config openssl.cnf -key key.der -keyform DER -out cert.der -outform DER
- Base64 encode key and certificate. The content of the resulting files key.b64 and cert.b64 can be added to the values common.ast.issuer.key and common.ast.issuer.certs, respectively.
openssl enc -a -A -in key.der -out key.b64
openssl enc -a -A -in cert.der -out cert.b64
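The corresponding section of shift-values.yaml then looks roughly like the following sketch; the value paths follow the common.ast.issuer values mentioned above, and the placeholder strings stand for the file contents.
common:
  ast:
    issuer:
      # content of key.b64 (Issuer CA Key Pair, PKCS#8, DER, base64 encoded)
      key: "<content of key.b64>"
      # content of cert.b64 (Issuer CA Certificate, DER, base64 encoded)
      certs: "<content of cert.b64>"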
Updating policies
It is possible to change the supported Certificate Extension policies of the Issuer CA Certificate. This can be done by reissuing the Issuer CA Certificate with the updated Certificate Extension policies.
When reissuing the Issuer CA Certificate, the Issuer CA Key Pair, both the public and private keys, must not be changed.
When using the above OpenSSL example, edit the openssl.cnf config file and adjust the policies accordingly.
Then reissue the Issuer CA Certificate using the command:
openssl req -nodes -x509 -days 3650 -config openssl.cnf -key key.der -keyform DER -out cert.der -outform DER
Then base64 encode it using command
openssl enc -a -A -in cert.der -out cert.b64
Then update the value common.ast.issuer.certs with the content of file cert.b64.
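Afterwards, apply the changed values by upgrading the shift helm release as shown earlier:
helm upgrade shift -f shift-values.yaml -n shift kobil/shift --version x.y.z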
After updating the Certificate Extension policies of the Issuer CA Certificate, existing Tenant Signer CA Certificates must be manually updated using the following instructions:
- For each tenant in which a Tenant Signer CA Certificate exists, obtain an access token with admin write privileges.
  - In the default permission configuration, the required role is ks-management/Admin.
  - If the permission configuration was changed from the default, use a token with any of the api.security.jwtAuth.external.writeAccessRoles from the AST-CA service's values.
- Execute PATCH /v1/tenants/<tenant>/signers/admin with the admin token and an empty request body, e.g.
curl -X 'PATCH' https://asts.example.com/v1/tenants/<tenant>/signers/admin \
  --header "authorization: bearer <token>"
If successful, the AST-CA service returns a body that looks like this:
{
"id": "<the new signer ID>",
"tenant": "<tenant>",
"name": "<signer name>"
} -
The AST-CA Service has enqueued the Tenant Signer CA Certificate to be reissued and will recreate it using the same Tenant Signer CA Key Pair
-
It is recommended to recreate the SDK Config JWT for tenants in which the Tenant Signer CA Certificate as been reissued, but it is not required.
Mutual TLS
Shift supports a mode where clients are forced to perform mutual TLS authentication when accessing certain endpoints. This feature is configured using the values section common.mutualTLS.
Set common.mutualTLS.services.enabled: true
to enable this feature. When enabled, the first instance that terminates TLS for client traffic must be configured to optionally perform mutual TLS authentication.
The trusted CA certificates to use when verifying client certificates must be the same certificates provided in value common.ast.issuer.certs
or alternatively in the existing secret common.ast.issuer.existingSecretIssuerCa.
Client certificates must be added to the request headers of requests which are forwarded to upstream services. The names of the request headers for client certificates must be specified as a list using the value common.mutualTLS.services.certRequestHeaders. Multiple header names are supported.
common:
  mutualTLS:
    services:
      enabled: true
      certRequestHeaders:
        - "x-forwarded-client-cert"
In case the Istio ingress gateway is the first instance that terminates TLS for client traffic, set common.mutualTLS.istioIngressGateway.enabled: true
to configure it for mutual TLS.
The trusted CA certificates to use when verifying client certificates must also be configured. The required format is a single line base64 encoded list of certificates in PEM format. Either provide them directly using the value common.mutualTLS.istioIngressGateway.cacerts or manually put them in a Kubernetes secret and set common.mutualTLS.istioIngressGateway.useExistingCaCertsSecret: true.
The name of the existing secret must be {{ .Values.global.routing.tlsSecret }}-cacert, e.g. tls-secret-cacert when using the default.
When using mutual TLS on the Istio ingress gateway, the value of common.mutualTLS.services.certRequestHeaders:
must not be changed.
common:
  mutualTLS:
    services:
      enabled: true
    istioIngressGateway:
      enabled: true
      useExistingCaCertsSecret: true
Example of an existing CA certificates secret:
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret-cacert
type: Opaque
data:
  cacert: "single line base64 encoded list of certificates in PEM format"
When using the mutual TLS feature, the certificate policy mTLS
must be added to the Issuer CA Certificate. See Section Issuer CA for details on the policies. See Section Updating policies for details on how to add the mTLS
policy to an existing Issuer CA Certificate and how to update existing Tenant Signer CA Certificates.
Upgrading
Upgrading from versions before 0.153.0
This shift version updates the version of the included Kafka cluster from 3.2.0 to 3.4.0. Supported Strimzi Kafka Operator versions change from 0.29.0 - 0.33.2 to 0.33.2 - 0.37.0.
Before applying the update, ensure that Strimzi Kafka Operator version 0.33.2 is installed. This is the only version that supports both Kafka 3.2.0 and 3.4.0; cf. the supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.2.0 is running before applying the upgrade, i.e. shift version 0.133.0 or newer.
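If the operator still needs to be moved to that version, this can be done following the install command pattern from the prerequisites, for example (namespace and release name must match your installation):
helm upgrade -n kafka strimzi-kafka-operator strimzi-kafka-operator --version 0.33.2 --repo https://strimzi.io/charts/ --set watchAnyNamespace=true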
Upgrading from versions before 0.134.0
Shift version 0.134.0 removes two no longer needed Kafkatopic Kubernetes resources (com.kobil.smartscreen.resource-changes and com.kobil.smartscreen.events). Since the included Kafka cluster by default is set to prevent topic deletion, the Kafka topics are not actually deleted and the Kafkatopic Kubernetes resources will be automatically recreated by the Strimzi topic operator. This has no negative impact. Optionally, the following steps can be performed after updating to shift version 0.134.0 to permanently delete the Kafka topics:
- Enable topic deletion in Kafka by adding the following to custom-values.yaml and upgrade the shift helm release.
strimzi:
  valuesOverride:
    kafka:
      config:
        delete.topic.enable: "true"
- Delete the Kafkatopic Kubernetes resources com.kobil.smartscreen.resource-changes and com.kobil.smartscreen.events:
kubectl -n shift delete kafkatopics.kafka.strimzi.io com.kobil.smartscreen.resource-changes com.kobil.smartscreen.events
- Remove the valuesOverride block and upgrade the shift helm release to disable topic deletion.
Upgrading from versions before 0.133.0
This shift version updates the version of the included Kafka cluster from 3.0.0 to 3.2.0. Supported Strimzi Kafka Operator versions change from 0.26.0 - 0.29.0 to 0.29.0 - 0.33.2.
Before applying the update, ensure that Strimzi Kafka Operator version 0.29.0 is installed. This is the only version that supports both Kafka 3.0.0 and 3.2.0; cf. the supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.0.0 is running before applying the upgrade, i.e. shift version 0.74.0 or newer.
Upgrading from versions before 0.93.0
Updating to this version causes downtime of Smart Screen until a manual migration is performed. The existing Smart Screen must be migrated manually after performing the upgrade. Migration is done using an HTTP request to smartscreen-services. If migration is omitted, the services will launch, but clients will see an empty Smart Screen. Any changes to Smart Screen happening after the update and before the migration will be overwritten. Since this service is not exposed outside of the cluster, port forwarding must be used, e.g.
kubectl -n <shift-namespace> port-forward svc/<svc-name-smartscreen-services> 8080:80
Then execute the following curl command.
curl -X 'POST' http://localhost:8080/v1/commands/migrate
Upgrading from versions before 0.86.0
Due to an incompatibility in the Infinispan version used by idp-core, a rolling update is not possible when using the default idp-core image. The idp-core deployment must be scaled down to 0 before applying the upgrade. This causes downtime.
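The scale down could look like the following sketch, using the same placeholder style as the other commands in this document (the deployment name is a placeholder):
kubectl -n <shift-namespace> scale --replicas=0 deployment <idp-core-deployment-name>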
Upgrading from versions before 0.80.0
Shift version 0.80.0 removes built-in defaults for security related configurations. This includes ast-services session and database encryption keys as well as the issuer private key and certificate. To allow existing installations to keep functioning, a new value testInstallation was added. When set to true, the previous defaults are used.
Note that testInstallation: true
must only be used for test and demo deployments and is not suitable for productive usage.
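If this is intended, the corresponding entry in shift-values.yaml could look like the following sketch; the exact location of the testInstallation value is not specified in this section, and top level is an assumption here.
# assumed top-level value; only for test and demo deployments, never for production
testInstallation: true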
Upgrading from versions before 0.68.0
- Prepare for the upgrade
  - Some Kafka topics were renamed, which means that the Kafkatopic resource is reinstalled. To avoid topic deletion, ensure that the kafka option delete.topic.enable: "false" is set (this is the default since shift version 0.47.0).
  - To avoid that the persistent volumes (pv) are deleted if the corresponding persistent volume claims (pvc) are accidentally removed, edit the persistent volume resources and change spec.persistentVolumeReclaimPolicy from Delete to Retain (see the example command after this list).
  - Some CD tools prune the pvc resources during the upgrade. In case of ArgoCD, this can be avoided by adding the following to values.yaml (this requires shift version 0.45.0 or newer).
kafka:
  kafka:
    extraValues:
      template:
        persistentVolumeClaim:
          metadata:
            annotations:
              argocd.argoproj.io/sync-options: Prune=false
  zookeeper:
    extraValues:
      template:
        persistentVolumeClaim:
          metadata:
            annotations:
              argocd.argoproj.io/sync-options: Prune=false
  - Any of the above config changes must be applied to the currently used version of shift.
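The reclaim policy change mentioned above can, for example, be applied per persistent volume with a kubectl patch like the following sketch (<pv-name> is a placeholder):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'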
- Before the upgrade:
  - In values.yaml, remove global.kafka.enabled and replace it with strimzi.enabled.
  - Additional Kafka topics added via value kafka.topics must be migrated to value strimzi.additionalTopics. See Section Values for details.
- After performing the update, the ast-stream application needs to be manually reset using the Kafka Streams Application Reset Tool. This is required because the partitions of the corresponding Kafka topics were increased.
  - Enable topic deletion in Kafka by adding the following to values.yaml and upgrade the shift helm release.
strimzi:
  valuesOverride:
    kafka:
      config:
        delete.topic.enable: "true"
  - Scale down ast-stream service to zero. This can be done using command:
kubectl -n <shift-namespace> scale --replicas=0 deployment <ast-stream-deployment-name>
  - Reset the stream application using command:
kubectl -n <shift-namespace> exec -it <kafka-pod-name-0> \
  /bin/bash -- bin/kafka-streams-application-reset.sh \
  --bootstrap-servers localhost:9092 \
  --application-id com.kobil.ast.stream
The output of the command will be similar to
No input or intermediate topics specified. Skipping seek.
Deleting all internal/auto-created topics for application com.kobil.ast.stream
Done.
  - Scale up ast-stream to the previous replicas using command:
kubectl -n <shift-namespace> scale --replicas=1 deployment <ast-stream-deployment-name>
  - Disable topic deletion in Kafka by removing the values added in the first step and upgrade the shift helm release.
- After performing the update, old topics can be deleted. This step is optional.
  - Enable topic deletion in Kafka by adding the following to values.yaml and upgrade the shift helm release.
strimzi:
  valuesOverride:
    kafka:
      config:
        delete.topic.enable: "true"
  - Open a shell in the running Kafka container:
kubectl -n <shift-namespace> exec -it <kafka-pod> -- sh
  - Execute the following commands to delete old topics:
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic ast.audit
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic audit
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic health-check-topic
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.client.management
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.client.management.event
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.ast.healthCheck
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.ast.ca.signerCreated
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.astlogin.events
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.astmanagement.events
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.scp.notifier.push_messages
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.vertx.smartscreen.events
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.vertx.smartscreen.smartScreenTopic
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic checkStatus
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createOperation
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createTenant
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createTransaction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic creditAction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic creditActionResult
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalAction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalActionResult
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalBalanceTransaction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateCancelTransaction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateTransactionCreationAndPayment
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateTransactionPayment
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationAction
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationActionResult
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationCallback
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic responseNotification
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic securityNotification
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic statusCallback
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic statusNotification
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionCallback
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionNotificationDeliveryData
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionRequestDeliveryData
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionRequestNotification
  - Remove the valuesOverride block and upgrade the shift helm release to disable topic deletion.
Sizing
Sizing of the Kafka cluster deployed using Strimzi Kafka Operator can be configured using the value strimzi.sizing.mode. Supported values are 'basic', 'tuned', and 'custom'.
When using mode 'custom', the values strimzi.sizing.custom.kafka and strimzi.sizing.custom.zookeeper must be specified. See also the documentation on sizing and configuration.
Note: Changing the sizing mode after deployment is highly discouraged, as it affects the topic replica count and partition assignment to nodes. It can even lead to data loss.
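For example, selecting the tuned preset in shift-values.yaml looks like this:
strimzi:
  sizing:
    # one of "basic", "tuned", "custom"
    mode: "tuned"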
- Performance mode 'basic' corresponds to configuration
strimzi:
  sizing:
    mode: "custom"
    custom:
      kafka:
        replicas: 1
        resources:
          requests:
            memory: 2Gi
            cpu: "100m"
          limits:
            memory: 2Gi
        jvmOptions:
          -Xms: 1024m
          -Xmx: 1024m
        config:
          auto.create.topics.enable: "false"
          delete.topic.enable: "false"
          default.replication.factor: 1
          min.insync.replicas: 1
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
      zookeeper:
        replicas: 1
        resources:
          requests:
            memory: 768Mi
            cpu: "50m"
          limits:
            memory: 768Mi
        jvmOptions:
          -Xms: 512m
          -Xmx: 512m
- Performance mode 'tuned' corresponds to configuration
strimzi:
  sizing:
    mode: "custom"
    custom:
      kafka:
        replicas: 3
        resources:
          requests:
            memory: 8Gi
            cpu: "2"
          limits:
            memory: 8Gi
        jvmOptions:
          -Xms: 4096m
          -Xmx: 4096m
        config:
          auto.create.topics.enable: "false"
          delete.topic.enable: "false"
          default.replication.factor: 3
          min.insync.replicas: 2
          offsets.topic.replication.factor: 3
          transaction.state.log.replication.factor: 3
          transaction.state.log.min.isr: 2
      zookeeper:
        replicas: 3
        resources:
          requests:
            memory: 1536Mi
            cpu: "1"
          limits:
            memory: 1536Mi
        jvmOptions:
          -Xms: 1024m
          -Xmx: 1024m
Values (excluded here in the online documentation)
Configuration
Additional search providers for smartscreen-search
Smartscreen-search supports additional optional search providers which are accessible with the new endpoint /tenants/{tenantId}/provider-search. This new endpoint calls search APIs that can be configured via the value smartscreenSearch.searchProviders (default []). A sample configuration looks like this:
searchProviders:
  - name: Test
    uriTemplate: https://test.com/{tenantId}
    headers:
      Authorization: Bearer search-test
      Content-Type: application/json
    timeout: 100
    httpMethod: GET
    requestBody: '{ "request": "{query}" }'
The search configuration is templated. Templates will be replaced with variables when called. The following variables are currently available:
- The search query
- The language used by the app (if available)
- The tenantId where the query was called
- The OIDC token used by the mobile app to authenticate to smartscreen components (if available)
By default, no custom search providers are configured. Calling /provider-search without search providers will return an empty result. It is possible to define multiple search providers, in which case they will all be queried sequentially and /provider-search will only return when all searches are finished (or timed out). It is advised to take this into consideration when configuring production systems.
Credentials and other sensitive data from existing Kubernetes secrets
Shift supports reading certain credentials and security sensitive data from existing Kubernetes secrets. If this feature is used, the corresponding entries do not need to be added in values.yaml, because they will be ignored.
Shift currently supports four existing secrets for database credentials, admin credentials, encryption keys, and issuer CA. Usage of these secrets can be configured independently of each other:
common:
  # The name of an existing secret with datastore credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the datastore credentials configured in this file
  # are ignored.
  existingSecretDatastoreCredentials: "shift-datastore-credentials"
  # -- The name of an existing secret with admin credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the admin credentials configured in this file
  # are ignored.
  existingSecretAdminCredentials: "shift-admin-passwords"
  ast:
    # -- The name of an existing secret with encryption keys.
    # See README.md for details and the required structure.
    # NOTE: When it's set, the encryption keys configured in this file
    # are ignored.
    existingSecretEncryptionKeys: "shift-encryption-keys"
    # -- The issuer CA certificate and private key used to generate tenant signers.
    # See README.md section [Issuer CA](#issuer-ca) for requirements on issuer CA generation.
    issuer:
      # -- The name of an existing secret with issuer CA.
      # See README.md for details and the required structure.
      # NOTE: When it's set, the issuer CA configured in this file
      # is ignored.
      existingSecretIssuerCa: "shift-issuer-ca"
These secrets must be created in the same namespace where shift is deployed.
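For example, if the secret manifests shown in the following sections are stored in a file (here called secrets.yaml, a placeholder name), they can be applied with:
kubectl -n shift apply -f secrets.yaml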
Required structure for datastore secrets
Create a secret using the structure below and add
- Database credentials for ast services, idp-core, idp-scp-connector, and scp-notifier.
- The Redis password used by ast services and idp-scp-connector.
Credentials for all supported and enabled services must be added to the secret. Unused (disabled) services can be omitted.
apiVersion: v1
kind: Secret
metadata:
  name: shift-datastore-credentials
type: Generic
stringData:
  AST_SERVICES_REDIS_PASSWORD: "change-me"
  IDP_CORE_DB_USERNAME: "change-me"
  IDP_CORE_DB_PASSWORD: "change-me"
  IDP_SCP_CONNECTOR_DB_USERNAME: "change-me"
  IDP_SCP_CONNECTOR_DB_PASSWORD: "change-me"
  AST_CA_DB_USERNAME: "change-me"
  AST_CA_DB_PASSWORD: "change-me"
  AST_CLIENT_MANAGEMENT_DB_USERNAME: "change-me"
  AST_CLIENT_MANAGEMENT_DB_PASSWORD: "change-me"
  AST_CLIENT_PROPERTIES_DB_USERNAME: "change-me"
  AST_CLIENT_PROPERTIES_DB_PASSWORD: "change-me"
  AST_LOGIN_DB_USERNAME: "change-me"
  AST_LOGIN_DB_PASSWORD: "change-me"
  AST_VERSION_DB_USERNAME: "change-me"
  AST_VERSION_DB_PASSWORD: "change-me"
  AST_LOCALIZATION_DB_USERNAME: "change-me"
  AST_LOCALIZATION_DB_PASSWORD: "change-me"
  AST_TMS_DB_USERNAME: "change-me"
  AST_TMS_DB_PASSWORD: "change-me"
  AST_KEY_PROTECTION_DB_USERNAME: "change-me"
  AST_KEY_PROTECTION_DB_PASSWORD: "change-me"
  SCP_NOTIFIER_DB_USERNAME: "change-me"
  SCP_NOTIFIER_DB_PASSWORD: "change-me"
Required structure for admin credentials
Create a secret using the structure below and add admin credentials for idp-core. Unused (disabled) services can be omitted.
apiVersion: v1
kind: Secret
metadata:
  name: shift-admin-passwords
type: Generic
stringData:
  IDP_CORE_ADMIN_USERNAME: "admin"
  IDP_CORE_ADMIN_PASSWORD: "password"
Required structure for encryption keys
Create a secret using the structure below and add the database encryption master key and the session encryption master key. Both keys must be alphanumeric (UTF-8) strings of length 64.
apiVersion: v1
kind: Secret
metadata:
  name: shift-encryption-keys
type: Generic
stringData:
  DATABASE_ENCRYPTION_MASTER_KEY: ""
  SESSION_ENCRYPTION_MASTER_KEY: ""
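Keys of this form can, for example, be generated with a shell one-liner like the following sketch:
# print a random 64 character alphanumeric key
tr -dc 'A-Za-z0-9' </dev/urandom | head -c 64; echo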
Required structure for issuer CA
Create a secret using the structure below and add the issuer CA certificate and key. Only a single self-signed certificate is supported. The certificate must be a base64 encoded self-signed certificate. The key must be the issuer private and public key in PKCS#8 format as a base64 string.
apiVersion: v1
kind: Secret
metadata:
  name: shift-issuer-ca
type: Generic
data:
  ISSUER_CA_CERTIFICATE: ""
  ISSUER_CA_KEY: ""
Internal Features
ServiceGroup for additional helm charts
The Shift chart has experimental support for adding arbitrary add-on helm charts as a ServiceGroup to be managed by the shift operator. This feature is configured using the following values:
# -- Section for configuring `add-on` helm charts to be managed by shift operator.
addons:
  # -- Name of the `add-on` helm chart
  chartname:
    # -- enable/disable deployment
    enabled: true
    # -- Chart version
    version: 0.1.0
    # -- ServiceGroup readiness check by shift operator. Set to `true` to enable.
    # A servicegroup is considered ready if all pods created by the ServiceGroup
    # are running. Shift operator uses label `app.kubernetes.io/instance` for
    # selecting pods to check. The add-on helm chart must set label
    # `app.kubernetes.io/instance: {{ .Release.Name }}` on all pods to
    # ensure readiness check works properly.
    readycheck: false
ServiceGroup's sub chart aliases
Shift supports deploying the same helm chart multiple times using aliases. This requires shift operator version 0.9.0
or higher.
The following example demonstrates this in the spec context of a service group custom resource. The helm chart with name service-chart is deployed twice using the aliases service-one and service-two.
spec:
  service-one:
    chart: service-chart
    version: 1.0.0
    fullnameOverride: {{ include "ks.siblingFullname" (merge (dict "sibling" "service-one") .) }}
    serviceTwoUrl: http://{{ include "ks.siblingFullname" (merge (dict "sibling" "service-two") .) }}
  service-two:
    chart: service-chart
    version: 1.0.0
    fullnameOverride: {{ include "ks.siblingFullname" (merge (dict "sibling" "service-two") .) }}
Shift operator uses the 'alias' to generate the helm release names.
Note that when overriding the full name (fullnameOverride) in a custom resource, the alias must be used in the sibling parameter. The same holds when configuring service names of other services; see the example value serviceTwoUrl.
Overriding arbitrary values of service helm charts
Arbitrary default values of the service helm charts can be overridden using the object valuesOverride: in a custom shift-values.yaml. This feature can be used to change defaults for values that are not exposed by the shift chart.
For example, to increase the memory requests and limits of idp-core to 4Gi use
idpCore:
  valuesOverride:
    mainContainer:
      resources:
        requests:
          memory: "4Gi"
        limits:
          memory: "4Gi"
Values containing helm templates can be overridden using the object valuesOverrideTpl:. Values provided in valuesOverrideTpl: have a higher priority than values provided in valuesOverride:.
For example, to set the value baseUrl:
of service service
to the svc name of idp-core use
service:
  valuesOverrideTpl:
    baseUrl: 'http://{{ include "ks.siblingFullname" (merge (dict "sibling" "idp-core") .) }}:80/auth'