
Migration install

Migration Install for the Kobil Security Server service - moving from Single-Tenant Security Server R2.* to Multi-Tenant Security Server R3.*

The goal of a migration install is to switch from a Single-Tenant Security Server installation to a Multi-Tenant Security Server installation while reusing the existing Security Server-DB database. This ensures that user and device registrations remain valid.

Default processing:
By default, all Single-Tenant DB data of a Security Server (R2.*) is moved into the "MASTER" Tenant context of the Multi-Tenant Security Server (R3.*).

Advanced processing:
By first applying an "installer" Security Server R3.* installation on top of the existing Security Server R2.* installation, a manual script run (db-migration script) can move the Single-Tenant Security Server data into a specific MT-Security Server Tenant context with a specific Tenant name (a sub-tenant, not MASTER). When the db-migration job is completed, the new k8s Security Server deployment (which is in fact a fresh new install on Kubernetes) will be able to open and read the migrated Security Server-DB data.
Find more info here: Security Server-ST Database migration into Subtenant

Security Server-Migration matrix

  • Security Server up to release 2.11 could be migrated to Security Server MT 3.4.1 (deprecated as of 20.01.2022 - the db-migration scripting now works with MT 3.6.1)
  • Security Server release 2.12 and higher can be migrated to Security Server MT 3.5.* and 3.6.*
  • Security Server release 2.12 or higher is not compatible at DB level with Security Server MT 3.4.1
From                          To
Security Server 2.10.0        3.6.latest
Security Server 2.1.11        3.6.latest
Security Server 2.1.14        3.6.latest
Security Server 2.9.0         3.6.latest
Security Server 2.9.1         3.6.latest
Security Server 2.11.0        Not Supported
Security Server 2.12.0        3.6.latest
Security Server 2.12.1        3.6.latest
Security Server 2.12.2        3.6.latest

Very important note:

The migration install makes use of the Security Server data already stored in the existing Security Server-DB service. It is required that the Security Server service startup on Kubernetes does not trigger a testInstallation. This is configured via the parameter ssms:certificate:testInstallation: false in the meta-configuration file "values.yaml". The default setting is "testInstallation: true", which would try to re-initialize the specified Security Server-DB database into a test-install Security Server-DB.
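
The corresponding excerpt of "values.yaml" would look like this (only the testInstallation key is shown; all other values of the Security Server section remain as in your configuration):

   ssms:
     certificate:
       testInstallation: false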

Prepare "license context" for Security Server-DB database access before Security Server deployment and Security Server POD startup

  • retrieve from existing local installation Security Server "config.xml" file (required initially). The installer Security Server "config.xml" covers the required information to allow accessing and reading the existing Security Server-DB-data.
  • ensure to run "helm install" with meta-configuration file "values.yaml" covering ssms:certificate:testInstallation: false - this is REQUIRED
  • retrieve DB-service endpoint and credentials (required db configuration parameter for mpower "values.yaml" in the Security Server section)
  • retrieve existing Security Server-tuning configuration files (communication.xml,server.xml,,,) and custom Security Server truststore (optional ConfigMap objects created prior to initial deployment).
    Find more info for Security Server custom configuration here: Configuration of Kubernetes based Security Server 3.4.x and higher
  • create ssms-local-config secret into the targeted namespace prior to initial deployment (before running helm install first time) using the install Security Server "config.xml" data
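
A minimal sketch for providing the tuning files as a ConfigMap, assuming a ConfigMap name of "ssms-local-tuning" (the name is illustrative only - the actual ConfigMap name and mount expected by the deployment are described in the custom configuration documentation linked above):

   # create the optional tuning ConfigMap in the target namespace before the first helm install
   kubectl create configmap ssms-local-tuning \
     --from-file=communication.xml \
     --from-file=server.xml \
     -n <target-namespace>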

Notes:

The file must be named exactly "config.xml" - no other name is allowed, since the file name becomes the key of the key-value pair in the data section of the secret.
Use the original Security Server "config.xml" file and make sure to address the right namespace when creating the secret named "ssms-local-config".

Run/create the secret:

  kubectl create secret generic ssms-local-config --from-file=config.xml  
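
If kubectl does not already point at the target namespace, the namespace can be set explicitly (where <target-namespace> is a placeholder for the namespace of the mpower deployment):

   kubectl create secret generic ssms-local-config --from-file=config.xml -n <target-namespace>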

Once the secret is created, start the deployment (helm install). For more details about the "ssms-local-config" secret, see the additional info in the Appendix below.

run "helm install" into namespace covering the prepared "ssms-local-config" secret

  • ensure the repository pull-secret is created before running "helm install" and that access to the kobil Helm chart repository is enabled
  • double-check that the "ssms-local-config" secret is pre-allocated in the right namespace
  • double-check that the meta-configuration file "values.yaml" contains ssms:certificate:testInstallation: false
  • run helm install mpower -f ./values.yaml kobil/mpower in the target namespace, with the Security Server section of "values.yaml" pointing to the targeted Security Server-DB service (host / credentials / options)
  • watch the "ssms-master-config*" Pod (job) log output to verify that access to the Security Server-DB is possible
  • watch the "ssms-mgt/svc" Pod log output to verify that the Security Server-DB can be accessed and read and that the Security Server modules match the Security Server table versions (see the command sketch after this list)

Appendix (import license):

Additional info for the Security Server "ssms-local-config" secret:

When printing the created secret with "kubectl get secret ssms-local-config -o yaml" you should see an object structure like this:

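A sketch of the expected structure (the "<target-namespace>" placeholder and the shortened base64 payload are illustrative; the actual metadata reflects your cluster):

   apiVersion: v1
   kind: Secret
   metadata:
     name: ssms-local-config
     namespace: <target-namespace>
   type: Opaque
   data:
     config.xml: <base64-encoded content of config.xml>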

The data section must contain the "config.xml" key-value pair with the data base64-encoded.
Once the secret is created you should double-check the "config.xml" value/payload.

Verify the content by:

   kubectl get secret ssms-local-config -o jsonpath='{.data.config\.xml}' | base64 --decode

If the decoded output matches the original "config.xml", the created secret is fine.

Special Notes:

Considerations for using a dependency deployment (using an Ingress-Controller DaemonSet or platform-specific routing for the Kobil services)