
How to change namespace with running previous deploy

caution

This is internal documentation. This document can be used only if it was recommended by the Support Team.

caution

This setup is deprecated as of version 22.3.

A. How to change the namespace when Deploy is already running in the default namespace - parallel option

Prerequisites

  • The kubectl command-line tool
  • Access to a Kubernetes cluster with Deploy installed in the default namespace

Tested with:

  • Deploy operator 10.3.9 with external database.
  • xl-deploy 10.3.9 upgraded to 22.2.0-621.1206
  • xl-cli 22.2.0-621.1206
  • AWS EKS cluster

If you already have an XLD setup in the default namespace, it is possible to move the deployment to a custom namespace. In this example we will use nsxld.

In this example we will use XLD 10.3, which will be upgraded to version 22.2.0-621.1206 with the latest 22.2.x operator image xebialabsunsupported/deploy-operator:22.2.0-621.1206 from https://hub.docker.com/r/xebialabsunsupported/deploy-operator/tags and the latest operator package from Nexus.

Steps to set up the operator in the custom namespace

With the following steps you will set up XLD in the custom namespace, in parallel with the current setup running in the default namespace.

caution

Before doing any of the following steps, back up everything:

  • the Deploy work directory
  • database data
  • any custom configuration that was done for the operator setup
  • any volume related to Deploy in the default namespace, for example data from the mounted volumes on the master and worker pods (a backup command sketch follows this list):
    • /opt/xebialabs/xl-deploy-server/central-conf/deploy-server.yaml.template
    • /opt/xebialabs/xl-deploy-server/centralConfiguration/deploy-oidc.yaml
    • /opt/xebialabs/xl-deploy-server/work
    • /opt/xebialabs/xl-deploy-server/conf
    • /opt/xebialabs/xl-deploy-server/centralConfiguration
    • /opt/xebialabs/xl-deploy-server/ext
    • /opt/xebialabs/xl-deploy-server/hotfix/lib
    • /opt/xebialabs/xl-deploy-server/hotfix/plugins
    • /opt/xebialabs/xl-deploy-server/hotfix/satellite-lib
    • /opt/xebialabs/xl-deploy-server/log
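For example, here is a minimal backup sketch using kubectl cp (the master pod name is a placeholder; look it up with kubectl get pods -n default, and repeat for any other directory or pod you need):

# Copy configuration and extension directories from the master pod to a local backup folder
❯ kubectl cp default/<master-pod-name>:/opt/xebialabs/xl-deploy-server/conf ./backup/master/conf
❯ kubectl cp default/<master-pod-name>:/opt/xebialabs/xl-deploy-server/centralConfiguration ./backup/master/centralConfiguration
❯ kubectl cp default/<master-pod-name>:/opt/xebialabs/xl-deploy-server/ext ./backup/master/ext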

A.1. Create custom namespace

Set up the custom namespace on the Kubernetes cluster, for example nsxld:

❯ kubectl create namespace nsxld

Replace the nsxld name in this and the following steps with your custom namespace name.

A.2. Backup everything on cluster

  1. Collect all custom changes that were made in the default namespace for XLD resources (a minimal export sketch follows the note below):
    • StatefulSets
    • Deployments
    • ConfigMaps
    • Secrets
    • CustomResource
    • anything else that was customized
  2. Collect any other changes that were made during the initial setup according to https://docs.xebialabs.com/v.22.1/deploy/how-to/k8s-operator/install-deploy-using-k8s-operator/#installing-deploy-on-amazon-eks
  3. If you are using your own database and messaging queue setup, do the data backup.
note

Note: Any data migration is out of scope for this document. For example, in the case of database data migration, check with your DB admins what to do. For the external database case, the best option is to migrate the database to a new database schema and use that schema in the new namespace.
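Here is a minimal sketch of exporting the current XLD resources and CR from the default namespace for later reference (the output file names are only examples):

# Export the main XLD resources from the default namespace
❯ kubectl get statefulsets,deployments,configmaps,secrets -n default -o yaml > xld-default-resources.yaml
# Export the current custom resource (named dai-xld in this example)
❯ kubectl get digitalaideploys dai-xld -n default -o yaml > xld-default-cr.yaml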

A.3. Prepare the deploy operator

  1. Get the deploy operator package zip for AWS-EKS: deploy-operator-aws-eks-22.2.0-621.1206.zip (the correct operator image is already set up in the package).

  2. Download and set up the XL CLI (xl cli version in this case is 22.2.0-621.1206) from https://nexus.xebialabs.com/nexus/service/local/repositories/releases/content/com/xebialabs/xlclient/xl-client/22.2.0-621.1206/xl-client-22.2.0-621.1206-linux-amd64.bin Follow step 6 from the documentation: Step 6—Download and set up the XL CLI

[sishwarya@localhost B-defaultns-downtime] $ ./xl-client-22.2.0-621.1206-linux-amd64.bin version
CLI version: 22.2.0-621.1206
Git version: v22.2.0-620.544-1-g520f8bb
API version XL Deploy: xl-deploy/v1
API version XL Release: xl-release/v1
Git commit: 520f8bbd7c61c7d11a16b677838250fa570730a2
Build date: 2022-06-21T11:33:41.580Z
GO version: go1.16
OS/Arch: linux/amd64
  3. Follow step 7 from the documentation: Step 7—Set up the XL Deploy Container instance. Use the 22.2.0-621.1206 version of Deploy: docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabsunsupported/xl-deploy:22.2.0-621.1206

  4. Run the upgrade setup with a dry run and generate the blueprint file:

Here is a sample of the responses:


[sishwarya@localhost s1] $ ../../xl-client-22.2.0-621.1206-linux-amd64.bin op --upgrade --dry-run
? Select the setup mode? advanced
? Select the Kubernetes setup where the digitalai Devops Platform will be installed or uninstalled: AwsEKS [AWS EKS]
? Do you want to use Kubernetes' current-context from ~/.kube/config? Yes
? Do you want to use the AWS SSO credentials ? No
? Do you want to use the AWS credentials from your ~/.aws/credentials file? Yes
? Do you want to use an custom Kubernetes namespace (current default is 'digitalai')? Yes
? Enter the name of the existing Kubernetes namespace where the XebiaLabs DevOps Platform will be installed, updated or undeployed: default
Connecting to EKS
? Product server you want to perform upgrade for daiDeploy [Digital.ai Deploy]
? Enter the repository name(eg: <repositoryName>/<imageName>:<tagName>) xebialabsunsupported
? Enter the deploy server image name(eg: <repositoryName>/<imageName>:<tagName>) xl-deploy
? Enter the image tag(eg: <repositoryName>/<imageName>:<tagName>) 22.2.0-621.1206
? Enter the deploy task engine image name for version 22 and above (eg: <repositoryName>/<imageName>:<tagName>) deploy-task-engine
? Choose the version of the XL Deploy for Upgrader setup of operator 22.2.0-621.1206
? Do you want to enable an oidc? Yes
? Do you want to use an existing external oidc configuration from previous installation? No
? Use embedded keycloak? Yes
? Enter Keycloak public URL deploy.keycloak.digitalai-testing.com
? Use embedded DB for keycloak? Yes
? Select the type of upgrade you want. operatorToOperator [Operator to Operator]
? Operator image to use xebialabsunsupported/deploy-operator:22.2.0-621.1206
? Do you want to use custom operator zip file for Deploy? Yes
? Deploy operator zip to use (absolute path or URL to the zip) /home/sishwarya/SprintTicket/S-84982_ns_xld_migration/B-defaultns-downtime/deploy-operator-aws-eks-22.2.0-621.1206.zip
? Enter the name of custom resource definition. digitalaideploys.xld.digital.ai
? Enter the name of custom resource. dai-xld
? Edit list of custom resource keys that will migrate to the new Deploy CR <Received>
-------------------------------- ----------------------------------------------------
| LABEL | VALUE |
-------------------------------- ----------------------------------------------------
| AWSAccessKey | ***** |
| AWSAccessSecret | ***** |
| AWSSessionToken | ***** |
| CrName | dai-xld |
| CrdName | digitalaideploys.xld.digital.ai |
| DeployImageVersionForUpgrader | 22.2.0-621.1206 |
| EksClusterName | devops-operator-cluster-test-cluster |
| EnableOidc | true |
| ImageNameDeploy | xl-deploy |
| ImageNameDeployTaskEngine | deploy-task-engine |
| ImageTag | 22.2.0-621.1206 |
| IsAwsCfgAvailable | true |
| K8sApiServerURL | https://72673EC78289B3B122CAC4CA8E6473C2.gr7.us-.. |
| K8sSetup | AwsEKS |
| KeycloakUrl | deploy.keycloak.digitalai-testing.com |
| Namespace | default |
| OperatorImageDeployGeneric | xebialabsunsupported/deploy-operator:22.2.0-621... |
| OperatorZipDeploy | /home/sishwarya/SprintTicket/S-84982_ns_xld_migr.. |
| OsType | linux |
| PreserveCrValuesDeploy | .metadata.name\n.spec.XldMasterCount\n.spec.XldW.. |
| RepositoryName | xebialabsunsupported |
| ServerType | daiDeploy |
| UpgradeType | operatorToOperator |
| UseAWSSsoCredentials | false |
| UseAWSconfig | true |
| UseCustomNamespace | true |
| UseEmbeddedKeycloak | true |
| UseExistingOidcConf | false |
| UseKeycloakWithEmbeddedDB | true |
| UseKubeconfig | true |
| UseOperatorZipDeploy | true |
-------------------------------- ----------------------------------------------------
? Do you want to proceed to the deployment with these values? Yes
? Current CRD resource "digitalaideploys.xld.digital.ai" is used in following CRs and namespaces:
Name Namespace
dai-xld default

Should CRD be reused. If Yes it will not be deleted, if No we will delete CRD "digitalaideploys.xld.digital.ai", and all related CRs will be deleted with it. Yes
Generated files successfully!
Update central configuration values... | Using same custom resource name dai-xld
Update with keycloak values... | Generated files successfully operatorToOperator upgrade on AwsEKS

That will create files and directories in the working directory. The main directory is xebialabs, and inside it are all the template files that we need to edit. Check xebialabs/dai-deploy/daideploy_cr.yaml to confirm that all values are set correctly there.

A.4. Update the deploy operator package to support the custom namespace (common part)

Update the following files (relative to the provider's directory) with the custom namespace name:

| File name | YAML path | Value to set |
| --- | --- | --- |
| xebialabs/xl-k8s-foundation.yaml [kind: Infrastructure] | spec[0].children[0].children[0].name | nsxld |
| xebialabs/xl-k8s-foundation.yaml [kind: Infrastructure] | spec[0].children[0].children[0].namespaceName | nsxld |
| xebialabs/xl-k8s-foundation.yaml [kind: Environments] | spec[0].children[0].members[1] | - Infrastructure/DIGITALAI/K8s-MASTER/nsxld |
| xebialabs/dai-deploy/template-generic/cluster-role-digital-proxy-role.yaml | metadata.name | nsxld-xld-operator-proxy-role |
| xebialabs/dai-deploy/template-generic/cluster-role-manager-role.yaml | metadata.name | nsxld-xld-operator-manager-role |
| xebialabs/dai-deploy/template-generic/cluster-role-metrics-reader.yaml | metadata.name | nsxld-xld-operator-metrics-reader |
| xebialabs/dai-deploy/template-generic/leader-election-rolebinding.yaml | subjects[0].namespace | nsxld |
| xebialabs/dai-deploy/template-generic/manager-rolebinding.yaml | metadata.name | nsxld-xld-operator-manager-rolebinding |
| xebialabs/dai-deploy/template-generic/manager-rolebinding.yaml | roleRef.name | nsxld-xld-operator-manager-role |
| xebialabs/dai-deploy/template-generic/manager-rolebinding.yaml | subjects[0].namespace | nsxld |
| xebialabs/dai-deploy/template-generic/proxy-rolebinding.yaml | metadata.name | nsxld-xld-operator-proxy-rolebinding |
| xebialabs/dai-deploy/template-generic/proxy-rolebinding.yaml | roleRef.name | nsxld-xld-operator-proxy-role |
| xebialabs/dai-deploy/template-generic/proxy-rolebinding.yaml | subjects[0].namespace | nsxld |
| xebialabs/dai-deploy/template-generic/postgresql-init-keycloak-db.yaml | metadata.name | dai-xld-nsxld-postgresql-init-keycloak-db |
| xebialabs/dai-deploy/template-generic/postgresql-init-keycloak-db.yaml | spec.template.metadata.name | dai-xld-nsxld-postgresql-init-keycloak-db |
| xebialabs/dai-deploy/template-generic/postgresql-init-keycloak-db.yaml | spec.template.spec.initContainers[0].env.children[0].value | dai-xld-nsxld-postgresql |
| xebialabs/dai-deploy/template-generic/postgresql-init-keycloak-db.yaml | spec.template.spec.containers[0].children[0].env.children[0].value | dai-xld-nsxld-postgresql |
| xebialabs/dai-deploy/daideploy_cr.yaml | metadata.name | dai-xld-nsxld |
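You can make these edits in any editor; as an illustration, here is a minimal sketch of applying two of the table rows with yq (assuming the yq v4 CLI is available):

❯ yq -i '.metadata.name = "nsxld-xld-operator-manager-role"' xebialabs/dai-deploy/template-generic/cluster-role-manager-role.yaml
❯ yq -i '.subjects[0].namespace = "nsxld"' xebialabs/dai-deploy/template-generic/leader-election-rolebinding.yaml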

The fields below need to be updated only when Keycloak with an embedded database is enabled and an existing database is used for Deploy:

| File name | YAML path | Value to set | Note |
| --- | --- | --- | --- |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.keycloak.install | true | |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.postgresql.install | true | |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.postgresql.persistence.storageClass | "aws-efs" | Storage class for PostgreSQL, specific to each provider; here we are using the AWS EFS storage class. |
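With those values set, the relevant fragment of xebialabs/dai-deploy/daideploy_cr.yaml would look roughly like this (a sketch; surrounding keys are omitted):

spec:
  keycloak:
    install: true
  postgresql:
    install: true
    persistence:
      storageClass: "aws-efs"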

In xebialabs/dai-deploy/template-generic/deployment.yaml, add the env section after spec.template.spec.containers[1].image (at the same level) if it is not already present:

        image: xebialabs...
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

In xebialabs/dai-deploy-operator.yaml, delete the array element from spec[0].children[0].deployables whose name is custom-resource-definition, if it was not already removed during the dry run. This prevents the CRD from being deployed again, since it already exists from the first deployment. Example of the element to delete:

      - name: custom-resource-definition
        type: k8s.ResourcesFile
        fileEncodings:
          ".+\\.properties": ISO-8859-1
        mergePatchType: strategic
        propagationPolicy: Foreground
        updateMethod: patch
        createOrder: "3"
        modifyOrder: "2"
        destroyOrder: "1"
        file: !file "dai-deploy/template-generic/custom-resource-definition.yaml"

A.4.a. Update the deploy operator package to support custom namespace - only in case of Nginx ingress controller

The following changes apply when using the nginx ingress (default behaviour):

| File name | YAML path | Value to set |
| --- | --- | --- |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.ingress.annotations.kubernetes.io/ingress.class | nginx-dai-xld-nsxld |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.nginx-ingress-controller.extraArgs.ingress-class | nginx-dai-xld-nsxld |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.nginx-ingress-controller.ingressClassResource.name | nginx-dai-xld-nsxld |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.nginx-ingress-controller.ingressClassResource.controllerClass | k8s.io/ingress-nginx-dai-xld-nsxld |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.keycloak.ingress.annotations.kubernetes.io/ingress.class | nginx-dai-xld-nsxld |
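For illustration, the corresponding fragment of xebialabs/dai-deploy/daideploy_cr.yaml would look roughly like this (a sketch; other keys under spec are omitted):

spec:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-dai-xld-nsxld
  nginx-ingress-controller:
    extraArgs:
      ingress-class: nginx-dai-xld-nsxld
    ingressClassResource:
      name: nginx-dai-xld-nsxld
      controllerClass: k8s.io/ingress-nginx-dai-xld-nsxld
  keycloak:
    ingress:
      annotations:
        kubernetes.io/ingress.class: nginx-dai-xld-nsxld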

A.4.b. Update the deploy operator package to support custom namespace - only in case of Haproxy ingress controller

note

Note: To set up haproxy instead of the default nginx configuration that is provided in the operator package, you need to make the following changes in xebialabs/dai-deploy/daideploy_cr.yaml:

  • spec.haproxy-ingress.install = true
  • spec.nginx-ingress-controller.install = false
  • spec.ingress.path = "/"
  • in spec.ingress.annotations replace all nginx.* settings and put:
      kubernetes.io/ingress.class: "haproxy"
      ingress.kubernetes.io/ssl-redirect: "false"
      ingress.kubernetes.io/rewrite-target: /
      ingress.kubernetes.io/affinity: cookie
      ingress.kubernetes.io/session-cookie-name: JSESSIONID
      ingress.kubernetes.io/session-cookie-strategy: prefix
      ingress.kubernetes.io/config-backend: |
        option httpchk GET /ha/health HTTP/1.0

The following changes apply when using the haproxy ingress:

| File name | YAML path | Value to set |
| --- | --- | --- |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.ingress.annotations.kubernetes.io/ingress.class | haproxy-dai-xld-nsxld |
| xebialabs/dai-deploy/daideploy_cr.yaml | spec.haproxy-ingress.controller.ingressClass | haproxy-dai-xld-nsxld |
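For illustration, the corresponding fragment of xebialabs/dai-deploy/daideploy_cr.yaml would look roughly like this (a sketch; other keys under spec, including the annotations from the note above, are omitted):

spec:
  ingress:
    annotations:
      kubernetes.io/ingress.class: haproxy-dai-xld-nsxld
  haproxy-ingress:
    controller:
      ingressClass: haproxy-dai-xld-nsxld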

A.5. Update additional YAML files

Apply all collected changes from the default namespace to the CR in the deploy operator package xebialabs/dai-deploy/daideploy_cr.yaml. (The best approach is to compare the new CR xebialabs/dai-deploy/daideploy_cr.yaml with the one from the default namespace.)

Check the YAML files and update them with any additional changes, for example the CR YAML with any missing custom configuration.

If you are using your own database and messaging queue setup, set it up in the same way as in the default namespace, in the new CR in the deploy operator package xebialabs/dai-deploy/daideploy_cr.yaml. In this setup, the database can be reused if there is network visibility from the new namespace to which you are moving your installation.

If during the upgrade you disabled the OIDC setup by answering the question below:

- Do you want to enable an oidc? No

but you still need to add an external OIDC configuration, you can do the OIDC setup now. Add the following fields with values under the spec tag to enable OIDC in xebialabs/dai-deploy/daideploy_cr.yaml:
spec:
  oidc:
    enabled: true
    accessTokenUri: null
    clientId: null
    clientSecret: null
    emailClaim: null
    external: true
    fullNameClaim: null
    issuer: null
    keyRetrievalUri: null
    logoutUri: null
    postLogoutRedirectUri: null
    redirectUri: null
    rolesClaim: null
    userAuthorizationUri: null
    userNameClaim: null
    scopes: ["openid"]

Replace the nulls with the correct values; for more information, check the documentation.

A.6. Be sure to not delete PVs

Do the step from C.2. Be sure to not delete PVs with your actions.
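As an extra safety net in addition to the referenced step, you can set the reclaim policy of the Deploy-related PVs to Retain so they survive PVC deletion; a minimal sketch (the PV name is a placeholder, repeat it for every PV used by XLD):

# Keep the PV (and its data) even if its PVC gets deleted
❯ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'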

A.7. Copy existing PVCs to the custom namespace

There are 3 options in step C.4. Move existing PVC to the custom namespace. In this scenario you can only use one of the following (a pod manifest sketch for the first option follows this list):

  • C.4.OPTION_1 Create PVC in the custom namespace by copying PV data

  • C.4.OPTION_3 Move existing PVC to the custom namespace by CSI Volume Cloning
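As an illustration of the first option, here is a sketch of a temporary pod in the custom namespace that mounts the new master PVC so data can be copied into it (the image and mount path are assumptions; the pod and claim names match the ones used in the steps below):

apiVersion: v1
kind: Pod
metadata:
  name: dai-xld-master-pv-access-nsxld
  namespace: nsxld
spec:
  containers:
  - name: pv-access
    image: busybox
    command: ["sleep", "36000"]
    volumeMounts:
    - name: data-dir
      mountPath: /opt/xebialabs/xl-deploy-server
  volumes:
  - name: data-dir
    persistentVolumeClaim:
      claimName: data-dir-dai-xld-nsxld-digitalai-deploy-master-0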

A.7.1 Make the following changes in the master PVCs

  • Create the master pod in the custom namespace [eg: nsxld], similar to C.4.OPTION_1.2 Master - Start following pods.
  • Connect to the master pod in the custom namespace.
    ❯ kubectl exec -it dai-xld-master-pv-access-nsxld -n nsxld -- sh
  • Update the following file in the centralConfiguration folder.
    • deploy-task.yaml
      • Point it to the correct RabbitMQ [only required if you are using the embedded RabbitMQ for Deploy]:
        • jms-url: amqp://dai-xld-nsxld-rabbitmq.nsxld.svc.cluster.local:5672/%2F
          ❯ kubectl exec -it pod/dai-xld-master-pv-access-nsxld -n nsxld -- sh
          / # cd opt/xebialabs/xl-deploy-server/centralConfiguration
          /opt/xebialabs/xl-deploy-server/centralConfiguration # cat deploy-task.yaml
          akka:
            io:
              dns:
                resolver: async-dns
          deploy:
            task:
              in-process-worker: false
              queue:
                archive-queue-name: xld-archive-queue
                external:
                  jms-driver-classname: com.rabbitmq.jms.admin.RMQConnectionFactory
                  jms-password: '{cipher}9880501a40e7618fa9bb582d84e4b0e296e9f92a43deaaa4ad63bb98ab69c5b3'
                  jms-url: amqp://dai-xld-nsxld-rabbitmq.nsxld.svc.cluster.local:5672/%2F
                  jms-username: guest
                in-process:
                  maxDiskUsage: 100
                  shutdownTimeout: 60000
                name: xld-tasks-queue
    • deploy-server.yaml
      • If keycloak is enabled.
        • Update the property deploy.server.security.auth.provider: oidc; if required, change the hostname as well.
        ❯ kubectl exec -it pod/dai-xld-master-pv-access-nsxld -n nsxld -- sh
        / # cd opt/xebialabs/xl-deploy-server/centralConfiguration
        /opt/xebialabs/xl-deploy-server/centralConfiguration # cat deploy-server.yaml
        deploy.server:
          hostname: dai-xld-nsxld-digitalai-deploy-master-0.dai-xld-nsxld-digitalai-deploy-master.nsxld.svc.cluster.local
          license:
            daysBeforeWarning: 10
          security:
            auth:
              provider: "oidc"
  • Update the following file in the conf folder.
    • xld-wrapper.conf.common
      • Search for jmx_prometheus_javaagent.jar and update it to "jmx_prometheus_javaagent-0.16.1.jar".
        Note: Either update the value or delete the file [xld-wrapper.conf.common]; it will be downloaded automatically with the latest changes during startup.
    ❯ kubectl exec -it pod/dai-xld-master-pv-access-nsxld -n nsxld -- sh
    / # cd opt/xebialabs/xl-deploy-server/conf
    /opt/xebialabs/xl-deploy-server/conf # rm -rf xld-wrapper.conf.common

A.7.2 Make the following changes in the worker PVCs

  • Create the worker pod in the custom namespace [eg: nsxld].
  • Connect to the worker pod.
  • Update the following file in the conf folder.
    • xld-wrapper.conf.common
      • Search for jmx_prometheus_javaagent.jar and update it to "jmx_prometheus_javaagent-0.16.1.jar". Note: Either update the value or delete the file [xld-wrapper.conf.common]; it will be downloaded automatically with the latest changes during startup.
      ❯ kubectl exec -it pod/dai-xld-worker-pv-access-nsxld -n nsxld -- sh
      / # cd /opt/xebialabs/deploy-task-engine/conf/
      /opt/xebialabs/deploy-task-engine/conf # rm -rf xld-wrapper.conf.common
      /opt/xebialabs/deploy-task-engine/conf #

A.8. Deploy to the custom namespace

  1. We are using the YAML that resulted from the upgrade dry run in the working directory, so we apply the following file:
xl apply -f ./xebialabs.yaml
  2. Do steps 9, 10 and 11 from the documentation: Step 9—Verify the deployment status

  3. Troubleshooting

    • If the cc pod is not initialized due to the error below:
Type     Reason            Age                 From                    Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 55s statefulset-controller create Claim data-dir-dai-xld-nsxld-digitalai-deploy-cc-server-0 Pod dai-xld-nsxld-digitalai-deploy-cc-server-0 in StatefulSet dai-xld-nsxld-digitalai-deploy-cc-server success
Warning FailedCreate 34s (x13 over 55s) statefulset-controller create Pod dai-xld-nsxld-digitalai-deploy-cc-server-0 in StatefulSet dai-xld-nsxld-digitalai-deploy-cc-server failed error: Pod "dai-xld-nsxld-digitalai-deploy-cc-server-0" is invalid: spec.initContainers[0].volumeMounts[0].name: Not found: "source-dir"

  • Workaround
    • Edit the statefulset of dai-xld-nsxld-digitalai-deploy-cc-server
     ❯ kubectl edit statefulset.apps/dai-xld-nsxld-digitalai-deploy-cc-server -n nsxld
    • Update the Volume section as below.
        volumes:
        - name: source-dir
          persistentVolumeClaim:
            claimName: data-dir-dai-xld-nsxld-digitalai-deploy-master-0

A.9. Apply any custom changes

If you have any custom changes that you collected previously in step A.2, you can apply them again in this step, in the same way as before on the default namespace.

Check if PVCs and PVs are reused by the new setup in the custom namespace.
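A minimal verification sketch (adjust the names to your setup):

# All XLD PVCs in the custom namespace should be in status Bound
❯ kubectl get pvc -n nsxld
# Cross-check that the PVs are bound to claims in the nsxld namespace
❯ kubectl get pv -o wide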

A.10. Wrap-up

Wait for all pods to be ready and without any errors.

If you used the same host in the new custom namespace as the one in the default namespace, the XLD page will still open from the default namespace. In that case you need to apply step 9.a; after that, the XLD from the new custom namespace will be available on the configured host.

The list of pods should look like the following table, if nginx and keycloak are enabled:

NAMESPACE            NAME                                                          READY     STATUS      RESTARTS    AGE
nsxld dai-xld-nsxld-digitalai-deploy-cc-server-0 1/1 Running 0 19m
nsxld dai-xld-nsxld-digitalai-deploy-master-0 1/1 Running 5 17m
nsxld dai-xld-nsxld-digitalai-deploy-worker-0 1/1 Running 1 39m
nsxld dai-xld-nsxld-keyclo-0 1/1 Running 0 39m
nsxld dai-xld-nsxld-nginx-ingress-controller-dbb495ccc-ln52s 1/1 Running 0 39m
nsxld dai-xld-nsxld-nginx-ingress-controller-default-backend-54ckc2mk 1/1 Running 0 39m
nsxld dai-xld-nsxld-postgresql-0 1/1 Running 0 19m
nsxld dai-xld-nsxld-postgresql-init-keycloak-db-5rlpx 0/1 Completed 0 39m
nsxld dai-xld-nsxld-rabbitmq-0 1/1 Running 0 39m
nsxld xld-operator-controller-manager-759cb85546-dh2n4 2/2 Running 0 40m

The table could have different entries if you use haproxy, an external RabbitMQ, or an external Keycloak.

A.11. Destroy XLD in default namespace

If you are sure that everything is up and running on the new custom namespace, you can destroy previous setup on the default namespace.

Do the step from C.3. Stop everything that is using XLD PVC-s

Additionally, you can also clean up any related PVCs and PVs in the default namespace:

# Be careful: only delete all PVC-s and related PV-s if you are sure; back up before deleting
# Get the PVCs related to XLD in the default namespace and delete them (the list of PVCs depends on what is enabled in the deployment)
❯ kubectl get pvc -n default
❯ kubectl delete -n default pvc data-dai-xld-rabbitmq-0 ...

You can also clean up any configmaps or secrets that are in the default namespace and related to the XLD.
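A minimal sketch of locating them first and deleting only after review (the grep pattern and resource names are placeholders):

# List configmaps and secrets in the default namespace that look XLD-related
❯ kubectl get configmaps,secrets -n default | grep -i xld
# Delete only the ones you have reviewed and backed up
❯ kubectl delete -n default configmap/<configmap-name> secret/<secret-name>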

You can also delete all PVs that were connected to the XLD installation in the default namespace and that are not migrated to and used by the custom namespace.