5 posts tagged with "cluster-operator"

· 2 min read

Requirements#

This documentation applies to version 10.4.0-1209.942 or later.

Pre-requisites#

A few prerequisites must be in place before the automation can run.

How the full flow works#

  • Installing a Docker-based Deploy instance, because Deploy is used to create the necessary resources in Kubernetes and to deploy the operator.
  • Checking out the Deploy operator and modifying its configuration based on the user input.
  • Installing the XL CLI to apply YAML files.
  • Verifying that the deployment was successful and that all required resources were created in Kubernetes. If something goes wrong, you'll be notified in the logs.

You can also check this operator AWS EKS documentation for more information.

All of this is automated and can be triggered with ./gradlew clean :core:startIntegrationServer --stacktrace, using a configuration similar to the following example.

When you want to stop your cluster, run ./gradlew :core:shutdownIntegrationServer --stacktrace. It will undeploy all CIs, remove all deployed resources from Kubernetes, and clean up all created PVCs.
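
For reference, the whole lifecycle boils down to these two Gradle invocations (the commands come from this post; the comments are illustrative):

# Start: provisions the cluster and deploys the operator using the configuration below
./gradlew clean :core:startIntegrationServer --stacktrace

# Stop: undeploys all CIs, removes Kubernetes resources, and deletes created PVCs
./gradlew :core:shutdownIntegrationServer --stacktrace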

Example#

An example for a complete configuration:

deployIntegrationServer {
    cli {
        overlays = [
                ext: [
                        fileTree(dir: "$rootDir/config/cli", includes: ["**/*.py"])
                ],
                lib: [
                        "com.xebialabs.xl-platform.test-utils:py-modules:${testUtilsVersion}@jar"
                ]
        ]
    }
    cluster {
        enable = true
        profile = 'operator'
        publicPort = 10001
    }
    clusterProfiles {
        operator {
            activeProviderName = "aws-eks"
            awsEks {
                name = 'aws-eks-test-cluster'
                region = 'us-east-1'
            }
        }
    }
    servers {
        server01 {
            dockerImage = "xebialabsunsupported/xl-deploy"
            pingRetrySleepTime = 10
            pingTotalTries = 120
            version = "${xlDeployTrialVersion}"
            overlays = [
                    conf: [
                            fileTree(dir: "$rootDir/config/conf", includes: ["*.*"])
                    ],
            ]
        }
        server02 {
        }
    }
    workers {
        worker01 {
            dockerImage = "xebialabsunsupported/deploy-task-engine"
        }
        worker02 {
        }
    }
}

The cluster will be created with the number of servers and workers specified in the configuration; in this case it will create 2 masters and 2 workers. The final URL to connect to the UI is: http://deploy.digitalai-testing.com/xl-deploy/#/explorer. If you want to use your own operator, you can change operatorImage.
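
A minimal sketch of where operatorImage fits, mirroring its placement in the awsOpenshift example later in this series; the image name below is a hypothetical placeholder:

clusterProfiles {
    operator {
        activeProviderName = "aws-eks"
        awsEks {
            name = 'aws-eks-test-cluster'
            region = 'us-east-1'
            // Hypothetical custom image; omit to use the default operator
            operatorImage = 'yourrepo/deploy-operator:1.0.0'
        }
    }
}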

· 3 min read

Requirements#

This documentation applies to version 10.4.0-1209.942 or later.

Pre-requisites#

A few prerequisites must be in place before you can run the automation. Among other things, you have to provide the following properties:

accountCredFile=...
accountName=...

For accountName, it is best to use a service account. To create a service account and obtain the JSON key file needed for accountCredFile, see Creating a service account. Assign the service account the following roles on the target project: IAM - Permissions for project for the service account. More details about IAM policies for GKE are in Create IAM policies.
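
A hedged sketch of those steps with the gcloud CLI, reusing the account and project names from the example below; the roles/container.admin role is an assumption here, so assign the roles from the linked IAM documentation instead:

# Create the service account (names match the example configuration below)
gcloud iam service-accounts create gcp-gke-test-cluster --project=apollo-playground

# Grant a role on the target project (the exact roles to assign are an assumption)
gcloud projects add-iam-policy-binding apollo-playground \
    --member="serviceAccount:gcp-gke-test-cluster@apollo-playground.iam.gserviceaccount.com" \
    --role="roles/container.admin"

# Download the JSON key referenced by accountCredFile
gcloud iam service-accounts keys create account-key.json \
    --iam-account=gcp-gke-test-cluster@apollo-playground.iam.gserviceaccount.com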

How the full flow works#

  • Installing a Docker-based Deploy instance, because Deploy is used to create the necessary resources in Kubernetes and to deploy the operator.
  • Checking out the Deploy operator and modifying its configuration based on the user input.
  • Installing the XL CLI to apply YAML files.
  • Verifying that the deployment was successful and that all required resources were created in Kubernetes. If something goes wrong, you'll be notified in the logs.

You can also check this operator GCP GKE documentation for more information.

All of this is automated and can be triggered with ./gradlew clean :core:startIntegrationServer --stacktrace, using a configuration similar to the following example.

When you want to stop your cluster, run ./gradlew :core:shutdownIntegrationServer --stacktrace. It will undeploy all CIs, remove all deployed resources from Kubernetes, and clean up all created PVCs.

Example#

An example for a complete configuration:

deployIntegrationServer {
    cli {
        overlays = [
                ext: [
                        fileTree(dir: "$rootDir/config/cli", includes: ["**/*.py"])
                ],
                lib: [
                        "com.xebialabs.xl-platform.test-utils:py-modules:${testUtilsVersion}@jar"
                ]
        ]
    }
    cluster {
        enable = true
        profile = 'operator'
        publicPort = 10001
    }
    clusterProfiles {
        operator {
            activeProviderName = "gcp-gke"
            gcpGke {
                name = 'gcp-gke-test-cluster'
                storageClass = 'standard-rwo'
                accountName = 'gcp-gke-test-cluster@apollo-playground.iam.gserviceaccount.com'
                projectName = 'apollo-playground'
                regionZone = 'us-central1-a'
                clusterNodeCount = 3
                clusterNodeVmSize = 'e2-standard-2'
            }
        }
    }
    servers {
        server01 {
            dockerImage = "xebialabsunsupported/xl-deploy"
            pingRetrySleepTime = 10
            pingTotalTries = 120
            version = "${xlDeployTrialVersion}"
            overlays = [
                    conf: [
                            fileTree(dir: "$rootDir/config/conf", includes: ["*.*"])
                    ],
            ]
        }
        server02 {
        }
    }
    workers {
        worker01 {
            dockerImage = "xebialabsunsupported/deploy-task-engine"
        }
        worker02 {
        }
    }
}

The cluster will be created with the number of servers and workers specified in the configuration; in this case it will create 2 masters and 2 workers. The final URL to connect to the UI is: http://gcp-gke-test-cluster.endpoints.apollo-playground.cloud.goog/xl-deploy/#/explorer (composed of the operator provider name and location). If you want to use your own operator, you can change operatorImage. The cluster will be created with 3 nodes of machine type e2-standard-2 (2 vCPUs and 8 GB of memory each).
The zone of the cluster will be us-central1-a; run gcloud compute zones list to see other locations. The standard-rwo storage class is used; it is enabled by the GcePersistentDiskCsiDriver add-on during cluster creation. For details, check Using the Compute Engine persistent disk CSI Driver.
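
For comparison, a roughly equivalent manual cluster creation with gcloud; the automation does this for you, so treat this as an illustrative sketch rather than the exact command it runs:

gcloud container clusters create gcp-gke-test-cluster \
    --project apollo-playground \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type e2-standard-2 \
    --addons GcePersistentDiskCsiDriver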

· 2 min read

Requirements#

This documentation applies to version 10.4.0-1209.942 or later.

Pre-requisites#

A few prerequisites must be in place before you can run the automation. Among other things, you have to provide the following properties:

azUsername=...
azPassword=...

How the full flow works#

  • Installing a Docker-based Deploy instance, because Deploy is used to create the necessary resources in Kubernetes and to deploy the operator.
  • Checking out the Deploy operator and modifying its configuration based on the user input.
  • Installing the XL CLI to apply YAML files.
  • Verifying that the deployment was successful and that all required resources were created in Kubernetes. If something goes wrong, you'll be notified in the logs.

You can also check this operator Azure AKS documentation for more information.

All of this is automated and can be triggered with ./gradlew clean :core:startIntegrationServer --stacktrace, using a configuration similar to the following example.

When you want to stop your cluster, run ./gradlew :core:shutdownIntegrationServer --stacktrace. It will undeploy all CIs, remove all deployed resources from Kubernetes, and clean up all created PVCs.

Example#

An example for a complete configuration:

deployIntegrationServer {
    cli {
        overlays = [
                ext: [
                        fileTree(dir: "$rootDir/config/cli", includes: ["**/*.py"])
                ],
                lib: [
                        "com.xebialabs.xl-platform.test-utils:py-modules:${testUtilsVersion}@jar"
                ]
        ]
    }
    cluster {
        enable = true
        profile = 'operator'
        publicPort = 10001
    }
    clusterProfiles {
        operator {
            activeProviderName = "azure-aks"
            azureAks {
                name = 'azure-aks-test-cluster'
                clusterNodeCount = 3
                clusterNodeVmSize = 'Standard_DS2_v2'
                location = 'northcentralus'
            }
        }
    }
    servers {
        server01 {
            dockerImage = "xebialabsunsupported/xl-deploy"
            pingRetrySleepTime = 10
            pingTotalTries = 120
            version = "${xlDeployTrialVersion}"
            overlays = [
                    conf: [
                            fileTree(dir: "$rootDir/config/conf", includes: ["*.*"])
                    ],
            ]
        }
        server02 {
        }
    }
    workers {
        worker01 {
            dockerImage = "xebialabsunsupported/deploy-task-engine"
        }
        worker02 {
        }
    }
}

The cluster will be created with the number of servers and workers specified in the configuration; in this case it will create 2 masters and 2 workers. The final URL to connect to the UI is: http://azure-aks-test-cluster.northcentralus.cloudapp.azure.com/xl-deploy/#/explorer (composed of the operator provider name and location). If you want to use your own operator, you can change operatorImage. The cluster will be created with 3 nodes of the default VM size Standard_DS2_v2 (2 vCPUs and 7 GiB of memory each). The location of the cluster will be northcentralus; run az account list-locations to see other locations.
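
For comparison, a roughly equivalent manual creation with the Azure CLI; this is an illustrative sketch (the resource group name is a hypothetical placeholder), not necessarily the exact command the automation runs:

az aks create \
    --resource-group my-resource-group \
    --name azure-aks-test-cluster \
    --node-count 3 \
    --node-vm-size Standard_DS2_v2 \
    --location northcentralus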

· 3 min read

Requirements#

This documentation applies to version 10.4.0-1209.942 or later.

Pre-requisites#

A few prerequisites must be in place before you can run the automation.

How the full flow works#

  • Installing a Docker-based Deploy instance, because Deploy is used to create the necessary resources in Kubernetes and to deploy the operator.
  • Checking out the Deploy operator and modifying its configuration based on the user input.
  • Installing the XL CLI to apply YAML files.
  • Verifying that the deployment was successful and that all required resources were created in Kubernetes. If something goes wrong, you'll be notified in the logs.

You can also check this documentation for more information.

All of this is automated and can be triggered with ./gradlew clean :core:startIntegrationServer --stacktrace, using a configuration similar to the following example.

note

During cluster setup there will be an info line similar to the following: "Please enter your password if requested for user ${current_username} or give user sudoers permissions '${current_username} ALL=(ALL) NOPASSWD: ${path_to_script}/update_etc_hosts.sh'." The placeholders ${current_username} and ${path_to_script} will be replaced with the correct values.

Update your /etc/sudoers file according to the info line provided in the console log. If you did not do that in time, either:

  • stop the current build and rerun the full installation after updating /etc/sudoers, or
  • run the following script: sudo "${path_to_script}/update_etc_hosts.sh"
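
As an illustration, with hypothetical concrete values substituted for the placeholders, the sudoers entry from the info line would look like this (use the exact line from your own console log, and edit the file with sudo visudo):

# ${current_username} = jdoe, ${path_to_script} = /home/jdoe/deploy/core (both hypothetical)
jdoe ALL=(ALL) NOPASSWD: /home/jdoe/deploy/core/update_etc_hosts.sh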

When you want to stop your cluster, run ./gradlew :core:shutdownIntegrationServer --stacktrace. It will undeploy all CIs, remove all deployed resources from Kubernetes, and clean up all created PVCs.

Example#

An example for a complete configuration:

deployIntegrationServer {
    cli {
        overlays = [
                ext: [
                        fileTree(dir: "$rootDir/config/cli", includes: ["**/*.py"])
                ],
                lib: [
                        "com.xebialabs.xl-platform.test-utils:py-modules:${testUtilsVersion}@jar"
                ]
        ]
    }
    cluster {
        enable = true
        profile = 'operator'
        publicPort = 10001
    }
    clusterProfiles {
        operator {
            activeProviderName = "onprem"
            onPremise {
                name = 'onprem-test-cluster'
                clusterNodeCpus = 4
                clusterNodeMemory = 15000
            }
        }
    }
    servers {
        server01 {
            dockerImage = "xebialabsunsupported/xl-deploy"
            pingRetrySleepTime = 10
            pingTotalTries = 120
            version = "${xlDeployTrialVersion}"
            overlays = [
                conf: [
                    fileTree(dir: "$rootDir/config/conf", includes: ["*.*"])
                ],
            ]
        }
        server02 {
        }
    }
    workers {
        worker01 {
            dockerImage = "xebialabsunsupported/deploy-task-engine"
        }
        worker02 {
        }
    }
}

The cluster will be created with the number of servers and workers specified in the configuration; in this case it will create 2 masters and 2 workers. The final URL to connect to the UI is: http://onprem-test-cluster.digitalai-testing.com/xl-deploy/#/explorer. If you want to use your own operator, you can change operatorImage. The cluster will use VirtualBox with 4 CPUs and 15000 MB of memory.

· 2 min read

Requirements#

This documentation applies to version 10.4.0-1209.942 or later.

Pre-requisites#

A few prerequisites must be in place before you can run the automation. Among other things, you have to provide the following properties:

ocLogin=...
ocPassword=...
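
These are the credentials the automation uses to log in to the cluster; done by hand, the equivalent would be something like the following oc login, with the API server URL taken from the example configuration below:

oc login https://api.yourhost.lnfl.p1.openshiftapps.com:6443 \
    -u <ocLogin> -p <ocPassword>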

How the full flow works#

  • Installing a Docker-based Deploy instance, because Deploy is used to create the necessary resources in Kubernetes and to deploy the operator.
  • Checking out the Deploy operator and modifying its configuration based on the user input.
  • Installing the XL CLI to apply YAML files.
  • Verifying that the deployment was successful and that all required resources were created in Kubernetes. If something goes wrong, you'll be notified in the logs.

You can also check this documentation for more information.

All of this is automated and can be triggered with ./gradlew clean :core:startIntegrationServer --stacktrace, using a configuration similar to the following example.

When you want to stop your cluster, run ./gradlew :core:shutdownIntegrationServer --stacktrace. It will undeploy all CIs, remove all deployed resources from Kubernetes, and clean up all created PVCs.

Example#

An example for a complete configuration:

deployIntegrationServer {
    cli {
        overlays = [
                ext: [
                        fileTree(dir: "$rootDir/config/cli", includes: ["**/*.py"])
                ],
                lib: [
                        "com.xebialabs.xl-platform.test-utils:py-modules:${testUtilsVersion}@jar"
                ]
        ]
    }
    cluster {
        enable = true
        profile = 'operator'
        publicPort = 10001
    }
    clusterProfiles {
        operator {
            activeProviderName = "aws-openshift"
            awsOpenshift {
                apiServerURL = 'https://api.yourhost.lnfl.p1.openshiftapps.com:6443'
                host = 'router-default.apps.yourhost.lnfl.p1.openshiftapps.com'
                name = 'aws-openshift-test-cluster'
                oauthHostName = "oauth-openshift.apps.yourhost.lnfl.p1.openshiftapps.com"
                operatorImage = 'acierto/deploy-operator:1.0.6-openshift'
                operatorPackageVersion = "1.0.1"
            }
        }
    }
    servers {
        server01 {
            dockerImage = "xebialabsunsupported/xl-deploy"
            pingRetrySleepTime = 10
            pingTotalTries = 120
            version = "${xlDeployTrialVersion}"
        }
        server02 {
        }
    }
    workers {
        worker01 {
            dockerImage = "xebialabsunsupported/deploy-task-engine"
        }
        worker02 {
        }
    }
}

The cluster will be created with the number of servers and workers specified in the configuration; in this case it will create 2 masters and 2 workers. The final URL to connect to the UI is: router-default.apps.yourhost.lnfl.p1.openshiftapps.com. If you want to use your own operator, you can change operatorImage; as you can see, that is exactly what happens in this example: acierto/deploy-operator:1.0.6-openshift is not the official operator. You can find the values for apiServerURL, host, and oauthHostName in your OpenShift cluster console.
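
If you prefer the CLI to the console, a hedged sketch for discovering these values with standard oc commands (assumes a logged-in session on OpenShift 4, where the OAuth route lives in the openshift-authentication namespace):

# apiServerURL
oc whoami --show-server

# oauthHostName
oc get route oauth-openshift -n openshift-authentication

# apps domain used by the router-default host
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'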