
  1. Install kubectl as described in https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/.
  2. Install Helm on the K8S cluster admin host as described in https://helm.sh/docs/intro/install.
  3. Use the following command (or another method) to copy the cluster config files to the K8S cluster admin host:

    Code Block
    languagebash
    themeRDark
    scp -r <user@control_plane_address>:.kube ~
    

    Then open ~/.kube/config for editing and replace the localhost address (e.g., https://127.0.0.1:6443) with your control plane IP address or hostname.
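
    For example, assuming the default API server port, a sed one-liner like the following performs the replacement (a sketch; substitute your actual control plane address):

    Code Block
    languagebash
    themeRDark
    # Replace the localhost API server address with the control plane address.
    sed -i "s|https://127.0.0.1:6443|https://<control_plane_address>:6443|" ~/.kube/config
    # Verify that the cluster is reachable with the updated config.
    kubectl get nodes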

  4. Prepare the required files and directories:

    1. Download and extract the twc-services-charts.zip file.
    2. Download the installation package for your product version. For more information, refer to Downloading installation files.
    3. Extract the packaged files, and inside the ~/twc-services-charts directory (where you extracted the twc-services-charts.zip file), extract the twcloud_<version>_no_install_linux64.zip file. After extracting the files, you should have the following folder structure: ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/.

      Note

      You will find all files mentioned in this chapter in twcloud_<version>_no_install_linux64.zip or on the software download website (except for Teamwork Cloud keystore and SSL certificate):

      • The ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps directory contains application .war files.
      • The ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf directory contains other configuration files.
      • The ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/conf directory contains server.xml.
      • The ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf/data directory contains files for Cameo Collaborator and document-exporter.
      • The ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/configuration directory contains Teamwork Cloud configuration files.
      • The ~/twc-services-charts/CATIANoMagicServices/Utilities/Cassandra directory contains the .jar file for Cassandra's communication with Zookeeper.


  5. Go to the ~/twc-services-charts directory and execute the following command to generate a private key and a self-signed certificate. Modify the command by entering your own DN identifiers.
    Code Block
    languagebash
    themeRDark
    openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout tls.key -out tls.crt -subj "/C=LT/ST=State/L=City/O=Organization/OU=Department/CN=example.com"
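
    If you want to confirm the certificate's subject and validity period before proceeding, you can inspect it (an optional check):

    Code Block
    languagebash
    themeRDark
    # Print the subject and validity dates of the generated certificate.
    openssl x509 -in tls.crt -noout -subject -dates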
  6. Execute the following command to create a PKCS#12 keystore. If you change the keystore name or password, do not forget to change them in the configuration files as well. For more information about the CA certificate and P12 or PFX files, see Managing SSL certificate.
    Code Block
    languagebash
    themeRDark
    openssl pkcs12 -export -name teamworkcloud -in tls.crt -inkey tls.key -out keystore.p12 -password pass:nomagic
    Note
    The tls.crt and tls.key files will be used for ingress-nginx, so the file names referenced in the ingress-nginx configuration must remain the same.
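    To double-check the keystore, you can list its contents (an optional check, assuming the default name and password used above):
    Code Block
    languagebash
    themeRDark
    # List certificates in the PKCS#12 keystore without printing private keys.
    openssl pkcs12 -info -in keystore.p12 -password pass:nomagic -nokeys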
  7. Execute the following commands to prepare the helm-charts root directory:

    Code Block
    languagebash
    themeRDark
    cd ~/twc-services-charts/helm-charts
    mkdir -p configs/{auth,cassandra,ssl,tomcat,twcloud,wap} configs/twcloud/ssl
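
    You can confirm the resulting directory layout with a quick listing (optional):

    Code Block
    languagebash
    themeRDark
    # List all directories created under configs.
    find ~/twc-services-charts/helm-charts/configs -type d | sort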


  8. Copy the generated certificate files to the appropriate locations by executing the following commands:
    Code Block
    languagebash
    themeRDark
    cd ~/twc-services-charts
    cp tls.crt tls.key ~/twc-services-charts/helm-charts/configs/ssl
    cp keystore.p12 ~/twc-services-charts/helm-charts/configs/twcloud/ssl
    cp tls.crt ~/twc-services-charts/helm-charts/configs/twcloud/ssl/teamworkcloud.crt
  9. To get the Cassandra files, download Apache Cassandra (tested with version 4.1.4) from https://archive.apache.org/dist/cassandra/4.1.4/apache-cassandra-4.1.4-bin.tar.gz to the ~/twc-services-charts directory, for example:
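    A sketch of the download with curl (wget works equally well):
    Code Block
    languagebash
    themeRDark
    cd ~/twc-services-charts
    # Download the tested Apache Cassandra release from the Apache archive.
    curl -LO https://archive.apache.org/dist/cassandra/4.1.4/apache-cassandra-4.1.4-bin.tar.gz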
  10. Execute the following commands to extract the Cassandra .tar file:

    Code Block
    languagebash
    themeRDark
    tar xzf apache-cassandra-4.1.4-bin.tar.gz
    cd ~/twc-services-charts/apache-cassandra-4.1.4/conf


  11. In the cassandra.yaml file, edit the Cassandra settings as shown below.

    Code Block
    languagebash
    themeRDark
    sed -i "s/# commitlog_total_space:.*/commitlog_total_space: 8192MiB/g" cassandra.yaml
    sed -i "s/commitlog_segment_size:.*/commitlog_segment_size: 192MiB/g" cassandra.yaml
    sed -i "s/read_request_timeout:.*/read_request_timeout: 1800000ms/g" cassandra.yaml
    sed -i "s/range_request_timeout:.*/range_request_timeout: 1800000ms/g" cassandra.yaml
    sed -i "s/^write_request_timeout:.*/write_request_timeout: 1800000ms/g" cassandra.yaml
    sed -i "s/cas_contention_timeout:.*/cas_contention_timeout: 1000ms/g" cassandra.yaml
    sed -i "s/truncate_request_timeout:.*/truncate_request_timeout: 1800000ms/g" cassandra.yaml
    sed -i "s/request_timeout:.*/request_timeout: 1800000ms/g" cassandra.yaml
    sed -i "s/batch_size_warn_threshold:.*/batch_size_warn_threshold: 3000KiB/g" cassandra.yaml
    sed -i "s/batch_size_fail_threshold:.*/batch_size_fail_threshold: 5000KiB/g" cassandra.yaml
    sed -i "s/# internode_application_send_queue_reserve_endpoint_capacity:.*/internode_application_send_queue_reserve_endpoint_capacity: 512MiB/g" cassandra.yaml
    sed -i "s/# internode_application_receive_queue_reserve_endpoint_capacity:.*/internode_application_receive_queue_reserve_endpoint_capacity: 512MiB/g" cassandra.yaml

     

  12. Execute the following command to copy files to the twc-services-charts/helm-charts/configs/cassandra directory:

    Code Block
    languagebash
    themeRDark
    cp cassandra.yaml logback.xml ~/twc-services-charts/helm-charts/configs/cassandra
  13. Execute the following command to copy the authserver.properties file to the appropriate location:
    Code Block
    languagebash
    themeRDark
    cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf/authserver.properties ~/twc-services-charts/helm-charts/configs/auth
  14. Execute the following commands to copy Tomcat's files to the appropriate location:
    Code Block
    languagebash
    themeRDark
    cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/conf/catalina.properties ~/twc-services-charts/helm-charts/configs/tomcat
    cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/conf/server.xml ~/twc-services-charts/helm-charts/configs/tomcat
    sed -i "s/..\/TeamworkCloud\/configuration\/keystore.p12/.\/shared\/conf\/keystore.p12/g" ~/twc-services-charts/helm-charts/configs/tomcat/server.xml
  15. Execute the following commands to copy the application.conf file and edit it as shown below:
    Code Block
    languagebash
    themeRDark
    cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/configuration/application.conf ~/twc-services-charts/helm-charts/configs/twcloud/
    cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/configuration/logback.xml ~/twc-services-charts/helm-charts/configs/twcloud/
    sed -i 's/contact-points.*/contact-points = [${?CASSANDRA_SEED0}][${?CASSANDRA_SEED1}][${?CASSANDRA_SEED2}]/g' ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
    sed -i "s/local.size.*/local.size = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
    sed -i "s/remote.size.*/remote.size = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
    sed -i "s/replication-factor.*/replication-factor = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
    
  16. Execute the following commands to edit the jvm.options file:
    Code Block
    languagebash
    themeRDark
    cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/jvm.options ~/twc-services-charts/helm-charts/configs/twcloud/
    sed -i 's/-Xmx.*/-Xmx${TWC_XMX}/g' ~/twc-services-charts/helm-charts/configs/twcloud/jvm.options
  17. Execute the following commands to copy Web Application Platform files to the appropriate location:
    Code Block
    languagebash
    themeRDark
    cd ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf
    cp revision.txt log4j2.properties webappplatform.properties ~/twc-services-charts/helm-charts/configs/wap/
  18. Make sure all the files listed below have been copied into their directories (a quick check script follows the list):

    auth
      • authserver.properties
    cassandra
      • cassandra.yaml
      • logback.xml
    ssl
      • tls.crt
      • tls.key
    tomcat
      • catalina.properties
      • server.xml
    twcloud
      • application.conf
      • logback.xml
      • jvm.options
    twcloud/ssl
      • keystore.p12
      • teamworkcloud.crt
    wap
      • revision.txt
      • log4j2.properties
      • webappplatform.properties
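
    The following is a minimal check script for this layout (assuming the configs root created in step 7):

    Code Block
    languagebash
    themeRDark
    cd ~/twc-services-charts/helm-charts/configs
    # Report any file from the list above that is missing.
    for f in auth/authserver.properties cassandra/cassandra.yaml cassandra/logback.xml \
             ssl/tls.crt ssl/tls.key tomcat/catalina.properties tomcat/server.xml \
             twcloud/application.conf twcloud/logback.xml twcloud/jvm.options \
             twcloud/ssl/keystore.p12 twcloud/ssl/teamworkcloud.crt \
             wap/revision.txt wap/log4j2.properties wap/webappplatform.properties; do
      [ -f "$f" ] || echo "MISSING: $f"
    done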


  19. Execute the following command to create a directory for each application you are going to use:

    Code Block
    languagebash
    themeRDark
    cp -r ~/twc-services-charts/dockerfiles ~/twc-services-charts/imagebuild
    
  20. Copy all the files listed below to their directories (the artemis directory is handled separately in step 26).

    admin-console
      • admin.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    authentication
      • authentication.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    collaborator
      • collaborator.war
      • Dockerfile
      • data (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf/. Before copying, remove the document-exporter directory from the data directory.)
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    document-exporter
      • document-exporter.war
      • Dockerfile
      • data (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf/. Before copying, remove the collaborator directory from the data directory.)
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    oslc
      • oslc.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    reports
      • reports.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    resource-usage-map
      • resource-usage-map.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    resources
      • resources.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    simulation
      • simulation.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    webapp
      • webapp.war
      • Dockerfile
      • ROOT (The directory is located in ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/webapps.)
    twcloud
      • Dockerfile
      • TeamworkCloud (The directory is located in ~/twc-services-charts/CATIANoMagicServices.)


  21. Execute the following command to start the local Docker registry:

    Code Block
    languagebash
    themeRDark
    docker run -d -p 5000:5000 --restart=always --name registry registry:2
    Note
    In this case, a local Docker registry is run on the K8S cluster admin host. It uses port 5000, so make sure this port is open on your host firewall and accessible to your Kubernetes cluster.
    Warning
    This local Docker registry should be used for testing purposes only and is not considered production-ready. In the production environment, run the image registry with TLS and authentication.


  22. Add your repo URL (FQDN of the K8S cluster admin host) to the Docker /etc/docker/daemon.json file.
  23. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following content:

    Code Block
    {
    "insecure-registries": ["admin.example.com:5000"]
    }


  24. Execute the following command to restart the Docker service:

    Code Block
    languagebash
    themeRDark
    sudo systemctl restart docker


  25. For each service that you want to build an image for, execute the commands below inside that service's directory, as shown in the examples in substeps a, b, and c.

    Code Block
    languagebash
    themeRDark
    docker build -f Dockerfile -t {APP_NAME}:{VERSION} .
    docker tag {APP_NAME}:{VERSION} {IMAGE_REPO_URL}/{APP_NAME}:{VERSION}
    docker push {IMAGE_REPO_URL}/{APP_NAME}:{VERSION}


    Note
    • {IMAGE_REPO_URL} - your image repository URL, including the port if needed (e.g., admin.example.com:5000 for the local registry above). It can differ depending on the image registry provider.
    • {APP_NAME} - application name.
    • {VERSION} - version to be used for the tag.


    1. The following command example builds an image for Admin Console:

      Code Block
      languagebash
      themeRDark
      cd ~/twc-services-charts/imagebuild/admin-console
      docker build -f Dockerfile -t admin .
    2. The following command example tags the built image:

      Code Block
      languagebash
      themeRDark
      docker tag admin:latest admin.example.com:5000/admin:latest
    3. The following command example pushes the image to the registry:

      Code Block
      languagebash
      themeRDark
      docker push admin.example.com:5000/admin:latest
      Info

      After the Docker images are pushed, check the names of the local repositories:
      curl -s http://admin.example.com:5000/v2/_catalog
      curl -s http://admin.example.com:5000/v2/<repository_name>/tags/list
      Example:
      curl -s http://admin.example.com:5000/v2/webapp/tags/list
      REGISTRY_URL="http://admin.example.com:5000/v2"; curl -s $REGISTRY_URL/_catalog | jq -r '.repositories[]' | xargs -I {} sh -c 'echo "Repository: {}"; curl -s '"$REGISTRY_URL"'/{}/tags/list | jq -r ".tags[]"'

  26. Execute the commands below for ActiveMQ Artemis (this is a working example):
    Code Block
    languagebash
    themeRDark
    docker pull apache/activemq-artemis:2.32.0
    docker tag apache/activemq-artemis:2.32.0 admin.example.com:5000/artemis
    docker push admin.example.com:5000/artemis
  27. To be able to pull images to your Kubernetes cluster nodes, add your image repository to the Containerd configuration on all cluster nodes:
    1. Edit the /etc/containerd/config.toml file by adding the following lines to the "mirrors" part:

      Code Block
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."admin.exmple.com:5000"]
                    endpoint = ["http://admin.example.com:5000"]
      
            [plugins."io.containerd.grpc.v1.cri".registry.configs." admin.example.com"]
              [plugins."io.containerd.grpc.v1.cri".registry.configs." admin.example.com".tls]
               insecure_skip_verify = true
      Note

      Keep in mind that alignment is very important, as shown in the example above.

    2. Execute the following command to restart Containerd services on all cluster nodes:

      Code Block
      languagebash
      themeRDark
      sudo systemctl restart containerd
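
      If crictl is installed on a node, you can confirm that images can now be pulled from the registry (optional; this uses the example registry and an image pushed earlier, so adjust the names to your setup):

      Code Block
      languagebash
      themeRDark
      # Pull a previously pushed image through containerd to test the registry config.
      sudo crictl pull admin.example.com:5000/admin:latest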
  28. Execute the following commands to add dependency repos:

    Code Block
    languagebash
    themeRDark
    helm repo add kedacore https://kedacore.github.io/charts
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    
    Then go to the ~/twc-services-charts/helm-charts directory.
  29. Execute the following command to update chart dependencies:

    Code Block
    languagebash
    themeRDark
    helm dependency update
    
  30. Execute the following command to install Custom Resource Definitions (CRDs). This will create all the services and deployments.

    Code Block
    languagebash
    themeRDark
    helm install keda kedacore/keda --namespace keda --create-namespace --wait --version 2.13.2
    Tip

    You can check the status of the services and pods with this command: kubectl get all -n keda

    Info

    Keda and MetalLB should be added as separate resources.

    Since CRDs (being a globally shared resource) are fragile, you have to assume that once a CRD is installed, it is shared across multiple namespaces and groups of users. For that reason, installing, modifying, and deleting CRDs is a process that has ramifications for all users and systems of that cluster.

  31. Execute the following command to deploy MetalLB:

    Code Block
    languagebash
    themeRDark
    helm install metallb bitnami/metallb --namespace metallb-system --create-namespace --wait
  32. To check the MetalLB deployment status, execute the following command on the control plane node or the K8S cluster admin host:

    Code Block
    languagebash
    themeRDark
    kubectl get -n metallb-system all

    Check the output to verify that the MetalLB pods and services are created.

    Note

    If you see that the MetalLB status is 'Running,' but READY is 0/1, give it some time to start and repeat the command.

  33. Create configuration files for MetalLB as instructed below. Make sure to complete all the substeps of this step to create and apply both configurations. Otherwise, the product will not work properly.

    1. Edit the ~/twc-services-charts/helm-charts/crd/metalLB/metallb_ipaddresspool.yaml file as shown below:

      Note
      • Your IT department should give you a reserved IP address or a range of IP addresses (depending on your needs) with a DNS A record that points to this IP address.
      • At least one IP address with a domain name should be reserved for Ingress.
      • Configuration files should be formatted in "yaml".
      • The first IP address configured in metallb_ipaddresspool.yaml will be assigned for Ingress.


      Code Block
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: first-pool
        namespace: metallb-system
      spec:
        addresses:
        - 192.168.1.1/32
        #- 192.168.10.0/24
        #- 192.168.9.1-192.168.9.5
        #- fc00:f853:0ccd:e799::/124
    2. Edit (if necessary) the ~/twc-services-charts/helm-charts/crd/metalLB/l2advertisement.yaml file as shown below. You can change the metadata name as needed. Keep in mind that the name under ipAddressPools must match the IPAddressPool name defined in metallb_ipaddresspool.yaml.

      Code Block
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: example
        namespace: metallb-system
      spec:
        ipAddressPools:
        - first-pool
    3. Execute the following command to apply the MetalLB configuration:

      Code Block
      languagebash
      themeRDark
      kubectl apply -f ~/twc-services-charts/helm-charts/crd/metalLB/metallb_ipaddresspool.yaml
    4. Execute the following command to check the applied configuration:

      Code Block
      languagebash
      themeRDark
      kubectl describe ipaddresspools.metallb.io first-pool -n metallb-system
    5. Execute the following command to create the MetalLB advertisement:

      Code Block
      languagebash
      themeRDark
      kubectl apply -f ~/twc-services-charts/helm-charts/crd/metalLB/l2advertisement.yaml
    6. Execute the following command to check the advertisement configuration:

      Code Block
      languagebash
      themeRDark
      kubectl describe l2advertisements.metallb.io example -n metallb-system
  34. Find the values.yaml file in the parent helm chart (~/twc-services-charts/helm-charts) and provide the values of the parameters in the file to enable or disable specific applications or parts of the configuration.
  35. Do one of the following:

    • To deploy services to the default namespace, execute this command in the helm parent chart directory (you can change "twc" to any release name):

      Code Block
      languagebash
      themeRDark
      helm install twc .
    • To create a namespace and deploy services in this namespace, execute this command in the helm parent chart directory:

      Code Block
      languagebash
      themeRDark
      helm install twc . --namespace=twc --create-namespace --wait
      Note

      This helm chart includes Zookeeper and Ingress-nginx as dependencies. They will be deployed automatically.

  36. After all web applications are deployed, execute the following command to check their status:

    Code Block
    languagebash
    themeRDark
    kubectl get all
    Note
    • All pods should be READY:1/1, STATUS:Running.
    • The ingress-nginx service should be Type:LoadBalancer.
    • EXTERNAL-IP should match the address that you provided in the MetalLB configuration.
    • If pods do not run, check for problems by executing this command:
      kubectl describe pod <pod_name>
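    You can also inspect a failing pod's container logs and recent cluster events (optional):
    Code Block
    languagebash
    themeRDark
    # Show the logs of a specific pod, then recent events, newest last.
    kubectl logs <pod_name>
    kubectl get events --sort-by=.metadata.creationTimestamp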
  37. Execute the following command to check Ingress rules:

    Code Block
    languagebash
    themeRDark
    kubectl describe ingress

    You should get an output similar to this:

    Code Block
    Name:             ingress.resource
    Namespace:        default
    Address:          **.**.***.***
    Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
    TLS:
      webapp-tls-secret terminates ingress.example.com
    Rules:
      Host                       Path  Backends
      ----                       ----  --------
      ingress.example.com
                                 /admin                webapp-adminconsole:8443 (10.233.105.39:8443)
                                 /authentication       webapp-authentication:8443 (10.233.88.69:8443)
                                 /collaborator         webapp-collaborator:8443 (10.233.88.75:8443)
                                 /document-exporter    webapp-docexporter:8443 (10.233.105.22:8443)
                                 /oslc                 webapp-oslc:8443 (10.233.105.23:8443)
                                 /reports              webapp-reports:8443 (10.233.73.246:8443)
                                 /resources            webapp-resources:8443 (10.233.88.67:8443)
                                 /resource-usage-map   webapp-rum:8443 (10.233.105.40:8443)
                                 /simulation           webapp-simulation:8443 (10.233.88.123:8443)
                                 /webapp               webapp-webapp:8443 (10.233.105.29:8443)
    Annotations:                 meta.helm.sh/release-name: webapp
                                 meta.helm.sh/release-namespace: default
                                 nginx.ingress.kubernetes.io/affinity: cookie
                                 nginx.ingress.kubernetes.io/affinity-canary-behavior: sticky
                                 nginx.ingress.kubernetes.io/affinity-mode: persistent
                                 nginx.ingress.kubernetes.io/backend-protocol: https
                                 nginx.ingress.kubernetes.io/proxy-body-size: 100m
                                 nginx.ingress.kubernetes.io/proxy-connect-timeout: 600
                                 nginx.ingress.kubernetes.io/proxy-read-timeout: 600
                                 nginx.ingress.kubernetes.io/proxy-send-timeout: 600
                                 nginx.ingress.kubernetes.io/send-timeout: 600
                                 nginx.ingress.kubernetes.io/session-cookie-name: COOKIE
    Events:                      <none>
    
    

     

  38. Test the product deployment:
    1. In an internet browser, go to https://ingress.example.com/webapp (the DNS A record that points to the Ingress external IP address reserved earlier).

      Note

      If you added a DNS record to bind the domain name to the IP address, you can use the domain name instead of the IP. The browser should show a warning because of the self-signed certificate. Accept it to proceed, and you will be redirected to the Teamwork Cloud Authentication web page.



    2. Log in with your credentials, and you will be redirected back to Web Application Platform.
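
      You can also check the deployment from the command line (optional; -k accepts the self-signed certificate):

      Code Block
      languagebash
      themeRDark
      # Expect an HTTP 200 or a redirect to the authentication page.
      curl -k -I https://ingress.example.com/webapp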