This deployment example is not intended for production use. Use it for testing purposes only.
This chapter provides information on how to deploy the product for testing purposes. Keep in mind that the guidelines below are not best practices and do not take security settings or password encryption into account. This deployment has been tested with a specific Kubernetes version, CNI, container runtime, and OS. If you use other solutions, use them at your own risk.
The following figure displays the environment used in this deployment example.
To deploy the product for testing purposes
Execute the following command to copy the cluster config files to the K8S cluster admin host:
scp -r <user@control_plane_address>:.kube ~
Edit the ~/.kube/config file and replace the localhost API server address (e.g., https://127.0.0.1:6443) with your control plane IP address or hostname.
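For example, assuming the control plane is reachable at control-plane.example.com (a hypothetical hostname), the server value in ~/.kube/config would change from https://127.0.0.1:6443 to https://control-plane.example.com:6443. You can make the change manually or with a one-liner such as:
sed -i 's|https://127.0.0.1:6443|https://control-plane.example.com:6443|' ~/.kube/config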
Build Docker images:
Extract the webapp-charts.zip file, and inside the webapp-charts root directory, extract the Web_App_Platform_<version>_linux_no_install.zip file. After extracting the files, you should have the following folder structure: <webapp-charts_root>/CATIANoMagicServices/WebAppPlatform/.
You will find all the files mentioned in this chapter in Web_App_Platform_<version>_linux_no_install.zip or on the software download website (except for the Teamwork Cloud keystore and SSL certificate).
If you use only the .p12 file on the Teamwork Cloud server, execute the following commands to extract the certificate and key from the .p12 file:
openssl pkcs12 -in INFILE.p12 -out twcloud.crt -nokeys
openssl pkcs12 -in INFILE.p12 -out twcloud.key -nodes -nocerts
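To sanity-check the extracted certificate (an optional step, not part of the original procedure), you can print its subject and validity dates:
openssl x509 -in twcloud.crt -noout -subject -dates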
Execute the following command to create a directory for each application you are going to use:
mkdir -p imagebuild/{admin-console,authentication,artemis,collaborator,document-exporter,oslc,reports,resources,resource-usage-map,simulation,webapp}
Copy all the required files to their directories (except the artemis directory). You will find the Docker file for each service in dockerfiles.zip. After placing the Docker files in their appropriate directories, rename each one to Dockerfile. The directories are listed below:
admin-console
authentication
collaborator
document-exporter
oslc
reports
resource-usage-map
resources
simulation
webapp
Execute the following command to start the local Docker registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
In this case, a local Docker registry is run on the K8S cluster admin host. It uses port 5000, so make sure this port is open on your host firewall and accessible to your Kubernetes cluster.
This local Docker registry should be used for testing purposes only and is not considered production-ready. In a production environment, run the image registry with TLS and authentication.
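To confirm that the registry is up and reachable from the admin host (an optional check; admin.example.com:5000 is the example registry address used throughout this chapter), query the registry API:
curl http://admin.example.com:5000/v2/
An empty JSON response ({}) with HTTP status 200 means the registry is responding.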
Edit the /etc/docker/daemon.json file to allow Docker to use the insecure local registry. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following content:
{
  "insecure-registries": ["admin.example.com:5000"]
}
Execute the following command to restart the Docker service:
sudo systemctl restart docker
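You can verify that Docker picked up the setting (the registry address should be listed under Insecure Registries):
docker info | grep -A 2 "Insecure Registries"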
If the registry container has started, proceed with building the Artemis image (for more information, refer to https://github.com/apache/activemq-artemis/tree/main/artemis-docker):
Execute the following commands:
cd imagebuild
git clone https://github.com/apache/activemq-artemis.git
cd activemq-artemis/artemis-docker
Execute the following command to download Artemis binaries:
wget https://archive.apache.org/dist/activemq/activemq-artemis/2.28.0/apache-artemis-2.28.0-bin.zip
Extract the following file:
unzip apache-artemis-2.28.0-bin.zip
Execute the following commands to build the image:
./prepare-docker.sh --from-local-dist --local-dist-path ./apache-artemis-2.28.0
cd apache-artemis-2.28.0
docker build -f ./docker/Dockerfile-centos7-11 -t artemis-centos .
Execute the following command to tag the built image:
docker tag artemis-centos admin.example.com:5000/artemis:2.28
Execute the following command to push the image to your repository:
docker push admin.example.com:5000/artemis:2.28
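To verify that the image reached the registry (an optional check via the registry HTTP API):
curl http://admin.example.com:5000/v2/artemis/tags/list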
Execute the commands below for each service that you want to build images for, inside each service's own directory, as shown in the examples below.
docker build -f Dockerfile -t {APP_NAME} .
docker tag {APP_NAME}:{VERSION} {IMAGE_REPO_URL}:5000/{APP_NAME}:{VERSION}
docker push {IMAGE_REPO_URL}:5000/{APP_NAME}:{VERSION}
The following command example builds images for Admin Console:
cd imagebuild/admin-console
docker build -f Dockerfile -t admin .
The following command example tags the built image:
docker tag admin:latest admin.example.com:5000/admin:latest
The following command example pushes the image to the registry:
docker push admin.example.com:5000/admin:latest
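After pushing all images, you can list the repositories stored in the registry (an optional check via the registry HTTP API):
curl http://admin.example.com:5000/v2/_catalog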
Edit the /etc/containerd/config.toml file by adding the following lines to the "mirrors" section:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."admin.example.com:5000"]
  endpoint = ["http://admin.example.com:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."admin.example.com:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."admin.example.com:5000".tls]
  insecure_skip_verify = true
Keep in mind that indentation is very important; it must follow the example above.
Execute the following command to restart Containerd services on all cluster nodes:
sudo systemctl restart containerd
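To verify that a node can pull from the registry through containerd, you can pull one of the pushed images with crictl (assuming crictl is installed on the node):
sudo crictl pull admin.example.com:5000/admin:latest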
Execute the following commands to add dependency repos:
helm repo add kedacore https://kedacore.github.io/charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
cd <webapp-charts_root>
Execute the following command to update chart dependencies:
helm dependency update
Execute the command below to install KEDA and its Custom Resource Definitions (CRDs). This will create all KEDA services and deployments.
helm install keda kedacore/keda --namespace keda --create-namespace --wait --version 2.6.0
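To check that the KEDA pods have started, you can run:
kubectl get pods --namespace keda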
KEDA and MetalLB should be installed as separate resources. CRDs are globally shared and therefore fragile: once a CRD is installed, you have to assume it is shared across multiple namespaces and groups of users. For that reason, installing, modifying, and deleting CRDs has ramifications for all users and systems of that cluster.
Execute the following command to deploy MetalLB:
helm install metallb bitnami/metallb --namespace metallb-system --create-namespace --wait
To check the MetalLB deployment status, execute the following command on the control plane node or the K8S cluster admin host:
kubectl get -n metallb-system all
You should see output listing the MetalLB controller and speaker resources.
If the MetalLB pod status is 'Running' but READY is 0/1, give it some time to start and repeat the command.
Create configuration files for MetalLB:
Make sure to complete all the substeps of this step to create and apply both configurations. Otherwise, the product will not work properly.
Go to the directory where you extracted the webapp-charts.zip file:
cd <webapp-charts_root>
Edit the metalLB/metallb_ipaddresspool.yaml file as shown below:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.1/32
  #- 192.168.10.0/24
  #- 192.168.9.1-192.168.9.5
  #- fc00:f853:0ccd:e799::/124
Edit the metalLB/l2advertisement.yaml file as shown below. You can change the metadata name as needed. Keep in mind that the entry under ipAddressPools must match the IPAddressPool name defined in the previous step.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
Execute the following command to apply the MetalLB configuration:
kubectl apply -f metalLB/metallb_ipaddresspool.yaml
Execute the following command to check the applied configuration:
kubectl describe ipaddresspools.metallb.io first-pool -n metallb-system
Execute the following command to create the MetalLB advertisement:
kubectl apply -f metalLB/l2advertisement.yaml
Execute the following command to check the advertisement configuration:
kubectl describe l2advertisements.metallb.io example -n metallb-system
Execute the commands below to collect configuration files for all applications and copy them to their directories. These files will be used to create config maps and mount them into pods.
Make sure that you work in the directory where you extracted the webapp-charts.zip file (the <webapp-charts_root> directory).
mkdir -p configs/{auth,cassandra,ssl,tomcat,twcloud,wap}
cp <webapp-charts_root>/CATIANoMagicServices/WebAppPlatform/shared/conf/authserver.properties configs/auth
cp <webapp-charts_root>/CATIANoMagicServices/WebAppPlatform/shared/conf/webappplatform.properties configs/wap
cp <webapp-charts_root>/CATIANoMagicServices/WebAppPlatform/shared/conf/logback.xml configs/wap
cp <webapp-charts_root>/CATIANoMagicServices/WebAppPlatform/shared/conf/revision.txt configs/wap
cp <webapp-charts_root>/CATIANoMagicServices/conf/catalina.properties configs/tomcat
cp <webapp-charts_root>/CATIANoMagicServices/conf/server.xml configs/tomcat
cp <webapp-charts_root>/keystore.p12 configs/twcloud
cp <webapp-charts_root>/teamworkcloud.crt configs/twcloud
Do one of the following:
Execute this command in the webapp-charts root directory to deploy services to the default namespace:
helm install webapp .
Execute this command in the helm parent chart directory to create a namespace and deploy services in this namespace:
helm install webapp . --namespace=wap --create-namespace --wait
This helm chart includes Zookeeper and Ingress-nginx as dependencies. They will be deployed automatically.
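To check the release status after installation (this example assumes the namespaced variant above), you can run:
helm status webapp --namespace wap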
After all web applications are deployed, execute the following command to check their status:
kubectl get all
If pods do not run, check for problems by executing this command:
kubectl describe pod pod_name
Execute the following command to check Ingress rules:
kubectl describe ingress
You should get an output similar to this:
Name:             ingress.resource
Namespace:        default
Address:          **.**.***.***
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  webapp-tls-secret terminates nm-wap10639.dsone.3ds.com
Rules:
  Host                       Path                 Backends
  ----                       ----                 --------
  nm-wap10639.dsone.3ds.com  /admin               webapp-adminconsole:8443 (10.233.105.39:8443)
                             /authentication      webapp-authentication:8443 (10.233.88.69:8443)
                             /collaborator        webapp-collaborator:8443 (10.233.88.75:8443)
                             /document-exporter   webapp-docexporter:8443 (10.233.105.22:8443)
                             /oslc                webapp-oslc:8443 (10.233.105.23:8443)
                             /reports             webapp-reports:8443 (10.233.73.246:8443)
                             /resources           webapp-resources:8443 (10.233.88.67:8443)
                             /resource-usage-map  webapp-rum:8443 (10.233.105.40:8443)
                             /simulation          webapp-simulation:8443 (10.233.88.123:8443)
                             /webapp              webapp-webapp:8443 (10.233.105.29:8443)
Annotations:
  meta.helm.sh/release-name: webapp
  meta.helm.sh/release-namespace: default
  nginx.ingress.kubernetes.io/affinity: cookie
  nginx.ingress.kubernetes.io/affinity-canary-behavior: sticky
  nginx.ingress.kubernetes.io/affinity-mode: persistent
  nginx.ingress.kubernetes.io/backend-protocol: https
  nginx.ingress.kubernetes.io/proxy-body-size: 100m
  nginx.ingress.kubernetes.io/proxy-connect-timeout: 600
  nginx.ingress.kubernetes.io/proxy-read-timeout: 600
  nginx.ingress.kubernetes.io/proxy-send-timeout: 600
  nginx.ingress.kubernetes.io/send-timeout: 600
  nginx.ingress.kubernetes.io/session-cookie-name: COOKIE
Events: <none>
In an internet browser, go to domain_name/webapp, where domain_name is the DNS A record that points to the Ingress external IP. If you did not add a DNS record, use the Ingress external IP address instead of the domain name. The browser should show a warning because of the self-signed certificate. Accept it to proceed, and you will be redirected to the Teamwork Cloud Authentication web page.
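As a final check from the command line, you can confirm that the endpoint responds over HTTPS while ignoring the self-signed certificate (replace domain_name with your DNS record or the Ingress external IP):
curl -k -I https://domain_name/webapp/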