This deployment example is not intended for production use. Use it for testing purposes only.
This chapter provides information on how to deploy the product for testing purposes. Keep in mind that the guidelines provided below are not considered best practices and do not take security settings or password encryption into account. This deployment has been tested with a specific Kubernetes version, CNI, container runtime, and OS. If you use other solutions, use them at your own risk.
The following figure displays the environment used in this deployment example.
To deploy the product for testing purposes
Use the following command (or an equivalent method) to copy the cluster config files to the K8S cluster admin host:
scp -r <user@control_plane_address>:.kube ~
Then open ~/.kube/config for editing and replace the localhost address (e.g., https://127.0.0.1:6443) with your control plane IP address or hostname.
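For example, assuming a hypothetical control plane hostname of k8s-control.example.com, the replacement could be scripted and then verified as follows (a minimal sketch; substitute your own address):

# Replace the localhost endpoint with your control plane address (hypothetical hostname shown).
sed -i "s|https://127.0.0.1:6443|https://k8s-control.example.com:6443|" ~/.kube/config
# Verify that the admin host can reach the cluster.
kubectl get nodes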
Prepare the required files and directories:
Extract the packaged files, and inside the ~/twc-services-charts directory (where you extracted the twc-services-charts.zip file), extract the twcloud_<version>_no_install_linux64.zip file. After extracting the files, you should have the following folder structure: ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/.
You will find all files mentioned in this chapter in twcloud_<version>_no_install_linux64.zip or on the software download website, except for the Magic Collaboration Studio keystore and SSL certificate, which you can generate with the following commands:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout tls.key -out tls.crt -subj "/C=LT/ST=State/L=City/O=Organization/OU=Department/CN=example.com"
openssl pkcs12 -export -name teamworkcloud -in tls.crt -inkey tls.key -out keystore.p12 -password pass:nomagic
The tls.crt and tls.key files will be used by ingress-nginx, so the file names must remain the same.
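As an optional check, you can inspect the generated certificate and keystore with openssl before copying them (pass:nomagic matches the password used in the command above):

# Show the subject and validity period of the self-signed certificate.
openssl x509 -in tls.crt -noout -subject -dates
# List the certificate stored in the PKCS#12 keystore without exporting keys.
openssl pkcs12 -info -in keystore.p12 -passin pass:nomagic -nokeys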
Execute the following commands to prepare the helm-charts root directory:
cd ~/twc-services-charts/helm-charts
mkdir -p configs/{auth,cassandra,ssl,tomcat,twcloud,wap} configs/twcloud/ssl
cd ~/twc-services-charts
cp tls.crt tls.key ~/twc-services-charts/helm-charts/configs/ssl
cp keystore.p12 ~/twc-services-charts/helm-charts/configs/twcloud/ssl
cp tls.crt ~/twc-services-charts/helm-charts/configs/twcloud/ssl/teamworkcloud.crt
Execute the following commands to extract the Cassandra .tar file:
tar xzf apache-cassandra-4.1.4-bin.tar.gz
cd ~/twc-services-charts/apache-cassandra-4.1.4/conf
In the cassandra.yaml file, edit the Cassandra settings as shown below.
sed -i "s/# commitlog_total_space:.*/commitlog_total_space: 8192MiB/g" cassandra.yaml sed -i "s/commitlog_segment_size:.*/commitlog_segment_size: 192MiB/g" cassandra.yaml sed -i "s/read_request_timeout:.*/read_request_timeout: 1800000ms/g" cassandra.yaml sed -i "s/range_request_timeout:.*/range_request_timeout: 1800000ms/g" cassandra.yaml sed -i "s/^write_request_timeout:.*/write_request_timeout: 1800000ms/g" cassandra.yaml sed -i "s/cas_contention_timeout:.*/cas_contention_timeout: 1000ms/g" cassandra.yaml sed -i "s/truncate_request_timeout:.*/truncate_request_timeout: 1800000ms/g" cassandra.yaml sed -i "s/request_timeout:.*/request_timeout: 1800000ms/g" cassandra.yaml sed -i "s/batch_size_warn_threshold:.*/batch_size_warn_threshold: 3000KiB/g" cassandra.yaml sed -i "s/batch_size_fail_threshold:.*/batch_size_fail_threshold: 5000KiB/g" cassandra.yaml sed -i "s/# internode_application_send_queue_reserve_endpoint_capacity:.*/internode_application_send_queue_reserve_endpoint_capacity: 512MiB/g" cassandra.yaml sed -i "s/# internode_application_receive_queue_reserve_endpoint_capacity:.*/internode_application_receive_queue_reserve_endpoint_capacity: 512MiB/g" cassandra.yaml |
Execute the following command to copy files to the twc-services-charts/helm-charts/configs/cassandra directory:
cp cassandra.yaml logback.xml ~/twc-services-charts/helm-charts/configs/cassandra
cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf/authserver.properties ~/twc-services-charts/helm-charts/configs/auth
cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/conf/catalina.properties ~/twc-services-charts/helm-charts/configs/tomcat
cp ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/conf/server.xml ~/twc-services-charts/helm-charts/configs/tomcat
sed -i "s/..\/TeamworkCloud\/configuration\/keystore.p12/.\/shared\/conf\/keystore.p12/g" ~/twc-services-charts/helm-charts/configs/tomcat/server.xml
cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/configuration/application.conf ~/twc-services-charts/helm-charts/configs/twcloud/
cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/configuration/logback.xml ~/twc-services-charts/helm-charts/configs/twcloud/
sed -i 's/contact-points.*/contact-points = [${?CASSANDRA_SEED0}][${?CASSANDRA_SEED1}][${?CASSANDRA_SEED2}]/g' ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
sed -i "s/local.size.*/local.size = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
sed -i "s/remote.size.*/remote.size = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
sed -i "s/replication-factor.*/replication-factor = 3/g" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
cp ~/twc-services-charts/CATIANoMagicServices/TeamworkCloud/jvm.options ~/twc-services-charts/helm-charts/configs/twcloud/
sed -i 's/-Xmx.*/-Xmx${TWC_XMX}/g' ~/twc-services-charts/helm-charts/configs/twcloud/jvm.options
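To double-check the Teamwork Cloud configuration edits, you can grep for the changed values (optional):

grep -E "contact-points|local.size|remote.size|replication-factor" ~/twc-services-charts/helm-charts/configs/twcloud/application.conf
grep "^-Xmx" ~/twc-services-charts/helm-charts/configs/twcloud/jvm.options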
cd ~/twc-services-charts/CATIANoMagicServices/WebAppPlatform/shared/conf
cp revision.txt log4j2.properties webappplatform.properties ~/twc-services-charts/helm-charts/configs/wap/
Copy all the required files into the directories listed below:
- auth
- cassandra
- ssl
- tomcat
- twcloud
- twcloud/ssl
- web
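To verify the resulting layout, you can list every file under the configs directory (an optional check; the exact file set depends on the copy steps above):

find ~/twc-services-charts/helm-charts/configs -type f | sort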
Execute the following command to create a directory for each application you are going to use:
cp -r ~/twc-services-charts/dockerfiles ~/twc-services-charts/imagebuild
Copy all the required files to the application directories listed below (except the artemis directory).
- admin-console
- authentication
- collaborator
- document-exporter
- oslc
- reports
- resource-usage-map
- resources
- sgi-crawler
- simulation
- sysmlv2-api
- webapp
- twcloud
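A quick way to confirm that a directory exists for each application you plan to build is to list the imagebuild directory (optional):

ls ~/twc-services-charts/imagebuild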
Execute the following command to start the local Docker registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
In this case, a local Docker registry is run on the K8S cluster admin host. It uses port 5000, so make sure this port is open on your host firewall and accessible to your Kubernetes cluster.
This local Docker registry should be used for testing purposes only and is not considered production-ready. In a production environment, run the image registry with TLS and authentication.
If the /etc/docker/daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following content:
{ "insecure-registries": ["admin.example.com:5000"] } |
Execute the following command to restart the Docker service:
sudo systemctl restart docker
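After the restart, you can confirm that Docker picked up the insecure registry setting (optional check):

docker info | grep -A 3 "Insecure Registries"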
Execute the commands below inside the directory of each service you want to build images for, as shown in the examples in substeps a, b, and c.
docker build -f Dockerfile -t {APP_NAME} .
docker tag {APP_NAME}:{VERSION} {IMAGE_REPO_URL}:5000/{APP_NAME}:{VERSION}
docker push {IMAGE_REPO_URL}:5000/{APP_NAME}:{VERSION}
The following command example builds images for Admin Console:
cd imagebuild/admin-console
docker build -f Dockerfile -t admin .
The following command example tags the built image:
docker tag admin:latest admin.example.com:5000/admin:latest
The following command example pushes the image to the registry:
docker push admin.example.com:5000/admin:latest
After the Docker images are pushed, check the repository names in the local registry.
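One way to do this is to query the catalog endpoint of the Docker Registry HTTP API (a minimal check; adjust the host name to your registry):

# Lists the repositories currently stored in the local registry.
curl http://admin.example.com:5000/v2/_catalog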
Execute the following commands to pull the Apache ActiveMQ Artemis image and push it to the local registry:
docker pull apache/activemq-artemis:2.32.0
docker tag apache/activemq-artemis:2.32.0 admin.example.com:5000/artemis
docker push admin.example.com:5000/artemis
Edit the /etc/containerd/config.toml file by adding the following lines to the "mirrors" section:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."admin.exmple.com:5000"] endpoint = ["http://admin.example.com:5000"] [plugins."io.containerd.grpc.v1.cri".registry.configs." admin.example.com"] [plugins."io.containerd.grpc.v1.cri".registry.configs." admin.example.com".tls] insecure_skip_verify = true |
Keep in mind that indentation is very important in this file, as shown in the example above.
Execute the following command to restart Containerd services on all cluster nodes:
sudo systemctl restart containerd
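To optionally verify that a node can pull from the local registry through containerd, you can try pulling one of the pushed images with crictl (assuming crictl is installed and configured on that node):

sudo crictl pull admin.example.com:5000/admin:latest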
Execute the following commands to add dependency repos:
helm repo add kedacore https://kedacore.github.io/charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Execute the following command in the ~/twc-services-charts/helm-charts directory to update chart dependencies:
helm dependency update
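You can then confirm that all chart dependencies were downloaded (an optional check run from the same directory):

# Each dependency should be listed with the status "ok".
helm dependency list .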
Execute the following command to install the KEDA Custom Resource Definitions (CRDs). This will create all the related services and deployments.
helm install keda kedacore/keda --namespace keda --create-namespace --wait --version 2.13.2
You can check the status of the services and pods with this command: kubectl get all -n keda
KEDA and MetalLB should be added as separate resources. Since CRDs are a globally shared and fragile resource, you have to assume that once a CRD is installed, it is shared across multiple namespaces and groups of users. For that reason, installing, modifying, and deleting CRDs is a process that has ramifications for all users and systems of that cluster.
Execute the following command to deploy MetalLB:
helm install metallb bitnami/metallb --namespace metallb-system --create-namespace --wait
To check the MetalLB deployment status, execute the following command on the control plane node or the K8S cluster admin host:
kubectl get -n metallb-system all
You should see output listing the MetalLB controller and speaker resources. If the MetalLB pod status is 'Running' but READY is 0/1, give it some time to start and repeat the command.
Create configuration files for MetalLB as instructed below. Make sure to complete all the substeps of this step to create and apply both configurations. Otherwise, the product will not work properly.
Edit the ~/twc-services-charts/helm-charts/crd/metalLB/metallb_ipaddresspool.yaml file as shown below:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.1/32
  #- 192.168.10.0/24
  #- 192.168.9.1-192.168.9.5
  #- fc00:f853:0ccd:e799::/124
Edit (if necessary) the ~/twc-services-charts/helm-charts/crd/metalLB/l2advertisement.yaml file as shown below. You can change the metadata name as needed. Keep in mind that the ipAddressPools entry must match the name of the IPAddressPool created in the previous substep.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
Execute the following command to apply the MetalLB configuration:
kubectl apply -f ~/twc-services-charts/helm-charts/crd/metalLB/metallb_ipaddresspool.yaml
Execute the following command to check the applied configuration:
kubectl describe ipaddresspools.metallb.io first-pool -n metallb-system
Execute the following command to create the MetalLB advertisement:
kubectl apply -f ~/twc-services-charts/helm-charts/crd/metalLB/l2advertisement.yaml
Execute the following command to check the advertisement configuration:
kubectl describe l2advertisements.metallb.io example -n metallb-system
Do one of the following:
To deploy services to the default namespace, execute this command in the helm parent chart directory (you can change "twc" to any release name):
helm install twc .
To create a namespace and deploy services in this namespace, execute this command in the helm parent chart directory:
helm install twc . --namespace=twc --create-namespace --wait
This helm chart includes Zookeeper and Ingress-nginx as dependencies. They will be deployed automatically.
After all web applications are deployed, execute the following command to check their status:
kubectl get all
To get detailed information about a specific pod, execute:
kubectl describe pod <pod_name>
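If you deployed the services to a dedicated namespace in the previous step (for example twc), append the namespace flag to these commands and to the ingress check below, e.g.:

kubectl get all -n twc
kubectl describe pod <pod_name> -n twc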
Execute the following command to check Ingress rules:
kubectl describe ingress
You should get an output similar to this:
Name:             ingress.resource
Namespace:        default
Address:          **.**.***.***
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  webapp-tls-secret terminates ingress.example.com
Rules:
  Host                 Path                  Backends
  ----                 ----                  --------
  ingress.example.com
                       /admin                webapp-adminconsole:8443 (10.233.105.39:8443)
                       /authentication       webapp-authentication:8443 (10.233.88.69:8443)
                       /collaborator         webapp-collaborator:8443 (10.233.88.75:8443)
                       /document-exporter    webapp-docexporter:8443 (10.233.105.22:8443)
                       /oslc                 webapp-oslc:8443 (10.233.105.23:8443)
                       /reports              webapp-reports:8443 (10.233.73.246:8443)
                       /resources            webapp-resources:8443 (10.233.88.67:8443)
                       /resource-usage-map   webapp-rum:8443 (10.233.105.40:8443)
                       /simulation           webapp-simulation:8443 (10.233.88.123:8443)
                       /webapp               webapp-webapp:8443 (10.233.105.29:8443)
Annotations:           meta.helm.sh/release-name: webapp
                       meta.helm.sh/release-namespace: default
                       nginx.ingress.kubernetes.io/affinity: cookie
                       nginx.ingress.kubernetes.io/affinity-canary-behavior: sticky
                       nginx.ingress.kubernetes.io/affinity-mode: persistent
                       nginx.ingress.kubernetes.io/backend-protocol: https
                       nginx.ingress.kubernetes.io/proxy-body-size: 100m
                       nginx.ingress.kubernetes.io/proxy-connect-timeout: 600
                       nginx.ingress.kubernetes.io/proxy-read-timeout: 600
                       nginx.ingress.kubernetes.io/proxy-send-timeout: 600
                       nginx.ingress.kubernetes.io/send-timeout: 600
                       nginx.ingress.kubernetes.io/session-cookie-name: COOKIE
Events:                <none>
In an internet browser, go to https://ingress.example.com/webapp (the DNS A record that points to the Ingress external IP address reserved earlier).
If you added a DNS record to bind the domain name to the IP address, you can use the domain name instead of the IP. The browser should show a warning because of the self-signed certificate. Accept it to proceed, and you will be redirected to the Magic Collaboration Studio Authentication web page.
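If you prefer to check availability from the command line first, a simple request for the response headers works as well (the -k flag skips certificate verification because the certificate is self-signed):

curl -k -I https://ingress.example.com/webapp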