Kubernetes and Docker allow you to containerize the Simulation web application, enabling you to run multiple simulations at a time. This document describes the steps for installing a Kubernetes cluster using the kubeadm command and deploying the Simulation web application to it. Unless otherwise noted, the following commands must be executed on all nodes.
The instructions below have been tested on CentOS 7. Some steps might differ if you are using a different Linux distribution.
To install the Kubernetes cluster and deploy the Simulation web application
- If you already have a Kubernetes cluster set up and running (or you are on a cloud that provides one), you can skip to step 10. Steps 2-9 are required if you are working in an environment (such as a bare-metal server or a raw VM) where Kubernetes is not yet installed.
- Configure the local IP tables to see the bridged traffic
Enable bridged traffic by executing the following commands: lsmod | grep br_netfilter
modprobe br_netfilter
Copy the contents below into a sysctl configuration file (for example, /etc/sysctl.d/k8s.conf): net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Reload settings by executing the following command: sysctl --system
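After reloading, both bridge settings should report 1. A minimal sketch that checks this mechanically against sample `sysctl` output (the output lines below are illustrative; on a real node, capture the actual command output instead):

```shell
# Illustrative output of:
#   sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
output='net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1'

# Report "ok" only if every listed setting equals 1.
printf '%s\n' "$output" | awk -F' = ' '$2 != 1 { bad = 1 } END { print (bad ? "misconfigured" : "ok") }'
```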
- Install Docker as a container runtime
Uninstall any older versions by executing the following command: yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
Install the yum-utils package (it provides the yum-config-manager utility) by executing the following command: yum install -y yum-utils
Set up the Docker repository by executing the following command: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install the Docker Engine by executing the following command: yum install -y docker-ce docker-ce-cli containerd.io
If you are having problems installing some packages, try adding the following content at the beginning of your yum repository configuration (the exact file under /etc/yum.repos.d/ depends on your setup): [centos-extras]
name=Centos extras - $basearch
baseurl=http://mirror.centos.org/centos/7/extras/x86_64
enabled=1
gpgcheck=1
gpgkey=http://centos.org/keys/RPM-GPG-KEY-CentOS-7
- Configure Docker for cgroup management
Create the Docker configuration directory by executing the following command: mkdir -p /etc/docker
Copy the contents below into the daemon configuration file, conventionally /etc/docker/daemon.json: {
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
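A syntax error in daemon.json prevents the Docker daemon from starting, so it is worth validating the file before the restart step. A minimal sketch, assuming python3 is available and using the contents shown above:

```shell
# The daemon.json contents from the step above, held in a variable for the check.
config='{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}'

# python3 -m json.tool exits non-zero on malformed JSON.
if printf '%s' "$config" | python3 -m json.tool > /dev/null; then
  echo "daemon.json is valid JSON"
else
  echo "daemon.json is malformed" >&2
fi
```

On a real node, replace the variable with the file itself: `python3 -m json.tool /etc/docker/daemon.json`.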
Reload settings and start Docker by executing the following commands: systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
- Install kubeadm, kubelet, and kubectl:
Copy the contents below into the Kubernetes yum repository file, conventionally /etc/yum.repos.d/kubernetes.repo: [kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
Set SELinux to permissive mode (effectively disabling it) by executing the following commands: setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Install the packages by executing the following commands: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
If you do not want to disable security (SELinux), find out what must be configured for it to work with Kubernetes.
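The sed substitution above can be previewed on sample file contents before modifying /etc/selinux/config; the sample lines below are illustrative:

```shell
# Illustrative contents of /etc/selinux/config (only the SELINUX line matters here).
selinux_config='SELINUX=enforcing
SELINUXTYPE=targeted'

# Same substitution as in the install step, applied to stdin instead of the file.
printf '%s\n' "$selinux_config" | sed 's/^SELINUX=enforcing$/SELINUX=permissive/'
```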
- Deploy the Kubernetes cluster using kubeadm
Initialize a cluster on the master node as the root user by executing the following command. Replace <master_nodeIP> with the IP address of the master node:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master_nodeIP>
After seeing a successful cluster initialization message, you should see a “kubeadm join” command with a token next to it. Copy the command and save it; it will be used to join other nodes to the cluster.
Create a Kubernetes configuration directory by executing the following commands: mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install a CNI plugin for pod networking on the master node as the root user by executing the following command:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.17.0/Documentation/kube-flannel.yml
Join any number of worker nodes to the cluster as the root user by running the saved “kubeadm join” command, for example: kubeadm join 192.168.74.10:6443 --token s2ld5q.pm8vflxp90mtm416 \
--discovery-token-ca-cert-hash sha256:967ebd0b2c6c4c18c2833ec1a61b224696d87f5ee78bc31014830d0dcb3e0a88
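If the join command scrolls off the screen, it can be recovered from saved `kubeadm init` output. A sketch that pulls it out of sample output text (the output below is abbreviated and illustrative):

```shell
# Abbreviated, illustrative tail of `kubeadm init` output.
init_output='Your Kubernetes control-plane has initialized successfully!

You can now join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.74.10:6443 --token s2ld5q.pm8vflxp90mtm416 \
    --discovery-token-ca-cert-hash sha256:967ebd0b...'

# Print the join command together with its continuation line.
printf '%s\n' "$init_output" | grep -A1 '^kubeadm join'
```

On a live master node, `kubeadm token create --print-join-command` regenerates a join command with a fresh token.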
If you also want to schedule pods on the master node, remove its taint by executing the following command: kubectl taint node master node-role.kubernetes.io/master:NoSchedule-
Confirm the cluster status by executing the following command: kubectl get nodes
If everything is correct, you should see a “Ready” status on all cluster nodes.
- Build a Docker image and push it to the image registry
Create a Docker image. For more information, see Creating a Docker image.
Push the image to the image registry by executing the following command: docker push <image_name:version>
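When pushing to a private registry, the <image_name:version> reference must include the registry host. A sketch composing such a reference (the registry host, image name, and version are hypothetical):

```shell
# Hypothetical registry, image name, and version.
registry='registry.example.com:5000'
image='simulation'
version='1.0.0'

# Fully qualified reference used by both `docker tag` and `docker push`.
ref="${registry}/${image}:${version}"
echo "$ref"

# On a node with Docker, the actual commands would then be:
#   docker tag  simulation:1.0.0  "$ref"
#   docker push "$ref"
```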
- Apply Kubernetes configurations:
Prepare configuration files according to your environment:
- You may need to change the “spec.template.spec.containers.image” property to point to the Docker image you created in step 10.
- You may need to change the “spec.jobTargetRef.template.spec.containers.image” property to point to the Docker image you created in step 10.
- You need to create a Docker image for the Apache ActiveMQ Artemis broker. For more information, see https://github.com/apache/activemq-artemis/tree/main/artemis-docker.
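The image property referred to above sits several levels deep in the Deployment manifest. A minimal illustrative fragment (the names and image reference are hypothetical, not the contents of the shipped simulation.deployment.yml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simulation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simulation
  template:
    metadata:
      labels:
        app: simulation
    spec:
      containers:
        - name: simulation
          # spec.template.spec.containers.image -- point this at your pushed image.
          image: registry.example.com:5000/simulation:1.0.0
```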
Apply configurations on the master node by executing the following commands: kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.6.0/keda-2.6.0.yaml
kubectl apply -f simulation.deployment.yml
kubectl apply -f simulation.service.yml
kubectl apply -f simulation.job.yml
kubectl apply -f artemis.deployment.yml
kubectl apply -f artemis.service.yml
After the load balancer configuration is set up, the simulation service is available on the default port (80) on the node on which the load balancer is deployed. To test whether the simulation service is available, query one of the endpoints, for example: http://<node_hostname>/simulation/api/running
The expected response is: {"running":[]}
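The response check can be scripted. The sketch below substitutes a captured sample body for the live call; on a real node, capture it instead with `response=$(curl -s http://<node_hostname>/simulation/api/running)`:

```shell
# Sample response body from the endpoint above (illustrative).
response='{"running":[]}'

# An empty "running" array means the service is up with no active simulations.
if printf '%s' "$response" | grep -q '"running"'; then
  echo "simulation service is responding"
fi
```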