Scripts

The following are the script files used in this hardening guide:

harden_cassandra_ports.sh

twc.java.security

upgrade_tomcat_webapp.sh

upgrade_jdk_webapp.sh

The default shipping configuration of Teamwork Cloud is not a hardened configuration.

When hardening an installation, some variables can render the installation inoperative, such as an incompatibility between the ciphers supported by a certificate and those allowed by the hardened configuration.

Furthermore, the default configurations assume that the deployment sits behind a secure infrastructure, and the required ports are therefore open globally.

Since parts of Teamwork Cloud's infrastructure rely on third-party components, newly discovered vulnerabilities must be mitigated throughout the life cycle of the installation.

Below, we cover the potentially exploitable vulnerabilities of the different components, as well as the steps to mitigate them, depending on the policies of the deploying organization.

Cassandra Port Access

When installing on Linux using our deployment scripts, all of the ports required by Cassandra for inter-node communication, as well as for the Teamwork Cloud nodes to communicate with the Cassandra nodes, are opened globally. This configuration is deployed mostly to facilitate testing of the environment upon installation, prior to taking any measures to harden the installation. If we check the firewall upon installation, we will see output similar to the following:

# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cassandra lmadmin ssh twcloud
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

In our deployment, we create a firewall service definition to facilitate management of the rules. This file is located in /etc/firewalld/services/cassandra.xml and contains the following:

# cat /etc/firewalld/services/cassandra.xml
<?xml version="1.0" encoding="utf-8"?>
<service version="1.0">
    <short>cassandra</short>
    <description>cassandra</description>
    <port port="7000" protocol="tcp"/>
    <port port="7001" protocol="tcp"/>
        <port port="9042" protocol="tcp"/>
        <port port="9160" protocol="tcp"/>
        <port port="9142" protocol="tcp"/>
</service>

The first step in securing Cassandra is to limit traffic to only the Cassandra and Teamwork Cloud nodes. In the example below, we have a single-node Cassandra/Teamwork Cloud installation, with its IP address set to 10.254.254.56.

As can be seen below, the firewall configuration has been modified to allow access only from itself.

# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: lmadmin ssh twcloud
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule family="ipv4" source address="10.254.254.56" service name="cassandra" accept

The process is as follows: first, remove the general firewall allowance for the Cassandra service; next, ensure that any direct port rule assignments are removed; finally, create a set of rich rules that allow access only from the required nodes.

If your deployment consists of a single node, or of a multi-node cluster where both Teamwork Cloud and Cassandra reside on the same nodes, this process can be automated with the following script.

harden_cassandra_ports.sh
#!/bin/bash
echo "Hardening Cassandra Ports"
echo "Creating Cassandra firewall service profile"
cat <<EOF |  tee /etc/firewalld/services/cassandra.xml  &> /dev/null
<?xml version="1.0" encoding="utf-8"?>
<service version="1.0">
    <short>cassandra</short>
    <description>cassandra</description>
    <port port="7000" protocol="tcp"/>
    <port port="7001" protocol="tcp"/>
        <port port="9042" protocol="tcp"/>
        <port port="9160" protocol="tcp"/>
        <port port="9142" protocol="tcp"/>
</service>
EOF
echo "Removing existing firewall rules on the cassandra ports or service profile"
sleep 5
zone=$(firewall-cmd --get-default-zone)
firewall-cmd --zone=$zone --remove-port=7000/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-port=7001/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-port=7199/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-port=9042/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-port=9160/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-port=9142/tcp --permanent  &> /dev/null
firewall-cmd --zone=$zone --remove-service=cassandra --permanent  &> /dev/null
echo "Creating ruch rules for Cassandra nodes discovered via nodetool gossipinfo"
set -f
local_list=($(nodetool gossipinfo | grep '/' | cut -f 2 -d /))
set +f
for i in "${local_list[@]}" ; do
        cmd=" firewall-cmd --zone=$zone --add-rich-rule='rule family=\"ipv4\" source address=\"$i\" service name=\"cassandra\" accept' --permanent  &> /dev/null"
        echo $cmd
        eval $cmd
done
firewall-cmd --reload  &> /dev/null

The above script outputs the rich rule commands to the screen so that they can be copied and modified to add nodes, which is needed if you have Teamwork Cloud nodes that are not part of the Cassandra cluster, as in the example below.
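For instance, a hedged example of allowing one additional Teamwork Cloud node (the zone and source address are illustrative; use your default zone and the node's actual address):

# allow an extra Teamwork Cloud node that is not part of the Cassandra cluster
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="10.254.254.57" service name="cassandra" accept' --permanent
firewall-cmd --reload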

The above script should be executed on all nodes of a multi-node Cassandra cluster, with all nodes in an operational state; verify this first, as shown below.
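Every node should report a status of UN (Up/Normal) before the script is run on each node:

nodetool status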

Windows Users: Create firewall rules restricting access on the aforementioned ports to authorized nodes only - the Cassandra nodes and the Teamwork Cloud nodes.
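For example, a hedged sketch using the built-in netsh tool (the rule name and remote addresses are illustrative; list all authorized Cassandra and Teamwork Cloud nodes):

netsh advfirewall firewall add rule name="Cassandra authorized nodes" dir=in action=allow protocol=TCP localport=7000,7001,9042,9142,9160 remoteip=10.254.254.56,10.254.254.57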

Replication Strategy

Teamwork Cloud presently does not support Multi-DC replication strategies. As such, the environment must be configured with SimpleStrategy (see the sketch below).
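A hedged sketch of aligning a keyspace with SimpleStrategy via cqlsh (the keyspace name and replication factor are illustrative; use the keyspaces and factor appropriate to your installation):

cqlsh -e "ALTER KEYSPACE twc WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"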

Cassandra Authentication

By default, Cassandra is deployed with the AllowAllAuthenticator. As the name implies, this is an anonymous authenticator that performs no credential checks.

If you require authenticated connections, you will need to edit cassandra.yaml and change the authenticator to PasswordAuthenticator. If you are running a multi-node Cassandra cluster, you also need to change the replication factor of the system_auth keyspace.

Detailed instructions can be found at https://docs.datastax.com/en/ddacsecurity/doc/ddacsecurity/secureConfigNativeAuth.html.
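A hedged command-line sketch of these changes (the cassandra.yaml path and the replication factor are illustrative; adjust them to your installation):

# switch the authenticator, then restart Cassandra to apply it
sed -i 's/^authenticator:.*/authenticator: PasswordAuthenticator/' /etc/cassandra/conf/cassandra.yaml
systemctl restart cassandra
# on a multi-node cluster, raise system_auth replication and repair
cqlsh -u cassandra -p cassandra -e "ALTER KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
nodetool repair system_auth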

After making the changes to Cassandra, you will need to update the Teamwork Cloud configurations to utilize authentication.

The required changes are as follows:

application.conf
# Enable this section to enable the cassandra authentication.
authentication-enabled = true
username = newcassandrauser
password = newcassandrapassword
authserver.properties
cassandra.username=newcassandrauser
cassandra.password=newcassandrapassword

where newcassandrauser and newcassandrapassword correspond to the username and password you configured in Cassandra.
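For reference, a hedged example of creating such a role using the default Cassandra superuser (the role name, password, and privileges are illustrative):

cqlsh -u cassandra -p cassandra -e "CREATE ROLE newcassandrauser WITH PASSWORD = 'newcassandrapassword' AND LOGIN = true AND SUPERUSER = true;"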

SSL authentication for Cassandra

To enable SSL authentication for Cassandra:

  1. Set the following property in the authserver.properties file:

    cassandra.ssl.enabled=true
  2. Add the Cassandra certificate file to the AuthServer/config/truststore directory.
  3. Delete the AuthServer/config/truststore.jks file.
  4. Restart the Authentication server (a combined sketch of these steps is shown below).
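A hedged sketch of these steps on Linux, assuming an install root of /opt/local/TeamworkCloud and an illustrative certificate file name (adjust both to your environment):

echo "cassandra.ssl.enabled=true" >> /opt/local/TeamworkCloud/AuthServer/config/authserver.properties
cp cassandra-node.cer /opt/local/TeamworkCloud/AuthServer/config/truststore/
rm /opt/local/TeamworkCloud/AuthServer/config/truststore.jks
systemctl restart authserver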

Data Encryption at Rest

Open-source Cassandra does not support encryption at rest. If you require encryption at rest, you need to use a disk-level encryption layer, such as the one sketched below.
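A minimal sketch of one such layer using LUKS on Linux (the block device is illustrative and the default Cassandra data directory is assumed; plan key management before using this in production):

cryptsetup luksFormat /dev/sdb                       # encrypt the block device (destroys existing data)
cryptsetup luksOpen /dev/sdb cassandra_data          # map the decrypted volume
mkfs.xfs /dev/mapper/cassandra_data                  # create a filesystem on the mapped volume
mount /dev/mapper/cassandra_data /var/lib/cassandra  # mount it as the Cassandra data directory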

Teamwork Cloud Protocols and Ciphers

Teamwork Cloud consists of three Java-based services - Teamwork Cloud (twcloud), Authserver (authserver), and WebApp (webapp).

twcloud and authserver require Java 11 (whose location varies depending on how it was deployed), whereas webapp uses a bundled Java 14, located in <install_root>/WebAppPlatform/jre/.

Therefore, in order to harden these services, we must begin by hardening the JVM. The JVM's default security settings are located in its java.security file.

We can check the ciphers and protocols being used by the applications with nmap (version 7.x) or with TestSSLServer.jar, available from https://community.rsa.com/docs/DOC-45511.

As an example, below is a scan using both tools against a default installation. In this example, we will be testing port 8111, the TWCloud port.

# nmap --script ssl-enum-ciphers -p 8111 127.0.0.1
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-21 17:32 MDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00014s latency).
 
PORT     STATE SERVICE
8111/tcp open  unknown
| ssl-enum-ciphers:
|   TLSv1.0:
|     ciphers:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|     warnings:
|       Key exchange (dh 1024) of lower strength than certificate key
|   TLSv1.1:
|     ciphers:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|     warnings:
|       Key exchange (dh 1024) of lower strength than certificate key
|   TLSv1.2:
|     ciphers:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 1024) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|     warnings:
|       Key exchange (dh 1024) of lower strength than certificate key
|_  least strength: A
 
Nmap done: 1 IP address (1 host up) scanned in 0.76 seconds
 
# java -jar TestSSLServer.jar 127.0.0.1 8111
Supported versions: TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.0
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  (TLSv1.1: idem)
  TLSv1.2
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA256
     DHE_RSA_WITH_AES_128_CBC_SHA256
     TLS_RSA_WITH_AES_128_GCM_SHA256
     TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
     TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
----------------------
Server certificate(s):
  a241e1c3b957bd14e6e0242fd012f75853eef243: CN=X.X.X.X
----------------------
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: vulnerable
CRIME status: protected

As can be observed above, the default configuration using OpenJDK 1.8.0_242 allows TLS v1.0 and v1.1, which are deprecated. Additionally, several key exchanges take place using 1024-bit DH parameters.

We then proceed to harden the configuration.

Since we are dealing with ciphers, you need to make sure that you do not disable a cipher required by your certificates.

After hardening the JVM, we end up with a different set of allowed ciphers and protocols, as shown below.

# nmap --script ssl-enum-ciphers -p 8111 127.0.0.1
Starting Nmap 7.70 ( https://nmap.org ) at 2020-04-21 17:44 MDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00015s latency).
 
PORT     STATE SERVICE
8111/tcp open  unknown
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|_  least strength: A
 
Nmap done: 1 IP address (1 host up) scanned in 0.95 seconds
 
# java -jar TestSSLServer.jar 127.0.0.1 8111
Supported versions: TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.2
     RSA_WITH_AES_128_CBC_SHA256
     TLS_RSA_WITH_AES_128_GCM_SHA256
----------------------
Server certificate(s):
  71ed3969e41a94877c51aacb87d995af4a12b6d9: CN=x.x.x.x
----------------------
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: protected
CRIME status: protected

The process of hardening the JVM requires making some changes to the java.security file. While these can be made directly, the downside is that if you upgrade your JVM, you will have to reapply your changes.

However, we can place our modifications in our own file, and simply pass a parameter to the JVM upon invocation so that it will apply our changes.

For example, we can create a file /home/twcloud/twc.java.security, and pass a parameter to the JVM in the form of -Djava.security.properties=/home/twcloud/twc.java.security.

Our hardened security settings are as shown below:

twc.java.security
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, DH keySize < 2048, \
    EC keySize < 224, 3DES_EDE_CBC, anon, RSA keySize < 2048, SHA1, DHE, NULL
jdk.tls.ephemeralDHKeySize=2048
jdk.tls.rejectClientInitiatedRenegotiation=true

To apply these settings, we need to make changes in three locations.

For the Teamwork Cloud service, under Linux, you need to edit <install_root>/jvm.options and add a line as shown below:

.
.
-Dorg.jboss.netty.epollBugWorkaround=true
-Dio.netty.epollBugWorkaround=true
-Djava.security.properties=/home/twcloud/twc.java.security

On Windows, you need to edit the registry key Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\TeamworkCloud\Parameters\Java\Options and append the setting pointing to your security overrides file to the end of the existing options.

For the Authserver service, you need to edit <install_root>/AuthServer/authserver-run by inserting the directive before the call to the authentication-server-XXXXXX.jar as shown below: