System requirements are dictated by the intended deployment, taking into account the overall load that the environment will experience, including:
Number of concurrent users
Level of activity (commits) per day
Overall number and size of the projects stored in Magic Collaboration Studio.
The database (Cassandra) can be located on the same server as Magic Collaboration Studio or on a separate server. Storage requirements apply only to the node where the database is located. Magic Collaboration Studio hosting nodes can be virtualized without any issues if the host is not oversubscribed on its resources.
Nodes containing both Magic Collaboration Studio and Cassandra
96 - 128 GB ECC RAM
>=16 processor threads (such as E5-1660)
>1TB SSD DAS storage
Nodes containing only Cassandra
48 - 64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>1TB SSD DAS storage
Nodes containing only Magic Collaboration Studio
48 - 64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>250GB storage
Info: Multi-Node Clusters
The recommended minimum sizing stated above applies to each node in a multi-node cluster.
Warning: SAN Storage
SAN storage should not be used on Cassandra nodes for data or commit log volumes; it results in severe performance degradation that no amount of SAN or OS tuning can mitigate.
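Cassandra fsyncs its commit log frequently, so per-fsync latency on the commit log volume (typically high on SAN-backed storage) directly throttles commit throughput. A candidate volume can be sanity-checked before deployment with a minimal sketch like the following; the directory path and the threshold mentioned in the comment are illustrative assumptions, not official tuning values:

```python
import os
import tempfile
import time

def commit_log_fsync_latency(directory, writes=200, block=4096):
    """Measure average write+fsync latency (in ms) on the given volume.

    Writes small blocks to a temporary file and forces each one through
    to stable storage, mimicking Cassandra's commit log access pattern.
    """
    payload = os.urandom(block)
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, payload)
            os.fsync(fd)  # force the write to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed / writes * 1000.0

# Hypothetical usage: local SSD DAS typically stays well under 1 ms
# per fsync, while SAN volumes are often an order of magnitude slower.
# latency_ms = commit_log_fsync_latency("/var/lib/cassandra/commitlog")
```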
Minimal hardware requirements
8 Processor Cores - i.e. Quad-Core Hyper-threaded CPU (such as Intel E3-1230 or faster).
32 GB RAM (a motherboard with ECC RAM is recommended), with 8 GB of RAM dedicated to Magic Collaboration Studio
At least 3 separate disks, preferably SSD (NVMe), (OS/Application, Data, and Commit logs). Depending on company backup procedures and infrastructure, an additional disk, equal to the data disk in size, may be required for storing the backup snapshots.
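On Linux, the separation of the OS/application, data, and commit log volumes can be verified programmatically: paths on the same block device share an `st_dev` value. A small sketch (the mount points in the comment are hypothetical examples, not required paths):

```python
import os

def on_separate_devices(*paths):
    """Return True if every given path resides on a distinct block device."""
    devices = [os.stat(p).st_dev for p in paths]
    return len(set(devices)) == len(devices)

# Hypothetical mount points for the three recommended volumes:
# on_separate_devices("/", "/var/lib/cassandra/data",
#                     "/var/lib/cassandra/commitlog")
```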
Software requirements
Magic Collaboration Studio supports the following operating systems:
For a full working environment, you will also need:
OpenJDK 1.8.0_202 or later, or Oracle Java, used with Cassandra 3.11.x. Java 1.8 updates later than 251 cannot be used with Cassandra on the Windows platform.
OpenJDK 11.0.12 used with Magic Collaboration Studio.
FlexNet License Server.
Cassandra 3.11.x.
Open ports 2552, 7000, 7001, 7199, 9042, 9160, and 9142 between servers in a cluster, and open ports 3579, 8111, 8443, and 8555 (default) for clients. Also, a port number assigned to secure connections between the client software and Magic Collaboration Studio is required.
A static IP address for each node.
The following table lists the ports that Magic Collaboration Studio services use and their descriptions:
| Service | Port | Description |
|---|---|---|
| FlexNet server (lmadmin) | 1101 | FLEXnet server port |
| | 8090 | Default vendor daemon port (web browser management port) |
| | 27000-27009 | License server manager port |
| Cassandra | 7199 | Cassandra JMX port |
| | 9042 | CQL native transport port (used with the 2021x version and later) |
| | 9160 | Thrift client API port (used until the 2021x version; still required for migrations from version 19.0) |
| | 9142 | DSE client port when SSL is enabled |
| Teamwork Cloud | 2552 | Teamwork Cloud default remote server port |
| | 3579 | Default Teamwork Cloud port when SSL is not enabled |
| | 7000 | Internode communication port (not used if TLS is enabled) |
| | 7001 | TLS internode communication port (used if TLS is enabled) |
| | 8111 | Teamwork Cloud REST API port |
| | 10002 | Default port when SSL is enabled |
| Web Application Platform | 8443 | Web Application Platform port (TWCloud Admin, Collaborator…) |
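After firewall rules are in place, reachability of the listed ports can be verified from each node with a simple TCP probe. A minimal sketch; the host name in the comment is a hypothetical example, and the port lists mirror the cluster and client ports named above (adjust them for your topology):

```python
import socket

# Ports opened between servers in a cluster and for clients, per the
# requirements above.
CLUSTER_PORTS = [2552, 7000, 7001, 7199, 9042, 9160, 9142]
CLIENT_PORTS = [3579, 8111, 8443, 8555]

def probe(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_ports(host, ports):
    """Map each port to its reachability from this machine."""
    return {port: probe(host, port) for port in ports}

# Hypothetical usage from one cluster node against another:
# check_ports("twc-node-2.example.com", CLUSTER_PORTS)
```

Note that a successful probe only confirms that something is listening and the firewall permits the connection; it does not validate TLS configuration on ports such as 7001 or 9142.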
In the case of deploying on Amazon EC2, we recommend using the m5.2xlarge, r5.2xlarge, or i3.2xlarge instances. Depending on the workloads, you may want to go to the .4xlarge instances, but for the vast majority of users, the .2xlarge will suffice. The m5 instances meet the minimum system requirements and are acceptable for small deployments. The r5 instances provide more memory at the same CPU density. The i3 instances should be used when workloads have a higher level of user concurrency, due to the significantly better performance of their ephemeral NVMe storage.