System requirements are dictated by the intended deployment and the overall load the environment will experience, including:
Number of concurrent users
Level of activity (commits) per day
Overall number and size of the projects stored in Magic Collaboration Studio
The database (Cassandra) can be located on the same server as Magic Collaboration Studio or on a separate server. Storage requirements apply only to the node where the database is located. Magic Collaboration Studio hosting nodes can be virtualized without any issues if the host is not oversubscribed on its resources.
Nodes containing both Magic Collaboration Studio and Cassandra:
96-128 GB ECC RAM
>=16 processor threads (such as E5-1660)
>1 TB SSD DAS storage
Nodes containing only Cassandra:
48-64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>1 TB SSD DAS storage
Nodes containing only Magic Collaboration Studio:
48-64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>250 GB storage
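A provisioned node can be checked against the figures above directly from the shell. A minimal sketch for a Linux host (output formats vary by distribution):

```bash
# Processor threads visible to the OS (>=16 for combined nodes, >=8 otherwise)
nproc

# Installed memory in GiB (compare against the RAM ranges above)
free -g | awk '/^Mem:/ {print $2 " GiB"}'

# Block devices; ROTA=0 indicates non-rotational (SSD) storage
lsblk -d -o NAME,SIZE,ROTA
```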
Info: Multi-Node Clusters
The recommended minimum sizing stated above applies to each node in a multi-node cluster.
Warning: SAN Storage
SAN storage should not be used for data or commit log volumes on Cassandra nodes; doing so results in severe performance degradation that no amount of SAN or OS tuning can mitigate.
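In practice this means pointing Cassandra's storage settings at dedicated local (DAS) mount points. A minimal cassandra.yaml excerpt; the paths shown are assumptions for illustration, not Cassandra defaults:

```yaml
# cassandra.yaml (excerpt) -- both paths below are illustrative and should
# point at local SSD volumes, never SAN-backed storage
data_file_directories:
    - /data/cassandra/data
commitlog_directory: /commitlog/cassandra
```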
Minimal hardware requirements
For adequate Magic Collaboration Studio operation, your hardware should meet the following requirements:
8 processor threads, i.e., a quad-core hyper-threaded CPU (such as the Intel E3-1230 or faster).
32 GB RAM (a motherboard with ECC RAM is recommended), with 8 GB RAM dedicated to Teamwork Cloud
At least 3 separate disks, preferably SSD (NVMe): OS/application, data, and commit logs. Depending on company backup procedures and infrastructure, an additional disk equal to the data disk size may be required to store backup snapshots (see the snapshot sketch below).
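For the backup disk, a common pattern is to take Cassandra snapshots and copy them off the data volume. A sketch, assuming the illustrative data and backup paths below on a Cassandra 4.x node:

```bash
# Create a point-in-time snapshot of all keyspaces on this node
nodetool snapshot -t nightly_backup

# Copy the snapshot directories to the dedicated backup disk, preserving
# the keyspace/table layout (both paths are assumptions for this sketch)
rsync -aR /data/cassandra/data/./*/*/snapshots/nightly_backup/ /backup/cassandra/
```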
Software requirements
Magic Collaboration Studio supports the following operating systems:
Linux 64-bit: Oracle Linux/CentOS 7, Red Hat 8.
For a fully working environment, you will also need the following:
OpenJDK (Eclipse Temurin™ by Adoptium) 11.0.14
FlexNet license server
Cassandra 4.0.3
A static IP address for each node.
Open ports 1101, 2181, 2552, 7000, 7001, 7199, 9042, and 9142 between servers in a cluster.
Open ports 3579, 8111, 8443, and 10002 (default) for clients; see the firewall sketch below. Port 10002 can be changed to match the port assigned to secure connections between the client software and Magic Collaboration Studio.
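On RHEL-family hosts, these openings would typically be made with firewalld. A sketch; zone layout and firewall tooling will vary by site:

```bash
# Allow intra-cluster traffic on the ports listed above
sudo firewall-cmd --permanent --add-port={1101,2181,2552,7000,7001,7199,9042,9142}/tcp

# Allow client-facing traffic (10002 shown as the default SSL port)
sudo firewall-cmd --permanent --add-port={3579,8111,8443,10002}/tcp
sudo firewall-cmd --reload
```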
The following table lists the ports that Magic Collaboration Studio services use and their descriptions:
| Service | Port | Description |
|---|---|---|
| FlexNet server (lmadmin) | 1101 | FLEXnet server port |
| | 8090 | Default vendor daemon port (web browser management port) |
| | 27000-27009 | Internal license server manager port |
| Cassandra | 7000 | Internode cluster communication port (not used if TLS is enabled) |
| | 7001 | Encrypted internode cluster communication port (used if TLS is enabled) |
| | 7199 | JMX monitoring port of the Cassandra node |
| | 9042 | Native client port used to connect to Cassandra and perform operations (used with version 2021x and later) |
| | 9142 | Native client port when SSL encryption is enabled (used when Cassandra is on a separate server or deployed as a multi-node cluster) |
| Magic Collaboration Studio | 2552 | Default remote server port |
| | 3579 | Default port when SSL is not enabled |
| | 8111 | REST API port |
| | 10002 | Default port when SSL is enabled |
| Web Application Platform | 8443 | Web Application Platform port (Teamwork Cloud Admin, Collaborator…) |
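After the services are up, port reachability between nodes can be spot-checked with a TCP probe. A sketch; the peer hostname is an assumption, and ncat/netcat must be installed:

```bash
# Probe each cluster port on a peer node (3-second timeout per port)
peer="twc-node2.example.com"   # assumption: replace with a real peer host
for port in 1101 2181 2552 7000 7001 7199 9042 9142; do
    nc -zv -w 3 "$peer" "$port"
done
```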
When deploying on Amazon EC2, we recommend the m5.2xlarge, r5.2xlarge, or i3.2xlarge instance types. Depending on the workload, you may want to move up to the 4xlarge sizes, but for most users the 2xlarge will suffice. The m5 instances meet the minimum system requirements and are acceptable for small deployments. The r5 instances provide more memory for the same CPU density. The i3 instances should be used for workloads with a higher level of user concurrency, due to the significantly better performance of their ephemeral NVMe storage.
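For reference, launching such a node with the AWS CLI might look like the following sketch; every placeholder value is an assumption to be replaced for your account, and the instance type follows the guidance above:

```bash
# Placeholders below are assumptions; substitute values from your account
AMI_ID="ami-xxxxxxxx"          # a RHEL/CentOS-family AMI
SUBNET_ID="subnet-xxxxxxxx"
SG_ID="sg-xxxxxxxx"            # security group opening the ports listed above
KEY_NAME="my-keypair"

aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --instance-type r5.2xlarge \
    --count 1 \
    --key-name "$KEY_NAME" \
    --security-group-ids "$SG_ID" \
    --subnet-id "$SUBNET_ID"
```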