
Cluster hardware requirements

Sizing the Unblu cluster

Sizing strategy requires not only a good knowledge of the intended processes but also a degree of risk assessment. Both can be fine-tuned as usage evolves. For more on sizing strategy and risk assessment, refer to Application memory sizing strategy in the OpenShift documentation.

Baseline hardware requirements

Total requirements represent a combination of CPU and RAM. These are minimum requirements and don’t guarantee high availability of the application.

None of the components require persistent storage. See Persistent volumes below for more information.

| Component            | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU |
| -------------------- | -------------- | ------------ | ------------ | ---------- | ---------- |
| Collaboration Server | 1              | 3000Mi       | 500m         | 3Gi        | 6000m      |
| HAProxy              | 1              | 256Mi        | 100m         | 256Mi      | 1000m      |
| Kafka                | 3              | 3 x 1Gi      | 3 x 100m     | 3 x 1Gi    | 3 x 1000m  |
| NGINX                | 1              | 128Mi        | 100m         | 256Mi      | 1000m      |
| Rendering Service    | 2              | 2 x 1Gi      | 2 x 500m     | 2 x 1Gi    | 2 x 2500m  |
| ZooKeeper            | 3              | 3 x 512Mi    | 3 x 50m      | 3 x 512Mi  | 3 x 1000m  |
| Total                | 11             | 10040Mi      | 2150m        | 10240Mi    | 19000m     |
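
As a reference point, the Collaboration Server row above corresponds to a container resource specification along the lines of the following minimal sketch. The Deployment name, labels, and image are illustrative assumptions, not the actual manifests from unblu-kubernetes-base:

```yaml
# Minimal sketch of the Collaboration Server resource settings from the
# table above. Names and image are placeholders, not the real manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: collaboration-server        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: collaboration-server
  template:
    metadata:
      labels:
        app: collaboration-server
    spec:
      containers:
        - name: collaboration-server
          image: example.com/unblu/collaboration-server:latest  # placeholder
          resources:
            requests:
              memory: "3000Mi"
              cpu: "500m"
            limits:
              memory: "3Gi"
              cpu: "6000m"
```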

Persistent volumes

Unblu and the required third-party components (HAProxy, Kafka, NGINX, and ZooKeeper) don’t require any persistent volumes. However, if you want to use Grafana and Prometheus to monitor your Unblu setup, you will need to provide persistent volumes:

  • Prometheus needs a persistent volume to store the time-series data it collects.

  • For Grafana, Unblu provisions the Prometheus data source, a default admin user, and the dashboards automatically. The configuration for these is part of the deployment files in unblu-kubernetes-base, so Grafana doesn’t hold any data that needs to be persisted.
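
If you do deploy Prometheus, it claims its storage through a PersistentVolumeClaim. A minimal sketch, assuming an illustrative claim name and an example size; the actual volume should be sized according to your retention period and scrape volume:

```yaml
# Illustrative PersistentVolumeClaim for Prometheus time-series data.
# Name and size are assumptions, not part of the Unblu deployment files.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi   # example figure only; adjust to your retention needs
```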

Failover considerations

With the baseline requirements you have minimal failover capacity. The Collaboration Server may fail and restart, which would cause interruptions.

To enjoy failover and full scalability, you should scale the following pods:

  • Collaboration Server: Scale to avoid interruptions as well as to improve secure messenger performance and user experience

  • Rendering Service: Scales automatically for every conversation recording, universal co-browsing, and server-based document co-browsing session

It’s possible to scale vertically first by adding more resources (RAM, CPU) to the worker host.

In a single-node setup, the host remains the weakest link.

Scaling options

The following table illustrates your scaling options, regardless of whether you have a single or multiple node setup.

| Component            | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU |
| -------------------- | -------------- | ------------ | ------------ | ---------- | ---------- |
| Collaboration Server | m              | m x 1Gi      | m x 500m     | m x 2Gi    | m x 2000m  |
| Rendering Service    | n              | n x 512Mi    | n x 450m     | n x 1Gi    | n x 1000m  |

  • For ZooKeeper and Kafka, you can increase RAM and CPU if you believe this is necessary, but you can’t increase the number of pods.

  • You should scale NGINX and HAProxy so that they both run on the same number of pods as the Collaboration Server, as shown in the example below.
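
For example, to run m = 3 Collaboration Server pods with matching NGINX and HAProxy pod counts, you could scale the deployments as follows. The deployment names are illustrative and depend on your Unblu manifests:

```sh
# Illustrative deployment names; adjust to match your manifests.
kubectl scale deployment collaboration-server --replicas=3
kubectl scale deployment nginx --replicas=3
kubectl scale deployment haproxy --replicas=3
```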

If you start with a single-node cluster and the setup hits the vertical limit of the host in terms of memory and/or CPU, extending to a multiple-node cluster requires some additional steps, such as configuring Kubernetes or OpenShift anti-affinity settings for ZooKeeper and Kafka.
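
For instance, spreading the three ZooKeeper pods across different worker nodes could look like the following sketch; Kafka needs an equivalent rule. The StatefulSet name, labels, and image are illustrative assumptions:

```yaml
# Sketch of a required pod anti-affinity rule that prevents two ZooKeeper
# pods from being scheduled on the same node. Names are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  replicas: 3
  serviceName: zookeeper
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: zookeeper
              topologyKey: kubernetes.io/hostname   # one pod per node
      containers:
        - name: zookeeper
          image: example.com/zookeeper:latest       # placeholder image
```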

Typical cluster hardware setup

The setup below is intended for a maximum of three Collaboration Servers and eight concurrent universal co-browsing sessions. The number of Rendering Service pods your setup requires depends on the number of concurrent conversation recording, universal co-browsing, and server-based document co-browsing sessions you want to support.

None of the components require persistent storage. See Persistent volumes above for more information.

| Component            | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU |
| -------------------- | -------------- | ------------ | ------------ | ---------- | ---------- |
| Collaboration Server | 3              | 3 x 3000Mi   | 3 x 500m     | 3 x 3000Mi | 3 x 6000m  |
| HAProxy              | 3              | 3 x 256Mi    | 3 x 100m     | 3 x 256Mi  | 3 x 1000m  |
| Kafka                | 3              | 3 x 1Gi      | 3 x 100m     | 3 x 1Gi    | 3 x 1000m  |
| NGINX                | 3              | 3 x 256Mi    | 3 x 100m     | 3 x 256Mi  | 3 x 1000m  |
| Rendering Service    | 8              | 8 x 1Gi      | 8 x 500m     | 8 x 1Gi    | 8 x 2500m  |
| ZooKeeper            | 3              | 3 x 512Mi    | 3 x 50m      | 3 x 512Mi  | 3 x 1000m  |
| Metrics              | 3              | 570Mi        | 70m          | 2140Mi     | 1600m      |
| Total                | 26             | 23906Mi      | 6620m        | 25476Mi    | 51600m     |

The Metrics line summarizes the requirements for Grafana, Prometheus, and kube-state-metrics.

Multiple node hardware setup — example

Below is an example of the hardware required for a setup with multiple nodes. For an example of the typical hardware required for a single-node cluster setup, see Single-node cluster setup for 100 concurrent visitor sessions below.

Multi-node cluster setup for 100 concurrent visitor sessions

This type of setup offers failover and better performance.

OpenShift master node

  • 2-core CPU

  • 16 GiB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or Fibre Channel network

OpenShift worker nodes (3 nodes)

Each worker node requires the following resources:

  • 4-core CPU

  • 16 GiB RAM

  • SSD (min 10GB available space on top of OS)

  • Gigabit Ethernet or Fibre Channel network

Single-host installation on a single-node cluster

OpenShift and Kubernetes can be run as a single-node cluster. There are a number of reasons not to do so (see below), but the reason you may want to run Unblu this way is to manage costs without losing the benefits of scalability. A single host on a single-node cluster still allows you to scale Unblu horizontally while keeping the option open to switch to a multi-node cluster in the future.

If you don’t already have an implementation of OpenShift or Kubernetes in place, a single-host installation on a single cluster can be a cost-effective entry point.

Limitations of a single-host installation on a single-node cluster

  • No failover

  • Limited scaling (without interruption): adding OpenShift/Kubernetes master nodes requires setting up OpenShift/Kubernetes (almost) from scratch again. Note that you should only have to add more master nodes for very large cluster installations.

  • While a single-host installation on a single-node cluster isn’t recommended by Red Hat/OpenShift (as only multi-node clusters offer failover on the cluster itself), it may still be the smart option at the outset.

These limitations notwithstanding, a single-host installation on a single-node cluster provides an easy start, and the ability to scale if and when it’s required, without undue further effort.

Single-node cluster setup for 100 concurrent visitor sessions

These recommendations represent the baseline limit requirements, plus 5 Rendering Service pods, plus an OpenShift master (2 CPU cores, 4 GiB RAM), plus the hosting server itself (an additional 2 GiB RAM of base resources to run Linux and other system services). For RAM, this works out to roughly 10 GiB (baseline limits) + 5 x 1 GiB (Rendering Services) + 4 GiB (master) + 2 GiB (OS) ≈ 21 GiB, rounded up to the 24 GB below.

  • 16-core CPU

  • 24 GB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or Fibre Channel network
