
This document describes version 5 of Unblu. If you’re using the latest major version of Unblu, go to the documentation of the latest version.

The support period for version 5 ended on 22 November 2021. We no longer provide support or updates for this version. You should upgrade to the latest version of Unblu.

Cluster Hardware Requirements

Sizing the Unblu Cluster

Sizing Strategy

Sizing strategy requires not only a good knowledge of the intended processes but also some level of risk assessment (which can be fine-tuned as usage evolves). For more on strategy and risk assessment, see Application Memory Sizing Strategy.

Baseline Hardware Requirements

The total requirements are the combined CPU and RAM requests and limits of all pods.

| Name | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
| --- | --- | --- | --- | --- | --- | --- |
| Collaboration Server | 1 | 1Gi | 500m | 2Gi | 2000m | None |
| HAProxy | 1 | 128Mi | 100m | 256Mi | 250m | None |
| Kafka | 3 | 3 x 1Gi | 3 x 500m | 3 x 1Gi | 3 x 1000m | None |
| nginx | 1 | 128Mi | 100m | 256Mi | 250m | None |
| Rendering Service | 1 | 512Mi | 450m | 1Gi | 1000m | None |
| Zookeeper | 3 | 3 x 512Mi | 3 x 200m | 3 x 512Mi | 3 x 500m | None |
| Total | 10 | 6400Mi | 3250m | 8Gi | 8000m | None |
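
The Total row is derived by multiplying each per-pod figure by its pod count and summing over all pod types. The following Python sketch reproduces the totals from the table above; the figures are copied from the table, while the helper itself is purely illustrative and not part of Unblu.

```python
# Baseline sizing per pod type:
# (replicas, request RAM MiB, request CPU m, limit RAM MiB, limit CPU m)
# Figures are taken from the baseline table above; 1Gi = 1024Mi.
BASELINE = {
    "Collaboration Server": (1, 1024, 500, 2048, 2000),
    "HAProxy":              (1,  128, 100,  256,  250),
    "Kafka":                (3, 1024, 500, 1024, 1000),
    "nginx":                (1,  128, 100,  256,  250),
    "Rendering Service":    (1,  512, 450, 1024, 1000),
    "Zookeeper":            (3,  512, 200,  512,  500),
}

def totals(sizing):
    """Sum replicas x per-pod figure over all pod types."""
    pods    = sum(n for n, *_ in sizing.values())
    req_ram = sum(n * r for n, r, *_ in sizing.values())
    req_cpu = sum(n * c for n, _, c, *_ in sizing.values())
    lim_ram = sum(n * lr for n, _, _, lr, _ in sizing.values())
    lim_cpu = sum(n * lc for n, _, _, _, lc in sizing.values())
    return pods, req_ram, req_cpu, lim_ram, lim_cpu

print(totals(BASELINE))
# (10, 6400, 3250, 8192, 8000) -> 6400Mi / 3250m requests, 8Gi / 8000m limits
```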

Failover Considerations

With the baseline requirements, you have minimal failover capacity. The Collaboration Server may fail and restart (which would cause interruptions). To achieve failover and full scalability, the following pods should be scaled:

  • Collaboration Server: Scale to avoid interruptions and to improve secure messenger performance and user experience.

  • Rendering Service: Scales automatically for every universal or document co-browsing session.

It is possible to scale vertically first by adding additional resources (RAM, CPU) to the worker host.

In a single-node setup the host would remain the weakest link.
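
If you manage the cluster programmatically, scaling the Collaboration Server amounts to raising the replica count of its workload. Below is a minimal sketch using the official Kubernetes Python client, assuming the Collaboration Server runs as a Deployment named collaboration-server in a namespace named unblu; both names are placeholders and depend on your installation (on OpenShift the resource type may also differ).

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig
# (use config.load_incluster_config() when running inside the cluster).
config.load_kube_config()

apps = client.AppsV1Api()

# Placeholder names: adjust to your installation.
DEPLOYMENT = "collaboration-server"
NAMESPACE = "unblu"

# Scale the Collaboration Server to 3 replicas (m = 3 in the scaling table below).
apps.patch_namespaced_deployment_scale(
    name=DEPLOYMENT,
    namespace=NAMESPACE,
    body={"spec": {"replicas": 3}},
)
```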

Scaling Options

The following table illustrates your scaling options, regardless of whether you have a single-node or a multiple-node setup. Here, m is the number of Collaboration Server pods and n is the number of Rendering Service pods.

| Name | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
| --- | --- | --- | --- | --- | --- | --- |
| Collaboration Server | m | m x 1Gi | m x 500m | m x 2Gi | m x 2000m | None |
| Rendering Service | n | n x 512Mi | n x 450m | n x 1Gi | n x 1000m | None |

Scaling Kafka and Zookeeper above the default of 3 nodes is only required for very large installations (> 20 Collaboration Server nodes).
If you start with a single-node cluster and the setup hits the vertical limits of the host (memory, CPU), extending to a multiple-node cluster requires some additional steps (for example, Kubernetes anti-affinity settings or OpenShift anti-affinity settings for Zookeeper and Kafka).
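
As a rough sizing aid, the cluster totals for m Collaboration Server pods and n Rendering Service pods can be computed by adding the scaled pods to the fixed overhead of HAProxy, Kafka, nginx, and Zookeeper from the baseline table. The helper below is an illustrative sketch, not part of Unblu; the constants are derived from the tables above.

```python
def cluster_totals(m, n):
    """Totals for m Collaboration Servers and n Rendering Services.

    Fixed overhead (HAProxy + 3 x Kafka + nginx + 3 x Zookeeper) from the
    baseline table: 8 pods, 4864Mi / 2300m requests, 5120Mi / 5000m limits.
    """
    pods       = 8 + m + n
    req_ram_mi = 4864 + m * 1024 + n * 512
    req_cpu_m  = 2300 + m * 500  + n * 450
    lim_ram_mi = 5120 + m * 2048 + n * 1024
    lim_cpu_m  = 5000 + m * 2000 + n * 1000
    return pods, req_ram_mi, req_cpu_m, lim_ram_mi, lim_cpu_m

# The typical setup in the next section: 3 Collaboration Servers, 8 Rendering Services.
print(cluster_totals(3, 8))
# (19, 12032, 7400, 19456, 19000) -> 11.75Gi / 7400m requests, 19Gi / 19000m limits
```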

Cluster Hardware Typical Setup

The table below describes a setup with a maximum of 3 Collaboration Servers and 8 universal sessions running at the same time.

| Name | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
| --- | --- | --- | --- | --- | --- | --- |
| Collaboration Server | 3 | 3 x 1Gi | 3 x 500m | 3 x 2Gi | 3 x 2000m | None |
| HAProxy | 1 | 128Mi | 100m | 256Mi | 250m | None |
| Kafka | 3 | 3 x 1Gi | 3 x 500m | 3 x 1Gi | 3 x 1000m | None |
| nginx | 1 | 128Mi | 100m | 256Mi | 250m | None |
| Rendering Service | 8 | 8 x 512Mi | 8 x 450m | 8 x 1Gi | 8 x 1000m | None |
| Zookeeper | 3 | 3 x 512Mi | 3 x 200m | 3 x 512Mi | 3 x 500m | None |
| Total | 19 | 11.75Gi | 7400m | 19Gi | 19000m | None |

  • Minimal requirements = 8 CPU cores (> 7400m) and 12 GiB RAM (> 11.75Gi)

  • System limits = 19 CPU cores and 19 GiB RAM

Multiple Node Setup - Example Hardware

Below is an example list of required hardware for a multiple-node setup. (For an example of typical hardware for a single-node cluster setup, see Single-Node Cluster Setup for 100 Visitor Sessions below.)

Multiple-Node Cluster Setup for 100 Visitor Sessions

This type of setup offers failover and better performance.

OpenShift Master Node

  • 2 core CPU

  • 16 GiB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

OpenShift Worker Nodes (3 nodes)

Per node:

  • 6 core CPU

  • 8 GiB RAM

  • SSD (min 10GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

Single-Host Installation on Single-Node Cluster

OpenShift/Kubernetes can be run as a single-node cluster. There are a number of reasons not to do this (see Limitations of Single-Host Installation on Single-Node Cluster below), but the reason you may want to run Unblu this way is to manage costs without losing scalability and the many benefits it brings. A single-host installation on a single-node cluster still allows you to scale Unblu horizontally while leaving the door open to switching to a multiple-node cluster in the future.

If you do not already have an implementation of OpenShift/Kubernetes in place, a single-host installation on a single-node cluster can be a cost-effective entry point.

Limitations of Single-Host Installation on Single-Node Cluster

  • No failover

  • Limited scaling (without interruption): Adding OpenShift/Kubernetes master nodes requires OpenShift/Kubernetes to be set up (almost) from scratch again. (Note that additional master nodes are only expected to be needed for very large cluster installations.)

  • While this setup is not recommended by Red Hat/OpenShift (as only a multiple-node cluster offers failover at the cluster level), it may still be the smart option at the outset.

These limitations notwithstanding, we still recommend a single-host installation on a single-node cluster for an easy start, with the ability to scale, if and when required, without undue further effort.

Single-Node Cluster Setup for 100 Visitor Sessions

These recommendations represent the baseline limit requirements + 5 Rendering Services + OpenShift master (2 CPU, 4 GiB RAM) + hosting server (an additional 2 GiB RAM of base resources is required to run Linux etc.). The arithmetic is spelled out in the sketch after the list below.

  • 16 core CPU

  • 24 GB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network
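
For reference, the recommendation above can be derived from the limits in the baseline table. The sketch below walks through the arithmetic; the component figures are those stated in this section, and the rounding up to 16 cores and 24 GB presumably leaves some headroom.

```python
# Single-node sizing for 100 visitor sessions (CPU in cores, RAM in GiB).
baseline_limit_cpu, baseline_limit_ram = 8, 8   # 8000m / 8Gi limits from the baseline table
rendering_cpu, rendering_ram = 5 * 1, 5 * 1     # 5 additional Rendering Services (1000m / 1Gi each)
master_cpu, master_ram = 2, 4                   # OpenShift master
host_ram = 2                                    # base resources for Linux etc.

total_cpu = baseline_limit_cpu + rendering_cpu + master_cpu              # 15 cores -> 16 core CPU
total_ram = baseline_limit_ram + rendering_ram + master_ram + host_ram   # 19 GiB  -> 24 GB RAM

print(total_cpu, total_ram)  # 15 19
```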