

Unblu 7 (latest)

Sizing the Unblu cluster

Sizing strategy requires not only a good knowledge of the intended processes but also a degree of risk assessment, which can be fine-tuned as usage evolves. For more on sizing strategy and risk assessment, see Application Memory Sizing Strategy.

Baseline hardware requirements

The total requirements are the sum of the CPU and RAM figures for all pods.

| Name                 | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
|----------------------|----------------|--------------|--------------|------------|------------|--------------------|
| Collaboration Server | 1              | 1Gi          | 500m         | 2Gi        | 2000m      | None               |
| HAProxy              | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None               |
| Kafka                | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 1Gi    | 3 x 1000m  | None               |
| nginx                | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None               |
| Rendering Service    | 1              | 512Mi        | 450m         | 1Gi        | 1000m      | None               |
| Zookeeper            | 3              | 3 x 512Mi    | 3 x 200m     | 3 x 512Mi  | 3 x 500m   | None               |
| Total                | 10             | 6400Mi       | 3250m        | 8Gi        | 8000m      | None               |
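The figures above map directly onto Kubernetes resource settings. Below is a minimal sketch of how the Collaboration Server row translates into a container spec; the Deployment excerpt and all names are illustrative (the actual manifests in unblu-kubernetes-base may differ), only the resource figures come from the table.

```yaml
# Hypothetical excerpt of a Deployment spec; names are illustrative,
# the resource figures are the baseline values from the table above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: collaboration-server
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: collaboration-server
          resources:
            requests:
              memory: 1Gi
              cpu: 500m
            limits:
              memory: 2Gi
              cpu: "2"   # 2000m
```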

Persistent storage

Unblu and the required third-party components (HAProxy, Kafka, nginx, and Zookeeper) don’t require any persistent storage. However, if you want to use Grafana and Prometheus to monitor your Unblu setup, you will need to provide persistent storage.

  • Prometheus needs a persistent volume to store the time-series data it collects.

  • For Grafana, Unblu provisions the Prometheus data source, a default admin user, and the dashboards automatically. The configuration for these is part of the deployment files in unblu-kubernetes-base, so Grafana doesn’t hold any data that needs to be persisted.
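If you deploy Prometheus, its volume can be requested with a standard PersistentVolumeClaim. A sketch, assuming a hypothetical claim name and size; choose the actual size based on your retention period and scrape volume:

```yaml
# Illustrative PersistentVolumeClaim for Prometheus time-series data;
# the name and storage size are assumptions, not Unblu defaults.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi   # depends on retention and scrape volume
```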

Failover considerations

With the baseline requirements, you have minimal failover capacity: the Collaboration Server may fail and restart, which would cause interruptions. To achieve failover and full scalability, scale the following pods:

  • Collaboration Server: Scale to avoid interruptions, and to improve secure messenger performance and user experience

  • Rendering Service: Scales automatically for every universal or document co-browsing session

It is possible to scale vertically first by adding more resources (RAM, CPU) to the worker host.

In a single-node setup, the host remains the weakest link.
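Once you run more than one Collaboration Server replica, one way to limit interruptions during node maintenance is a PodDisruptionBudget. A sketch, assuming the pods carry an `app: collaboration-server` label (the label is an assumption, not taken from the Unblu manifests):

```yaml
# Illustrative PodDisruptionBudget keeping at least one Collaboration
# Server pod available during voluntary disruptions; the label is assumed.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: collaboration-server-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: collaboration-server
```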

Scaling options

The following table illustrates your scaling options, regardless of whether you have a single-node or multiple-node setup.

| Name                 | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
|----------------------|----------------|--------------|--------------|------------|------------|--------------------|
| Collaboration Server | m              | m x 1Gi      | m x 500m     | m x 2Gi    | m x 2000m  | None               |
| Rendering Service    | n              | n x 512Mi    | n x 450m     | n x 1Gi    | n x 1000m  | None               |

Scaling Kafka and Zookeeper above the default of 3 nodes is only required for very large installations (more than 20 Collaboration Server nodes).

If you start with a single-node cluster and the setup hits the vertical limit of the host (in terms of memory and/or CPU), extending to a multiple-node cluster requires some additional steps, such as configuring Kubernetes or OpenShift anti-affinity settings for Zookeeper and Kafka.
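An anti-affinity rule of the kind mentioned above spreads the Kafka (and, analogously, Zookeeper) pods across different hosts, so that losing one node doesn’t take out the whole quorum. A sketch of such a rule inside a pod template; the `app: kafka` label is an assumption:

```yaml
# Illustrative podAntiAffinity fragment for a Kafka pod template:
# no two pods with the assumed label app: kafka may share a node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: kafka
        topologyKey: kubernetes.io/hostname
```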

Typical cluster hardware setup

The following table describes a setup with a maximum of 3 Collaboration Servers and 8 universal sessions running at the same time.

| Name                 | Number of pods | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent storage |
|----------------------|----------------|--------------|--------------|------------|------------|--------------------|
| Collaboration Server | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 2Gi    | 3 x 2000m  | None               |
| HAProxy              | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None               |
| Kafka                | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 1Gi    | 3 x 1000m  | None               |
| nginx                | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None               |
| Rendering Service    | 8              | 8 x 512Mi    | 8 x 450m     | 8 x 1Gi    | 8 x 1000m  | None               |
| Zookeeper            | 3              | 3 x 512Mi    | 3 x 200m     | 3 x 512Mi  | 3 x 500m   | None               |
| Total                | 19             | 11.75Gi      | 7400m        | 19Gi       | 19000m     | None               |

  • Minimum requirements = 8 CPU cores (> 7400m) and 12 GiB RAM (> 11.75Gi)

  • Limits of system = 19 CPU cores and 19 GiB RAM
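If you want the cluster to enforce these totals rather than merely plan for them, they can be expressed as a namespace ResourceQuota. A sketch, with `requests.memory` rounded up to 12Gi; the quota name is illustrative:

```yaml
# Illustrative ResourceQuota mirroring the totals of the typical setup;
# the name is an assumption, the figures come from the table above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: unblu-quota
spec:
  hard:
    requests.cpu: 7400m
    requests.memory: 12Gi   # rounded up from 11.75Gi
    limits.cpu: "19"        # 19000m
    limits.memory: 19Gi
```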

Multiple node hardware setup — example

Below is an example of the hardware required for a multiple-node setup. (For the typical hardware required for a single-node cluster setup, see below.)

Multi-node cluster setup for 100 concurrent visitor sessions

This type of setup offers failover and better performance.

OpenShift master node

  • 2 core CPU

  • 16 GiB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

OpenShift worker nodes (3 nodes)

Each worker node requires the following resources:

  • 6 core CPU

  • 8 GiB RAM

  • SSD (min 10GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

Single-host installation on a single-node cluster

OpenShift and Kubernetes can be run as a single-node cluster. There are a number of reasons not to do so (see below), but the reason you may want to run Unblu this way is to manage costs without losing the benefits of scalability. A single host on a single-node cluster still allows you to scale Unblu horizontally while keeping the option open to switch to a multi-node cluster in the future.

If you don’t already have an implementation of OpenShift or Kubernetes in place, a single-host installation on a single cluster can be a cost-effective entry point.

Limitations of a single-host installation on a single-node cluster

  • No failover

  • Limited scaling (without an interruption): Adding OpenShift/Kubernetes master nodes will require OpenShift/Kubernetes to be set up (almost) from scratch again. (Note that we would only expect to have to add more master nodes for very large cluster installations.)

  • While this setup is not recommended by Red Hat/OpenShift (as only a multi-node setup offers failover on the cluster itself), it may still be the smart option at the outset.

These limitations notwithstanding, we still recommend a single-host installation on a single-node cluster for an easy start, and for the ability to scale, if and when required, without undue further effort.

Single-node cluster setup for 100 concurrent visitor sessions

These recommendations represent the baseline limit requirements + 5 Rendering Service pods + OpenShift master (2 CPU, 4 GiB RAM) + hosting server (an additional 2 GiB RAM of base resources is required to run Linux, etc.):

  • 16 core CPU

  • 24 GiB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

See also