Cluster Hardware Requirements

This topic describes installing unblu on a cluster. If you do not need to run the unblu Secure Messenger and want to run unblu 5 as a WAR/EAR file inside an application server/container, see System Requirements - No Cluster. Note that running unblu in this way seriously curtails its functionality.

If you want to run a single host on a single-node cluster, see Single-Host Installation on Single-Node Cluster at the bottom of this page. Note that running unblu in this way entails some limitations, but it may be a good starting point for customers with no experience running OpenShift/Kubernetes.

Running on a multiple-node cluster is highly recommended.

Multiple-Node Cluster

unblu can be run on a single-node cluster, or using a traditional web application server. However, the only way to benefit from the full functionality offered by unblu 5 running on OpenShift/Kubernetes is to run it on a multiple-node cluster.

A multiple-node cluster consists of:

  • Separated master node(s)

  • Three distinct nodes to run unblu's Kafka services

  • Three distinct nodes to run unblu's Zookeeper services

  • Additional nodes (as many as required) to run unblu's Collaboration Server and Rendering Service

The following diagram illustrates the respective unblu cluster components. Each component is represented within one or more pods.

Note: The database is not part of the unblu cluster.

Note: Although Kafka and Zookeeper consume significant resources, customers can consider both of these services as 'black boxes'. Whether you employ a single- or multiple-node cluster you will still require 3 pods (each) to run them.

(Diagram: kubernetes-openshift-flow-diagram-13112018.png)

Sizing the unblu Cluster

Sizing requires two values:

  • Request value: The minimum quantity of resources that is fenced off (reserved) for the container's use.

  • Limit value: The maximum quantity of resources the container is allowed to consume.

Note: These calculations cover only CPU and RAM within an OpenShift/Kubernetes framework. (You must add the cluster framework's own requirements to unblu's requirements to arrive at real-world hardware requirements. See Multiple Node Setup - Example Hardware for more.)

Note: The limit value must be greater than or equal to the request value.

Note: It is possible for an administrator to override pre-defined request values in cases where the cluster is over-committed.
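The request/limit rule above can be sketched as a small validation helper. This is a hypothetical illustration (the function names are ours, not part of unblu or Kubernetes); the unit parsing follows the quantity conventions used in the tables below (m for millicores, Mi/Gi for binary memory units).

```python
def parse_cpu(value: str) -> int:
    """Return CPU in millicores: '500m' -> 500, '2' -> 2000."""
    if value.endswith("m"):
        return int(value[:-1])
    return int(float(value) * 1000)

def parse_memory(value: str) -> int:
    """Return memory in MiB: '512Mi' -> 512, '2Gi' -> 2048."""
    if value.endswith("Gi"):
        return int(float(value[:-2]) * 1024)
    if value.endswith("Mi"):
        return int(value[:-2])
    raise ValueError(f"unsupported unit: {value}")

def limit_is_valid(request: str, limit: str, parse) -> bool:
    """Enforce the rule that the limit must be >= the request."""
    return parse(limit) >= parse(request)

print(limit_is_valid("500m", "2000m", parse_cpu))   # True
print(limit_is_valid("1Gi", "2Gi", parse_memory))   # True
```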

Sizing Strategy

Sizing strategy requires not only a good knowledge of the intended processes but some level of risk-assessment (which can be fine-tuned as usage evolves). For more on strategy and risk assessment see Application Memory Sizing Strategy.

Note: At the outset the only decision you need to make is whether to install unblu on a single- or multiple-node cluster.

Baseline Hardware Requirements

Before looking at the tables below you should understand a little about how requirements for cluster setups are expressed. Requirements are fulfilled by CPU and memory, but both terms need re-evaluating in the context of an OpenShift/Kubernetes cluster.

What is 'CPU'?

The 'CPU' can be conceptualized as an absolute quantity. For the purposes of setting up a cluster: 1 CPU (core) equals 1000m (one thousand millicpu, or millicores), and 1m (one millicpu, or millicore) is the finest resolution available. No matter how many or how few cores your system has, CPU values remain consistent: 100m represents the same quantity on a four-core system as on a 64-core system.
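As a quick sketch of this arithmetic (helper name is ours):

```python
# 1 CPU core = 1000m (millicores); the value is absolute, not relative to host size.
MILLICORES_PER_CORE = 1000

def cores_to_millicores(cores: float) -> int:
    return int(cores * MILLICORES_PER_CORE)

# 100m is the same absolute amount of CPU on any host; only the *share*
# of the host it represents changes with the core count.
share_on_4_cores = 100 / cores_to_millicores(4)    # 0.025  (2.5% of the host)
share_on_64_cores = 100 / cores_to_millicores(64)  # ~0.0016 (~0.16% of the host)
```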

What is 'Memory'?

For our purposes, memory requirements must be defined using binary prefixes: gibibyte (Gi or GiB), mebibyte (Mi or MiB), and kibibyte (Ki or KiB). These simply express the number of bytes as a power of 2 rather than a power of 10.

Note: The reason this can seem confusing is that software companies have traditionally expressed GiB values as GB. Thus, a 500GB hard drive is read by the operating system as 465.66 GiB (gibibytes) but reported as 465.66 GB (gigabytes).

An online calculator for GB-to-GiB conversion is available at https://wintelguy.com/gb2gib.html, and a full explanation of binary prefixes can be found at https://physics.nist.gov/cuu/Units/binary.html
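The conversion itself is a one-liner; this sketch (helper name is ours) reproduces the 500 GB example above:

```python
def gb_to_gib(gb: float) -> float:
    """Convert decimal gigabytes (10^9 bytes) to binary gibibytes (2^30 bytes)."""
    return gb * 10**9 / 2**30

print(round(gb_to_gib(500), 2))  # 465.66
```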

Baseline Hardware Requirements for Single- or Multiple-Node unblu Cluster Setup

Total requirements represent a combination of CPU and RAM.

Name                 | Number of PODs | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent Storage
Collaboration Server | 1              | 1Gi          | 500m         | 2Gi        | 2000m      | None
HAProxy              | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None
Kafka                | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 1Gi    | 3 x 1000m  | None
NGINX                | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None
Rendering Service    | 1              | 512Mi        | 450m         | 1Gi        | 1000m      | None
Zookeeper            | 3              | 3 x 512Mi    | 3 x 200m     | 3 x 512Mi  | 3 x 500m   | None
Total                | 10             | 6400Mi       | 3250m        | 8Gi        | 8000m      | None

If we look, for example, at the entry for Kafka in the table above, we see that the unblu cluster (whether single- or multiple-node) requires three pods, and each of those three pods needs 1Gi (gibibyte) of RAM and 500m (millicores) of CPU for Requests. Therefore, the baseline Kafka requirement, for Requests, is 3Gi of RAM plus 1500m (or 1.5 cores) of CPU.

For Kafka, the Request and Limit values for RAM are identical (3 x 1Gi), but the CPU Limit is double the Request (3 x 1000m, or 3 cores). (This complies with the rule that Limit values must be greater than or equal to Request values.)
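The per-component figures in the table can be totalled programmatically. The sketch below (the data structure and names are ours) reproduces the Total row, working in MiB and millicores:

```python
# Per component: (pods, request_ram_mib, request_cpu_m, limit_ram_mib, limit_cpu_m)
# Requests/limits are per pod; totals multiply by the pod count.
BASELINE = {
    "collaboration-server": (1, 1024, 500, 2048, 2000),
    "haproxy":              (1,  128, 100,  256,  250),
    "kafka":                (3, 1024, 500, 1024, 1000),
    "nginx":                (1,  128, 100,  256,  250),
    "rendering-service":    (1,  512, 450, 1024, 1000),
    "zookeeper":            (3,  512, 200,  512,  500),
}

def totals(components):
    pods = sum(c[0] for c in components.values())
    req_ram = sum(c[0] * c[1] for c in components.values())
    req_cpu = sum(c[0] * c[2] for c in components.values())
    lim_ram = sum(c[0] * c[3] for c in components.values())
    lim_cpu = sum(c[0] * c[4] for c in components.values())
    return pods, req_ram, req_cpu, lim_ram, lim_cpu

print(totals(BASELINE))  # (10, 6400, 3250, 8192, 8000): 6400Mi / 3250m / 8Gi / 8000m
```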

Failover Considerations

With the baseline requirements you have minimal failover capacity: the Collaboration Server may fail and restart, which would cause interruptions. To enjoy failover and full scalability, the following pods should be scaled:

  • Collaboration Server: Scale to avoid interruptions, improve secure messenger performance and user experience.

  • Rendering Service: Scales automatically for every universal or document co-browsing session.

It is possible to scale vertically first, by adding additional resources (RAM, CPU) to the worker host.

Note: In a single-node setup the host would remain the weakest link.

Scaling Options

The following table illustrates your scaling options (independent of whether the setup is single- or multiple-node).

Name                 | Number of PODs | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent Storage
Collaboration Server | m              | m x 1Gi      | m x 500m     | m x 2Gi    | m x 2000m  | None
Rendering Service    | n              | n x 512Mi    | n x 450m     | n x 1Gi    | n x 1000m  | None

Note: Scaling Kafka and Zookeeper above the default of 3 nodes is only required for very large installations (> 20 collaboration server nodes).

Note: If you start with a single-node cluster and the setup hits the vertical limit of the host (memory, CPU), extending to a multiple-node cluster requires some additional steps (for example, Kubernetes anti-affinity settings or OpenShift anti-affinity settings for Zookeeper and Kafka).
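The scaling table reduces to a simple formula in m (Collaboration Servers) and n (Rendering Services). This sketch (the function is ours) adds the scaled pods to the fixed baseline components, HAProxy (1), Kafka (3), NGINX (1), and Zookeeper (3), again in MiB and millicores:

```python
def cluster_size(m: int, n: int):
    """Return (pods, request_ram_mib, request_cpu_m, limit_ram_mib, limit_cpu_m)
    for m Collaboration Servers and n Rendering Services plus the eight
    fixed pods (HAProxy, 3 x Kafka, NGINX, 3 x Zookeeper)."""
    pods = m + n + 8
    req_ram = m * 1024 + n * 512 + (128 + 3 * 1024 + 128 + 3 * 512)
    req_cpu = m * 500 + n * 450 + (100 + 3 * 500 + 100 + 3 * 200)
    lim_ram = m * 2048 + n * 1024 + (256 + 3 * 1024 + 256 + 3 * 512)
    lim_cpu = m * 2000 + n * 1000 + (250 + 3 * 1000 + 250 + 3 * 500)
    return pods, req_ram, req_cpu, lim_ram, lim_cpu

# Typical setup: 3 Collaboration Servers, 8 Rendering Services
print(cluster_size(3, 8))  # (19, 12032, 7400, 19456, 19000): 11.75Gi / 19Gi
```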

Cluster Hardware Typical Setup

A setup with a maximum of 3 Collaboration Servers and 8 universal sessions running at the same time.

Name                 | Number of PODs | Requests RAM | Requests CPU | Limits RAM | Limits CPU | Persistent Storage
Collaboration Server | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 2Gi    | 3 x 2000m  | None
HAProxy              | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None
Kafka                | 3              | 3 x 1Gi      | 3 x 500m     | 3 x 1Gi    | 3 x 1000m  | None
NGINX                | 1              | 128Mi        | 100m         | 256Mi      | 250m       | None
Rendering Service    | 8              | 8 x 512Mi    | 8 x 450m     | 8 x 1Gi    | 8 x 1000m  | None
Zookeeper            | 3              | 3 x 512Mi    | 3 x 200m     | 3 x 512Mi  | 3 x 500m   | None
Total                | 19             | 11.75Gi      | 7400m        | 19Gi       | 19000m     | None

  • Minimal requirements = 8 CPU cores (> 7400m) and 12 GiB RAM (> 11.75Gi)

  • Limits of system = 19 CPU cores and 19 GiB RAM

IMPORTANT: The table above shows only unblu's requirements. OpenShift/Kubernetes hardware requirements are separate and must be added to unblu's requirements. See Multiple Node Setup - Example Hardware for guidance on how much (total) hardware you may need.

Multiple Node Setup - Example Hardware

Below is an example list of required hardware for a multiple-node setup. (For an example of typical hardware required for a single-node cluster setup see Single Node Cluster Setup for 100 Visitors.)

Multiple-Node Cluster-Setup for 100 Visitor Sessions

This type of setup offers failover and better performance.

Openshift Master Node

  • 2 Core CPU

  • 16 GiB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

Openshift Worker Nodes (3 nodes)

Per node
  • 6 Core CPU

  • 8 GiB RAM

  • SSD (min 10GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

Single-Host Installation on Single-Node Cluster

OpenShift/Kubernetes can be run as a single-node cluster. There are a number of reasons not to do this (see Limitations of Single-Host Installation on Single-Node Cluster below), but the reason you may want to run unblu in this way is to manage costs without losing scalability and its many benefits. A single host on a single-node cluster still allows you to scale unblu horizontally while leaving the door open to switching to a multiple-node cluster in the future.

Note: If you do not already have an implementation of OpenShift/Kubernetes in place, a single-host installation on a single-node cluster can be a cost-effective entry point.

Limitations of Single-Host Installation on Single-Node Cluster

  • No failover

  • Limited scaling (without an interruption): Adding OpenShift/Kubernetes master nodes requires setting OpenShift/Kubernetes up (almost) from scratch again. (Note that we would only expect to have to add more master nodes for very large cluster installations.)

  • While this setup is not recommended by Red Hat/OpenShift (as only a multiple-node cluster offers failover on the cluster itself), it may still be the smart option at the outset.

These limitations notwithstanding, we still recommend a single-host installation on a single-node cluster for an easy start and the ability to scale, if and when required, without undue further effort.

Single-Node Cluster-Setup for 100 Visitor Sessions

These recommendations represent the baseline limit requirements + 5 Rendering Services + OpenShift master (2 CPU, 4 GiB RAM) + hosting server (an additional 2 GiB RAM of base resources is required to run Linux, etc.)

  • 16 core CPU

  • 24 GB RAM

  • SSD (min 50GB available space on top of OS)

  • Gigabit Ethernet or fiber channel network

Classic Application Server Installation (no cluster)

You may need a 'classic' setup where unblu is installed/deployed as a WAR/EAR file inside an application container/server. We do not recommend this approach, as it limits unblu's potential and will require a fresh installation/setup should you decide, in the future, to implement horizontal scalability and/or failover. See System Requirements - No Cluster for details on installing with a WAR/EAR.

Note: This setup cannot be used to fulfill use cases requiring the unblu Secure Messenger.

