
Cluster deployment


Before you begin installing Unblu on a cluster, you need the following:

  • A running Kubernetes cluster. This may be an existing cluster already used in production at your organization, a newly installed cluster, or a cloud cluster managed by your organization. (This is distinct from Unblu’s own cloud offering, the Unblu Cloud.) Unblu supports both Kubernetes and OpenShift.

  • The Kustomize configuration management tool. Kustomize has been integrated in kubectl since version 1.14, so it may already be available in your Kubernetes installation.

  • The Unblu Kustomize bundle used to deploy the software. This is provided to you by the Unblu delivery team.

  • Access to the Unblu container image registry to pull the images.
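Since Kustomize has been part of kubectl since version 1.14, a quick shell check (a sketch; adapt to your environment) can decide which invocation to use:

```shell
# Prefer a standalone kustomize binary if one is installed; otherwise
# fall back to the version built into kubectl (available since 1.14).
if command -v kustomize >/dev/null 2>&1; then
  echo "using standalone kustomize"
else
  echo "falling back to 'kubectl kustomize'"
fi
```

With the built-in variant, `kustomize build <dir>` becomes `kubectl kustomize <dir>`.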

Your cluster must satisfy the following requirements:

  1. It must have at least 3 nodes. If it doesn’t, Unblu’s anti-affinity rules prevent a successful deployment.

  2. You don’t enforce a thread limit, or your thread limit is at least 4096. If you have a lower thread limit, Unblu will run into errors under load.

    Note that OpenShift 4 enforces a default thread limit of 1024.

  3. You have a working Ingress controller. On OpenShift, the OpenShift Router works as well.

  4. Unblu must be able to request persistent volumes using a PersistentVolumeClaim. If it can’t, the pre-configured monitoring stack won’t work.
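To verify the thread limit of requirement 2, you can inspect the max-user-processes limit from a shell inside one of your containers. This is a rough check; how the limit is enforced depends on your container runtime:

```shell
# Print the max user processes/threads limit visible to this shell.
# Run this inside a pod to see the limit the runtime enforces.
limit=$(ulimit -u)
echo "limit: $limit"
if [ "$limit" != "unlimited" ] && [ "$limit" -lt 4096 ]; then
  echo "WARNING: thread limit below 4096 - Unblu may run into errors under load"
fi
```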

You might want to check the cluster hardware requirements before you start.

If you’re unable to run kustomize, the Unblu delivery team can send you a prebuilt YAML deployment file.

Access to the Unblu image registry

A Kubernetes cluster requires access to the image registry at all times to pull images. If a company policy prevents this, you can use a company-internal registry as a proxy. Products such as Artifactory can either have images pushed to them manually or pull images transparently in the background.

Access credentials to the Unblu image registry are usually provided as a gcr-secret.yaml YAML file. Apply this file to your cluster before you perform the installation:

Listing 1. Create a namespace and apply the image pull secret (Kubernetes)
kubectl create namespace unblu-test
kubectl apply -f gcr-secret.yaml --namespace=unblu-test
Listing 2. Create a project and apply the image pull secret (OpenShift)
oc new-project unblu-test \
    --description="Unblu Test Environment"
oc project unblu-test
oc apply -f gcr-secret.yaml

Database secret

Unblu stores all data in a relational database. The credentials to access the database must be passed to Unblu as a secret named database.

Listing 3. Database secret
kind: Secret
apiVersion: v1
metadata:
  name: database
type: Opaque
stringData:
  DB_USER: unblu
  DB_PASSWORD: unblu_password
  DB_ADMIN_USER: unblu_admin
  DB_ADMIN_PASSWORD: admin_password
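Note that Kubernetes Secrets with a `data` field require base64-encoded values; plain-text values as shown above are only valid in a `stringData` field. To encode a value manually:

```shell
# base64-encode a secret value for use in a Secret's 'data' field
printf '%s' 'unblu_password' | base64
# → dW5ibHVfcGFzc3dvcmQ=
```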

The database secret is used to populate the user configuration, so you don’t need to declare the corresponding database credential parameters manually in the configuration file.

Other database-related configuration is part of the configuration file and follows the Unblu configuration standard. Refer to the section Database configuration for more details.

Performing the installation

The Unblu delivery team will send a compressed archive containing a set of files. The listing below assumes that you’ve extracted the bundle into a folder called unblu-installation.

Listing 5. Build a kustomize bundle and apply the YAML to a cluster
kustomize build unblu-installation > unblu.yaml
kubectl apply -f unblu.yaml

Before deploying Unblu into a cluster, you may want to adjust the following in kustomization.yaml.

kind: Kustomization

namespace: customer (1)

bases: (2)
- unblu-kubernetes-base/collaboration-server
- unblu-kubernetes-base/renderingservice
- unblu-kubernetes-base/k8s-ingress
- unblu-kubernetes-base/k8s-prometheus
- unblu-kubernetes-base/grafana

resources: [] (3)

patchesStrategicMerge: [] (4)

configMapGenerator:
- name: collaboration-server-config
  behavior: merge
  files:
    - (5)

secretGenerator:
- name: ingress-tls (6)
  behavior: merge
  files:
    - certs/tls.crt
    - certs/tls.key
  type: ""

images: (7)
  - name:
  - name:
  - name:
  - name:
  - name:
1 Change the namespace (Kubernetes) or project (OpenShift) to be used.
2 Add or remove base modules, depending on your environment or license.
3 Deploy custom components as part of Unblu.
4 Patch some values of the deployment.
5 Add the configuration file to the deployment.
6 Add the TLS certificate as a secret to be used for the Ingress or Route.
7 Rewrite the images source to a new registry.
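As an illustration of callout 7, an entry in the images transformer rewrites where an image is pulled from. The image and registry names below are hypothetical; use the names from your delivered kustomization.yaml:

```yaml
images:
  - name: gcr.io/unblu/collaboration-server          # hypothetical original image name
    newName: registry.example.com/unblu/collaboration-server
    newTag: "6.0.0"                                  # optionally pin a specific tag
```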

Instead of updating the kustomization.yaml that was delivered to you, we recommend creating a new one so that your customizations remain separate from our deliveries.

kind: Kustomization

namespace: unblu-production

bases:
- unblu-delivery

Update an existing installation

Upgrading an existing Unblu installation involves the following steps:

  1. Remove the existing deployment from the cluster using a cleanup script (see the listings below).

  2. Apply the new deployment, identical to a new installation.

  3. Database patches are automatically applied when the Unblu server starts.

For simple configuration updates, the first step may be omitted; for Unblu release upgrades, all steps are mandatory.

Listing 6. Cleanup script for Kubernetes
#!/usr/bin/env bash

# Namespace to clean up; pass it as the first argument
NAMESPACE="$1"

read -p "Do you really want to clean environment \"$NAMESPACE\"? (y/N) " -n 1 -r
if [[ ! $REPLY =~ ^[yY]$ ]]; then
  exit 1
fi

echo ""
echo "Dropping Unblu"
kubectl delete deployment,pod -n $NAMESPACE -l "component = collaboration-server"
kubectl delete statefulset,pod -n $NAMESPACE -l "component in (kafka, zookeeper)" \
  --force --grace-period=0
kubectl delete deployment,statefulset,pod,service,configmap,persistentvolumeclaim,secret \
  -n $NAMESPACE -l "app = unblu"

read -p "Do you want to drop the metrics platform (Prometheus, Grafana) as well? (y/N) " -n 1 -r

if [[ $REPLY =~ ^[yY]$ ]]; then
  kubectl delete deployment,pod,service,configmap,persistentvolumeclaim,secret \
    -n $NAMESPACE -l "app in (grafana, prometheus)"
fi

echo "Finished"
Listing 7. Cleanup script for OpenShift
#!/usr/bin/env bash

oc whoami &>/dev/null
if [ "$?" != "0" ]; then
  echo "You are not logged in to any OpenShift cluster. Please log in first (oc login) and select the correct project"
  exit 1
fi

if [[ ! $1 = "-f" ]]; then
  read -p "Do you want to delete the contents of $(oc project -q)? (y/N) " -r

  if [[ ! $REPLY =~ ^[nNyY]?$ ]]; then
    echo "Unexpected answer. Exiting"
    exit 2
  fi

  if [[ ! $REPLY =~ ^[yY]$ ]]; then
    exit 0
  fi
fi

echo "Dropping Unblu"
oc delete deployment,pod -l "component = collaboration-server"
oc delete statefulset,pod -l "component in (kafka, zookeeper)" --force --grace-period=0
oc delete deployment,statefulset,pod,service,configmap,persistentvolumeclaim,secret -l "app = unblu"

read -p "Do you want to drop the metrics platform (Prometheus, Grafana) as well? (y/N) " -n 1 -r

if [[ $REPLY =~ ^[yY]$ ]]; then
  oc delete deployment,pod,service,configmap,persistentvolumeclaim,secret -l "app in (grafana, prometheus)"
fi

echo "Finished"

Smoke test of an OpenShift installation

Once you have completed an OpenShift installation, you can check the installation with the following procedure.

All of the listed instructions must succeed for the smoke test to pass. Perform the tests immediately after installation to ensure that you don’t miss important log messages.

OpenShift deployment status


oc status

Success criteria

No errors reported.


Example output from a test project:

$ oc status

svc/alertmanager - -> 9093
  deployment/alertmanager deploys
    deployment #1 running for 6 days - 1 pod

svc/blackbox-exporter - -> 9115
  deployment/blackbox-exporter deploys
    deployment #1 running for 6 days - 1 pod

svc/collaboration-server -
  deployment/collaboration-server deploys
    deployment #1 running for 4 days - 1 pod

svc/glusterfs-dynamic-bd5fa376-fb0a-11e9-8274-00ffffffffff -
svc/glusterfs-dynamic-bd66cca4-fb0a-11e9-8274-00ffffffffff -
svc/glusterfs-cluster -

svc/grafana - -> 3000
  deployment/grafana deploys
    deployment #1 running for 6 days - 1 pod

svc/haproxy -
  deployment/haproxy deploys,
    deployment #1 running for 6 days - 2 pods

svc/kafka-hs (headless):9092
svc/kafka -
  statefulset/kafka manages
    created 4 days ago - 3 pods

(redirects) to pod port 8080-tcp (svc/nginx)
  deployment/nginx deploys,
    deployment #1 running for 6 days - 2 pods

svc/prometheus - -> 9090
  deployment/prometheus-server deploys
    deployment #1 running for 6 days - 1 pod

svc/prometheus-kube-state-metrics - -> 8080
  deployment/prometheus-kube-state-metrics deploys
    deployment #1 running for 6 days - 0/1 pods

svc/zookeeper-hs (headless) ports 2888, 3888
svc/zookeeper -
  statefulset/zookeeper manages
    created 4 days ago - 0/3 pods growing to 3

1 info identified, use 'oc status --suggest' to see details.

Unblu server startup status


$ oc logs <collaboration-server pod name>

For Unix/Linux systems, the pod name lookup and the check can be combined:

$ oc logs $(oc get pods -l component=collaboration-server -o name | cut -d '/' -f 2) | grep "ready for requests"

Success criteria

A message containing "ready for requests" must exist in the logs.

$ oc logs collaboration-server-123

{"message":"Initializing Timer ","logger":"$1","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"Start Level: Equinox Container: a46608a9-4214-4f0e-871a-a24812ffffff","@timestamp":"2019-11-01T13:55:15.463Z"}
{"message":"all bundles (247) started in 64039ms ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.753Z"}
{"message":"Removed down state INITIALIZING. New states [ENTITY_CONFIGURATION_IMPORTING] ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.753Z"}
{"message":"No entity import source configured ","logger":"com.unblu.core.server.entityconfig.internal.EntityConfigImport","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":"Removed down state ENTITY_CONFIGURATION_IMPORTING. New states [] ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":" 6.0.0-beta.1-WjNnGKRa ready for requests ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":"disabling the agentAvailability auto updating due to request inactivity ","logger":"com.unblu.core.server.livetracking.agent.internal.AgentAvailabilityService","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"AgentAvail-timer","@timestamp":"2019-11-01T14:55:03.814Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"ROxknrGXQCuse2Q3CMFu2Q","execution":"","thread":"qtp1897380042-37","@timestamp":"2019-11-05T16:13:49.036Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"TXaZh7OxRhW2N6tFtRHJ9g","execution":"","thread":"qtp1897380042-42","@timestamp":"2019-11-05T16:13:49.067Z"}
{"message":"enabling agentAvailability auto updating ","logger":"com.unblu.core.server.livetracking.agent.internal.AgentAvailabilityService","severity":"INFO","user":"","client":"","page":"","request":"TXaZh7OxRhW2N6tFtRHJ9g","execution":"","thread":"qtp1897380042-42","@timestamp":"2019-11-05T16:13:49.087Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"_Drn9FNIRaODTCAqZiuSug","execution":"","thread":"qtp1897380042-37","@timestamp":"2019-11-05T16:14:07.306Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"superadmin","client":"","page":"","request":"G5UzUttIRmCkc3QXUbR3Pw","execution":"","thread":"qtp1897380042-39","@timestamp":"2019-11-05T16:14:09.965Z"}
{"message":"sessionItem prepared: TrackingItem type: TRACKINGLIST status: OPEN id: null details: accountId=wZvcAnbBSpOps9oteH-Oxw&status=OPEN&type=AGENTFORWARDING session: hPAkysS1Qqa7V5DVLrth7w node: collaboration-server-559b6487c8-qzqkx node instance: 1x2j3Qn_T--dszMYT_MI8g created: Tue Nov 05 16:14:11 UTC 2019  ","logger":"com.unblu.core.server.collaboration.CollaborationSession","severity":"INFO","user":"","client":"","page":"","request":"UR8u7Fh6TJCRaeJfKBPmxA","execution":"CollaborationSessionStore","thread":"RxCachedThreadScheduler-1 - CollaborationSessionStore -  $ FixedContextScheduler#CollaborationSessionStore $ ","@timestamp":"2019-11-05T16:14:11.554Z"}
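The grep-based check shown earlier can be exercised locally against a single captured log line, for example:

```shell
# Simulate the startup check against one captured log line
line='{"message":" 6.0.0-beta.1-WjNnGKRa ready for requests ","severity":"INFO"}'
echo "$line" | grep -q "ready for requests" && echo "server is ready"
# → server is ready
```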

Check browser access

  • Open a browser and navigate to the Agent Desk domain.

  • Log in.

Success criteria

Unblu displays the login screen and, after login, the Agent Desk. There are no errors in the browser console.

JavaScript demo page and documentation

If you need to use the Unblu JavaScript demo page, you can activate it by setting com.unblu.server.resources.enableDemoResources to true. If you also want the Unblu docs available locally, set com.unblu.server.resources.enableDocResources to true.
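Both settings are plain Unblu configuration properties; in property-file syntax they would look like this:

```properties
com.unblu.server.resources.enableDemoResources=true
com.unblu.server.resources.enableDocResources=true
```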