
Guide to install and configure the RDA Fabric platform in an on-premise environment.

1. RDAF platform and its components

Robotic Data Automation Fabric (RDAF) is designed to manage data in multi-cloud and multi-site environments at scale. It is built on a microservices-based, distributed architecture which can be deployed on a Kubernetes cluster infrastructure or in a native docker container environment managed through the RDA Fabric (RDAF) deployment CLI.

The RDAF deployment CLI is built on top of the docker-compose container management utility to automate the lifecycle management of RDAF platform components, which includes install, upgrades, patching, backup, recovery, and other management and maintenance operations.

The RDAF platform consists of the below set of services, which can be deployed on a single virtual machine or baremetal server, or spread across multiple virtual machines or baremetal servers.

  • RDA core platform services

    • registry
    • api-server
    • identity
    • collector
    • scheduler
    • scheduler-admin
    • portal-ui
    • portal-backend
  • RDA infrastructure services

    • NATs
    • MariaDB
    • Minio
    • Opensearch
    • Kafka / Zookeeper
    • Redis
    • HAproxy

Note

Note: The CloudFabrix RDAF platform integrates with the above opensource services and uses them as back-end components. However, these opensource service images are not bundled with the RDAF platform by default. Customers can download CloudFabrix validated versions from a publicly available docker repository (ex: quay.io or docker hub) and deploy them under a GPL / AGPL or Commercial license as per their support requirements. The RDAF deployment tool provides a hook to assist the Customer in downloading these opensource software images (supported versions) from the Customer's chosen repository.

  • RDA application services

    • cfxOIA (Operations Intelligence & Analytics)
    • cfxAIA (Asset Intelligence & Analytics)
  • RDA worker service

  • RDA event gateway
  • RDA edgecollector
  • RDA studio

2. Docker registry access for RDAF platform services deployment

CloudFabrix provides a secure docker image registry hosted on AWS cloud, which contains all of the required docker images to deploy the RDA Fabric platform, infrastructure and application services.

For deploying RDA Fabric services, the environment needs to have access to the CloudFabrix docker registry hosted on AWS cloud over the internet. Below are the docker registry URL and port.

Outbound internet access:

  • URL: https://cfxregistry.cloudfabrix.io
  • Port: 443
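
A quick way to confirm outbound access from a candidate VM is to probe the registry endpoint before starting the deployment. The below commands are a minimal sketch, assuming curl and nc are available on the VM.

# Confirm DNS resolution and HTTPS reachability of the CloudFabrix registry
curl -sI https://cfxregistry.cloudfabrix.io
# Or test the TCP port directly
nc -vz cfxregistry.cloudfabrix.io 443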

The below picture illustrates the network access flow from the on-premise environment to the CloudFabrix docker registry hosted on AWS cloud.

Docker registry hosted on AWS cloud


Additionally, CloudFabrix also supports hosting an on-premise docker registry in a restricted environment where RDA Fabric VMs do not have direct internet access, with or without an HTTP proxy.

Once the on-premise docker registry service is installed within the Customer's DMZ environment, it communicates with the docker registry service hosted on AWS cloud and replicates the selective images required for RDA Fabric deployment, for a new installation as well as on-going updates and patches.

RDA Fabric VMs then pull the images locally from the on-premise docker registry.

The below picture illustrates the network access flow when the docker registry is hosted in the on-premise environment.

Docker registry hosted On-premise


Please refer to the On-premise Docker registry setup section to install, configure and manage the on-premise docker registry service and images.

Tip

Deploying an on-premise docker registry server is recommended; however, it is optional if there is no restriction on internet access. With an on-premise docker registry server, images can be downloaded offline and kept ready for a fresh install or an update, which avoids network glitches or download issues during a production environment's installation or upgrade.

3. HTTP Proxy support for deployment

Optionally, RDA Fabric docker images can also be accessed over an HTTP proxy during the deployment if one is configured to control internet access.

All of the RDA Fabric machines where the services are going to be deployed should be configured with HTTP proxy settings.

  • Edit the /etc/environment file and define the HTTP Proxy server settings as shown below.
http_proxy="http://<username>:<password>@192.168.142.10:3128"
https_proxy="http://<username>:<password>@192.168.142.10:3128"
no_proxy="localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"
HTTP_PROXY="http://<username>:<password>@192.168.142.10:3128"
HTTPS_PROXY="http://<username>:<password>@192.168.142.10:3128"
NO_PROXY="localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"
export http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY

Info

Note: The IP address details are given for reference only. They need to be replaced with the HTTP Proxy server IP and port applicable to your environment.

Warning

Note: For the no_proxy and NO_PROXY environment variables, please include the loopback address and the IP addresses of all RDA platform, infrastructure, application and worker nodes. This ensures that RDA Fabric's internal application traffic does not go through the HTTP proxy server.

Additionally, include the IP addresses or DNS names of any target applications or devices that do not need to go through the HTTP Proxy server.
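
After logging out and back in, the settings can be spot-checked. This is a minimal verification sketch, assuming curl is installed.

# Confirm the proxy variables are present in the session environment
env | grep -i proxy
# Confirm the registry is reachable through the proxy (curl honors https_proxy)
curl -sI https://cfxregistry.cloudfabrix.io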

Alternatively, the HTTP proxy settings can be applied through a profile script that is sourced at login.

  • Edit the /etc/profile.d/proxy.sh file and define the HTTP Proxy server settings as shown below.

sudo vi /etc/profile.d/proxy.sh
http_proxy="http://<username>:<password>@192.168.142.10:3128"
https_proxy="http://<username>:<password>@192.168.142.10:3128"
no_proxy="localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"
HTTP_PROXY="http://<username>:<password>@192.168.142.10:3128"
HTTPS_PROXY="http://<username>:<password>@192.168.142.10:3128"
NO_PROXY="localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"
export http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY

  • Update the file permissions to be executable, then source the file (or log out and log back in) to enable the HTTP proxy settings.
sudo chmod +x /etc/profile.d/proxy.sh
source /etc/profile.d/proxy.sh
  • Configure the HTTP proxy for the APT package manager (Ubuntu) by editing the /etc/apt/apt.conf.d/80proxy file as shown below.

sudo vi /etc/apt/apt.conf.d/80proxy
Acquire::http::proxy "http://<username>:<password>@192.168.142.10:3128";
Acquire::https::proxy "http://<username>:<password>@192.168.142.10:3128";
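
To confirm that APT can reach its repositories through the proxy, a package index refresh is a simple smoke test; any apt operation exercises the proxy settings.

# Should complete without connection timeouts if the proxy settings are correct
sudo apt update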

Info

Note: The IP address details are given for reference only. They need to be replaced with the HTTP Proxy server IP and port applicable to your environment. The username and password fields are optional and needed only if the HTTP Proxy is enabled with user authentication.


DNS Resolution and CFX Registry Access Issues

Note

If the above proxy settings are not working and you are seeing DNS resolution or CFX registry access failures, please follow the below steps.

  • Steps to resolve DNS issues and access to the CFX registry

Authentication Failure (screenshot)

  • To update the DNS records, add the below domains to the /etc/hosts file
sudo vi /etc/hosts
127.0.0.1  localhost
54.177.20.202 cfxregistry.cloudfabrix.io
54.146.255.141 quay.io

Note

quay.io uses a dynamic IP, so before updating the hosts file, check the IP again using the command mentioned below.

ping quay.io

Quay.io (screenshot)

  • Please check the DNS Server Settings by using the below command
resolvectl status
Current DNS Server: 10.95.159.101
        DNS Servers: 10.95.159.101
                      10.95.159.100
  • To add additional DNS servers, edit the netplan configuration file using the below command
sudo vi /etc/netplan/00-netcfg.yaml
network:
    version: 2
    renderer: networkd
    ethernets:
      ens160:
        dhcp4: no
        dhcp6: no
        addresses: [10.95.125.66/24]
        gateway4:  10.95.125.1
        nameservers:
          addresses: [10.95.159.101,10.95.159.100]
  • To apply the above changes please run the below command
sudo netplan apply
  • If you still see any DNS server settings issues, please run the below commands

sudo systemctl restart systemd-resolved
sudo systemctl status systemd-resolved
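
As a quick verification, the registry hostnames can be resolved directly through systemd-resolved; this is a minimal check, assuming resolvectl is available (Ubuntu 20.04 and later).

# Confirm name resolution now works for the registry endpoints
resolvectl query cfxregistry.cloudfabrix.io
resolvectl query quay.io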

  • Configure Docker Daemon with HTTP Proxy server settings.
sudo mkdir -p /etc/systemd/system/docker.service.d
cd /etc/systemd/system/docker.service.d

Create a file called http-proxy.conf under the above directory and add the HTTP Proxy configuration lines as shown below.

vi http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://<username>:<password>@192.168.142.10:3128"
Environment="HTTPS_PROXY=http://<username>:<password>@192.168.142.10:3128"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"

Warning

Note: If a username and password are required for HTTP Proxy server authentication, and the username has any special characters like "\" (ex: username\domain), it needs to be entered in URL-encoded format. This is applicable only for the Docker daemon. Please follow the below instructions.

URL Encode / Decode: https://www.urlencoder.org

If the username is john\acme.com, the URL-encoded value is john%5Cacme.com and the HTTP Proxy configuration looks like below.

[Service]
Environment="HTTP_PROXY=http://john%5Cacme.com:password@192.168.142.10:3128"
Environment="HTTPS_PROXY=http://john%5Cacme.com:password@192.168.142.10:3128"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.192.201,192.168.192.202,192.168.192.203,192.168.192.204,*.rhel.pool.ntp.org,*.us.pool.ntp.org"
  • Restart the RDA Platform, Infrastructure, Application and Worker node VMs to apply the HTTP Proxy server settings.

  • To apply the HTTP Proxy server settings at the docker level, run the below two commands

sudo systemctl daemon-reload
sudo systemctl restart docker
  • After restarting the docker service, verify the configuration by checking the docker environment using the below command
sudo systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://10.95.125.66:3128 HTTPS_PROXY=https://10.95.125.66:3129 NO_PROXY=localhost,127.0.0.1,cfxregistry.cloudfabrix.io

Note

You can find more info about docker proxy configuration at the below URL: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy

  • Verify that you are able to connect to the CloudFabrix docker registry URL by running the below command.
curl -vv https://cfxregistry.cloudfabrix.io:443
curl -vv https://cfxregistry.cloudfabrix.io:443
* Rebuilt URL to: https://cfxregistry.cloudfabrix.io:443/
*   Trying 54.177.20.202...
* TCP_NODELAY set
* Connected to cfxregistry.cloudfabrix.io (54.177.20.202) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
.
.
.
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: cfxregistry.cloudfabrix.io
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
  • After configuring the Docker daemon, please run the below docker login command to verify that the Docker daemon is able to access the CloudFabrix docker registry service.
docker login -u=readonly -p=readonly cfxregistry.cloudfabrix.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/rdauser/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Login Succeeded should be seen as shown in the above command's output.
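
To avoid the insecure --password warning shown in the above output, the same login can be performed by passing the password on stdin:

echo "readonly" | docker login -u readonly --password-stdin cfxregistry.cloudfabrix.io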

4. RDAF platform resource requirements

RDA Fabric platform deployment can vary from simple to advanced depending on the type of environment and the requirements.

A simple deployment can consist of one or more VMs for smaller and non-critical environments, while an advanced deployment consists of many RDA Fabric VMs to support high availability and scale for business-critical environments.

CloudFabrix provided OVF support: VMware vSphere 6.0 or above

4.1 Single VM deployment:

The below configuration can be used for demo or POC environments. In this single VM configuration, all of the RDA Fabric platform, infrastructure and application services will be installed.

For deployment and configuration options, please refer to the OVF based deployment or Deployment on RHEL/Ubuntu OS sections within this document.

Quantity: 1
VM Type: OVF Profile: RDA Fabric Infra Instance
Services: Platform, Infrastructure, Application & Worker
CPU: 8
Memory: 64GB
Network: 1 Gbps / 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 100GB
  • /minio-data: 50GB
  • /kafka-logs: 25GB
  • /zookeeper: 15GB
  • /var/mysql: 50GB
  • /opensearch: 50GB

4.2 Distributed VM deployment:

The below configuration can be used to distribute the RDA Fabric services for smaller or non-critical environments.

For deployment and configuration options, please refer to the OVF based deployment or Deployment on RHEL/Ubuntu OS sections within this document.

Quantity: 1
OVF Profile: RDA Fabric Infra Instance
Services: Platform & Infrastructure
CPU: 8
Memory: 32GB
Network: 1 Gbps / 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 50GB
  • /minio-data: 50GB
  • /kafka-logs: 25GB
  • /zookeeper: 15GB
  • /var/mysql: 50GB
  • /opensearch: 50GB

Quantity: 1
OVF Profile: RDA Fabric Platform Instance
Services: Application (OIA/AIA)
CPU: 8
Memory: 48GB
Network: 1 Gbps / 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 50GB

Quantity: 1
OVF Profile: RDA Fabric Platform Instance
Services: RDA Worker
CPU: 8
Memory: 32GB
Network: 1 Gbps / 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 25GB
  • /var/lib/docker: 25GB

4.3 HA, Scale and Production deployment:

The below configuration can be used to distribute the RDA Fabric services in a production environment, to be highly available and to support larger workloads.

For deployment and configuration options, please refer to the OVF based deployment or Deployment on RHEL/Ubuntu OS sections within this document.

Quantity: 3
OVF Profile: RDA Fabric Infra Instance
Services: Infrastructure
CPU: 8
Memory: 48GB
Network: 10 Gbps
Storage (SSD only):
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 50GB
  • /minio-data: 100GB
  • /kafka-logs: 50GB
  • /zookeeper: 25GB
  • /var/mysql: 150GB
  • /opensearch: 100GB

Quantity: 2
OVF Profile: RDA Fabric Platform Instance
Services: RDA Platform
CPU: 4
Memory: 24GB
Network: 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 50GB

Quantity: 3 or more
OVF Profile: RDA Fabric Platform Instance
Services: Application (OIA/AIA)
CPU: 8
Memory: 48GB
Network: 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 50GB
  • /var/lib/docker: 50GB

Quantity: 3 or more
OVF Profile: RDA Fabric Platform Instance
Services: RDA Worker
CPU: 8
Memory: 32GB
Network: 10 Gbps
Storage:
  • / (root): 75GB
  • /opt: 25GB
  • /var/lib/docker: 25GB

Important

For a production rollout, the RDA Fabric platform's resources such as CPU, Memory and Storage need to be sized appropriately depending on the environment size in terms of alert ingestion per minute, total number of assets for discovery and the discovery frequency, and data retention of assets, alerts/events, incidents and observability data. Please contact support@cloudfabrix.com for guidance on resource sizing.

Important

The Minio service requires a minimum of 4 VMs to run in HA mode and provide 1 node failure tolerance. For this requirement, an additional disk /minio-data needs to be added on one of the RDA Fabric Platform or Application VMs, so that the Minio cluster service spans the 3 RDA Fabric Infrastructure VMs plus 1 of the RDA Fabric Platform or Application VMs. Please refer to Adding additional disk for Minio for more information on how to add and configure it.

4.4 Network layout and ports:

Please refer to the below picture, which outlines the network access layout between the RDAF services and the ports used by them. These are applicable for both Kubernetes and non-Kubernetes environments.


RDAF Services & Network ports:

  • Minio (RDAF Infrastructure Service): 9000/TCP (Internal), 9443/TCP (External)
  • MariaDB (RDAF Infrastructure Service): 3306/TCP (Internal), 3307/TCP (Internal), 4567/TCP (Internal), 4568/TCP (Internal), 4444/TCP (Internal)
  • Opensearch (RDAF Infrastructure Service): 9200/TCP (Internal), 9300/TCP (Internal), 9600/TCP (Internal)
  • Kafka (RDAF Infrastructure Service): 9092/TCP (Internal), 9093/TCP (Internal & External)
  • Zookeeper (RDAF Infrastructure Service): 2181/TCP (Internal), 2888/TCP (Internal)
  • NATs (RDAF Infrastructure Service): 4222/TCP (External), 6222/TCP (Internal), 8222/TCP (Internal)
  • Redis (RDAF Infrastructure Service): 6379/TCP (Internal)
  • Redis-Sentinel (RDAF Infrastructure Service): 26379/TCP (Internal)
  • HAProxy (RDAF Infrastructure Service): 443/TCP (External), 7443/TCP (External), 25/TCP (External), 7222/TCP (External)
  • RDA API Server (RDAF Core Platform Service): 8807/TCP (Internal), 8808/TCP (Internal)
  • RDA Portal Backend (RDAF Core Platform Service): 7780/TCP (Internal)
  • RDA Portal Frontend (Nginx) (RDAF Core Platform Service): 8080/TCP (Internal)
  • RDA Webhook Server (RDAF Application Service): 8888/TCP (Internal)
  • RDA SMTP Server (RDAF Application Service): 8456/TCP (Internal)

Internal Network ports: These ports are used for internal communication between the RDAF services.

External Network ports: These ports are exposed for incoming traffic into the RDAF platform, such as portal UI access, RDA Fabric access (NATs, Minio & Kafka) for RDA workers & agents deployed at edge locations, Webhook based alerts, SMTP email based alerts etc.
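
To spot-check the port plan after deployment, the externally exposed ports can be probed from an edge worker/agent host, and the listening sockets can be listed on an RDAF VM. The below commands are a sketch; the FQDN is a placeholder to be replaced with your RDAF HAProxy address.

# From an edge location: verify the external ports are reachable
nc -vz <cfx-rdaf-platform-fqdn> 443
nc -vz <cfx-rdaf-platform-fqdn> 4222
# On an RDAF VM: list the locally listening TCP ports
sudo ss -tlnp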

Asset Discovery and Integrations: Network Ports

  • Windows AD or LDAP (Identity and Access Management): RDAF platform --> endpoints (Windows AD or LDAP): 389/TCP, 636/TCP
  • SSH based discovery: RDA Worker/Agent (EdgeCollector) --> endpoints (ex: Linux/Unix, Network/Storage devices): 22/TCP
  • HTTP API based discovery/integration: RDA Worker / AIOps app --> endpoints (ex: SNOW, CMDB, vCenter, K8s, AWS etc.): 443/TCP, 80/TCP, 8080/TCP
  • Windows OS discovery using WinRM/SSH protocol: RDA Worker --> Windows Servers: 5985/TCP, 5986/TCP, 22/TCP
  • SNMP based discovery: RDA Agent (EdgeCollector) --> endpoints (ex: network devices like switches, routers, firewalls, load balancers etc.): 161/UDP, 161/TCP

5. RDAF platform VMs deployment using OVF

Download the latest OVF image from CloudFabrix (or contact support@cloudfabrix.com).

Supported VMware vSphere version: 6.5 or above

Step-1: Login to VMware vCenter Webclient to install RDA Fabric platform VMs using the downloaded OVF image.

Info

Note: It is expected that the user deploying VMs for the CloudFabrix RDA Fabric platform has sufficient VMware vCenter privileges, and has the necessary pre-requisite credentials and details handy (e.g. IP Address/FQDN, Gateway, DNS & NTP server details).

Step-2: Select a vSphere cluster/resource pool in vCenter, right click on it and then select Deploy OVF Template as shown below.

CFX-OVF-Step01

Step-3: Select the OVF image from the location it was downloaded.

CFX-OVF-Step02

Info

Note: When the VMware vSphere Webclient is used to deploy an OVF, it expects all the files necessary to deploy the OVF template to be selected. Select all the binary files (.ovf, .mf, .vmdk) for deploying the VM.

CFX-OVF-Step03 CFX-OVF-Step04

Step-4: Click Next, enter an appropriate 'VM Name' and select the Datacenter on which it is going to be deployed.

CFX-OVF-Step05

Step-5: Click Next and select an appropriate vSphere Cluster / Resource pool where the RDA Fabric VM is going to be deployed.

CFX-OVF-Step06

Step-6: Click Next and you are navigated to the deployment configuration view. The following configuration options are available to choose from during the deployment.

  • RDA Fabric Platform Instance: Select this option to deploy any of the RDA Fabric services such as Platform, Application, Worker, Event gateway and Edge collector. For HA configuration, multiple instances need to be provisioned.

  • RDA Fabric Infra Instance: Select this option to deploy RDA Fabric infrastructure services such as MariaDB, Kafka, Zookeeper, Minio, NATs, Redis and Opensearch. For HA configuration, multiple instances need to be provisioned.

CFX-OVF-Step07

Step-7: Click Next and you are navigated to selecting the Datastore (Virtual Storage). Select Datastore / Datastore Cluster where you want to deploy the VM. Make sure you select 'Thin Provision' option as highlighted in the below screenshot.

CFX-OVF-Step08

Step-8: Click Next and you are navigated to Network port-group view. Select the appropriate Virtual Network port-group as shown below.

CFX-OVF-Step09

Step-9: Click Next and you are taken to Network Settings/Properties as shown below. Please enter all the necessary details such as password, network settings and disk size as per the requirements.

  • The default OVF username and password are rdauser and rdauser1234 (update the password field to change the default password)

Warning

Note: Please make sure to configure the same password for the rdauser user on all of the RDA Fabric VMs.

CFX-OVF-Step10

Step-10: Adjust the disk size settings based on the environment size. For production deployments, please adjust the disk sizes for /var/lib/docker and /opt to 75GB.

CFX-OVF-Step11

Step-11: Click Next to see a summary of OVF deployment settings and Click on Finish to deploy the VM.

Step-12: Before powering ON the deployed VM, Edit the VM settings and adjust the CPU and Memory settings based on the environment size. For any help/guidance on resource sizing, please contact support@cloudfabrix.com.

Step-13: Power ON the VM and wait until it is completely up with the OVF settings. It usually takes around 2 to 5 minutes.

Info

Note: Repeat the above OVF deployment steps (step-1 through step-13) for provisioning additional required VMs for RDA Fabric platform.

5.1 Post OS Image / OVF Deployment Configuration

Below steps are applicable for both Ubuntu 20.x and RHEL 8.x environments.

Step 1: Login into the RDA Fabric VMs using any SSH client (ex: putty). The default username is rdauser.

Step 2: Verify that NTP time is in sync on all of the RDA Fabric Platform, Infrastructure, Application, Worker & On-premise docker service VMs.

Timezone settings: Below are some useful commands to view, change and set the Timezone on RDA Fabric VMs.

sudo tzselect

sudo timedatectl

sudo timedatectl set-timezone Europe/London

Important

The date & time settings should be in sync across all of RDA Fabric VMs for the application services to function appropriately.

To manually sync VM's time with NTP server, run the below commands.

sudo systemctl stop chronyd
sudo chronyd -q 'server <ntp-server-ip> iburst'
sudo systemctl start chronyd

To configure and update the NTP server settings, update /etc/chrony/chrony.conf with the NTP server details and restart the chronyd service.

sudo systemctl stop chronyd
sudo vi /etc/chrony/chrony.conf

# Add the below line at the end of the file. Repeat the line for each NTP server's IP Address
server <ntp-server-ip> prefer iburst

sudo systemctl start chronyd
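
To verify that the VM is synchronized after the change, the below commands can be used (assuming chrony is the active time service, as on the CloudFabrix provided OVF).

# Show the current sync status and the selected NTP sources
chronyc tracking
chronyc sources -v
timedatectl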

Firewall Configuration:

On Ubuntu, run the below ufw commands to open or close application service ports within the firewall service if needed.

sudo ufw allow <port-number>/<tcp/udp>
sudo ufw deny <port-number>/<tcp/udp>

On RHEL, run the below firewalld commands to open or close application service ports within the firewall service if needed.

sudo firewall-cmd --add-port <port-number>/<tcp/udp> --permanent
sudo firewall-cmd --remove-port <port-number>/<tcp/udp> --permanent
sudo firewall-cmd --reload
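
The effective firewall rules can be reviewed afterwards; a quick check on each OS:

# Ubuntu
sudo ufw status verbose

# RHEL
sudo firewall-cmd --list-ports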

Verify network bandwidth between RDAF VMs:

For production deployments, the network bandwidth between RDAF VMs should be a minimum of 10Gbps. The CloudFabrix provided OVF comes with the iperf utility, which can be used to measure the network bandwidth.

To verify network bandwidth between RDAF platform service VM and infrastructure VM, follow the below steps.

Login into the RDAF platform service VM as rdauser using an SSH client to access the CLI and start the iperf utility as a server.

Info

By default, iperf listens on port 5001 over TCP.

Enable the iperf server port using the below command.

On Ubuntu (ufw):

sudo ufw allow 5001/tcp

On RHEL (firewalld):

sudo firewall-cmd --add-port 5001/tcp --permanent
sudo firewall-cmd --reload

Start the iperf server as shown below.

iperf -s
$ iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Now, login into the RDAF infrastructure service VM as rdauser using an SSH client to access the CLI and start the iperf utility as a client.

iperf -c <RDAF-Platform-VM-IP>

The iperf utility connects to the RDAF platform service VM as shown below and verifies the network bandwidth speed.

------------------------------------------------------------
Client connecting to 192.168.125.143, TCP port 5001
TCP window size: 2.86 MByte (default)
------------------------------------------------------------
[  3] local 192.168.125.141 port 10654 connected with 192.168.125.143 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  21.2 GBytes  18.2 Gbits/sec

Repeat the above steps between all of the RDAF VMs in both directions to make sure the network bandwidth speed is minimum of 10Gbps.
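
To cover several peers in one pass, the client run can be wrapped in a simple loop; the below is a sketch with hypothetical IP addresses that must be replaced with your RDAF VM addresses.

# Run a bandwidth test against each peer VM in turn
for ip in 192.168.125.141 192.168.125.142 192.168.125.143; do
    echo "Testing bandwidth to $ip"
    iperf -c "$ip"
done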


5.2 Adding additional disk for Minio 4th HA node

Info

This step is only applicable for the Minio infrastructure service when RDA Fabric VMs are deployed in HA configuration. The Minio service requires a minimum of 4 nodes to form an HA cluster and provide 1 node failure tolerance.

Step-1: Login into VMware vCenter and shut down one of the RDA Fabric platform or application services VMs.

Step-2: Edit the VM settings of the RDA Fabric platform services VM that was shut down above. Add a new disk and allocate the same size as the other Minio service nodes. Please refer to the Minio disk size (/minio-data) for HA configuration.

Step-3: Edit the VM settings again of the VM on which the new disk was added and note down the SCSI Disk ID as shown below.

CFX-OVF-Add-Disk

Step-4: Power ON the RDA Fabric platform services VM on which a new disk has been added in the above step.

Step-5: Login into the RDA Fabric Platform VM using any SSH client (ex: putty). The default username is rdauser.

Step-6: Run the below command to list all of the SCSI disks of the VM with their SCSI Disk IDs

lsblk -S
NAME HCTL       TYPE VENDOR   MODEL         REV TRAN
sda  2:0:0:0    disk VMware   Virtual_disk 1.0  
sdb  2:0:1:0    disk VMware   Virtual_disk 1.0  
sdc  2:0:2:0    disk VMware   Virtual_disk 1.0  
sdd  2:0:3:0    disk VMware   Virtual_disk 1.0

Run the below command to see the new disk along with the used disks and their mount points.

lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
...
sda                         8:0    0   75G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0  1.5G  0 part /boot
└─sda3                      8:3    0 48.5G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0 48.3G  0 lvm  /
sdb                         8:16   0   25G  0 disk /var/lib/docker
sdc                         8:32   0   25G  0 disk /opt
sdd                         8:48   0   50G  0 disk

In the above example command outputs, the newly added disk is sdd and its size is 50GB.

Step-7: Run the below command to create a new XFS filesystem and create a mount point directory.

sudo mkfs.xfs /dev/sdd
sudo mkdir /minio-data

Step-8: Run the below command to get the UUID of the newly created filesystem on /dev/sdd

sudo blkid /dev/sdd

Step-9: Update /etc/fstab to mount the /dev/sdd disk to /minio-data mount point

sudo vi /etc/fstab
Add the below line and save the /etc/fstab file.

UUID=<UUID-from-step-8>    /minio-data   xfs defaults    0   0

Step-10: Mount the /minio-data mount point and verify that it is mounted.

sudo mount -a

df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv   48G  8.3G   37G  19% /
...
/dev/sda2                          1.5G  209M  1.2G  16% /boot
/dev/sdb                            25G  211M   25G   1% /var/lib/docker
/dev/sdc                            25G  566M   25G   3% /opt
/dev/sdd                            50G  390M   50G   1% /minio-data

5.3 Extending the Root (/) filesystem

Warning

Note-1: The below instructions to extend the Root (/) filesystem are applicable only for virtual machines provisioned using the CloudFabrix provided Ubuntu OVF.

Note-2: As a precautionary step, please take a VMware VM snapshot before making changes to the Root (/) filesystem.

Step-1: Check on which disk the Root (/) filesystem was created using the below command. In the below example, it was created on disk /dev/sda and partition 3 i.e. sda3.

On partition sda3, a logical volume ubuntu--vg-ubuntu--lv was created and mounted as Root (/) filesystem.

lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0   62M  1 loop 
loop1                       7:1    0   62M  1 loop /snap/core20/1593
loop2                       7:2    0 67.2M  1 loop /snap/lxd/21835
loop3                       7:3    0 67.8M  1 loop /snap/lxd/22753
loop4                       7:4    0 44.7M  1 loop /snap/snapd/15904
loop5                       7:5    0   47M  1 loop /snap/snapd/16292
loop7                       7:7    0   62M  1 loop /snap/core20/1611
sda                         8:0    0   75G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0  1.5G  0 part /boot
└─sda3                      8:3    0 48.5G  0 part 
└─ubuntu--vg-ubuntu--lv     253:0  0 48.3G  0 lvm  /
sdb                         8:16   0  100G  0 disk /var/lib/docker
sdc                         8:32   0   75G  0 disk /opt

Step-2: Verify the SCSI disk id of the disk on which Root (/) filesystem was created using the below command.

In the below example, the SCSI disk id of root disk sda is 2:0:0:0, i.e. the SCSI disk id is 0 (third digit)

lsblk -S
NAME HCTL       TYPE VENDOR   MODEL         REV TRAN
sda  2:0:0:0    disk VMware   Virtual_disk 1.0  
sdb  2:0:1:0    disk VMware   Virtual_disk 1.0  
sdc  2:0:2:0    disk VMware   Virtual_disk 1.0  

Step-3: Edit the virtual machine's properties in vCenter and identify the Root disk sda using the above SCSI disk id 2:0:0:0 as highlighted in the below screenshot.

Increase the disk size from 75GB to the desired higher value in GB.

CFXOVFRootExtend

Step-4: Login back to Ubuntu VM CLI using SSH client as rdauser

Switch to sudo user

sudo -s

Execute the below command to rescan the Root disk i.e. sda to reflect the increased disk size.

echo '1' > /sys/class/scsi_disk/2\:0\:0\:0/device/rescan

Execute the below command to see the increased size for Root disk sda

lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0   62M  1 loop /snap/core20/1611
loop1                       7:1    0 67.8M  1 loop /snap/lxd/22753
loop2                       7:2    0 67.2M  1 loop /snap/lxd/21835
loop3                       7:3    0   62M  1 loop 
loop4                       7:4    0   47M  1 loop /snap/snapd/16292
loop5                       7:5    0 44.7M  1 loop /snap/snapd/15904
loop6                       7:6    0   62M  1 loop /snap/core20/1593
sda                         8:0    0  100G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0  1.5G  0 part /boot
└─sda3                      8:3    0 48.5G  0 part 
└─ubuntu--vg-ubuntu--lv     253:0  0 48.3G  0 lvm  /
sdb                         8:16   0   25G  0 disk /var/lib/docker
sdc                         8:32   0   25G  0 disk /opt

Step-5: In the above command output, identify the Root (/) filesystem's disk partition, i.e. sda3

Run the below command cfdisk to resize the Root (/) filesystem.

cfdisk
  • Highlight the /dev/sda3 partition using the Down arrow on the keyboard as shown in the below screen, use the Left/Right arrows to highlight the Resize option and hit Enter

CFXOVFRootResize01

  • Adjust the disk Resize in GB as desired.

Warning

The Resize disk value in GB cannot be smaller than the current size of the Root (/) filesystem.

CFXOVFRootResize02

  • Once the /dev/sda3 partition is resized, use Left/Right arrow to highlight the Write option and hit Enter
  • Confirm the resize by typing yes and hit Enter
  • Use Left/Right arrow to highlight the Quit option and hit Enter to quit the Disk resizing utility.

CFXOVFRootResize03 CFXOVFRootResize04

Step-6:

Disable the swap before resizing the Root (/) filesystem using the below command.

swapoff /swap.img

Resize the physical volume of Root (/) filesystem i.e. /dev/sda3

pvresize /dev/sda3

Resize the logical volume of Root (/) filesystem i.e. /dev/mapper/ubuntu--vg-ubuntu--lv. In the below example, the logical volume of Root (/) filesystem is increased by 20GB

lvextend -L +20G /dev/mapper/ubuntu--vg-ubuntu--lv

Resize the Root (/) filesystem

resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

Enable the swap after resizing the Root (/) filesystem

swapon /swap.img

Step-7:

Verify the increased Root (/) filesystem disk's space using the below command.

df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               3.9G     0  3.9G   0% /dev
tmpfs                              793M   50M  744M   7% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   68G  8.6G   56G  14% /
tmpfs                              3.9G     0  3.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop2                          68M   68M     0 100% /snap/lxd/21835
/dev/loop4                          47M   47M     0 100% /snap/snapd/16292
/dev/loop1                          68M   68M     0 100% /snap/lxd/22753
/dev/loop5                          45M   45M     0 100% /snap/snapd/15904
/dev/sda2                          1.5G  209M  1.2G  16% /boot
/dev/sdb                            25G  211M   25G   1% /var/lib/docker
/dev/sdc                            25G  566M   25G   3% /opt

6. RDAF Platform VMs deployment on RHEL/Ubuntu OS

The below steps outline the required pre-requisites and the configuration to be applied on RHEL or Ubuntu OS VM instances, when the CloudFabrix provided OVF is not used, to deploy and install the RDA Fabric platform, infrastructure, application, worker and on-premise docker registry services.

Software Pre-requisites:

  • RHEL: RHEL 8.3 or above
  • Ubuntu: Ubuntu 20.04.x or above
  • Python: 3.7.4 or above
  • Docker: 20.10.x or above
  • Docker-compose: 1.29.x or above

For resource requirements such as CPU, Memory, Network and Storage, please refer RDA Fabric VMs resource requirements

  • Once RHEL 8.3 or above is deployed, register and apply the OS licenses using the below commands
sudo subscription-manager register
sudo subscription-manager attach
  • Create a new user called rdauser and configure the password.
sudo adduser rdauser
sudo passwd rdauser
sudo chown -R rdauser:rdauser /home/rdauser
sudo groupadd docker
sudo usermod -aG docker rdauser
  • Add rdauser to /etc/sudoers file. Add the below line at the end of the sudoers file.
rdauser ALL=(ALL) NOPASSWD:ALL
  • Modify the SSH service configuration with the below settings. Edit the /etc/ssh/sshd_config file and update it as shown below.
PasswordAuthentication yes
MaxSessions 10
LoginGraceTime 2m
  • Restart the SSH service
sudo systemctl restart sshd
  • Logout and Login back as newly created user rdauser

  • Format the disks with xfs filesystem and mount the disks as per the disk requirements outlined in RDA Fabric VMs resource requirements section.

sudo mkfs.xfs /dev/<disk-name>
  • Make sure disk mounts are updated in /etc/fstab to make them persistent across VM reboots.

  • In /etc/fstab, use filesystem's UUID instead of using SCSI disk names. Below command provides UUID of filesystem created on a disk or disk partition.

sudo blkid /dev/<disk-name>

Sample disk mount point entry on /etc/fstab file.

UUID=60174ace-e1f6-497e-90e2-7d889e6c5695    /opt   xfs defaults    0   0
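
The UUID lookup and the fstab entry can also be combined in one step; the below is a sketch assuming the new disk is /dev/sdb and the mount point is /opt (adjust both for each disk).

# Append an fstab entry using the UUID of the new filesystem
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb)    /opt   xfs defaults    0   0" | sudo tee -a /etc/fstab
# Mount everything in /etc/fstab and verify
sudo mount -a
df -h /opt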

Installing OS utilities and Python

  • Run the below commands to install the required software packages.
sudo yum install -y gcc openssl-devel bzip2-devel sqlite-devel xz-devel ncurses-devel readline readline-devel gdbm-devel tcl-devel tk-devel make libffi-devel

or

sudo dnf install -y gcc openssl-devel bzip2-devel sqlite-devel xz-devel ncurses-devel readline readline-devel gdbm-devel tcl-devel tk-devel make libffi-devel
sudo yum install -y wget telnet net-tools unzip tar sysstat bind-utils iperf3 xinetd jq yum-utils device-mapper-persistent-data lvm2 mysql

or

sudo dnf install -y wget telnet net-tools unzip tar sysstat bind-utils iperf3 xinetd jq yum-utils device-mapper-persistent-data lvm2 mysql
  • Download and install the below software packages.

wget https://download-ib01.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/s/sshpass-1.06-9.el8.x86_64.rpm

sudo rpm -ivh sshpass-1.06-9.el8.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/n/nload-0.7.4-16.el8.x86_64.rpm

sudo rpm -ivh nload-0.7.4-16.el8.x86_64.rpm

  • Download and install Python 3.7.4 or above. Skip this step if Python is already installed as part of the OS install.
cd /opt

sudo wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz

sudo tar xvf Python-3.7.4.tgz

sudo chown -R rdauser:rdauser Python-3.7.4

cd /opt/Python-3.7.4

./configure --enable-optimizations

sudo make -j8 build_all

sudo make altinstall

sudo /usr/local/bin/python3.7 -m venv /opt/PYTHON37

sudo chown -R rdauser:rdauser /opt/PYTHON37

sudo rm -f /opt/Python-3.7.4.tgz

sudo alternatives --set python /usr/bin/python3.7

sudo ln -s /usr/local/bin/python3.7 /usr/bin/python

sudo ln -s /usr/local/bin/pip3.7 /usr/bin/pip
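
A quick sanity check after the install, assuming the above symlinks were created:

# Both should report the 3.7.x versions installed above
python --version
pip --version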

Installing Docker and Docker-compose

  • Run the below commands to install docker runtime environment

sudo yum config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

or

sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo yum -y install --nobest --allowerasing docker-ce-20.10.5-3.el8

or

sudo dnf -y install --nobest --allowerasing docker-ce-20.10.5-3.el8

sudo systemctl enable docker
sudo systemctl start docker
sudo systemctl status docker
  • Configure the docker service by updating /etc/docker/daemon.json as shown below.
sudo vi /etc/docker/daemon.json
{
  "tls": true,
  "tlscacert": "/etc/tlscerts/ca/ca.pem",
  "tlsverify": true,
  "storage-driver": "overlay2",
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2376"
  ],
  "tlskey": "/etc/tlscerts/server/server.key",
  "debug": false,
  "tlscert": "/etc/tlscerts/server/server.pem",
  "experimental": false,
  "live-restore": true
}
  • Download and execute the macaw-docker.py script to configure TLS for the docker service. (Note: Make sure python2.7 is installed)
mkdir ~/cfx-config-files

cd ~/cfx-config-files

wget https://macaw-amer.s3.amazonaws.com/images/misc/RHEL-bin-files.tar.gz

tar -xzvf RHEL-bin-files.tar.gz

sudo python2.7 macaw-docker.py
  • Edit the /lib/systemd/system/docker.service file, update the below line and restart the docker service
From:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

To:
ExecStart=/usr/bin/dockerd
  • Restart docker service and verify the status
sudo systemctl restart docker
sudo systemctl status docker
  • Update the /etc/sysctl.conf file with the below performance tuning settings.
#Performance Tuning.
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 2500
net.core.somaxconn = 65000
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.ip_local_port_range = 10000 65535
vm.max_map_count = 1048575
net.core.wmem_default=262144
net.core.wmem_max=4194304
net.core.rmem_default=262144
net.core.rmem_max=4194304

#file max
fs.file-max=518144

#swapiness
vm.swappiness = 1
#Set runtime for kernel.randomize_va_space
kernel.randomize_va_space = 2

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind=1
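
The above kernel settings can be applied immediately, without waiting for the reboot at the end of this procedure:

# Reload the settings from /etc/sysctl.conf
sudo sysctl -p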
  • Install JAVA software package
sudo mkdir -p /opt/java
sudo tar xf ~/cfx-config-files/jdk-8u281-linux-x64.tar.gz -C /opt/java --strip-components 1
  • Add the JAVA software binary to PATH variable

vi ~/.bash_profile
PATH=/opt/java/bin:$PATH
source ~/.bash_profile

  • Reboot the host
sudo reboot
  • Once Ubuntu 20.04.x or above OS version is deployed, please apply the below configuration.

  • Create a new user called rdauser and configure the password.

sudo adduser rdauser
sudo passwd rdauser
sudo chown -R rdauser:rdauser /home/rdauser
sudo groupadd docker
sudo usermod -aG docker rdauser

  • Add rdauser to /etc/sudoers file. Add the below line at the end of the sudoers file.
rdauser ALL=(ALL) NOPASSWD:ALL
  • Modify the SSH service configuration with the below settings. Edit /etc/ssh/sshd_config file and update as shown below.
PasswordAuthentication yes
MaxSessions 10
LoginGraceTime 2m
  • Restart the SSH service
sudo systemctl restart sshd
  • Logout and Login back as newly created user rdauser

  • Format the disks with xfs filesystem and mount the disks as per the disk requirements outlined in RDA Fabric VMs resource requirements section.

sudo mkfs.xfs /dev/<disk-name>
  • Make sure disk mounts are updated in /etc/fstab to make them persistent across VM reboots.

  • In /etc/fstab, use filesystem's UUID instead of using SCSI disk names. Below command provides UUID of filesystem created on a disk or disk partition.

sudo blkid /dev/<disk-name>

Sample disk mount point entry on /etc/fstab file.

UUID=60174ace-e1f6-497e-90e2-7d889e6c5695    /opt   xfs defaults    0   0

Installing OS utilities and Python

  • Run the below commands to install the required software packages.

sudo apt update
sudo apt install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libsqlite3-dev libreadline-dev libffi-dev curl libbz2-dev

sudo apt install -y wget telnet net-tools unzip tar sysstat bind9-utils iperf3 xinetd jq lvm2 sshpass mysql-client
  • Download and install Python 3.7.4 or above. Skip this step if Python 3.7.4 or above is already installed as part of the OS install.
cd /opt

sudo wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz

sudo tar xvf Python-3.7.4.tgz

sudo chown -R rdauser:rdauser Python-3.7.4

cd /opt/Python-3.7.4

./configure --enable-optimizations

sudo make -j8 build_all

sudo make altinstall

sudo /usr/local/bin/python3.7 -m venv /opt/PYTHON37

sudo chown -R rdauser:rdauser /opt/PYTHON37

sudo rm -f /opt/Python-3.7.4.tgz

sudo alternatives --set python /usr/bin/python3.7

sudo ln -s /usr/local/bin/python3.7 /usr/bin/python

sudo ln -s /usr/local/bin/pip3.7 /usr/bin/pip

Installing Docker and Docker-compose

  • Run the below commands to install docker runtime environment

sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y

sudo apt-get install docker-ce=5:20.10.12~3-0~ubuntu-focal docker-ce-cli=5:20.10.12~3-0~ubuntu-focal containerd.io docker-compose-plugin
sudo systemctl enable docker
  • Edit the /lib/systemd/system/docker.service file, update the below line and restart the docker service
sudo vi /lib/systemd/system/docker.service

From:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

To:
ExecStart=/usr/bin/dockerd
  • Configure the docker service by updating /etc/docker/daemon.json as shown below.
sudo vi /etc/docker/daemon.json
{
  "tls": true,
  "tlscacert": "/etc/tlscerts/ca/ca.pem",
  "tlsverify": true,
  "storage-driver": "overlay2",
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2376"
  ],
  "tlskey": "/etc/tlscerts/server/server.key",
  "debug": false,
  "tlscert": "/etc/tlscerts/server/server.pem",
  "experimental": false,
  "live-restore": true
}
  • Start and verify the docker service
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
  • Update the /etc/sysctl.conf file with the below performance tuning settings.
#Performance Tuning.
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 2500
net.core.somaxconn = 65000
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.ip_local_port_range = 10000 65535
vm.max_map_count = 1048575
net.core.wmem_default=262144
net.core.wmem_max=4194304
net.core.rmem_default=262144
net.core.rmem_max=4194304

#file max
fs.file-max=518144

#swapiness
vm.swappiness = 1
#Set runtime for kernel.randomize_va_space
kernel.randomize_va_space = 2

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind=1
  • Download and execute the macaw-docker.py script to configure TLS for the docker service.
mkdir ~/cfx-config-files

cd ~/cfx-config-files

wget https://macaw-amer.s3.amazonaws.com/images/misc/RHEL-bin-files.tar.gz

tar -xzvf RHEL-bin-files.tar.gz

sudo python2.7 macaw-docker.py
  • Install JAVA software package
sudo mkdir -p /opt/java
sudo tar xf ~/cfx-config-files/jdk-8u281-linux-x64.tar.gz -C /opt/java --strip-components 1
  • Add the JAVA software binary to PATH variable

vi ~/.bashrc
PATH=/opt/java/bin:$PATH
source ~/.bashrc

  • Install pip3
sudo apt install python3-pip
  • Reboot the host
sudo reboot

7. SSL Certificates Installation

CloudFabrix's RDA Fabric platform is enabled with HTTPS (SSL/443) access by default for secure communication, and it is installed with self-signed certificates during the deployment. However, for production deployments, it is highly recommended to install CA signed certificates, and the below steps help you install them appropriately.

The RDA Fabric platform uses the HAProxy service for managing UI access and incoming traffic (ex: event/alert notifications) securely over the HTTPS (SSL/443) protocol, as well as internal application traffic where applicable.

The below steps describe how to install CA signed SSL certificates for the HAProxy service.

7.1 SSL Certificate Requirements:

The RDA Fabric platform's HAProxy service requires the below CA signed SSL certificate files in PEM format.

  • server-ssl-certificate.crt (format: PEM)
  • server-ssl-private.key
  • trusted-ca-intermediate-root.crt (format: PEM)
  • trusted-ca-root.crt (format: PEM)

OR

  • server-ssl-certificate.crt (format: PEM)
  • server-ssl-private.key
  • trusted-ca-intermediate-root.crt & trusted-ca-root.crt chain in a single file (format: PEM)

The SSL server certificate that is obtained should match the DNS / FQDN of the RDA Fabric platform (this is also referred to as the Common Name or CN within the certificate). Wildcard domain SSL certificates are also supported. The below screen provides an example of how to check the server SSL certificate's CN using the openssl command. (In this example, the RDA Fabric platform's FQDN is cfx-rdaf.cloudfabrix.io, using a wildcard domain name (CN) SSL certificate.)

openssl crl2pkcs7 -nocrl -certfile server-ssl-certificate.crt | openssl pkcs7 -print_certs -noout

CFX-SSL-Cert1

Once you have the SSL certificate files mentioned above, you need to create an SSL certificate chain by grouping them together into a single file in PEM format.

The below diagram shows a valid CA signed SSL certificate chain flow for reference.

CFX-SSL-Cert2

Run the below command to create a valid SSL certificate chain. (supported format is PEM)

cat server-ssl-private.key server-ssl-certificate.crt trusted-ca-intermediate-root.crt trusted-ca-root.crt > cfx-ssl-haproxy.pem

OR

cat server-ssl-private.key server-ssl-certificate.crt trusted-ca-intermediate-and-root-chain.crt > cfx-ssl-haproxy.pem

CFX-SSL-Cert3

Info

Note: The final consolidated SSL certificate chain output is saved to the cfx-ssl-haproxy.pem file, which will be applied to the HAProxy configuration later in this document. The filename is used here for reference only.

7.2 CA-signed SSL certificate verification:

Info

openssl tool is a pre-requisite for performing SSL certificate validation checks

Step 1: Run the below commands to verify the server's SSL certificate and private key. The output of the two commands should match exactly.

openssl x509 -noout -modulus -in server-ssl-certificate.crt | openssl md5
openssl rsa -noout -modulus -in server-ssl-private.key | openssl md5

Step 2: Run the below commands to verify that the server's SSL certificate and the intermediate & root certificates (chain) have valid dates and are not expired.

openssl x509 -noout -in server-ssl-certificate.crt -dates
openssl x509 -noout -in trusted-ca-root.crt -dates
openssl x509 -noout -in trusted-ca-intermediate-root.crt -dates

Step 3: Run the below commands to verify the public keys contained in the private key file and the server certificate file are the same. The output of these two commands should match.

openssl x509 -in server-ssl-certificate.crt -noout -pubkey
openssl rsa -in server-ssl-private.key -pubout
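
The two outputs can also be compared directly in a single step; the below is a bash sketch using process substitution (no output from diff means the keys match).

diff <(openssl x509 -in server-ssl-certificate.crt -noout -pubkey) <(openssl rsa -in server-ssl-private.key -pubout)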

Step 4: Run the below command to verify the validity of the certificate chain. The response should come out as OK.

openssl verify -CAfile trusted-ca-root.crt server-ssl-certificate.crt
OR

openssl verify -CAfile trusted-ca-intermediate-and-root-chain.crt server-ssl-certificate.crt

Step 5: Run the below command to view and verify that the SSL certificate chain order is correct.

openssl crl2pkcs7 -nocrl -certfile cfx-ssl-haproxy.pem | openssl pkcs7 -print_certs -noout

Please refer to the below screenshot on how to validate the SSL certificate chain order.

CFX-SSL-Cert4

Verify that the SSL certificate and key are in PEM format.

openssl rsa -inform PEM -in server-ssl-private.key
openssl x509 -inform PEM -in server-ssl-certificate.crt

7.3 CA-signed SSL Certificate Installation for HA Proxy service:

Step 1: Go to HAProxy service's certificates path on VM host(s) where HAProxy service was installed.

/opt/rdaf/cert/<HAProxy IP>/

Step 2: Take a backup of the existing HA Proxy service's SSL certificate

cp haproxy.pem haproxy.pem.backup

Step 3: Copy the CA-signed SSL certificate chain file that is in PEM format to this location as haproxy.pem

cp <ssl-cert-path>/cfx-ssl-haproxy.pem haproxy.pem

Step 4: Restart HA Proxy container

docker ps -a | grep haproxy
docker restart <haproxy-container-id>

Step 5: Verify HA Proxy service logs to make sure there are no errors after installing CA signed SSL server certificate chain file.

docker logs -f <haproxy-container-id> --tail 200

Step 6: Run the below openssl command to verify the newly installed SSL certificate and check SSL verification is shown as OK without any validation failures.

openssl s_client -connect <cfx-platform-FQDN>:443
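
To inspect just the served certificate's subject, issuer and validity dates non-interactively, the s_client output can be piped into openssl x509:

echo | openssl s_client -connect <cfx-platform-FQDN>:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates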

Step 7: Open an internet browser (Firefox / Chrome / Safari) and enter the RDA Fabric Platform's FQDN to access the UI securely over the HTTPS protocol (port 443).

https://cfx-rdaf-platform-fqdn

7.4 Self Signed SSL Certificate with Custom CA Root:

The truststore or root store is a file that contains the root certificates of the Certificate Authorities (CAs) that issue SSL certificates, such as GoDaddy, Verisign, Network Solutions, Comodo and others. Internet browsers, operating systems and applications include a list of authorized SSL certificate authorities within their root store or truststore repository file.

However, many enterprises may use Custom CA root certificates to validate and certify self-signed SSL certificates for internal use. In such a scenario, when an application is accessed through a browser or an SSL client, an SSL certificate verification error may be observed, because neither the browser nor the SSL client has the Custom CA root certificate within its root store / truststore repository file, and hence they fail to recognize the authenticity of the SSL certificate and its issuer (CA).

To resolve this issue, update the client's root store / truststore with the Custom CA root & intermediate root certificates so that they are recognized as a valid & trusted Certificate Authority (CA). Please refer to the client's (internet browser or application) documentation on how to update its root store / truststore with custom CA root certificates.

Warning

Note: Please take guidance from your internal security team when using self-signed SSL certificates with Custom CA root certificates.

7.5 Appendix:

SSL Certificate Formats and Conversion:

SSL certificate files come in different formats; the most common ones that CAs (Certificate Authorities) deliver include .pfx, .p7b, .pem, .crt, .cer and .cert. You can find more details about these certificate formats at the following link:

https://comodosslstore.com/resources/a-ssl-certificate-file-extension-explanation-pem-pkcs7-der-and-pkcs12/

If you need to convert the format of your SSL certificate files to PEM, please use the following commands:

  • Convert PFX to PEM
openssl pkcs12 -in server-ssl-certificate.pfx -out server-ssl-certificate.pem -nodes
  • Convert P7B to PEM
openssl pkcs7 -print_certs -in server-ssl-certificate.p7b -out server-ssl-certificate.pem
  • Convert DER to PEM
openssl x509 -inform der -in server-ssl-certificate.cer -out server-ssl-certificate.pem

You can use the following commands to check if your certificate files are already in the required format:

  • Check and verify if your key is in PEM format
openssl rsa -inform PEM -in server-ssl-private.key
  • Check and verify if your certificate is in PEM format
openssl x509 -inform PEM -in server-ssl-certificate.pem