Guide to Installing the RDAF Deployment CLI for a Non-Kubernetes Environment
Important
This document provides both installation and upgrade steps. Users can follow the instructions based on their specific needs.
1. RDAF Deployment CLI for Non-Kubernetes
The RDA Fabric deployment CLI is a comprehensive command-line management tool used to set up, install/deploy, and manage the CloudFabrix on-premise Docker registry, the RDA Fabric platform, and its infrastructure and application services.
RDA Fabric platform, infrastructure, and application services can be deployed on a Kubernetes cluster or as standalone container services using the docker-compose utility.
Please refer to RDAF Platform deployment on Kubernetes Cluster for Kubernetes-based deployments.
The RDAF CLI uses docker-compose as the underlying container management utility for deploying and managing an RDA Fabric environment in a non-Kubernetes cluster environment.
The RDAF CLI can be installed on the on-premise docker registry VM (if one is provisioned), on one of the RDA Fabric platform VMs, or on both, to install, configure, and manage the on-premise docker registry service and the RDA Fabric platform services.
1.1 CLI Installation or Upgrade:
Please download the RDAF deployment bundles from the links provided below.
Tip
In a restricted environment where there is no direct internet access, please download RDAF Deployment CLI offline bundle.
RDAF Deployment CLI offline bundle for Ubuntu: offline-ubuntu-1.4.0.tar.gz
RDAF Deployment CLI bundle: rdafcli-1.4.0.tar.gz
Note
For latest RDAF Deployment CLI versioned package, please contact support@cloudfabrix.com
Log in as the rdauser user to the on-premise docker registry VM or an RDA Fabric Platform VM using any SSH client tool (ex: PuTTY).
Run the following command to install or upgrade the RDA Fabric deployment CLI tool.
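A minimal sketch of this step, assuming the rdafcli-1.4.0.tar.gz bundle was downloaded to the current directory and Python pip is available (for the offline bundle, extract the tarball first and install from its contents):

```shell
# Install or upgrade the RDAF deployment CLI from the downloaded bundle.
# The bundle filename matches the version listed above; adjust as needed.
pip install --user rdafcli-1.4.0.tar.gz
```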
Note
Once the above command runs successfully, log out and log back in to start a new session.
Run the below command to verify installed RDAF deployment CLI version
Run the below command to view the RDAF deployment CLI help
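Assuming the CLI is installed as `rdaf`, the version and help checks above take the following form; the help listing that follows is the expected output of the second command:

```shell
rdaf --version   # prints the installed RDAF deployment CLI version
rdaf --help      # lists the documented commands
```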
Documented commands (type help <topic>):
========================================
app help platform rdac_cli reset setregistry status worker
backup infra prune_images registry restore setup validate
1.2 On-premise Docker Registry setup:
CloudFabrix supports hosting an on-premise docker registry which downloads and synchronizes RDA Fabric's platform, infrastructure, and application service images from CloudFabrix's public docker registry (securely hosted on AWS) and from other public docker registries as well. For more information on the on-premise docker registry, please refer to Docker registry access for RDAF platform services.
1.2.1 rdaf registry setup
Run rdaf registry --help to see available CLI options to deploy and manage on-premise docker registry.
Run rdaf registry setup --help to see available CLI options.
Run the below command to set up and configure the on-premise docker registry service. In the below command example, 10.99.120.140 is the machine on which the on-premise registry service is going to be installed, and docker1.cloudfabrix.io is CloudFabrix's public docker registry hosted on AWS from which RDA Fabric docker images are downloaded.
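A hypothetical sketch of this step using the example hosts above; the exact option names (including `--host` and the source-registry option) are assumptions and should be confirmed with `rdaf registry setup --help`:

```shell
# 10.99.120.140          -> VM that will host the on-premise registry
# docker1.cloudfabrix.io -> CloudFabrix public registry images are pulled from
rdaf registry setup --host 10.99.120.140 --source-registry docker1.cloudfabrix.io
```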
1.2.2 rdaf registry install
Run the below command to install the on-premise docker registry service.
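A sketch of the install command; `<tag>` is a placeholder, and the actual tag should be obtained from CloudFabrix support:

```shell
rdaf registry install --tag <tag>
```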
Info
- For latest tag version, please contact support@cloudfabrix.com
- The on-premise docker registry service runs on port TCP/5000. This port may need to be enabled on the firewall device if the on-premise docker registry service and the RDA Fabric service VMs are deployed in different network environments.
Run the below command to upgrade the on-premise docker registry service to latest version.
To check the status of the on-premise docker registry service, run the below command.
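Assumed forms of the upgrade and status commands (confirm the exact subcommand names with `rdaf registry --help`):

```shell
# Upgrade the registry service to a newer versioned tag (placeholder tag):
rdaf registry upgrade --tag <new-tag>
# Check the registry service status:
rdaf registry status
```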
1.2.3 rdaf registry fetch
Once the on-premise docker registry service is installed, run the below command to download one or more tags, pre-staging the docker images for RDA Fabric services deployment (fresh install or upgrade).
The Minio object storage service image needs to be downloaded explicitly using the below command.
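A sketch of the fetch commands; the tag values are placeholders supplied by CloudFabrix support, and the separate Minio invocation is an assumed form:

```shell
# Pre-stage the images for one or more release tags on the on-premise registry:
rdaf registry fetch --tag <tag1>,<tag2>
# Fetch the Minio object storage image explicitly with its own release tag:
rdaf registry fetch --tag <minio-image-tag>
```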
Info
It may take a few minutes to a few hours depending on the outbound internet access bandwidth and the number of docker images to be downloaded. The default location path for the downloaded docker images is /opt/rdaf-registry/data/docker/registry. This path can be overridden during the rdaf registry setup command using the --install-root option if needed.
1.2.4 rdaf registry list-tags
Run the below command to list the downloaded images and their corresponding tags / versions.
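The listing can be produced with the subcommand named by this section:

```shell
rdaf registry list-tags
```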
1.2.5 rdaf registry delete-images
Run the below command to delete one or more tags and corresponding docker images from on-premise docker registry.
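A sketch of the delete step; the `--tag` option name is an assumption (see `rdaf registry delete-images --help`) and `<tag>` is a placeholder:

```shell
# Remove a tag and its docker images from the on-premise registry.
rdaf registry delete-images --tag <tag>
```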
Important
When the on-premise docker registry service is used, please make sure to add the insecure-registries parameter to the /etc/docker/daemon.json file and restart the docker daemon, as shown below, on all RDA Fabric VMs before the deployment.
...
...
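A minimal sketch of the daemon.json change, assuming the registry listens at `<registry-ip>:5000`; if /etc/docker/daemon.json already exists, merge the key into the existing JSON instead of overwriting the file:

```shell
# Register the on-premise registry as insecure on each RDA Fabric VM.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "insecure-registries": ["<registry-ip>:5000"]
}
EOF
sudo systemctl restart docker
# Verify the entry appears under "Insecure Registries":
docker info | grep -A 2 "Insecure Registries"
```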
rdauser@k8mater108112:~$ docker info
Client: Docker Engine - Community
Version: 27.1.2
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.1
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 23
Running: 12
Paused: 0
Stopped: 11
Images: 9
Server Version: 27.1.2
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
Kernel Version: 6.8.0-48-generic
Operating System: Ubuntu 24.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.57GiB
Name: k8mater108112
ID: d9a59caa-e3a8-4e50-87a9-c358ab115bae
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: true
rdauser@k8mater108112:~$
Tip
The on-premise docker registry's CA certificate file ca.crt is located under /opt/rdaf-registry/cert/ca. This file needs to be copied to the machine on which the RDAF CLI is used to set up, configure, and install the RDA Fabric platform and all of the required services using the on-premise docker registry. This step is not applicable when the cloud-hosted docker registry docker1.cloudfabrix.io is used.
1.3 RDAF Platform setup
1.3.1 rdaf setregistry
When the on-premise docker registry is deployed, change the default docker registry configuration to the on-premise docker registry host to pull and install the RDA Fabric services.
Please refer to rdaf setregistry --help for detailed command options.
Configure the Docker registry for the platform
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations (Optional)
--host DOCKER_REGISTRY_HOST
Hostname/IP of the Docker registry
--port DOCKER_REGISTRY_PORT
Port of the Docker registry
--user DOCKER_REGISTRY_USER
Username of the Docker registry (Optional)
--password DOCKER_REGISTRY_PASSWORD
Password of the Docker registry (Optional)
--cert-path CERT_PATH
path of the Docker registry ca cert
- Copy the ca.crt file from the on-premise registry.
scp rdauser@<on-premise-registry-ip>:/opt/rdaf-registry/cert/ca/ca.crt /opt/rdaf-registry/registry-ca-cert.crt
- Run the below command to set the docker registry to the on-premise one.
rdaf setregistry --host <on-premise-docker-registry-ip-or-dns> --port 5000 --cert-path /opt/rdaf-registry/registry-ca-cert.crt
Tip
Please verify the on-premise docker registry is accessible on port 5000 using either of the below commands.
- telnet <on-premise-docker-registry-ip-or-dns> 5000
- curl -vv telnet://<on-premise-docker-registry-ip-or-dns>:5000
1.3.2 rdaf setup
Run the below rdaf setup command to create the RDAF platform's deployment configuration. It is a prerequisite before the RDAF infrastructure, platform, and application services can be installed. It will prompt for all the necessary configuration details.
- Accept the EULA
- Enter the rdauser SSH password for all of the RDAF hosts.
What is the SSH password for the SSH user used to communicate between hosts
SSH password:
Re-enter SSH password:
Tip
Please make sure the rdauser's SSH password is the same on all of the RDAF hosts when running the rdaf setup command.
Press Enter to accept the defaults.
Provide any Subject alt name(s) to be used while generating SAN certs
Subject alt name(s) for certs[]:
- Enter the RDAF Platform host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the HA configuration. If it is a non-HA deployment, only one RDAF platform host's IP address or DNS name is required.
What are the host(s) on which you want the RDAF platform services to be installed?
Platform service host(s)[rda-platform-vm01]: 192.168.125.141,192.168.125.142
- Answer whether the RDAF application services are going to be deployed in HA mode or standalone.
- Enter the RDAF Application services host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the HA configuration. If it is a non-HA deployment, only one RDAF application service host's IP address or DNS name is required.
What are the host(s) on which you want the application services to be installed?
Application service host(s)[rda-platform-vm01]: 192.168.125.143,192.168.125.144
- Enter the name of the Organization. In the below example, ACME_IT_Services is used as the Organization name; it is for reference only.
What is the organization you want to use for the admin user created?
Admin organization[CloudFabrix]: ACME_IT_Services
What is the ca cert to use to communicate to on-prem docker registry
Docker Registry CA cert path[]:
- Enter the RDAF Worker service host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the HA configuration. If it is a non-HA deployment, only one RDAF worker service host's IP address or DNS name is required.
What are the host(s) on which you want the Worker to be installed?
Worker host(s)[rda-platform-vm01]: 192.168.125.145
- Enter the IP address(es) on which the RDAF Event Gateway needs to be installed. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the HA configuration. If it is a non-HA deployment, only one RDAF Event Gateway host's IP address or DNS name is required.
What are the host(s) on which you want the Event Gateway to be installed?
Event Gateway host(s)[rda-platform-vm01]: 192.168.125.67
- Enter the RDAF infrastructure service NATs host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the NATs HA configuration. If it is a non-HA deployment, only one NATs service host's IP address or DNS name is required.
What is the "host/path-on-host" on which you want the Nats to be deployed?
Nats host/path[192.168.125.141]: 192.168.125.145,192.168.125.146
- Enter the RDAF infrastructure service Minio host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 4 hosts is required for the Minio HA configuration. If it is a non-HA deployment, only one Minio service host's IP address or DNS name is required.
What is the "host/path-on-host" where you want Minio to be provisioned?
Minio server host/path[192.168.125.141]: 192.168.125.145,192.168.125.146,192.168.125.147,192.168.125.148
- Change the default Minio user credentials if needed, or press Enter to accept the defaults.
What is the user name you want to give for Minio root user that will be created and used by the RDAF platform?
Minio user[rdafadmin]:
What is the password you want to use for the newly created Minio root user?
Minio password[Q8aJ63PT]:
- Enter the RDAF infrastructure service MariaDB database host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 3 hosts is required for the MariaDB database HA configuration. If it is a non-HA deployment, only one MariaDB service host's IP address or DNS name is required.
What is the "host/path-on-host" on which you want the MariaDB server to be provisioned?
MariaDB server host/path[192.168.125.141]: 192.168.125.145,192.168.125.146,192.168.125.147
- Change the default MariaDB user credentials if needed, or press Enter to accept the defaults.
What is the user name you want to give for MariaDB admin user that will be created and used by the RDAF platform?
MariaDB user[rdafadmin]:
What is the password you want to use for the newly created MariaDB root user?
MariaDB password[jffqjAaZ]:
- Enter the RDAF infrastructure service Opensearch host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 3 hosts is required for the Opensearch HA configuration. If it is a non-HA deployment, only one Opensearch service host's IP address or DNS name is required.
What is the "host/path-on-host" on which you want the opensearch server to be provisioned?
opensearch server host/path[192.168.125.141]: 192.168.125.145,192.168.125.146,192.168.125.147
- Change the default Opensearch user credentials if needed, or press Enter to accept the defaults.
What is the user name you want to give for Opensearch admin user that will be created and used by the RDAF platform?
Opensearch user[rdafadmin]:
What is the password you want to use for the newly created Opensearch admin user?
Opensearch password[sLmr4ICX]:
- Enter the RDAF infrastructure service Kafka host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 3 hosts is required for the Kafka HA configuration. If it is a non-HA deployment, only one Kafka service host's IP address or DNS name is required.
What is the "host/path-on-host" on which you want the Kafka server to be provisioned?
Kafka server host/path[192.168.125.141]: 192.168.125.145,192.168.125.146,192.168.125.147
- Enter the RDAF infrastructure service HAProxy (load-balancer) host(s) IP address or DNS name. For HA configuration, please enter comma-separated values; a minimum of 2 hosts is required for the HAProxy HA configuration. If it is a non-HA deployment, only one HAProxy service host's IP address or DNS name is required.
What is the host on which you want HAProxy to be provisioned?
HAProxy host[192.168.125.141]: 192.168.125.145,192.168.125.146
- Select the network interface name which is used for UI portal access (ex: eth0 or ens160).
What is the network interface on which you want the rdaf to be accessible externally?
Advertised external interface[eth0]: ens160
- Enter the HAProxy service's virtual IP address when it is configured in HA mode. The virtual IP address should be an unused IP address. This step is not applicable when the HAProxy service is deployed in a non-HA configuration.
What is the host on which you want the platform to be externally accessible?
Advertised external host[192.168.125.143]: 192.168.125.149
- Enter the IP address of the internally accessible advertised host.
Note
The internal advertised host IP is only needed when the RDA Fabric VMs are configured with dual NIC interfaces: one on a management network for UI access, and a second for internal app-to-app communication using a non-routable IP address scheme that is isolated from the management network.
Dual network configuration is primarily used to support a DR solution where the RDA Fabric VMs are replicated from one site to another using VM-level replication or underlying storage array replication (volume-to-volume or LUN-to-LUN on which the RDA Fabric VMs are hosted). When RDA Fabric VMs are recovered on a DR site, the management network IPs need to be changed to match the DR site's subnet, while the secondary NIC's IP address scheme can be kept the same as on the primary site to avoid reconfiguring the RDA Fabric applications.
After entering the required inputs as above, rdaf setup generates self-signed SSL certificates, creates the required directory structure, configures SSH key-based authentication on all of the RDAF hosts, and generates the rdaf.cfg configuration file under the /opt/rdaf directory.
It creates the below directory structure on all of the RDAF hosts.
- /opt/rdaf/cert: It contains the generated self-signed SSL certificates for all of the RDAF hosts.
- /opt/rdaf/config: It contains the required configuration file for each deployed RDAF service where applicable.
- /opt/rdaf/data: It contains the persistent data for some of the RDAF services.
- /opt/rdaf/deployment-scripts: It contains the docker-compose .yml files of the services that are configured to be provisioned on the RDAF host.
- /opt/rdaf/logs: It contains the RDAF services log files.
1.3.3 rdaf infra
The rdaf infra command is used to deploy and manage RDAF infrastructure services. Run the below command to view the available CLI options.
usage: infra [--insecure] [-h] [--debug] {status,install,upgrade,up,down} ...
Manage infra services
positional arguments:
{status,install,upgrade,up,down}
status Status of the RDAF Infra
install Install the RDAF Infra containers
upgrade Upgrade the RDAF Infra containers
up Create the RDAF Infra Containers
down Delete the RDAF Infra Containers
healthcheck Check the liveness/health of Infra services.
optional arguments:
--insecure Ignore SSL certificate issues when communicating with
various hosts
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.3.1 Install infra services
The rdaf infra install command is used to deploy / install RDAF infrastructure services. Run the below command to view the available CLI options.
usage: infra install [-h] --tag TAG [--service SERVICES]
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the infra components
--service SERVICES Restrict the scope of the command to a specific service
Run the below command to deploy all RDAF infrastructure services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at support@cloudfabrix.com.)
Run the below command to install a specific RDAF infrastructure service. Below are the supported infrastructure services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at support@cloudfabrix.com)
- haproxy
- nats
- mariadb
- opensearch
- kafka
- graphdb
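Sketches of the two install forms described above; 1.0.3 is a sample tag for reference only, and the actual tag comes from CloudFabrix support:

```shell
# Install all infrastructure services:
rdaf infra install --tag 1.0.3
# Install a single service from the list above:
rdaf infra install --tag 1.0.3 --service haproxy
```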
1.3.3.2 Status check
Run the below command to see the status of all of the deployed RDAF infrastructure services.
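The status check takes this form; the table that follows is sample output:

```shell
rdaf infra status
```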
+--------------------+----------------+-----------------+--------------+--------------------+
| Name | Host | Status | Container Id | Tag |
+--------------------+----------------+-----------------+--------------+--------------------+
| haproxy | 192.168.133.97 | Up 3 days | 41208fa98fa6 | 1.0.3.3 |
| haproxy | 192.168.133.98 | Up 3 days | 3891dded450a | 1.0.3.3 |
| keepalived | 192.168.133.97 | active | N/A | N/A |
| keepalived | 192.168.133.98 | active | N/A | N/A |
| nats | 192.168.133.97 | Up 3 days | f4405859b336 | 1.0.3 |
| nats | 192.168.133.98 | Up 3 days | e8bd7ec195cb | 1.0.3 |
| minio | 192.168.133.93 | Up 3 days | 13a00b450e74 | RELEASE.2023-09-30 |
| | | | | T07-02-29Z |
| minio | 192.168.133.97 | Up 3 days | 1727f382a70a | RELEASE.2023-09-30 |
| | | | | T07-02-29Z |
| minio | 192.168.133.98 | Up 3 days | d011be7b43c9 | RELEASE.2023-09-30 |
| | | | | T07-02-29Z |
| minio | 192.168.133.99 | Up 3 days | 240eb6fbe918 | RELEASE.2023-09-30 |
| | | | | T07-02-29Z |
| mariadb | 192.168.133.97 | Up 3 days | 6a1b26cd8f6c | 1.0.3 |
| mariadb | 192.168.133.98 | Up 3 days | 2328874827de | 1.0.3 |
| mariadb | 192.168.133.99 | Up 3 days | 65159da97d95 | 1.0.3 |
| opensearch | 192.168.133.97 | Up 3 days | 8f550b70d7ce | 1.0.3 |
| opensearch | 192.168.133.98 | Up 3 days | 83bdd9bece04 | 1.0.3 |
| opensearch | 192.168.133.99 | Up 3 days | 0225e9f6222d | 1.0.3 |
+--------------------+----------------+-----------------+--------------+--------------------+
1.3.3.3 Start/Stop infra services
Run the below command to start / stop all RDAF infrastructure services.
Run the below commands to start / stop a specific RDAF infrastructure service.
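Assumed forms of these commands (the up/down subcommands appear in the help output above; the per-service option mirrors `rdaf infra install` and is an assumption):

```shell
# Stop / start all infrastructure services:
rdaf infra down
rdaf infra up
# Stop / start a single service, e.g. kafka:
rdaf infra down --service kafka
rdaf infra up --service kafka
```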
Danger
Stopping and starting RDAF infrastructure services is a disruptive operation which impacts all of the dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, please perform these operations only during a scheduled downtime.
1.3.3.4 Upgrade infra services
Run the below command to upgrade all RDAF infrastructure services to a newer version.
Run the below command to upgrade a specific RDAF infrastructure service to a newer version.
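Sketches of the upgrade commands, mirroring the `rdaf infra install` options; the tag shown is a sample for reference only:

```shell
# Upgrade all infrastructure services to a newer versioned tag:
rdaf infra upgrade --tag 1.0.3
# Upgrade a single service:
rdaf infra upgrade --tag 1.0.3 --service opensearch
```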
Tip
The above shown tag version is a sample for reference only; for the actual newer versioned tag, please contact the CloudFabrix support team at support@cloudfabrix.com
Danger
Please take a full configuration and data backup of the RDAF platform before any upgrade. Upgrading RDAF infrastructure services is a disruptive operation which impacts all of the dependent RDAF services and causes downtime. When the RDAF platform is deployed in a production environment, please perform the upgrade operation only during a scheduled downtime.
1.3.3.5 Check infra services liveness / health status
Run the below command to verify the RDAF infrastructure services' liveness / health status. This command helps quickly identify any infrastructure service availability or accessibility issues.
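The health check is run with the subcommand listed in the `rdaf infra` help output; the log lines and table that follow are sample output:

```shell
rdaf infra healthcheck
```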
2025-02-05 02:18:14,565 [rdaf.cmd.infra] INFO - Running Health Check on Infra services
2025-02-05 02:18:14,565 [rdaf.cmd.infra] INFO - Running Health Check on haproxy on host 192.168.125.41
2025-02-05 02:18:14,691 [rdaf.cmd.infra] INFO - Running Health Check on nats on host 192.168.125.41
2025-02-05 02:18:14,812 [rdaf.cmd.infra] INFO - Running Health Check on minio on host 192.168.125.41
2025-02-05 02:18:15,001 [rdaf.cmd.infra] INFO - Running Health Check on mariadb on host 192.168.125.41
2025-02-05 02:18:15,152 [rdaf.cmd.infra] INFO - Running Health Check on opensearch on host 192.168.125.41
2025-02-05 02:18:15,904 [rdaf.cmd.infra] INFO - Running Health Check on kafka on host 192.168.125.41
+----------------+-----------------+--------+------------------------+----------------+--------------+
| Name | Check | Status | Reason | Host | Container Id |
+----------------+-----------------+--------+------------------------+----------------+--------------+
| haproxy | Port Connection | OK | N/A | 192.168.125.41 | e905acafc36b |
| haproxy | Service Status | OK | N/A | 192.168.125.41 | e905acafc36b |
| haproxy | Firewall Port | OK | N/A | 192.168.125.41 | e905acafc36b |
| nats | Port Connection | OK | N/A | 192.168.125.41 | 83d674da41dd |
| nats | Service Status | OK | N/A | 192.168.125.41 | 83d674da41dd |
| nats | Firewall Port | OK | N/A | 192.168.125.41 | 83d674da41dd |
| minio | Port Connection | OK | N/A | 192.168.125.41 | ba13e7023d9f |
| minio | Service Status | OK | N/A | 192.168.125.41 | ba13e7023d9f |
| minio | Firewall Port | OK | N/A | 192.168.125.41 | ba13e7023d9f |
| mariadb | Port Connection | OK | N/A | 192.168.125.41 | 2fb8ca0233ec |
| mariadb | Service Status | OK | N/A | 192.168.125.41 | 2fb8ca0233ec |
| mariadb | Firewall Port | OK | N/A | 192.168.125.41 | 2fb8ca0233ec |
| opensearch | Port Connection | OK | N/A | 192.168.125.41 | 9cde1a3ab673 |
| opensearch | Service Status | Failed | 401 Client Error: | 192.168.125.41 | 9cde1a3ab673 |
| | | | Unauthorized for url: | | |
| | | | https://192.168.125.41:9 | | |
| | | | 200/_cluster/stats | | |
| opensearch | Firewall Port | OK | N/A | 192.168.125.41 | 9cde1a3ab673 |
| kafka | Port Connection | OK | N/A | 192.168.125.41 | 813e6a5235cd |
| kafka | Service Status | OK | N/A | 192.168.125.41 | 813e6a5235cd |
| kafka | Firewall Port | OK | N/A | 192.168.125.41 | 813e6a5235cd |
+----------------+-----------------+--------+------------------------+----------------+--------------+
1.3.4 rdaf platform
The rdaf platform command is used to deploy and manage RDAF core platform services. Run the below command to view the available CLI options.
usage: platform [-h] [--debug] {} ...
Manage the RDAF Platform
positional arguments:
{} commands
add-service-host
Add extra service vm
status Status of the RDAF Platform
up Create the RDAF Platform Containers
down Deleting the RDAF Platform Containers
install Install the RDAF platform containers
upgrade Upgrade the RDAF platform containers
generate-certs Generate certificates for hosts belonging to this
installation
reset-admin-user
reset the password of user
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.4.1 Install platform services
The rdaf platform install command is used to deploy / install RDAF core platform services. Run the below command to view the available CLI options.
usage: platform install [-h] --tag TAG [--service SERVICES]
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the platform
components
--service SERVICES Restrict the scope of the command to specific service
Run the below command to deploy all RDAF core platform services. (Note: Below shown tag name is a sample one for a reference only, for actual tag, please contact CloudFabrix support team at support@cloudfabrix.com)
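A sketch of the platform install command; 8.0.0 is a sample tag for reference only, and the actual tag comes from CloudFabrix support:

```shell
rdaf platform install --tag 8.0.0
```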
As part of the installation of RDAF core platform services, a default tenant admin user called admin@cfx.com is created. The default password for admin@cfx.com is admin1234.
On first login to the RDAF UI portal, it prompts for resetting the above default password to one of the user's choice.
In order to access the RDAF UI portal, open a web browser and enter the HAProxy server's IP address if it is a non-HA deployment, or the HAProxy server's virtual IP address if it is an HA deployment.
1.3.4.2 Status check
Run the below command to see the status of all of the deployed RDAF core platform services.
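The status check takes this form; the table that follows is sample output:

```shell
rdaf platform status
```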
+--------------------+----------------+-----------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+--------------------+----------------+-----------+--------------+-------+
| rda_api_server | 192.168.133.92 | Up 3 days | b2c91b3f5b8d | 8.0.0 |
| rda_api_server | 192.168.133.93 | Up 3 days | 2c7e6e79e0d1 | 8.0.0 |
| rda_registry | 192.168.133.92 | Up 3 days | 464161ddae16 | 8.0.0 |
| rda_registry | 192.168.133.93 | Up 3 days | 946366995e8a | 8.0.0 |
| rda_scheduler | 192.168.133.92 | Up 3 days | e6ab76d712fa | 8.0.0 |
| rda_scheduler | 192.168.133.93 | Up 3 days | 93910af6e17e | 8.0.0 |
| rda_collector | 192.168.133.92 | Up 3 days | 9c6e2a641ece | 8.0.0 |
| rda_collector | 192.168.133.93 | Up 3 days | 2694023681e0 | 8.0.0 |
| rda_asset_dependen | 192.168.133.92 | Up 3 days | ef475644d1bd | 8.0.0 |
| cy | | | | |
| rda_asset_dependen | 192.168.133.93 | Up 3 days | 6c8570b3bb9c | 8.0.0 |
| cy | | | | |
| rda_identity | 192.168.133.92 | Up 3 days | eadd3c3d5c8e | 8.0.0 |
| rda_identity | 192.168.133.93 | Up 3 days | 32b7aca03e4a | 8.0.0 |
| rda_fsm | 192.168.133.92 | Up 3 days | d553502dad1a | 8.0.0 |
| rda_fsm | 192.168.133.93 | Up 3 days | 14ae04b1c4d2 | 8.0.0 |
| rda_chat_helper | 192.168.133.92 | Up 3 days | 302a80076309 | 8.0.0 |
| rda_chat_helper | 192.168.133.93 | Up 3 days | 601c21a8493d | 8.0.0 |
| cfx-rda-access- | 192.168.133.92 | Up 3 days | 44e7cc4d1764 | 8.0.0 |
| manager | | | | |
| cfx-rda-access- | 192.168.133.93 | Up 3 days | 688b5aa2c895 | 8.0.0 |
| manager | | | | |
+--------------------+----------------+-----------+--------------+-------+
1.3.4.3 External Opensearch Setup (Optional)
RDAF CLI usage for external OpenSearch:
- Run the below command to view the available RDAF CLI options for external OpenSearch.
usage: opensearch_external [-h] [--debug] {} ...
Manage the Opensearch External
positional arguments:
{} commands
setup Setup Opensearch External
add-opensearch-external-host
Add extra opensearch external vm
install Install the RDAF opensearch_external containers
status Status of the RDAF opensearch_external Component
upgrade Upgrade the RDAF opensearch_external Component
start Start the RDAF opensearch_external Component
stop Stop the RDAF opensearch_external Component
down Delete the RDAF opensearch_external Containers
up Create the RDAF opensearch_external Containers
reset Reset the Opensearch External Component
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
rdauser@infra-93:~$
- Please use the below mentioned command for external OpenSearch setup using the RDAF CLI.
rdauser@infra13340:~$ rdaf opensearch_external setup
What is the SSH password for the SSH user used to communicate between hosts
SSH password:
Re-enter SSH password:
What is the host(s) for cluster manager?
opensearch cluster manager host(s)[]: 10.95.121.202,10.95.121.203,10.95.121.204
What is the host(s) for cluster clients?
opensearch cluster client host(s)[]: 10.95.121.202,10.95.121.203,10.95.121.204
Do you want to configure cluster zoning? [yes/No]: No
What is the host(s) for data nodes?
opensearch cluster data host(s)[]: 10.95.121.202,10.95.121.203,10.95.121.204
What is the user name you want to give for opensearch cluster admin user that will be created and used by the RDAF platform?
opensearch user[rdafadmin]:
What is the password you want to use for opensearch admin user?
opensearch password[7XvJqlSxTd]:
Re-enter opensearch password[7XvJqlSxTd]:
2024-11-29 04:11:02,810 [rdaf.component.opensearch_external] INFO - Doing setup for opensearch_external
2024-11-29 04:11:14,079 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 10.95.133.46
2024-11-29 04:11:14,410 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 10.95.133.47
2024-11-29 04:11:14,766 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 10.95.133.48
[+] Pulling 11/1149,181 [rdaf.component] INFO -
✔ opensearch Pulled 31.4s
✔ b741dbbfb498 Pull complete 7.8s
✔ 9b98b52b7e47 Pull complete 8.4s
✔ 15f0f9977346 Pull complete 8.5s
✔ 3ad72b8a8518 Pull complete 31.1s
✔ 4f4fb700ef54 Pull complete 31.1s
✔ 98cabfd2ffca Pull complete 31.2s
✔ c81b98a60c3f Pull complete 31.2s
✔ 3cd896096cca Pull complete 31.2s
✔ bb771aa5679e Pull complete 31.2s
✔ acea08536baf Pull complete 31.3s
2024-11-29 04:12:01,121 [rdaf.component.opensearch_external] INFO - Setup completed successfully
rdauser@infra13340:~$
- Please use the below mentioned command for external OpenSearch installation using the RDAF CLI.
rdauser@primary180:~$ rdaf opensearch_external install --tag 1.0.3
2024-12-03 04:54:48,530 [rdaf.component] INFO - Pulling opensearch_external images on host 10.95.107.187
2024-12-03 04:54:48,918 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
2024-12-03 04:54:48,920 [rdaf.component] INFO - Pulling opensearch_external images on host 10.95.107.188
2024-12-03 04:54:49,344 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
2024-12-03 04:54:49,346 [rdaf.component] INFO - Pulling opensearch_external images on host 10.95.107.189
2024-12-03 04:54:49,741 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.2s
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.3s
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.2s
2024-12-03 04:54:53,688 [rdaf.component.opensearch_external] INFO - Updating config.json with os_external endpoint.
2024-12-03 04:54:53,691 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:54,238 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:54,805 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:55,358 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:56,002 [rdaf.component.opensearch_external] INFO - Updating policy.json with os_external endpoint.
rdauser@primary180:~$
The opensearch.yaml configuration file is located under /opt/rdaf/config/opensearch
Note
To migrate data using the Reindex API, follow the instructions below and restart the external OpenSearch container. Otherwise, disregard these instructions.
- Example configuration in opensearch.yaml:
reindex.remote.allowlist: ["192.168.107.110:9200","192.168.109.50:9200"]
reindex.remote.whitelist: ["192.168.107.110:9200","192.168.109.50:9200"]
reindex.ssl.verification_mode: none
- reindex.remote.allowlist: Comma-separated list of platform OpenSearch node IPs and ports
- reindex.remote.whitelist: Comma-separated list of platform OpenSearch node IPs and ports
- reindex.ssl.verification_mode: Must be set to none
- The following command can be used to stop the external OpenSearch nodes
- The following command can be used to start the external OpenSearch nodes
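A sketch of the stop and start commands, based on the `rdaf opensearch_external` subcommands listed later in this guide:

```shell
# Stop the external OpenSearch nodes before editing opensearch.yaml
rdaf opensearch_external stop

# Start the external OpenSearch nodes after the reindex settings are in place
rdaf opensearch_external start
```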
Verification
Verification of os_external section in rdaf.cfg
This section shows how the os_external section can be verified in the rdaf.cfg file.
Verification of os_external section in config.json
This section shows how the os_external section can be verified in the config.json file.
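The file can be inspected with a command along the following lines (the config.json path shown is an assumption; adjust it to your deployment):

```shell
# Pretty-print config.json and locate the os_external section (path assumed)
cat /opt/rdaf/config.json | python3 -m json.tool | grep -A 10 os_external
```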
"os_external": {
"hosts": [
"192.168.102.69"
],
"port": 9200,
"scheme": "https",
"ssl_verify": false,
"$user": "eyJzYWx0IjogIkRwVkYiLCAiZGF0YSI6ICJnQUFBQUFCbVI3YTN3ZnNmNTdkRGIzMXNMT2hFM213dHRldWhUVEY0ZmxrLUxVdmZDSlNaVExPRXZWLTZTZlBvQjlPVVFCUjlUZmFwYjRUN0d6SXY2QWVkSXJWSHlFV011QT09In0=",
"$password": "eyJzYWx0IjogImFrTHoiLCAiZGF0YSI6ICJnQUFBQUFCbVI3YTM1aU1OZUFHNHE3WmRPVTBhV25YRy1QMEk4SUdCYTZMVlRsWVVnZndJTFJ4dXExcmthcGo2VmtWSE9ZRXdPUEJ1YWxuV2VSVkE3bGprSG9oMXc1UmM3UT09In0="
}
Note
Verification Step: The above cat command prints the content shown above (in JSON format) and is expected to contain one os_external element, as shown in the example above.
Verification of pstream-mappings in policy.json
This section shows how pstream-mappings can be verified in the policy.json file.
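The file can be inspected with a command along the following lines (the policy.json path shown is an assumption; adjust it to your deployment):

```shell
# Pretty-print policy.json and review the pstream-mappings entries (path assumed)
cat /opt/rdaf/config/policy.json | python3 -m json.tool
```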
{
"pstream-mappings": [
{
"pattern": "admin-90915a066fd14c3ea8d828e7aa7fde27.*",
"es_name": "cfx_admin_es",
"tenant_specific": true
},
{
"pattern": "rda.*",
"es_name": "default"
},
{
"pattern": "os-external-admin-90915a066fd14c3ea8d828e7aa7fde27.*",
"es_name": "os_external_default"
},
{
"pattern": "os-external-rda.*",
"es_name": "os_external_default"
},
{
"pattern": "os-external-.*",
"es_name": "os_external_default"
},
{
"pattern": ".*",
"es_name": "default"
}
],
"credentials": {
"es": {
"cfx_admin_es": {
"hosts": [
"192.168.125.217"
],
"port": "9200",
"user": "90915a066fd14c3ea8d828e7aa7fde27adminuser",
"password": "LtcgfYK5L1",
"scheme": "https",
"ssl_verify": false
},
"os_external_default": {
"hosts": [
"192.168.102.69"
],
"user": "rdafadmin",
"password": "abcd1234",
"port": "9200",
"scheme": "https",
"ssl_verify": false
}
}
}
}
Note
Verification Step: The above cat command prints the content shown above (in JSON format) and is expected to contain the lines shown in the example output above.
Collector Service Restart
Restart the collector service whenever external OpenSearch is added.
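A hypothetical restart sketch; the exact subcommand names and arguments are assumptions based on the `up`/`down` pattern of the other `rdaf` commands, so verify against `rdaf platform -h` before use:

```shell
# Stop and start the collector service (syntax assumed; check rdaf platform -h)
rdaf platform down rda_collector
rdaf platform up rda_collector
```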
+---------------------+-----------------+------------------+--------------+------------+
| rda_collector | 192.168.107.125 | Up 5 minutes | 484a115a4852 | 3.4.2 |
+---------------------+-----------------+------------------+--------------+------------+
Api-Server Service Restart
Before upgrading the platform services, restart the api-server whenever External OpenSearch is added.
+---------------------+-----------------+------------------+--------------+------------+
| rda_api_server | 192.168.107.125 | Up 5 minutes | a4852484a115 | 3.4.2 |
+---------------------+-----------------+------------------+--------------+------------+
Note
If the external OpenSearch configuration was completed before upgrading the API server, skip this step.
1.3.4.4 Upgrade platform services
Run the below command to upgrade all RDAF core platform services to a newer version.
Below are the RDAF core platform services
- cfx-rda-access-manager
- cfx-rda-resource-manager
- cfx-rda-user-preferences
- portal-backend
- portal-frontend
- rda_api_server
- rda_asm
- rda_asset_dependency
- rda_collector
- rda_identity
- rda_registry
- rda_sched_admin
- rda_scheduler
Run the below command to upgrade a specific RDAF core platform service to a newer version.
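A sketch of the upgrade commands, assuming the same `--tag` and `--service` conventions as the other `rdaf` install commands shown in this guide; the tag is a sample placeholder:

```shell
# Upgrade all core platform services (tag is a sample placeholder)
rdaf platform upgrade --tag 3.4.2

# Upgrade a single service only (--service flag assumed; verify with rdaf platform -h)
rdaf platform upgrade --tag 3.4.2 --service rda_api_server
```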
Tip
The tag version shown above is a sample for reference only. For the actual newer versioned tag, please contact the CloudFabrix support team at support@cloudfabrix.com
Danger
Please take a full configuration and data backup of the RDAF platform before any upgrade process. Upgrading RDAF core platform services is a disruptive operation which impacts all of the RDAF dependent services and causes downtime. When the RDAF platform is deployed in a production environment, please perform the upgrade operation only during a scheduled downtime.
1.3.4.5 Start/Stop platform services
Run the below commands to start / stop all RDAF core platform services.
Run the below commands to start / stop a specific RDAF core platform service.
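A sketch of the start/stop commands; the subcommand names are assumptions by analogy with the `rdaf app` and `rdaf worker` help output shown later in this guide, so verify with `rdaf platform -h`:

```shell
# Start / stop all core platform services (subcommand names assumed)
rdaf platform up
rdaf platform down
```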
Danger
Stopping and starting RDAF core platform services is a disruptive operation which impacts all of the RDAF dependent services and causes downtime. When the RDAF platform is deployed in a production environment, please perform these operations only during a scheduled downtime.
1.3.4.6 Reset password
Run the below command to reset the default admin@cfx.com user's password to the factory default (admin1234). On next login, the user will be forced to change the default password to one of the tenant admin user's choice.
Warning
Use the above command only when tenant admin users cannot access the RDAF UI portal because an external IAM (identity and access management) tool such as Active Directory / LDAP / SSO is down or unreachable, and the default tenant admin user's password has been forgotten or lost.
1.3.4.7 Generate SSL Certificates
Self-signed SSL certificates are used for RDAF infrastructure, core platform services, and for the RDAF CLI as well. This manual step is not usually needed, as it runs automatically during rdaf setup execution.
However, this command is useful to re-generate self-signed SSL certificates and overwrite existing ones if there is a need.
After re-generating the SSL certificates, please restart RDAF infrastructure, core platform, application, worker and agent services.
Danger
Re-generating self-signed SSL certificates is a disruptive operation which impacts all of the RDAF dependent services and causes downtime. When the RDAF platform is deployed in a production environment, please perform these operations only during a scheduled downtime.
1.3.4.8 Add new service host
RDAF platform's application services can be distributed across multiple hosts to distribute the workload and to run them in high-availability mode.
After the initial deployment of RDAF application services, a new application services host can be added to the configuration using the below command. Existing application services can then be re-deployed to run on the new host to distribute the workload.
1.3.5 rdaf app
The rdaf app command is used to deploy and manage RDAF application services. Run the below command to view the available CLI options.
The supported application services are below.
- OIA: Operations Intelligence and Analytics (Also known as AIOps)
- AIA: Asset Intelligence and Analytics
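The available options can be listed with:

```shell
rdaf app -h
```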
usage: ('app',) [-h] [--debug] {} ...
Manage the RDAF Apps
positional arguments:
{} commands
status Status of the App
up Create the App service Containers
down Delete the App service Containers
install Install the App service containers
upgrade Upgrade the App service containers
update-config
Updated configurations of one or more components
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.5.1 Install OIA/AIA services
The rdaf app install command is used to deploy / install RDAF OIA/AIA application services. Run the below command to view the available CLI options.
usage: ('app',) install [-h] --tag TAG [--service SERVICES] {AIA,OIA}
positional arguments:
{AIA,OIA} Select the APP to act on
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the app components
--service SERVICES Restrict the scope of the command to specific service
Run the below command to deploy the RDAF OIA / AIA application services. (Note: the tag shown below is a sample for reference only; for the actual tag, please contact the CloudFabrix support team at support@cloudfabrix.com)
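A sketch following the `rdaf app install` usage shown above; the tag is a sample placeholder:

```shell
# Install the OIA application services (use AIA for Asset Intelligence and Analytics)
rdaf app install OIA --tag 8.0.0
```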
1.3.5.2 Start/Stop app services
Run the below command to start / stop all RDAF application OIA services.
Run the below command to start / stop all RDAF application AIA services.
Run the below commands to start / stop a specific RDAF application OIA service.
Below are the RDAF OIA application services
- rda-app-controller
- rda-alert-processor
- rda-file-browser
- rda-smtp-server
- rda-ingestion-tracker
- rda-reports-registry
- rda-ml-config
- rda-event-consumer
- rda-webhook-server
- rda-irm-service
- rda-alert-ingester
- rda-collaboration
- rda-notification-service
- rda-configuration-service
- rda-alert-processor-companion
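A sketch of the start/stop commands, using the `up`/`down` subcommands from the `rdaf app` help output; the `--service` flag is an assumption borrowed from the install usage, so verify with `rdaf app -h`:

```shell
# Start / stop all OIA application services
rdaf app up OIA
rdaf app down OIA

# Restrict to a single service (--service flag assumed)
rdaf app up OIA --service rda-alert-processor
```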
Danger
Stopping and Starting RDAF application OIA / AIA service or services is a disruptive operation which will impact the availability of these application services. When RDAF platform is deployed in Production environment, please perform these operations only during a scheduled downtime.
1.3.5.3 Status check
Run the below command to see the status of all of the deployed RDAF application services.
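A sketch using the `status` subcommand from the `rdaf app` help output; the OIA positional argument is assumed from the install usage:

```shell
# Show the status of the deployed application services
rdaf app status OIA
```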
+--------------------+----------------+-----------+--------------+---------+
| Name | Host | Status | Container Id | Tag |
+--------------------+----------------+-----------+--------------+---------+
| cfx-rda-app- | 192.168.133.96 | Up 3 days | 133c976d2e64 | 8.0.0 |
| controller | | | | |
| cfx-rda-app- | 192.168.133.92 | Up 3 days | fc155ecf6f47 | 8.0.0 |
| controller | | | | |
| cfx-rda-reports- | 192.168.133.96 | Up 3 days | e7412d9eb3f1 | 8.0.0 |
| registry | | | | |
| cfx-rda-reports- | 192.168.133.92 | Up 3 days | 9bc6ec617744 | 8.0.0 |
| registry | | | | |
| cfx-rda- | 192.168.133.96 | Up 3 days | 40859a933dc7 | 8.0.0 |
| notification- | | | | |
| service | | | | |
| cfx-rda- | 192.168.133.92 | Up 3 days | 3b2757fe7313 | 8.0.0 |
| notification- | | | | |
| service | | | | |
| cfx-rda-file- | 192.168.133.96 | Up 3 days | ac9e1576332c | 8.0.0 |
| browser | | | | |
| cfx-rda-file- | 192.168.133.92 | Up 3 days | 3b0332b0a703 | 8.0.0 |
| browser | | | | |
| cfx-rda- | 192.168.133.96 | Up 3 days | 6982a9bdebe1 | 8.0.0 |
| configuration- | | | | |
| service | | | | |
| cfx-rda- | 192.168.133.92 | Up 3 days | 7ee95287f65f | 8.0.0 |
| configuration- | | | | |
| service | | | | |
| cfx-rda-alert- | 192.168.133.96 | Up 3 days | 582d55c8da74 | 8.0.0 |
| ingester | | | | |
| cfx-rda-alert- | 192.168.133.92 | Up 3 days | f14ad552ed3e | 8.0.0 |
| ingester | | | | |
+--------------------+----------------+-----------+--------------+---------+
1.3.5.4 Upgrade app OIA/AIA services
Run the below command to upgrade all RDAF OIA / AIA application services to a newer version.
Note
Only for AIA Services
Below are the RDAF OIA application services
- rda-app-controller
- rda-alert-processor
- rda-file-browser
- rda-smtp-server
- rda-ingestion-tracker
- rda-reports-registry
- rda-ml-config
- rda-event-consumer
- rda-webhook-server
- rda-irm-service
- rda-alert-ingester
- rda-collaboration
- rda-notification-service
- rda-configuration-service
- rda-alert-processor-companion
Run the below command to upgrade a specific RDAF OIA application service to a newer version.
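A sketch using the `upgrade` subcommand from the `rdaf app` help output; the tag is a sample placeholder and the `--service` flag is an assumption borrowed from the install usage:

```shell
# Upgrade all OIA application services (tag is a sample placeholder)
rdaf app upgrade OIA --tag 8.0.1

# Upgrade a single service (--service flag assumed; verify with rdaf app -h)
rdaf app upgrade OIA --tag 8.0.1 --service rda-alert-ingester
```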
Tip
The tag version shown above is a sample for reference only. For the actual newer versioned tag, please contact the CloudFabrix support team at support@cloudfabrix.com
Danger
Please take full configuration and data backup of RDAF platform before any upgrade process. Upgrading RDAF OIA / AIA application service or services is a disruptive operation which will impact the availability of these services. When RDAF platform is deployed in Production environment, please perform upgrade operation only during a scheduled downtime.
1.3.5.5 Update HAProxy configuration
Run the below command to update the necessary HAProxy load-balancer configuration for RDAF OIA / AIA application services.
Note
Only for AIA Services
After deploying the RDAF OIA application services, it is mandatory to run rdaf app update-config, which applies the configuration and restarts the HAProxy load-balancer service automatically.
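A sketch using the `update-config` subcommand from the `rdaf app` help output; the OIA positional argument is assumed from the install usage:

```shell
# Apply the HAProxy configuration for the OIA application services
rdaf app update-config OIA
```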
1.3.6 rdaf worker
The rdaf worker command is used to deploy and manage RDAF worker services. Run the below command to view the available CLI options.
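The available options can be listed with:

```shell
rdaf worker -h
```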
usage: worker [-h] [--debug] {} ...
Manage the RDAF Worker
positional arguments:
{} commands
add-worker-host
Add extra worker vm
status Status of the RDAF Worker
up Create the RDAF Worker Containers
down Delete the RDAF Worker Containers
install Install the RDAF Worker containers
upgrade Upgrade the RDAF Worker containers
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.6.1 Install worker service(s)
The rdaf worker install command is used to deploy / install RDAF worker services. Run the below command to view the available CLI options.
usage: worker install [-h] --tag TAG
optional arguments:
-h, --help show this help message and exit
--tag TAG Tag to use for the docker images of the worker components
Run the below command to deploy all RDAF worker services. (Note: the tag shown below is a sample for reference only; for the actual tag, please contact the CloudFabrix support team at support@cloudfabrix.com)
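A sketch following the `rdaf worker install` usage shown above; the tag is a sample placeholder:

```shell
# Install the worker services (tag is a sample placeholder)
rdaf worker install --tag 8.0.0
```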
1.3.6.2 Status check
Run the below command to see the status of all of the deployed RDAF worker services.
+------------+----------------+-----------+--------------+---------+
| Name | Host | Status | Container Id | Tag |
+------------+----------------+-----------+--------------+---------+
| rda_worker | 192.168.133.96 | Up 4 days | bfeb469c3277 | 8.0.0 |
| rda_worker | 192.168.133.92 | Up 4 days | 43385833db75 | 8.0.0 |
+------------+----------------+-----------+--------------+---------+
1.3.6.3 Upgrade worker services
Run the below command to upgrade all RDAF worker service(s) to a newer version.
Tip
The tag version shown above is a sample for reference only. For the actual newer versioned tag, please contact the CloudFabrix support team at support@cloudfabrix.com
Danger
Upgrading RDAF worker service or services is a disruptive operation which will impact all of the worker jobs. When RDAF platform is deployed in Production environment, please perform upgrade operation only during a scheduled downtime.
1.3.6.4 Start/Stop worker services
Run the below commands to start / stop all RDAF worker services.
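A sketch using the `up`/`down` subcommands from the `rdaf worker` help output:

```shell
# Start / stop all worker services
rdaf worker up
rdaf worker down
```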
Danger
Stopping and Starting RDAF worker service(s) is a disruptive operation which will impact all of the worker jobs. When RDAF platform is deployed in Production environment, please perform these operations only during a scheduled downtime.
1.3.6.5 Add new worker host
RDAF platform's worker services can be distributed across multiple hosts to distribute the workload.
After the initial deployment of RDAF worker services, a new worker host can be added to the configuration using the below command. New jobs can then run on the new host to distribute the workload.
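A sketch using the `add-worker-host` subcommand from the `rdaf worker` help output; the host IP is a sample placeholder and the exact argument form is an assumption, so verify with `rdaf worker add-worker-host -h`:

```shell
# Add a new worker host (IP is a sample placeholder; argument form assumed)
rdaf worker add-worker-host 192.168.133.97
```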
1.3.7 Install RDAF Bulkstats Services
Note
The RDAF Bulkstats service is optional and only necessary if the Bulkstats data ingestion feature is required. Otherwise, you may ignore the steps below and proceed to the next section.
Run the below command to install bulk_stats services
A comma can be used to specify two hosts for HA setups.
Note
When deploying Bulkstats on a new VM, make sure the username and password match those of the existing VMs.
Run the below command to get the bulk_stats status
+----------------+----------------+---------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+----------------+----------------+---------------+--------------+-------+
| rda-bulk-stats | 192.168.108.13 | Up 1 Days ago | ac3379bfcc9d | 8.0.0 |
| rda-bulk-stats | 192.168.108.14 | Up 1 Days ago | c78283c06d88 | 8.0.0 |
+----------------+----------------+---------------+--------------+-------+
Note
The RDAF Bulkstats service is optional and only necessary if the Bulkstats data ingestion feature is required. Otherwise, you may ignore the steps below and proceed to the next section.
Run the below command to install bulk_stats services
A comma can be used to specify two hosts for HA setups.
Note
When deploying Bulkstats on a new VM, make sure the username and password match those of the existing VMs.
Run the below command to get the bulk_stats status
+----------------+----------------+-------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+----------------+----------------+-------------+--------------+-------+
| rda_bulk_stats | 192.168.133.96 | Up 4 days | 67da2301d30c | 8.0.0 |
| rda_bulk_stats | 192.168.133.92 | Up 46 hours | 32179032bb97 | 8.0.0 |
+----------------+----------------+-------------+--------------+-------+
1.3.8 Install RDAF File Object Services
Note
This service is applicable to non-Kubernetes deployments only. The RDAF File Object service is optional and only necessary if the Bulkstats data ingestion feature is required. Otherwise, you may ignore the steps below and proceed to the next section.
Stop the file object service using the docker-compose file.
Remove rda_file_object entries from the rdaf.cfg file if Bulkstats was already deployed with an older version.
Run the below command to install File Object services and provision service instances across multiple hosts, ensuring that all VMs use the same username and password.
Log in to each file object node and update the permissions for the /opt/public folder.
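One way to update the folder permissions; the exact owner and mode are assumptions, so follow your environment's conventions:

```shell
# Give the rdauser service account ownership of the public folder (owner/mode assumed)
sudo chown -R rdauser:rdauser /opt/public
sudo chmod -R 755 /opt/public
```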
Run the below command to get the file_object status
+-----------------+----------------+-----------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+-----------------+----------------+-----------+--------------+-------+
| rda_file_object | 192.168.108.50 | Up 7 days | 47d1a68c2bf2 | 8.0.0 |
| rda_file_object | 192.168.108.51 | Up 7 days | 6ce10218c204 | 8.0.0 |
+-----------------+----------------+-----------+--------------+-------+
1.3.9 Install Event Gateway Services
Important
This service is for non-Kubernetes deployments only
- To install the event gateway, log in to the RDAF CLI VM and execute the following command.
- Run the below command to verify the status of the RDAF Event Gateway Service.
+-------------------+-----------------+---------------+--------------+-------+
| Name | Host | Status | Container Id | Tag |
+-------------------+-----------------+---------------+--------------+-------+
| rda_event_gateway | 192.168.108.127 | Up 43 seconds | 44c4937ebf0a | 8.0.0 |
| | | | | |
| rda_event_gateway | 192.168.108.128 | Up 16 seconds | d6779fd7f75f | 8.0.0 |
| | | | | |
+-------------------+-----------------+---------------+--------------+-------+
1.3.10 rdaf prune_images
After upgrading the RDAF infrastructure, core platform, application and worker services, run the below command to clean up the unused docker images. This command helps to clean up and free disk space on the /var/lib/docker mount point.
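The cleanup command, as named by this section:

```shell
# Remove unused docker images left behind by previous installs/upgrades
rdaf prune_images
```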
1.3.11 rdaf validate
The rdaf validate command helps to verify or validate the below two configurations.
- values-yaml: values.yml is a configuration file which allows the user to modify RDAF service parameters based on the deployment requirements. This file resides under the /opt/rdaf/deployment-scripts directory on the RDAF platform VM on which rdaf setup was run.
- configs: This command option verifies some of the pre-requisites on all RDAF hosts.
Below are the checks it performs.
- SSH access and port check
- Whether Docker is installed
- Whether Docker Compose is installed
- Whether firewall ports are open for RDAF services
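A sketch of the two validation commands, using the option names described above:

```shell
# Validate the values.yml configuration file
rdaf validate values-yaml

# Validate host pre-requisites (SSH, Docker, Docker Compose, firewall ports)
rdaf validate configs
```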
2025-02-05 00:30:40,660 [rdaf.cmd.validate] INFO - checking connection for the host 192.168.125.146
2025-02-05 00:30:40,701 [rdaf.cmd.validate] INFO - ssh check for host 192.168.125.146 successful
2025-02-05 00:30:40,701 [rdaf.cmd.validate] INFO - checking connection for the host 192.168.125.143
2025-02-05 00:30:40,791 [rdaf.cmd.validate] INFO - ssh check for host 192.168.125.143 successful
2025-02-05 00:30:40,792 [rdaf.cmd.validate] INFO - checking connection for the host 192.168.125.149
....
2025-02-05 00:30:40,949 [rdaf.cmd.validate] INFO - ssh check for host 192.168.125.144 successful
2025-02-05 00:30:41,112 [rdaf.cmd.validate] INFO - Docker is installed on host 192.168.125.146
2025-02-05 00:30:41,317 [rdaf.cmd.validate] INFO - Docker is installed on host 192.168.125.143
....
2025-02-05 00:30:42,036 [rdaf.cmd.validate] INFO - Docker-compose is installed on host 192.168.125.146
2025-02-05 00:30:42,189 [rdaf.cmd.validate] INFO - Docker-compose is installed on host 192.168.125.143
....
2025-02-05 00:30:42,899 [rdaf.cmd.validate] INFO - port is open 7222 on host 192.168.125.143 of component haproxy
2025-02-05 00:30:42,900 [rdaf.cmd.validate] INFO - port is open 9443 on host 192.168.125.143 of component haproxy
2025-02-05 00:30:42,900 [rdaf.cmd.validate] INFO - port is open 3307 on host 192.168.125.143 of component haproxy
....
2025-02-05 00:30:43,134 [rdaf.cmd.validate] INFO - port is open 8808 on host 192.168.125.144 of component haproxy
2025-02-05 00:30:43,364 [rdaf.cmd.validate] INFO - port is open 4222 on host 192.168.125.143 of component nats
....
2025-02-05 00:30:47,060 [rdaf.cmd.validate] INFO - port is open 9093 on host 192.168.125.144 of component kafka
2025-02-05 00:30:47,264 [rdaf.cmd.validate] INFO - port is open 9092 on host 192.168.125.145 of component kafka
2025-02-05 00:30:47,264 [rdaf.cmd.validate] INFO - port is open 9093 on host 192.168.125.145 of component kafka
1.3.12 rdaf rdac_cli
The rdaf rdac_cli command allows you to install and upgrade the RDA client CLI utility, which interacts with RDA Fabric services and operations.
usage: rdac-cli [-h] [--debug] {} ...
Install RDAC CLI
positional arguments:
{} commands
install Install the RDAC CLI
upgrade Upgrade the RDAC CLI
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
- To install RDA client CLI, run the below command
- To upgrade RDA client CLI version, run the below command
- Run the below command to see RDA client CLI help and available subcommand options.
Run with one of the following commands
agent-bots List all bots registered by agents for the current tenant
agents List all agents for the current tenant
alert-rules Alert Rule management commands
bot-catalog-generation-from-file Generate bot catalog for given sources
bot-package Bot Package management commands
bots-by-source List bots available for given sources
check-credentials Perform credential check for one or more sources on a worker pod
checksum Compute checksums for pipeline contents locally for a given JSON file
compare Commands to compare different RDA systems using different RDA Config files
content-to-object Convert data from a column into objects
copy-to-objstore Deploy files specified in a ZIP file to the Object Store
dashboard User defined dashboard management commands
dashgroup User defined dashboard-group management commands
dataset Dataset management commands
demo Demo related commands
deployment Service Blueprints (Deployments) management commands
event-gw-status List status of all ingestion endpoints at all the event gateways
evict Evict a job from a worker pod
file-ops Perform various operations on local files
file-to-object Convert files from a column into objects
fmt-template Formatting Templates management commands
healthcheck Perform healthcheck on each of the Pods
invoke-agent-bot Invoke a bot published by an agent
jobs List all jobs for the current tenant
logarchive Logarchive management commands
object RDA Object management commands
output Get the output of a Job using jobid.
pipeline Pipeline management commands
playground Start Webserver to access RDA Playground
pods List all pods for the current tenant
project Project management commands. Projects can be used to link different tenants / projects from this RDA Fabric or a remote RDA Fabric.
pstream Persistent Stream management commands
purge-outputs Purge outputs of completed jobs
read-stream Read messages from an RDA stream
reco-engine Recommendation Engine management commands
restore Commands to restore backed-up artifacts to an RDA Platform
run Run a pipeline on a worker pod
run-get-output Run a pipeline on a worker, and Optionally, wait for the completion, get the final output
schedule Pipeline execution schedule management commands
schema Dataset Model Schema management commands
secret Credentials (Secrets) management commands
set-pod-log-level Update the logging level for a given RDA Pod.
shell Start RDA Client interactive shell
site-profile Site Profile management commands
site-summary Show summary by Site and Overall
stack Application Dependency Mapping (Stack) management commands
staging-area Staging Area based data ingestion management commands
subscription Show current CloudFabrix RDA subscription details
synthetics Data synthesizing management commands
verify-pipeline Verify the pipeline on a worker pod
viz Visualize data from a file within the console (terminal)
watch Commands to watch various streams such as trace, logs and change notifications by microservices
web-server Start Webserver to access RDA Client data using REST APIs
worker-obj-info List all worker pods with their current Object Store configuration
write-stream Write data to the specified stream
positional arguments:
command RDA subcommand to run
optional arguments:
-h, --help show this help message and exit
Tip
Please refer RDA Client CLI Usage for detailed information.
1.3.13 rdaf backup
Using the rdaf backup command, RDAF configuration and data can be backed up periodically; the backup can be used to restore the system in a recovery scenario.
rdaf backup -h
usage: backup [--insecure] [-h] [--debug] --dest-dir BACKUP_DEST_DIR
[--create-tar] [--service SERVICES]
Backup the RDAF platform
optional arguments:
--insecure Ignore SSL certificate issues when communicating with
various hosts
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
--dest-dir BACKUP_DEST_DIR
Directory into which the backup will be stored
--create-tar Creates a tar file for the backed up data
--service SERVICES Backup only the specified components
--service is an optional argument; below are the supported options. When specified, only the respective service's configuration and data will be backed up.
- haproxy (configuration backup)
- mariadb (configuration and DB backup)
- minio (configuration and data backup)
- opensearch (configuration and index backup)
- kafka (configuration and data backup)
- config (system configuration such as rdaf.cfg, values.yml etc and certificates backup)
When the --service option is not specified, all of the above services' configuration and data will be backed up.
Run the below command to take RDAF system's full configuration and data backup.
rdaf backup --dest-dir /opt/backup --create-tar
Tip
Please make sure to pre-create the /opt/backup folder, or a local or NFS mount point, and provide appropriate user permissions.
Note: For the RDAF platform's configuration and application data backup, it is a pre-requisite to mount an NFS volume on all of the VMs. It is used to store the backup data and for restore using the RDAF CLI tool.
Run the below command to take specific service's configuration and data backup.
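A sketch following the multi-service example below; the same `--service` option takes a single service name:

```shell
# Back up only the MariaDB configuration and database
rdaf backup --dest-dir /opt/backup --create-tar --service mariadb
```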
Run the below command to take more than one service's configuration and data backup.
rdaf backup --dest-dir /opt/backup --create-tar --service mariadb --service minio --service opensearch
Warning
Though RDAF CLI takes backup of complete configuration and application data, it does not take backup of the OS (Ubuntu) on which the RDA Fabric services are deployed. It is recommended to use 3rd party tools like Veeam, HP Dataprotect, Cohesity, Netbackup etc. to take full VM level backup on periodic basis.
A 3rd-party VM-level backup needs to be used to recover RDAF VMs if the Ubuntu OS is unable to boot.
1.3.14 rdaf restore
Using the rdaf restore command, RDAF configuration and data can be restored from a previously taken backup.
Warning
While restoring RDAF services data from the backup, please make sure to stop both application and platform services.
For restoring the below services' data from the backup, please make sure each of these services is up and running.
- mariadb
- minio
- opensearch
The below command shows the above services' running status.
The below command shows the above services' functional status.
rdaf restore -h
usage: restore [--insecure] [-h] [--debug] [--no-prompt] [--service SERVICES]
(--from-dir BACKUP_SRC_DIR | --from-tar BACKUP_SRC_TAR)
Restore the RDAF platform from a previously backed up state
optional arguments:
--insecure Ignore SSL certificate issues when communicating with
various hosts
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
--no-prompt Don't prompt for inputs
--service SERVICES Restore only the specified components
--from-dir BACKUP_SRC_DIR
The directory which contains the backed up
installation state
--from-tar BACKUP_SRC_TAR
The tar.gz file which contains the backed up
installation state
--service is an optional argument; below are the supported options. When specified, only the respective service's configuration and data will be restored.
- haproxy (configuration backup)
- mariadb (configuration and DB backup)
- minio (configuration and data backup)
- opensearch (configuration and index backup)
- kafka (configuration and data backup)
- config (system configuration such as rdaf.cfg, values.yml etc and certificates backup)
When the --service option is not specified, all of the above services' configuration and data will be restored.
When the backup was taken without the --create-tar option, please use the --from-dir option and specify the backup folder as shown below.
Run the below command to restore RDAF system's full configuration and data from the backup folder.
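A sketch following the `--from-dir` usage shown above; the directory path is a sample placeholder modeled on the tar example later in this section:

```shell
# Restore everything from an untarred backup directory (path is a sample placeholder)
rdaf restore --from-dir /opt/backup/2025-02-05-1669346503.565267
```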
When the backup was taken with the --create-tar option, please use the --from-tar option and specify the backup tar file path as shown below.
Run the below command to restore RDAF system's full configuration and data from the backup tar file.
rdaf restore --from-tar /opt/backup/2025-02-05-1669346503.565267/rdaf-backup-2025-02-05-1669346503.565267.tar.gz
Run the below command to restore specific service's configuration and data from the backup folder.
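For example, to restore only the mariadb service (the backup folder name below is illustrative):

```bash
rdaf restore --from-dir /opt/backup/2025-02-05-1669346503.565267 --service mariadb
```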
Run the below command to restore more than one service's configuration and data from the backup folder.
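For example, repeating the --service option once per service (the backup folder name below is illustrative):

```bash
rdaf restore --from-dir /opt/backup/2025-02-05-1669346503.565267 --service mariadb --service minio
```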
Run the below command to restore specific service's configuration and data from the backup tar file.
rdaf restore --from-tar /opt/backup/2025-02-05-1669346503.565267/rdaf-backup-2025-02-05-1669346503.565267.tar.gz --service mariadb
Run the below command to restore more than one service's configuration and data from the backup tar file.
rdaf restore --from-tar /opt/backup/2025-02-05-1669346503.565267/rdaf-backup-2025-02-05-1669346503.565267.tar.gz --service mariadb --service minio
1.3.15 rdaf opensearch_external
rdaf opensearch_external
command is used to deploy and manage an external Opensearch standalone node or cluster, which is used to ingest performance management module data such as metrics, logs and events.
Run the below command to view available CLI options.
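Following the same -h convention as the other rdaf subcommands:

```bash
rdaf opensearch_external -h
```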
usage: opensearch_external [-h] [--debug] {} ...
Manage the Opensearch External
positional arguments:
{} commands
setup Setup Opensearch External
add-opensearch-external-host
Add extra opensearch external vm
install Install the RDAF opensearch_external containers
status Status of the RDAF opensearch_external Component
upgrade Upgrade the RDAF opensearch_external Component
start Start the RDAF opensearch_external Component
stop Stop the RDAF opensearch_external Component
down Stop the RDAF opensearch_external Component and remove its containers
up Create and start the RDAF opensearch_external Component
reset Reset the Opensearch External Component
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.15.1 rdaf opensearch_external setup
rdaf opensearch_external setup
command is used to configure and setup external Opensearch cluster service.
- Please use the command below to setup external Opensearch cluster configuration.
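Based on the setup subcommand listed in the CLI help:

```bash
rdaf opensearch_external setup
```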
Note
OpenSearch clusters scale horizontally to support growing workloads. For high availability and fault tolerance, OpenSearch allows tagging nodes with a zone attribute, where each zone represents an availability zone. This ensures that primary and replica shards are not placed in the same availability zone, reducing the risk of data loss in case of a failure.
OpenSearch automatically distributes primary and replica shards across availability zones to maintain data redundancy and ensure continued operations even if one availability zone becomes unavailable.
For optimal resilience, it is recommended to distribute OpenSearch master, coordinator and data nodes across at least three availability zones.
When deploying an OpenSearch cluster on-premise, each Physical Server, Hypervisor, or Rack can be treated as an availability zone. OpenSearch nodes that are tagged with an availability zone attribute, should be provisioned accordingly on the underlying Hypervisor or Physical Servers. This ensures that failures at the hardware or hypervisor level do not impact the availability of the entire cluster.
Please note that, in the output of the command below, the OpenSearch cluster client host is essentially the same as the OpenSearch cluster coordinator host.
What is the SSH password for the SSH user used to communicate between hosts
SSH password:
Re-enter SSH password:
What is the host(s) for cluster manager?
opensearch cluster manager host(s)[]: 192.168.102.69
What is the host(s) for cluster clients?
opensearch cluster client host(s)[]: 192.168.102.69
Do you want to configure cluster zoning? [yes/No]: No
What is the host(s) for data nodes?
opensearch cluster data host(s)[]: 192.168.102.69
What is the user name you want to give for opensearch cluster admin user that will be created and used by the RDAF platform?
opensearch user[rdafadmin]:
What is the password you want to use for opensearch admin user?
opensearch password[7XvJqlSxTd]:
Re-enter opensearch password[7XvJqlSxTd]:
2024-11-29 04:11:02,810 [rdaf.component.opensearch_external] INFO - Doing setup for opensearch_external
2024-11-29 04:11:14,079 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 192.168.102.69
[+] Pulling 11/11
✔ opensearch Pulled 31.4s
✔ b741dbbfb498 Pull complete 7.8s
...
2024-11-29 04:12:01,121 [rdaf.component.opensearch_external] INFO - Setup completed successfully
What is the SSH password for the SSH user used to communicate between hosts
SSH password:
Re-enter SSH password:
What is the host(s) for cluster manager?
opensearch cluster manager host(s)[]: 192.168.121.202,192.168.121.203,192.168.121.204
What is the host(s) for cluster clients?
opensearch cluster client host(s)[]: 192.168.121.202,192.168.121.203,192.168.121.204
Do you want to configure cluster zoning? [yes/No]: No
What is the host(s) for data nodes?
opensearch cluster data host(s)[]: 192.168.121.202,192.168.121.203,192.168.121.204
What is the user name you want to give for opensearch cluster admin user that will be created and used by the RDAF platform?
opensearch user[rdafadmin]:
What is the password you want to use for opensearch admin user?
opensearch password[7XvJqlSxTd]:
Re-enter opensearch password[7XvJqlSxTd]:
2024-11-29 04:11:02,810 [rdaf.component.opensearch_external] INFO - Doing setup for opensearch_external
2024-11-29 04:11:14,079 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 192.168.133.46
2024-11-29 04:11:14,410 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 192.168.133.47
2024-11-29 04:11:14,766 [rdaf.component.opensearch_external] INFO - Created opensearch external configuration at /opt/rdaf/config/opensearch_external/opensearch.yaml on 192.168.133.48
[+] Pulling 11/11
✔ opensearch Pulled 31.4s
✔ b741dbbfb498 Pull complete 7.8s
...
2024-11-29 04:12:01,121 [rdaf.component.opensearch_external] INFO - Setup completed successfully
What is the SSH password for the SSH user used to communicate between hosts
SSH password:
Re-enter SSH password:
What is the host(s) for cluster manager?
opensearch cluster manager host(s)[]: 192.168.230.81,192.168.230.82,192.168.230.83
What is the host(s) for cluster clients?
opensearch cluster client host(s)[]: 192.168.230.81,192.168.230.82,192.168.230.83
Do you want to configure cluster zoning? [yes/No]: yes
Please specify the number of zones to be configured
Number of Zones[2]: 3
What is the host(s) for data nodes in zone-0?
opensearch cluster data host(s) for zone-0[]: 192.168.230.84
What is the host(s) for data nodes in zone-1?
opensearch cluster data host(s) for zone-1[]: 192.168.230.85
What is the host(s) for data nodes in zone-2?
opensearch cluster data host(s) for zone-2[]: 192.168.230.86
What is the user name you want to give for opensearch cluster admin user that will be created and used by the RDAF platform?
opensearch user[rdafadmin]:
What is the password you want to use for opensearch admin user?
opensearch password[VV5Bk4nbB1]:
Re-enter opensearch password[VV5Bk4nbB1]:
2025-02-24 18:49:42,394 [rdaf.component.opensearch_external] INFO - Doing setup for opensearch_external
1.3.15.2 rdaf opensearch_external install
rdaf opensearch_external install
command installs external Opensearch service on all the configured nodes.
- Please use the command below to install external Opensearch cluster services.
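A sketch of the install invocation; the --tag option and its placeholder value are assumptions based on other rdaf install commands — please contact support@cloudfabrix.com for the correct tag version:

```bash
rdaf opensearch_external install --tag <tag-version>
```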
2024-12-03 04:54:48,530 [rdaf.component] INFO - Pulling opensearch_external images on host 192.168.125.45
2024-12-03 04:54:48,918 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
2024-12-03 04:54:48,920 [rdaf.component] INFO - Pulling opensearch_external images on host 192.168.125.46
2024-12-03 04:54:49,344 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
2024-12-03 04:54:49,346 [rdaf.component] INFO - Pulling opensearch_external images on host 192.168.125.47
2024-12-03 04:54:49,741 [rdaf.component] INFO - 1.0.3: Pulling from internal/rda-platform-opensearch
Digest: sha256:fc0c794872425d40b28549f254a4a4c79813960d4c4d1491a29a4cd955953983
Status: Image is up to date for docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
docker1.cloudfabrix.io:443/internal/rda-platform-opensearch:1.0.3
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.2s
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.3s
[+] Running 1/1
✔ Container os_external-opensearch_external-1 Started 0.2s
2024-12-03 04:54:53,688 [rdaf.component.opensearch_external] INFO - Updating config.json with os_external endpoint.
2024-12-03 04:54:53,691 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:54,238 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:54,805 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:55,358 [rdaf.component.platform] INFO - Creating directory /opt/rdaf/config/network_config
2024-12-03 04:54:56,002 [rdaf.component.opensearch_external] INFO - Updating policy.json with os_external endpoint.
1.3.15.3 rdaf opensearch_external status
rdaf opensearch_external status
command displays external Opensearch service's status on all the configured nodes.
- Run the command below to verify the external Opensearch cluster deployment status
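Based on the status subcommand listed in the CLI help:

```bash
rdaf opensearch_external status
```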
+---------------------+----------------+----------------+--------------+-------+
| Name                | Host           | Status         | Container Id | Tag   |
+---------------------+----------------+----------------+--------------+-------+
| opensearch_external | 192.168.125.45 | Up 8 Weeks ago | 57d5a30ab896 | 1.0.3 |
| opensearch_external | 192.168.125.46 | Up 8 Weeks ago | 1654e0745ab4 | 1.0.3 |
| opensearch_external | 192.168.125.47 | Up 8 Weeks ago | 61f153a20e50 | 1.0.3 |
+---------------------+----------------+----------------+--------------+-------+
1.3.15.4 rdaf opensearch_external upgrade
rdaf opensearch_external upgrade
command is used to upgrade the external OpenSearch service to a newer version or apply updated resource allocations and configuration parameters from the /opt/rdaf/deployment-scripts/opensearch-external-values.yaml
file.
- Run the command below to upgrade the external Opensearch cluster version
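A sketch of the upgrade invocation; the --tag option and its placeholder value are assumptions based on other rdaf upgrade commands:

```bash
rdaf opensearch_external upgrade --tag <new-tag-version>
```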
The resource settings for the external OpenSearch service, such as the memory limit for the service container and Java HEAP memory allocation, can be modified in the /opt/rdaf/deployment-scripts/opensearch-external-values.yaml
file.
Note
The mem_limit
and memswap_limit
parameters can be used to increase the memory allocated to the OpenSearch container service.
The OPENSEARCH_JAVA_OPTS
environment variable is used to configure the Java heap memory for OpenSearch. In production environments, it should be set to at least 50% of the memory allocated to the OpenSearch container service, but kept just under 32GB for optimal performance. This applies to all Opensearch master, coordinator and data nodes.
- Run the command below to apply the updated configuration from the
/opt/rdaf/deployment-scripts/opensearch-external-values.yaml
file.
Note
Please use the same existing tag version to apply the updated configuration.
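A sketch of the invocation, assuming the upgrade subcommand accepts a --tag option; specify the currently installed tag version in place of the placeholder:

```bash
rdaf opensearch_external upgrade --tag <existing-tag-version>
```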
1.3.15.5 rdaf opensearch_external stop
rdaf opensearch_external stop
is used to stop the external Opensearch services.
- Run the command below to stop the external Opensearch cluster service
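Based on the stop subcommand listed in the CLI help:

```bash
rdaf opensearch_external stop
```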
1.3.15.6 rdaf opensearch_external start
rdaf opensearch_external start
is used to start the external Opensearch services.
- Run the command below to start the external Opensearch cluster service
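Based on the start subcommand listed in the CLI help:

```bash
rdaf opensearch_external start
```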
1.3.15.7 rdaf opensearch_external down
rdaf opensearch_external down
is used to stop the external Opensearch services and also deletes the container.
- Run the command below to stop and delete the external Opensearch cluster service container
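Based on the down subcommand listed in the CLI help:

```bash
rdaf opensearch_external down
```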
1.3.15.8 rdaf opensearch_external up
rdaf opensearch_external up
is used to start and redeploy the external Opensearch services.
- Run the command below to start and redeploy the external Opensearch cluster service
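Based on the up subcommand listed in the CLI help:

```bash
rdaf opensearch_external up
```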
1.3.15.9 rdaf opensearch_external reset
rdaf opensearch_external reset
is used to delete the external Opensearch cluster services and reset the configuration.
- Run the command below to delete and reset the external Opensearch cluster service
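Based on the reset subcommand listed in the CLI help:

```bash
rdaf opensearch_external reset
```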
Danger
The rdaf opensearch_external reset
command must be used with extreme caution, as it permanently deletes data. Ensure that a backup is taken before running this command in production environments.
1.3.16 rdaf log_monitoring
rdaf log_monitoring
command is used to deploy and manage log monitoring services, through which the RDAF infrastructure, platform, application, and worker service logs are streamed in real-time.
As part of the log monitoring services, it installs the following services:
- Fluentbit: It is a log shipping agent that streams the logs in real-time and ingests them into the Logstash service.
- Logstash: It is a log processing agent that normalizes and extracts key attributes from log messages, such as timestamp, severity, process name, process function, container name, etc., before ingesting them into an index store service for analytics and visualization.
Run the below command to view available CLI options.
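Following the same -h convention as the other rdaf subcommands:

```bash
rdaf log_monitoring -h
```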
usage: log_monitoring [-h] [--debug]
{upgrade,install,status,up,down,start,stop} ...
Manage the RDAF log monitoring
positional arguments:
{upgrade,install,status,up,down,start,stop}
commands
upgrade Upgrade log monitoring components
install Install log monitoring components
status Status of the RDAF log monitoring
up Create the RDAF log monitoring Containers
down Delete the RDAF log monitoring Containers
start Start the RDAF log monitoring Containers
stop Stop the RDAF log monitoring Containers
optional arguments:
-h, --help show this help message and exit
--debug Enable debug logs for the CLI operations
1.3.16.1 Install Log Monitoring
rdaf log_monitoring install
command is used to deploy / install RDAF log monitoring services. Run the below command to view the available CLI options.
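Following the same -h convention as the other rdaf subcommands:

```bash
rdaf log_monitoring install -h
```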
usage: log_monitoring install [-h] --log-monitoring-host LOG_MONITORING_HOST
--tag TAG [--no-prompt]
log_monitoring install: error: the following arguments are required: --log-monitoring-host, --tag
To deploy all RDAF log monitoring services, execute the following command. Please note that it is mandatory to specify the host for the Logstash service deployment using the --log-monitoring-host
option.
Note
The Logstash host IP address shown below is for reference only. For the latest log monitoring services tag, please contact the CloudFabrix support team at support@cloudfabrix.com.
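A sketch of the install invocation, using the required options from the usage output above; the host IP is a reference value and the tag placeholder should be replaced with the version provided by support:

```bash
rdaf log_monitoring install --log-monitoring-host 192.168.125.53 --tag <tag-version>
```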
{"status":"CREATED","message":"'rdaf-log-monitoring' created."}
{"status":"CREATED","message":"'role-log-monitoring' created."}
{"status":"OK","message":"'rdaf-log-monitoring' updated."}
{"status":"CREATED","message":"'role-log-monitoring' created."}
{
"retention_days": 15,
"timestamp": "@timestamp",
"search_case_insensitive": true,
"_settings": {
"number_of_shards": 3,
"number_of_replicas": 1,
"refresh_interval": "60s"
}
}
Persistent stream saved.
2025-02-05 05:04:08,842 [rdaf.component.haproxy] INFO - Updated HAProxy configuration at /opt/rdaf/config/haproxy/haproxy.cfg on 192.168.125.53
...
...
[+] Running 1/1
⠿ Container fluent-bit-fluentbit-1 Started 0.4s
2025-02-05 05:06:05,138 [rdaf.component.log_monitoring] INFO - Restarting logstash services on host 192.168.125.53
[+] Running 1/1
⠿ Container logstash-logstash-1 Started 0.4s
2025-02-05 05:06:05,617 [rdaf.component.log_monitoring] INFO - Restarting fluentbit services on host 192.168.125.53
[+] Running 1/1
⠿ Container fluent-bit-fluentbit-1 Started 10.8s
2025-02-05 05:06:16,488 [rdaf.component.minio] INFO - configuring minio services logs
Successfully applied new settings.
Successfully applied new settings.
2025-02-05 05:06:16,936 [rdaf.component.log_monitoring] INFO - Successfully installed and configured rdaf log streaming
1.3.16.2 Status check
Run the below command to see the status of all of the deployed RDAF log monitoring services.
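Based on the status subcommand listed in the CLI help:

```bash
rdaf log_monitoring status
```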
+-----------+----------------+-------------------+--------------+-------+
| Name      | Host           | Status            | Container Id | Tag   |
+-----------+----------------+-------------------+--------------+-------+
| logstash  | 192.168.125.53 | Up About a minute | 62b3b7c81472 | 1.0.3 |
| fluentbit | 192.168.125.53 | Up About a minute | c5f8a6f340b3 | 1.0.3 |
+-----------+----------------+-------------------+--------------+-------+
1.3.16.3 Upgrade Log Monitoring
Run the below command to upgrade all RDAF log monitoring to a newer version.
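A sketch of the upgrade invocation; the --tag option and its placeholder value are assumptions based on other rdaf upgrade commands:

```bash
rdaf log_monitoring upgrade --tag <new-tag-version>
```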
1.3.16.4 Restart Log Monitoring services
Restart the log monitoring services using the below rdaf
CLI commands.
a) To Stop
Run the below command to stop all RDAF log monitoring services.
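Based on the stop subcommand listed in the CLI help:

```bash
rdaf log_monitoring stop
```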
---------------------------------------------------------------------------------------------------------------------------
2025-02-05 05:20:53,313 [rdaf.component.log_monitoring] INFO - Deleting logstash service on host 192.168.125.53
[+] Running 1/1 ⠿ Container logstash-logstash-1 Stopped 0.3s
Going to remove logstash-logstash-1
[+] Running 1/0 ⠿ Container logstash-logstash-1 Removed 0.0s
2025-02-05 05:20:53,639 [rdaf.component.log_monitoring] INFO - Deleting fluent-bit service on host 192.168.125.53
[+] Running 1/1 ⠿ Container fluent-bit-fluentbit-1 Stopped 10.8s
Going to remove fluent-bit-fluentbit-1
[+] Running 1/0 ⠿ Container fluent-bit-fluentbit-1 Removed
---------------------------------------------------------------------------------------------------------------------------
Run the below command to start all RDAF log monitoring services.
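Based on the start subcommand listed in the CLI help:

```bash
rdaf log_monitoring start
```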
---------------------------------------------------------------------------------------------------------------------------
2025-02-05 05:21:33,355 [rdaf.component.log_monitoring] INFO - Creating logstash services on host 192.168.125.53
[+] Running 1/1
⠿ Container logstash-logstash-1 Started 0.2s
2025-02-05 05:21:33,641 [rdaf.component.log_monitoring] INFO - Creating fluent-bit services on host 192.168.125.53
[+] Running 1/1
⠿ Container fluent-bit-fluentbit-1 Started
---------------------------------------------------------------------------------------------------------------------------
1.3.16.5 Add Log Monitoring dashboard
Login to RDAF UI portal as MSP admin user.
Go to Main Menu --> Configuration --> RDA Administration --> Dashboards --> User Dashboards --> Click on Add and create a new dashboard by copying the below Dashboard configuration for RDAF log monitoring services.
{
"name": "rdaf-platform-log-analytics",
"label": "RDAF Platform Logs",
"description": "RDAF Platform service's log analysis dashboard",
"version": "23.01.14.1",
"enabled": true,
"dashboard_style": "tabbed",
"status_poller": {
"stream": "rdaf_services_logs",
"frequency": 15,
"columns": [
"@timestamp"
],
"sorting": [
{
"@timestamp": "desc"
}
],
"query": "`@timestamp` is after '${timestamp}'",
"defaults": {
"@timestamp": "$UTCNOW"
},
"action": "refresh"
},
"dashboard_filters": {
"time_filter": true,
"columns_filter": [
{
"id": "@timestamp",
"label": "Timestamp",
"type": "DATETIME"
},
{
"id": "service_name",
"label": "Service Name",
"type": "TEXT"
},
{
"id": "service_category",
"label": "Service Category",
"type": "TEXT"
},
{
"id": "log_severity",
"label": "Log Severity",
"type": "TEXT"
},
{
"id": "log",
"label": "Log Message",
"type": "TEXT"
},
{
"id": "log.text",
"label": "Log Message Text",
"type": "SIMPLE_TEXT"
},
{
"id": "process_name",
"label": "Process Name",
"type": "TEXT"
},
{
"id": "process_function",
"label": "Process Function",
"type": "TEXT"
},
{
"id": "thread_id",
"label": "Thread ID",
"type": "TEXT"
},
{
"id": "k8s_pod_name",
"label": "POD Name",
"type": "TEXT"
},
{
"id": "k8s_container_name",
"label": "Container Name",
"type": "TEXT"
}
],
"group_filters": [
{
"stream": "rdaf_services_logs",
"title": "Log Severity",
"group_by": [
"log_severity"
],
"ts_column": "@timestamp",
"agg": "value_count",
"column": "_id",
"type": "int"
},
{
"stream": "rdaf_services_logs",
"title": "Service Name",
"group_by": [
"service_name"
],
"ts_column": "@timestamp",
"limit": 50,
"agg": "value_count",
"column": "_id",
"type": "int"
},
{
"stream": "rdaf_services_logs",
"title": "Service Category",
"group_by": [
"service_category"
],
"ts_column": "@timestamp",
"agg": "value_count",
"column": "_id",
"type": "int"
},
{
"stream": "rdaf_services_logs",
"title": "POD Name",
"group_by": [
"k8s_pod_name"
],
"ts_column": "@timestamp",
"agg": "value_count",
"limit": 200,
"column": "_id",
"type": "int"
}
]
},
"dashboard_sections": [
{
"title": "Overall Summary",
"show_filter": true,
"widgets": [
{
"title": "Log Severity Trend",
"widget_type": "timeseries",
"stream": "rdaf_services_logs",
"ts_column": "@timestamp",
"max_width": 12,
"height": 3,
"min_width": 12,
"chartProperties": {
"yAxisLabel": "Count",
"xAxisLabel": null,
"legendLocation": "bottom"
},
"interval": "15Min",
"group_by": [
"log_severity"
],
"series_spec": [
{
"column": "log_severity",
"agg": "value_count",
"type": "int"
}
],
"widget_id": "06413884"
},
{
"widget_type": "pie_chart",
"title": "Logs by Severity",
"stream": "rdaf_services_logs",
"ts_column": "@timestamp",
"column": "_id",
"agg": "value_count",
"group_by": [
"log_severity"
],
"type": "str",
"style": {
"color-map": {
"ERROR": [
"#ef5350",
"#ffffff"
],
"WARNING": [
"#FFA726",
"#ffffff"
],
"INFO": [
"#388e3c",
"#ffffff"
],
"DEBUG": [
"#000000",
"#ffffff"
],
"UNKNOWN": [
"#bcaaa4",
"#ffffff"
]
}
},
"min_width": 4,
"height": 4,
"max_width": 4,
"widget_id": "b2ffa8e9"
},
{
"widget_type": "pie_chart",
"title": "Logs by RDA Host IP",
"stream": "rdaf_services_logs",
"ts_column": "@timestamp",
"column": "_id",
"agg": "value_count",
"group_by": [
"host"
],
"type": "str",
"min_width": 4,
"height": 4,
"max_width": 4,
"widget_id": "a4f2d8bd"
},
{
"widget_type": "pie_chart",
"title": "Logs by Service Category",
"stream": "rdaf_services_logs",
"ts_column": "@timestamp",
"column": "_id",
"agg": "value_count",
"group_by": [
"service_category"
],
"type": "str",
"min_width": 4,
"height": 4,
"max_width": 4,
"widget_id": "89ac5ce9"
},
{
"widget_type": "pie_chart",
"title": "Logs by Service Name",
"stream": "rdaf_services_logs",
"ts_column": "@timestamp",
"column": "_id",
"agg": "value_count",
"group_by": "service_name",
"type": "int",
"min_width": 4,
"height": 4,
"max_width": 4,
"widget_id": "4b267fce"
}
]
},
{
"title": "App Services",
"show_filter": true,
"widgets": [
{
"widget_type": "tabular",
"title": "Log Messages",
"stream": "rdaf_services_logs",
"extra_filter": "service_category in ['rda_app_svcs', 'rda_pfm_svcs']",
"ts_column": "@timestamp",
"sorting": [
{
"@timestamp": "desc"
}
],
"columns": {
"@timestamp": {
"title": "Timestamp",
"type": "DATETIME"
},
"state_color2": {
"type": "COLOR-MAP",
"source-column": "log_severity",
"color-map": {
"INFO": "#388e3c",
"ERROR": "#ef5350",
"WARNING": "#ffa726",
"DEBUG": "#000000"
}
},
"log_severity": {
"title": "Severity",
"htmlTemplateForRow": "<span class='badge' style='background-color: {{ row.state_color2 }}' > {{ row.log_severity }} </span>"
},
"service_name": "Service Name",
"process_name": "Process Name",
"process_function": "Process Function",
"log": "Message"
},
"widget_id": "6895c8f0"
}
]
},
{
"title": "Infra Services",
"show_filter": true,
"widgets": [
{
"widget_type": "tabular",
"title": "Log Messages",
"stream": "rdaf_services_logs",
"extra_filter": "service_category in ['rda_infra_svcs']",
"ts_column": "@timestamp",
"sorting": [
{
"@timestamp": "desc"
}
],
"columns": {
"@timestamp": {
"title": "Timestamp",
"type": "DATETIME"
},
"log_severity": {
"title": "Severity",
"htmlTemplateForRow": "<span class='badge' style='background-color: {{ row.state_color2 }}' > {{ row.log_severity }} </span>"
},
"state_color2": {
"type": "COLOR-MAP",
"source-column": "log_severity",
"color-map": {
"INFO": "#388e3c",
"ERROR": "#ef5350",
"WARNING": "#ffa726",
"DEBUG": "#000000",
"UNKNOWN": "#bcaaa4"
}
},
"service_name": "Service Name",
"process_name": "Process Name",
"log": "Message",
"minio_object": "Minio Object"
},
"widget_id": "98f10587"
}
]
}
]
}
1.3.17 rdaf reset
rdaf reset
command allows the user to reset the RDAF platform configuration by performing the below operations.
- Stop RDAF application, worker, platform & infrastructure services
- Delete RDAF application, worker, platform & infrastructure services and its data
- Delete all Docker images and volumes of RDAF application, worker, platform & infrastructure services
- Delete RDAF platform configuration
Danger
The rdaf reset command is a disruptive operation as it clears the entire RDAF platform footprint. Its primary purpose is for use only in Demo or POC environments ("NOT" in Production) where the entire RDAF platform needs to be re-installed from scratch.