RDA Client Command Line Interface

1. Installing RDA Command Line Tool

The RDA Command Line Interface tool ships as a Docker image, making it easy to run RDA commands on any laptop, desktop, or cloud VM.

To run this tool, the following are required:

  • Operating system: Linux or macOS
  • Docker installed on that system
  • Python 3.0 or later

STEP-1: Download the Python script

curl -o rdac.py https://bot-docs.cloudfabrix.io/data/wrappers/rdac.py

Make the script an executable:

chmod +x rdac.py

STEP-2: Download RDA network configuration

From your cfxCloud account, you can download a copy of the RDA Network configuration.

Download the RDA Network configuration file and save it under $HOME/.rda/rda_network_config.json
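Preparing the expected location can be done with a couple of shell commands or, equivalently, with a short Python sketch. The download path below is only a placeholder for wherever your browser saved the file:

```python
from pathlib import Path
import shutil

# Ensure the RDA config directory exists
rda_dir = Path.home() / ".rda"
rda_dir.mkdir(parents=True, exist_ok=True)

# Move the downloaded file into place. The source path is an example;
# adjust it to wherever the file was actually saved.
downloaded = Path.home() / "Downloads" / "rda_network_config.json"
if downloaded.exists():
    shutil.move(str(downloaded), str(rda_dir / "rda_network_config.json"))
```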

STEP-3: Verify Script

Verify that rdac.py is working correctly by using one of the following commands:

python3 rdac.py --help

Or

./rdac.py --help

The first time it runs, the script validates dependencies such as the OS, the Python version, and the availability of Docker. If validation succeeds, it downloads the Docker container image for the RDA CLI and runs it.

Subsequently, if you want to update the Docker container image to the latest version, run the following command:

./rdac.py update

2. RDA Commands: Cheat Sheet

This section lists a few of the most commonly used RDA commands.

Listing RDA Platform Microservices

Each tenant's RDA Fabric has a set of microservices (pods) deployed as containers, either on Kubernetes or as plain Docker containers.

The following command lists all active microservices in your RDA Fabric:

python3 rdac.py pods

Most RDA commands support the --json option, which prints the output in JSON format instead of tabular format.

python3 rdac.py pods --json
Example rdac pods JSON Output

Partial output with the --json option:

{
    "now": "2022-05-20T02:16:31.054287",
    "started_at": "2022-05-17T22:44:13.602509",
    "pod_type": "worker",
    "pod_category": "rda_infra",
    "pod_id": "ae875728",
    "hostname": "d1d45ec2d08f",
    "proc_id": 1,
    "labels": {
        "tenant_name": "dev-1-unified",
        "rda_platform_version": "22.5.13.3",
        "rda_messenger_version": "22.5.15.1",
        "rda_pod_version": "22.5.17.1",
        "rda_license_valid": "no",
        "rda_license_not_expired": "no",
        "rda_license_expiration_date": ""
    },
    "build_tag": "daily",
    "requests": {
        "auto": "tenants.2dddab0e52544f4eb2de067057aaac31.worker.group.3571581d876b.auto",
        "direct": "tenants.2dddab0e52544f4eb2de067057aaac31.worker.group.3571581d876b.direct.ae875728"
    },
    "resources": {
        "cpu_count": 8,
        "cpu_load1": 2.24,
        "cpu_load5": 2.43,
        "cpu_load15": 2.52,
        "mem_total_gb": 25.3,
        "mem_available_gb": 9.7,
        "mem_percent": 61.7,
        "mem_used_gb": 15.01,
        "mem_free_gb": 2.93,
        "mem_active_gb": 11.49,
        "mem_inactive_gb": 7.64,
        "pod_usage_active_jobs": 15,
        "pod_usage_total_jobs": 578
    },
    "pod_leader": false,
    "objstore_info": {
        "host": "10.10.10.100:9000",
        "config_checksum": "8936434b"
    },
    "group": "cfx-lab-122-178",
    "group_id": "3571581d876b",
    "site_name": "cfx-lab-122-178",
    "site_id": "3571581d876b",
    "public_access": false,
    "capacity_filter": "cpu_load1 <= 7.0 and mem_percent < 98 and pod_usage_active_jobs < 20",
    "_local_time": 1653012991.0593688
}
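The resources block is what the capacity_filter expression at the bottom of the record is evaluated against. As a sketch (not the platform's actual implementation), the example filter above can be reproduced in a few lines of Python:

```python
# Relevant fields copied from the example `rdac.py pods --json` record above
pod = {
    "resources": {
        "cpu_load1": 2.24,
        "mem_percent": 61.7,
        "pod_usage_active_jobs": 15,
    },
}

def has_capacity(pod):
    """Evaluate the example capacity_filter:
    cpu_load1 <= 7.0 and mem_percent < 98 and pod_usage_active_jobs < 20"""
    r = pod["resources"]
    return (r["cpu_load1"] <= 7.0
            and r["mem_percent"] < 98
            and r["pod_usage_active_jobs"] < 20)

print(has_capacity(pod))  # prints True for the sample pod
```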

Listing RDA Platform Microservices with Versions

python3 rdac.py pods --versions

Performing a Health Check on RDA Microservices

The following command performs a health check on all microservices and returns the status of each health parameter.

python3 rdac.py healthcheck

Listing all Running Pipeline Jobs

The following command lists all active jobs created via the Portal, the CLI, the Scheduler, or Service Blueprints.

python3 rdac.py jobs

Evicting a Job

The following command can be used to evict a specific job from an RDA Worker. If the job was created by the Scheduler or by a Service Blueprint, a new job may be re-created immediately after the job has been evicted.

python3 rdac.py evict --jobid c38025837c284562957f78ab385a0caf

This command attempts to evict the job with ID c38025837c284562957f78ab385a0caf.
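When scripting evictions (for example, draining several jobs at once), the --yes flag documented in the evict sub-command reference skips the confirmation prompt. A minimal sketch that builds the command line for each job ID; the job IDs and the dry_run helper are purely illustrative:

```python
import subprocess

def evict(job_id, dry_run=True):
    # Build the rdac.py eviction command; --yes skips the interactive
    # confirmation prompt so the call can run unattended.
    cmd = ["python3", "rdac.py", "evict", "--jobid", job_id, "--yes"]
    if dry_run:
        return cmd  # inspect the command without running it
    return subprocess.run(cmd, check=True)

# Placeholder job ID for illustration
for jid in ["c38025837c284562957f78ab385a0caf"]:
    print(" ".join(evict(jid)))
```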

Observing Pipeline Execution Traces from CLI

The following command can be used to watch (observe) all traces from all workers and all pipelines executing anywhere in the RDA Fabric.

python3 rdac.py watch-traces

List all datasets currently saved in RDA Fabric

python3 rdac.py dataset-list

Adding a new dataset to RDA Fabric

Datasets can be added if the data is available as a local file on the system where rdac.py is installed, or if the data is accessible via a URL. Supported formats are CSV, JSON, XLS, Parquet, ORC, and many compressed variants of CSV.

To add a local file as a dataset:

python3 rdac.py dataset-add --name my-dataset --file ./mydata.csv
Note: rdac.py mounts the current directory as /home inside the Docker container. You may also place the data in the $HOME/rdac_data/ folder and access it as --file /data/mydata.csv
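If you just want some data to try the command with, a small CSV can be generated locally first; the filename mydata.csv and the columns below are placeholders:

```python
import csv

# Write a tiny sample CSV that can then be added with:
#   python3 rdac.py dataset-add --name my-dataset --file ./mydata.csv
rows = [
    {"order_id": 1, "item": "widget", "qty": 3},
    {"order_id": 2, "item": "gadget", "qty": 1},
]
with open("mydata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["order_id", "item", "qty"])
    writer.writeheader()
    writer.writerows(rows)
```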

You may also add a dataset if the data is accessible via an http or https URL.

python3 rdac.py dataset-add \
              --name 'sample-ecommerce-data' \
              --file 'https://bot-docs.cloudfabrix.io/data/datasets/sample-ecommerce-data.csv'

3. List of All RDA CLI Sub Commands

Sub Command Description
agent-bots List all bots registered by agents for the current tenant
agents List all agents for the current tenant
alert-rules-add Add or update alert ruleset
alert-rules-delete Delete an alert ruleset
alert-rules-get Get YAML data for an alert ruleset
alert-rules-list List all alert rulesets.
bots-by-source List bots available for given sources
check-credentials Perform credential check for one or more sources on a worker pod
checksum Compute checksums for pipeline contents locally for a given JSON file
content-to-object Convert data from a column into objects
copy-to-objstore Deploy files specified in a ZIP file to the Object Store
dataset-add Add a new dataset to the object store
dataset-delete Delete a dataset from the object store
dataset-get Download a dataset from the object store
dataset-list List datasets from the object store
dataset-meta Download metadata for a dataset from the object store
deployment-activity List recent deployment activities
deployment-add Add a new Deployment to the repository. Deployment specification must be in valid YML format
deployment-audit-report Display Audit report for a given deployment ID
deployment-delete Delete an existing deployment from repository
deployment-dependencies List all artifact dependencies used by the deployment
deployment-disable Disable an existing deployment if it is not already disabled
deployment-enable Enable an existing deployment if it is not already enabled
deployment-map Print service map information in JSON format for the given deployment
deployment-status Display status of all deployments
deployment-svcs-status List current status of all service pipelines in a deployment
event-gw-status List status of all ingestion endpoints at all the event gateways
evict Evict a job from a worker pod
file-ops Perform various operations on local files
file-to-object Convert files from a column into objects
fmt-template-delete Delete Formatting Template
fmt-template-get Get Formatting Template
fmt-template-list List Formatting Templates
healthcheck Perform healthcheck on each of the Pods
invoke-agent-bot Invoke a bot published by an agent
jobs List all jobs for the current tenant
logarchive-add-platform Add current platform Minio as logarchive repository
logarchive-data-read Read the data from given archive for a specified time interval
logarchive-data-size Show size of data available for given archive for a specified time interval
logarchive-download Download the data from given archive for a specified time interval
logarchive-names List archive names in a given repository
logarchive-replay Replay the data from given archive for a specified time interval with specified label
logarchive-repos List of all log archive repositories
merge-logarchive-files Merge multiple locally downloaded Log Archive (.gz) files into a single CSV/Parquet file
object-add Add a new object to the object store
object-delete Delete object from the object store
object-delete-list Delete list of objects
object-get Download an object from the object store
object-list List objects from the object store
object-meta Download metadata for an object from the object store
object-to-content Convert object pointers from a column into content
object-to-file Convert object pointers from a column into file
output Get the output of a Job using jobid.
pipeline-delete Delete pipeline by name and version
pipeline-get Get pipeline by name and version
pipeline-get-versions Get versions for the pipeline
pipeline-list List published pipelines
pipeline-publish Publish the pipeline on a worker pod
pipeline-published-run Run a published pipeline on a worker pod
pods List all pods for the current tenant
pstream-add Add a new Persistent stream
pstream-delete Delete a persistent stream
pstream-get Get information about a persistent stream
pstream-list List persistent streams
pstream-query Query persistent stream data via collector
pstream-tail Query a persistent stream and continue to query for incremental data every few seconds
purge-outputs Purge outputs of completed jobs
read-stream Read messages from an RDA stream
run Run a pipeline on a worker pod
run-get-output Run a pipeline on a worker, wait for the completion, get the final output
schedule-add Add a new schedule for pipeline execution
schedule-delete Delete an existing schedule
schedule-edit Edit an existing schedule
schedule-info Get details of a schedule
schedule-list List all schedules
schedule-update-status Update status of an existing schedule
schema-add Add a new schema to the object store
schema-delete Delete a schema from the object store
schema-get Download a schema from the object store
schema-list List schemas from the object store
secret-add Add a new secret to the vault
secret-list List names and types of all secrets in vault
secret-types List of all available secret types
site-profile-add Add a new site profile
site-profile-delete Delete a site profile
site-profile-edit Update a site profile
site-profile-get Get a site profile data
site-profile-list List all site profiles.
site-summary Show summary by Site and Overall
stack-cache-list List cached stack entries from asset-dependency service
stack-impact-distance Find the impact distances in a stack using asset-dependency service, load search criteria from a JSON file
stack-search Search in a stack using asset-dependency service
stack-search-json Search in a stack using asset-dependency service, load search criteria from a JSON file
staging-area-add Add or update staging area
staging-area-delete Delete a staging area
staging-area-get Get YAML data for a staging area
staging-area-list List all staging areas.
subscription Show current CloudFabrix RDA subscription details
verify-pipeline Verify the pipeline on a worker pod
viz Visualize data from a file within the console (terminal)
watch-logs Start watching logs produced by the pipelines
watch-registry Start watching updates published by the RDA pod registry
watch-scheduler Start watching updates published by the scheduler pods
watch-scheduler-admin Start watching updates published by the scheduler admin pods
watch-traces Start watching traces produced by the pipelines
worker-obj-info List all worker pods with their current Object Store configuration
write-stream Write data to the specified stream

Sub Command: agent-bots

Description: List all bots registered by agents for the current tenant

Usage: agent-bots  [-h] [--json] [--type AGENT_TYPE] [--group AGENT_GROUP]

optional arguments:
  -h, --help           show this help message and exit
  --json               Print detailed information in JSON format instead of
                       tabular format
  --type AGENT_TYPE    Show only the agents that match the specified agent
                       type
  --group AGENT_GROUP  Show only the agents that match the specified agent
                       group

Sub Command: agents

Description: List all agents for the current tenant

Usage: agents  [-h] [--json] [--type AGENT_TYPE] [--group AGENT_GROUP]
            [--site SITE_NAME]

optional arguments:
  -h, --help           show this help message and exit
  --json               Print detailed information in JSON format instead of
                       tabular format
  --type AGENT_TYPE    Show only the agents that match the specified agent
                       type
  --group AGENT_GROUP  Deprecated. Use --site. Show only the agents that match
                       the specified site
  --site SITE_NAME     Show only the agents that match the specified site

Sub Command: alert-rules-add

Description: Add or update alert ruleset

Usage: alert-rules-add  [-h] --file INPUT_FILE [--overwrite]

optional arguments:
  -h, --help         show this help message and exit
  --file INPUT_FILE  YAML file containing alert ruleset definition
  --overwrite        Overwrite even if a ruleset already exists with a name.

Sub Command: alert-rules-delete

Description: Delete an alert ruleset

Usage: alert-rules-delete  [-h] --name RULESET_NAME

optional arguments:
  -h, --help           show this help message and exit
  --name RULESET_NAME  Name of the alert ruleset to delete

Sub Command: alert-rules-get

Description: Get YAML data for an alert ruleset

Usage: alert-rules-get  [-h] --name RULESET_NAME

optional arguments:
  -h, --help           show this help message and exit
  --name RULESET_NAME  Name of the alert ruleset to display

Sub Command: alert-rules-list

Description: List all alert rulesets.

Usage: alert-rules-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: bots-by-source

Description: List bots available for given sources

Usage: bots-by-source  [-h] [--sources SOURCES] [--group WORKER_GROUP]
            [--site WORKER_SITE] [--lfilter LABEL_FILTER]
            [--rfilter RESOURCE_FILTER] [--maxwait MAX_WAIT] [--json]

optional arguments:
  -h, --help            show this help message and exit
  --sources SOURCES     Comma separated list of sources to find bots (in
                        addition to built-in sources)
  --group WORKER_GROUP  Deprecated. Use --site option. Specify a worker site
                        name. If not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker site name. If not specified, will use
                        any available worker.
  --lfilter LABEL_FILTER
                        CFXQL style query to narrow down workers using their
                        labels
  --rfilter RESOURCE_FILTER
                        CFXQL style query to narrow down workers using their
                        resources
  --maxwait MAX_WAIT    Maximum wait time (seconds) for credential check to
                        complete.
  --json                Print detailed information in JSON format instead of
                        tabular format

Sub Command: check-credentials

Description: Perform credential check for one or more sources on a worker pod

Usage: check-credentials  [-h] --config CONFIG [--group WORKER_GROUP] [--site WORKER_SITE]
            [--maxwait MAX_WAIT]

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       File containing pipeline contents or configuration
  --group WORKER_GROUP  Deprecated. Use --site. Specify a worker site name. If
                        not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker Site name. If not specified, will use
                        any available worker.
  --maxwait MAX_WAIT    Maximum wait time (seconds) for credential check to
                        complete.

Sub Command: checksum

Description: Compute checksums for pipeline contents locally for a given JSON file

Usage: checksum  [-h] --pipeline PIPELINE

optional arguments:
  -h, --help           show this help message and exit
  --pipeline PIPELINE  File containing pipeline information in JSON format

Sub Command: content-to-object

Description: Convert data from a column into objects

Usage: content-to-object  [-h] --inpcol INPUT_CONTENT_COLUMN --outcol OUTPUT_COLUMN --file
            INPUT_FILE --outfolder OUTPUT_FOLDER --outfile OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --inpcol INPUT_CONTENT_COLUMN
                        Name of the column in input that contains the data
  --outcol OUTPUT_COLUMN
                        Column name where object names will be inserted
  --file INPUT_FILE     Input csv filename
  --outfolder OUTPUT_FOLDER
                        Folder name where objects will be stored
  --outfile OUTPUT_FILE
                        Name of output csv file that has object location
                        stored

Sub Command: copy-to-objstore

Description: Deploy files specified in a ZIP file to the Object Store

Usage: copy-to-objstore  [-h] --file ZIP_FILENAME [--verify] [--force]

optional arguments:
  -h, --help           show this help message and exit
  --file ZIP_FILENAME  ZIP filename (or URL) containing bucket/object entries.
                       If bucket name is 'default', this tool will use the
                       target bucket as specified in configuration.
  --verify             Do not upload files, only verify if the objects in the
                       ZIP file exists on the target object store
  --force              Upload the files even if they exist on the target
                       system with same size

Sub Command: dataset-add

Description: Add a new dataset to the object store

Usage: dataset-add  [-h] --name NAME --file INPUT_FILE [--local_format LOCAL_FORMAT]
            [--remote_format REMOTE_FORMAT] [--force] [--schema SCHEMA_NAME]
            [--json]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Dataset name
  --file INPUT_FILE     CSV or parquet formatted file from which dataset will
                        be added
  --local_format LOCAL_FORMAT
                        Local file format (auto or csv or parquet or json).
                        'auto' means format will be determined from filename
                        extension
  --remote_format REMOTE_FORMAT
                        Remote file format (csv or parquet)
  --force               Do not validate data against schema
  --schema SCHEMA_NAME  Validate data against given schema. By default, data
                        is validated against the schema with the same name as
                        dataset.
  --json                Print detailed information in JSON format instead of
                        tabular format

Sub Command: dataset-delete

Description: Delete a dataset from the object store

Usage: dataset-delete  [-h] --name NAME [--yes]

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Dataset name
  --yes        Delete without prompting

Sub Command: dataset-get

Description: Download a dataset from the object store

Usage: dataset-get  [-h] --name NAME [--tofile SAVE_TO_FILE] [--json]
            [--format DATA_FORMAT] [--viz]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Dataset name
  --tofile SAVE_TO_FILE
                        Save the data to the specified file (CSV or JSON if
                        --json is specified)
  --json                Export data as a JSON formatted rows. ** Deprecated.
                        Use --format **
  --format DATA_FORMAT  Save the downloaded data in the specified format.
                        Valid values are csv, json, parquet. If format is
                        'auto', format is determined from extension
  --viz                 Open Dataframe visualizer to show the data

Sub Command: dataset-list

Description: List datasets from the object store

Usage: dataset-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: dataset-meta

Description: Download metadata for a dataset from the object store

Usage: dataset-meta  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Dataset name

Sub Command: deployment-activity

Description: List recent deployment activities

Usage: deployment-activity  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: deployment-add

Description: Add a new Deployment to the repository. Deployment specification must be in valid YML format

Usage: deployment-add  [-h] --file INPUT_FILE [--overwrite]

optional arguments:
  -h, --help         show this help message and exit
  --file INPUT_FILE  YAML file containing Deployment specification
  --overwrite        Overwrite even if a ruleset already exists with a name.

Sub Command: deployment-audit-report

Description: Display Audit report for a given deployment ID

Usage: deployment-audit-report  [-h] --id DEPLOYMENT_ID [--json]

optional arguments:
  -h, --help          show this help message and exit
  --id DEPLOYMENT_ID  Deployment ID
  --json              Print detailed information in JSON format instead of
                      tabular format

Sub Command: deployment-delete

Description: Delete an existing deployment from repository

Usage: deployment-delete  [-h] --id DEP_ID

optional arguments:
  -h, --help   show this help message and exit
  --id DEP_ID  Deployment ID

Sub Command: deployment-dependencies

Description: List all artifact dependencies used by the deployment

Usage: deployment-dependencies  [-h] --id DEPLOYMENT_ID [--json]

optional arguments:
  -h, --help          show this help message and exit
  --id DEPLOYMENT_ID  Deployment ID
  --json              Print detailed information in JSON format instead of
                      tabular format

Sub Command: deployment-disable

Description: Disable an existing deployment if it is not already disabled

Usage: deployment-disable  [-h] --id DEP_ID

optional arguments:
  -h, --help   show this help message and exit
  --id DEP_ID  Deployment ID

Sub Command: deployment-enable

Description: Enable an existing deployment if it is not already enabled

Usage: deployment-enable  [-h] --id DEP_ID

optional arguments:
  -h, --help   show this help message and exit
  --id DEP_ID  Deployment ID

Sub Command: deployment-map

Description: Print service map information in JSON format for the given deployment

Usage: deployment-map  [-h] --id DEPLOYMENT_ID

optional arguments:
  -h, --help          show this help message and exit
  --id DEPLOYMENT_ID  Deployment ID

Sub Command: deployment-status

Description: Display status of all deployments

Usage: deployment-status  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: deployment-svcs-status

Description: List current status of all service pipelines in a deployment

Usage: deployment-svcs-status  [-h] --id DEPLOYMENT_ID [--json]

optional arguments:
  -h, --help          show this help message and exit
  --id DEPLOYMENT_ID  Deployment ID
  --json              Print detailed information in JSON format instead of
                      tabular format

Sub Command: event-gw-status

Description: List status of all ingestion endpoints at all the event gateways

Usage: event-gw-status  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: evict

Description: Evict a job from a worker pod

Usage: evict  [-h] --jobid JOBID [--yes]

optional arguments:
  -h, --help     show this help message and exit
  --jobid JOBID  RDA worker jobid. If partial must match only one job.
  --yes          Do not prompt for confirmation, evict if job is found

Sub Command: file-ops

Description: Perform various operations on local files

Usage: file-ops copy                      Copy dataframe from one format to another. Format is inferred from extension. Examples are csv, parquet, json
  csv-to-parquet            Copy data from CSV to parquet file using chunking
  test-formats              Run performance test on various formats

positional arguments:
  subcommand  File ops sub-command

optional arguments:
  -h, --help  show this help message and exit

Sub Command: file-to-object

Description: Convert files from a column into objects

Usage: file-to-object  [-h] --inpcol INPUT_FILENAME_COLUMN --outcol OUTPUT_COLUMN --file
            INPUT_FILE --outfolder OUTPUT_FOLDER --outfile OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --inpcol INPUT_FILENAME_COLUMN
                        Name of the column in input that contains the
                        filenames
  --outcol OUTPUT_COLUMN
                        Column name where object names will be inserted
  --file INPUT_FILE     Input csv filename
  --outfolder OUTPUT_FOLDER
                        Folder name where objects will be stored
  --outfile OUTPUT_FILE
                        Name of output csv file that has object location
                        stored

Sub Command: fmt-template-delete

Description: Delete Formatting Template

Usage: fmt-template-delete  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Formatting Template Name

Sub Command: fmt-template-get

Description: Get Formatting Template

Usage: fmt-template-get  [-h] --name NAME [--tofile SAVE_TO_FILE] [--json]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Formatting Template Name
  --tofile SAVE_TO_FILE
                        Save the data to the specified file
  --json                Export data as a JSON formatted rows. ** Deprecated.
                        Use --format **

Sub Command: fmt-template-list

Description: List Formatting Templates

Usage: fmt-template-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: healthcheck

Description: Perform healthcheck on each of the Pods

Usage: healthcheck  [-h] [--json] [--type POD_TYPE] [--infra] [--apps] [--simple]

optional arguments:
  -h, --help       show this help message and exit
  --json           Print detailed information in JSON format instead of
                   tabular format
  --type POD_TYPE  Show only the pods that match the specified pod type
  --infra          List only RDA Infra pods. not compatible with --apps option
  --apps           List only RDA App pods. not compatible with --infra option
  --simple         When showing in tabular format, show in a easy to read
                   format.

Sub Command: invoke-agent-bot

Description: Invoke a bot published by an agent

Usage: invoke-agent-bot  [-h] --type AGENT_TYPE --group AGENT_GROUP --bot BOT_NAME
            [--query QUERY] [--input INPUT_FILE] [--output OUTPUT_FILE]

optional arguments:
  -h, --help            show this help message and exit
  --type AGENT_TYPE     Agent type
  --group AGENT_GROUP   Agent group
  --bot BOT_NAME        Bot name
  --query QUERY         Bot Query (CFXQL)
  --input INPUT_FILE    Input Dataframe (CSV File)
  --output OUTPUT_FILE  Output Dataframe (CSV File)

Sub Command: jobs

Description: List all jobs for the current tenant

Usage: jobs  [-h] [--json] [--all]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format
  --all       Retrieve all jobs not just active jobs

Sub Command: logarchive-add-platform

Description: Add current platform Minio as logarchive repository

Usage: logarchive-add-platform  [-h] --repo REPO --prefix OBJECT_PREFIX
            [--retention RETENTION_DAYS]

optional arguments:
  -h, --help            show this help message and exit
  --repo REPO           Log archive repository name to be created
  --prefix OBJECT_PREFIX
                        Object prefix to be used for the archive
  --retention RETENTION_DAYS
                        Data retention period in number of days. If not
                        specified, RDA will not manage the data retention.

Sub Command: logarchive-data-read

Description: Read the data from given archive for a specified time interval

Usage: logarchive-data-read  [-h] --repo REPO --name ARCHIVE_NAME [--from TIMESTAMP]
            [--minutes MINUTES] [--max_rows MAX_ROWS] [--speed SPEED] [--line]

optional arguments:
  -h, --help           show this help message and exit
  --repo REPO          Log archive repository name
  --name ARCHIVE_NAME  Name of the log archive within the repository
  --from TIMESTAMP     From Date & time in text format (ex: ISO format).
                       Timezone must be UTC. If not specified, it will use
                       current time minus specified minutes
  --minutes MINUTES    Number of minutes from specified date & time. Default
                       is 15
  --max_rows MAX_ROWS  If value is specified > 0, stop after reading max_rows
                       from the archive
  --speed SPEED        Replay speed. 0 means no delay, 1.0 means closer to
                       original rate, < 1.0 means slower, > 1.0 means faster
  --line               Instead of JSON format, print one message per line
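Several logarchive sub-commands take --from as a date and time in text (e.g. ISO) format with the timezone in UTC. A small sketch for producing such a timestamp; treating ISO-8601 without a timezone suffix as acceptable is an assumption here:

```python
from datetime import datetime, timedelta, timezone

# UTC timestamp for "30 minutes ago", formatted as ISO-8601 without a
# timezone suffix (the --from help text says the timezone must be UTC)
ts = (datetime.now(timezone.utc) - timedelta(minutes=30)).replace(tzinfo=None).isoformat()
print(ts)
```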

Sub Command: logarchive-data-size

Description: Show size of data available for given archive for a specified time interval

Usage: logarchive-data-size  [-h] --repo REPO --name ARCHIVE_NAME [--from TIMESTAMP]
            [--minutes MINUTES] [--json]

optional arguments:
  -h, --help           show this help message and exit
  --repo REPO          Log archive repository name
  --name ARCHIVE_NAME  Name of the log archive within the repository
  --from TIMESTAMP     From Date & time in text format (ex: ISO format).
                       Timezone must be UTC. If not specified, it will use
                       current time minus specified minutes
  --minutes MINUTES    Number of minutes from specified date & time. Default
                       is 15
  --json               Print detailed information in JSON format instead of
                       tabular format

Sub Command: logarchive-download

Description: Download the data from given archive for a specified time interval

Usage: logarchive-download  [-h] --repo REPO --name ARCHIVE_NAME [--from TIMESTAMP]
            [--minutes MINUTES] --out OUTPUT_DIR [--flatten]

optional arguments:
  -h, --help           show this help message and exit
  --repo REPO          Log archive repository name
  --name ARCHIVE_NAME  Name of the log archive within the repository
  --from TIMESTAMP     From Date & time in text format (ex: ISO format).
                       Timezone must be UTC. If not specified, it will use
                       current time minus specified minutes
  --minutes MINUTES    Number of minutes from specified date & time. Default
                       is 15
  --out OUTPUT_DIR     Output directory where to save the downloaded data
  --flatten            Flatten the directory structure of the downloaded
                       files, which are otherwise stored in a
                       yyyy/mm/dd/HH/MM/ directory layout
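A typical download might look like this (repository and archive names are illustrative):

```shell
# Download the last 30 minutes of archive "app-logs" into ./logs, collapsing
# the yyyy/mm/dd/HH/MM/ directory layout into a single directory
python3 rdac.py logarchive-download --repo default --name app-logs \
    --minutes 30 --out ./logs --flatten
```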

Sub Command: logarchive-names

Description: List archive names in a given repository

Usage: logarchive-names  [-h] --repo REPO [--json]

optional arguments:
  -h, --help   show this help message and exit
  --repo REPO  Name of the log archive repository
  --json       Print detailed information in JSON format instead of tabular
               format

Sub Command: logarchive-replay

Description: Replay the data from given archive for a specified time interval with specified label

Usage: logarchive-replay  [-h] --repo REPO --name ARCHIVE_NAME [--from TIMESTAMP]
            [--minutes MINUTES] [--max_rows MAX_ROWS] [--speed SPEED]
            [--batch_size BATCH_SIZE] --stream STREAM [--label LABEL] --site
            SITE

optional arguments:
  -h, --help            show this help message and exit
  --repo REPO           Log archive repository name
  --name ARCHIVE_NAME   Name of the log archive within the repository
  --from TIMESTAMP      From Date & time in text format (ex: ISO format).
                        Timezone must be UTC. If not specified, it will use
                        current time minus specified minutes
  --minutes MINUTES     Number of minutes from specified date & time. Default
                        is 15
  --max_rows MAX_ROWS   If value is specified > 0, stop after reading max_rows
                        from the archive
  --speed SPEED         Replay speed. 0 means no delay, 1.0 means closer to
                        original rate, < 1.0 means slower, > 1.0 means faster
  --batch_size BATCH_SIZE
                        Number of rows to return for each iteration
  --stream STREAM       Name of the stream to write to
  --label LABEL         Label for the replay job
  --site SITE           Site name to run this on a worker
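For example, to replay an hour of archived data onto a stream at twice the original rate (repository, stream, label, and site names here are placeholders):

```shell
python3 rdac.py logarchive-replay --repo default --name app-logs \
    --minutes 60 --speed 2.0 --stream replayed-logs \
    --label replay-test --site dc-east
```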

Sub Command: logarchive-repos

Description: List of all log archive repositories

Usage: logarchive-repos  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: merge-logarchive-files

Description: Merge multiple locally downloaded Log Archive (.gz) files into a single CSV/Parquet file

Usage: merge-logarchive-files  [-h] --folder FOLDER --tofile TOFILE [--sample SAMPLE_RATE]
            [--ts TIMESTAMP]

optional arguments:
  -h, --help            show this help message and exit
  --folder FOLDER       Path to the folder where locally downloaded .gz files
                        are available
  --tofile TOFILE       Save the output to specified file
  --sample SAMPLE_RATE  Data sample rate must be >0 and <= 1.0
  --ts TIMESTAMP        Timestamp column, if specified will sort the data
                        after merge
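For example, after a logarchive-download, the resulting .gz files can be merged into a single sorted CSV (the `timestamp` column name is an assumption about your data):

```shell
# Merge downloaded .gz files in ./logs into one CSV, keeping a 10% sample
# and sorting by the "timestamp" column
python3 rdac.py merge-logarchive-files --folder ./logs \
    --tofile merged.csv --sample 0.1 --ts timestamp
```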

Sub Command: object-add

Description: Add a new object to the object store

Usage: object-add  [-h] --name NAME --folder FOLDER --file INPUT_FILE
            [--descr DESCRIPTION] [--overwrite OVERWRITE]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Object name
  --folder FOLDER       Folder name on the object storage
  --file INPUT_FILE     file from which object will be added
  --descr DESCRIPTION   Description
  --overwrite OVERWRITE
                        If file already exists, overwrite without prompting.
                        Accepted values (yes/no)

Sub Command: object-delete

Description: Delete object from the object store

Usage: object-delete  [-h] --name NAME --folder FOLDER

optional arguments:
  -h, --help       show this help message and exit
  --name NAME      Object name
  --folder FOLDER  Folder name on the object storage

Sub Command: object-delete-list

Description: Delete list of objects

Usage: object-delete-list  [-h] --inpcol INPUT_OBJECT_COLUMN --file INPUT_FILE --outfile
            OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --inpcol INPUT_OBJECT_COLUMN
                        Column with object names
  --file INPUT_FILE     Input csv filename
  --outfile OUTPUT_FILE
                        Name of output csv file that has result for deletion

Sub Command: object-get

Description: Download an object from the object store

Usage: object-get  [-h] --name NAME --folder FOLDER [--tofile SAVE_TO_FILE]
            [--todir SAVE_TO_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Object name
  --folder FOLDER       Folder name on the object storage
  --tofile SAVE_TO_FILE
                        Save the downloaded object to specified file
  --todir SAVE_TO_DIR   Save the downloaded object to specified directory
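A typical object store round trip, with illustrative object and folder names:

```shell
# Upload a local file as an object, overwriting any existing copy
python3 rdac.py object-add --name model-config --folder configs \
    --file ./model-config.json --descr "Model configuration" --overwrite yes

# Download it back into a local directory
python3 rdac.py object-get --name model-config --folder configs --todir ./restored
```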

Sub Command: object-list

Description: List objects from the object store

Usage: object-list  [-h] [--folder FOLDER] [--json]

optional arguments:
  -h, --help       show this help message and exit
  --folder FOLDER  Folder name on the object storage
  --json           Print detailed information in JSON format instead of
                   tabular format

Sub Command: object-meta

Description: Download metadata for an object from the object store

Usage: object-meta  [-h] --name NAME --folder FOLDER

optional arguments:
  -h, --help       show this help message and exit
  --name NAME      Object name
  --folder FOLDER  Folder name on the object storage

Sub Command: object-to-content

Description: Convert object pointers from a column into content

Usage: object-to-content  [-h] --inpcol INPUT_OBJECT_COLUMN --outcol OUTPUT_COLUMN --file
            INPUT_FILE --outfile OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --inpcol INPUT_OBJECT_COLUMN
                        Name of the column in input that contains the object
                        name
  --outcol OUTPUT_COLUMN
                        Column name where content will be inserted
  --file INPUT_FILE     Input csv file
  --outfile OUTPUT_FILE
                        Name of output csv file that has content inserted

Sub Command: object-to-file

Description: Convert object pointers from a column into file

Usage: object-to-file  [-h] --inpcol INPUT_OBJECT_COLUMN --outcol OUTPUT_COLUMN --file
            INPUT_FILE --outfile OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --inpcol INPUT_OBJECT_COLUMN
                        Name of the column in input that contains the objects
  --outcol OUTPUT_COLUMN
                        Column name where filenames need to be inserted
  --file INPUT_FILE     Input csv file
  --outfile OUTPUT_FILE
                        Name of output csv file that has filename inserted

Sub Command: output

Description: Get the output of a Job using jobid.

Usage: output  [-h] --jobid JOBID [--tofile SAVE_TO_FILE] [--format DATA_FORMAT]
            [--viz]

optional arguments:
  -h, --help            show this help message and exit
  --jobid JOBID         Job ID (either partial or complete)
  --tofile SAVE_TO_FILE
                        Save the data to the specified file (CSV)
  --format DATA_FORMAT  Format for the saved file. Valid values are auto, csv,
                        json, parquet. If 'auto', the format will be
                        determined from the file extension
  --viz                 Open Dataframe visualizer to show the data
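For example, to save a completed job's output as CSV (job IDs may be partial, so a unique prefix is enough; the ID below is hypothetical):

```shell
python3 rdac.py output --jobid 3f9a --tofile result.csv --format csv
```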

Sub Command: pipeline-delete

Description: Delete pipeline by name and version

Usage: pipeline-delete  [-h] --name NAME --version VERSION

optional arguments:
  -h, --help         show this help message and exit
  --name NAME        Pipeline name
  --version VERSION  Version for pipeline

Sub Command: pipeline-get

Description: Get pipeline by name and version

Usage: pipeline-get  [-h] --name NAME --version VERSION [--tofile SAVE_TO_FILE]
            [--json]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Pipeline name
  --version VERSION     Pipeline version
  --tofile SAVE_TO_FILE
                        Save the downloaded pipeline to specified file
  --json                Print detailed information in JSON format instead of
                        tabular format

Sub Command: pipeline-get-versions

Description: Get versions for the pipeline

Usage: pipeline-get-versions  [-h] --name NAME [--json]

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Get versions of pipeline specified by name
  --json       Print detailed information in JSON format instead of tabular
               format

Sub Command: pipeline-list

Description: List published pipelines

Usage: pipeline-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: pipeline-publish

Description: Publish the pipeline on a worker pod

Usage: pipeline-publish  [-h] --pipeline PIPELINE --name NAME --version VERSION --category
            CATEGORY [--usecase USECASE] [--group WORKER_GROUP]
            [--site WORKER_SITE] [--lfilter LABEL_FILTER]
            [--rfilter RESOURCE_FILTER] [--maxwait MAX_WAIT]

optional arguments:
  -h, --help            show this help message and exit
  --pipeline PIPELINE   File containing pipeline contents
  --name NAME           Pipeline name
  --version VERSION     Pipeline version
  --category CATEGORY   Pipeline category
  --usecase USECASE     Pipeline usecase
  --group WORKER_GROUP  Deprecated. Use --site option. Specify a worker site
                        name. If not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker site name. If not specified, will use
                        any available worker.
  --lfilter LABEL_FILTER
                        CFXQL style query to narrow down workers using their
                        labels
  --rfilter RESOURCE_FILTER
                        CFXQL style query to narrow down workers using their
                        resources
  --maxwait MAX_WAIT    Maximum wait time (seconds) for credential check to
                        complete.

Sub Command: pipeline-published-run

Description: Run a published pipeline on a worker pod

Usage: pipeline-published-run  [-h] --name NAME --version VERSION [--group WORKER_GROUP]
            [--site WORKER_SITE] [--lfilter LABEL_FILTER]
            [--rfilter RESOURCE_FILTER] [--maxwait MAX_WAIT]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Pipeline name
  --version VERSION     Pipeline version
  --group WORKER_GROUP  Deprecated. Use --site option. Specify a worker site
                        name. If not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker site name. If not specified, will use
                        any available worker.
  --lfilter LABEL_FILTER
                        CFXQL style query to narrow down workers using their
                        labels
  --rfilter RESOURCE_FILTER
                        CFXQL style query to narrow down workers using their
                        resources
  --maxwait MAX_WAIT    Maximum wait time (seconds) for credential check to
                        complete.
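Publishing and then running a pipeline could look like this (pipeline name, version, and site are placeholders):

```shell
# Publish a local pipeline file under a name, version and category
python3 rdac.py pipeline-publish --pipeline my-pipeline.json \
    --name my-pipeline --version "2022_05_20_1" --category demo

# Run the published version on a worker at a specific site
python3 rdac.py pipeline-published-run --name my-pipeline \
    --version "2022_05_20_1" --site dc-east
```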

Sub Command: pods

Description: List all pods for the current tenant

Usage: pods  [-h] [--json] [--type POD_TYPE] [--versions] [--infra] [--apps]

optional arguments:
  -h, --help       show this help message and exit
  --json           Print detailed information in JSON format instead of
                   tabular format
  --type POD_TYPE  Show only the pods that match the specified pod type
  --versions       Show versions for each pod in tabular format, not
                   compatible with --json option
  --infra          List only RDA Infra pods; not compatible with --apps option
  --apps           List only RDA App pods; not compatible with --infra option

Sub Command: pstream-add

Description: Add a new Persistent stream

Usage: pstream-add  [-h] --name NAME [--index INDEX_NAME] [--attr [ATTRS [ATTRS ...]]]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Persistent Stream name
  --index INDEX_NAME    OpenSearch index name to store Persistent Stream
  --attr [ATTRS [ATTRS ...]]
                        Optional name=value pairs to add to attributes of
                        persistent stream

Sub Command: pstream-delete

Description: Delete a persistent stream

Usage: pstream-delete  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Persistent Stream name

Sub Command: pstream-get

Description: Get information about a persistent stream

Usage: pstream-get  [-h] --name NAME [--json]

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Persistent Stream name
  --json       Print in JSON format instead of text format

Sub Command: pstream-list

Description: List persistent streams

Usage: pstream-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: pstream-query

Description: Query persistent stream data via collector

Usage: pstream-query  [-h] --name NAME [--max_rows MAX_ROWS] [--query CFXQL_QUERY]
            [--aggs AGGS] [--groupby GROUPBY] [--ts TS_COLUMN] [--json]
            [--no_sort]

optional arguments:
  -h, --help           show this help message and exit
  --name NAME          Persistent Stream name
  --max_rows MAX_ROWS  Max rows in output
  --query CFXQL_QUERY  CFXQL Query
  --aggs AGGS          Optional aggs, specified as 'sum:field_name'
  --groupby GROUPBY    Comma separated list of columns to groupby. Used only
                       when --aggs is used
  --ts TS_COLUMN       Timestamp column for sorting. Default is 'timestamp'
  --json               Print detailed information in JSON format instead of
                       tabular format
  --no_sort            Do not sort by timestamp field
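For example (the stream name, column names, and CFXQL expressions below are illustrative):

```shell
# Last 100 matching rows from a persistent stream
python3 rdac.py pstream-query --name alerts --max_rows 100 \
    --query "severity = 'critical'"

# Aggregate a numeric field per group, using the documented 'sum:field_name' form
python3 rdac.py pstream-query --name alerts --aggs "sum:duration" \
    --groupby device --json
```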

Sub Command: pstream-tail

Description: Query a persistent stream and continue to query for incremental data every few seconds

Usage: pstream-tail  [-h] --name NAME [--max_rows MAX_ROWS] [--query CFXQL_QUERY]
            [--ts TS_COLUMN] [--format FORMAT] [--out_cols OUTPUT_COLUMNS]
            [--json]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Persistent Stream name
  --max_rows MAX_ROWS   Max rows in output for initial query
  --query CFXQL_QUERY   CFXQL Query
  --ts TS_COLUMN        Timestamp column for sorting. Default is 'timestamp'
  --format FORMAT       Format string in {field1:<8} {field2:,.2f} style
  --out_cols OUTPUT_COLUMNS
                        Comma separated list of column names to be included in
                        output. If not specified, all columns will be included
  --json                Print detailed information in JSON format instead of
                        tabular format

Sub Command: purge-outputs

Description: Purge outputs of completed jobs

Usage: purge-outputs  [-h] --hours OLDER_THAN_HOURS

optional arguments:
  -h, --help            show this help message and exit
  --hours OLDER_THAN_HOURS
                        Purge jobs older than specified number of hours. Must
                        be >= 1

Sub Command: read-stream

Description: Read messages from an RDA stream

Usage: read-stream  [-h] --name STREAM_NAME [--group GROUP] [--delay DELAY]
            [--show_rate]

optional arguments:
  -h, --help          show this help message and exit
  --name STREAM_NAME  Stream name to read from
  --group GROUP       Message consumer group name
  --delay DELAY       Simulate processing delay between each read message
  --show_rate         Do not print messages, just show rate per minute and
                      counts

Sub Command: run

Description: Run a pipeline on a worker pod

Usage: run  [-h] --pipeline PIPELINE [--nowait] [--log LOGLEVEL]
            [--group WORKER_GROUP] [--site WORKER_SITE]
            [--lfilter LABEL_FILTER] [--rfilter RESOURCE_FILTER] [--dryrun]
            [--save_jobid SAVE_JOBID]

optional arguments:
  -h, --help            show this help message and exit
  --pipeline PIPELINE   File containing pipeline information in JSON format
  --nowait              If specified, command does not wait for the completion
                        of the pipeline
  --log LOGLEVEL        Specify logging level as none,
                        DEBUG,INFO,WARNING,ERROR,CRITICAL
  --group WORKER_GROUP  Deprecated. Use --site option. Specify a worker site
                        name. If not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker site name. If not specified, will use
                        any available worker.
  --lfilter LABEL_FILTER
                        CFXQL style query to narrow down workers using their
                        labels
  --rfilter RESOURCE_FILTER
                        CFXQL style query to narrow down workers using their
                        resources
  --dryrun              Do not run pipeline but show which worker nodes would
                        have been selected for run
  --save_jobid SAVE_JOBID
                        Save the jobid to a specified file
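For example (the pipeline file and site name are placeholders):

```shell
# Run a pipeline on any worker at site "dc-east", saving the job ID for later
python3 rdac.py run --pipeline my-pipeline.json --site dc-east \
    --save_jobid last-job.txt

# Show which workers would be selected, without actually running the pipeline
python3 rdac.py run --pipeline my-pipeline.json --site dc-east --dryrun
```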

Sub Command: run-get-output

Description: Run a pipeline on a worker, wait for the completion, get the final output

Usage: run-get-output  [-h] [--config CONFIG] [--site SITE] [--pipeline PIPELINE]
            [--max_rows MAX_ROWS] [--md] [--onerow] [--vault] [--tocsv TO_CSV]

optional arguments:
  -h, --help           show this help message and exit
  --config CONFIG      Additional configurations defined in a YAML or JSON
                       file
  --site SITE          Site name regex
  --pipeline PIPELINE  Plain text Pipeline filename. If not specified, will
                       read from STDIN.
  --max_rows MAX_ROWS  Max rows to print on screen.
  --md                 Print in markdown format on screen instead of text
                       table format
  --onerow             Print first row in a vertical format (in addition to
                       table)
  --vault              Use RDA Vault for credentials if not specified locally
                       in a JSON file
  --tocsv TO_CSV       Save the output to CSV formatted file

Sub Command: schedule-add

Description: Add a new schedule for pipeline execution

Usage: schedule-add  [-h] --pipeline PIPELINE [--log LOGLEVEL] --name SCHEDULENAME
            --type SCHEDULE_TYPE [--startdate STARTDATE]
            [--starttime STARTTIME] [--enddate ENDDATE] [--weekdays WEEKDAYS]
            [--freq FREQUENCY] [--tz TIMEZONE] --group GROUP
            [--retries RETRIES] [--retry-intervals RETRYINTERVALS]
            [--parallel-instances PARALLELINSTANCES]

optional arguments:
  -h, --help            show this help message and exit
  --pipeline PIPELINE   File containing pipeline contents
  --log LOGLEVEL        Specify logging level as none,
                        DEBUG,INFO,WARNING,ERROR,CRITICAL
  --name SCHEDULENAME   Schedule name to use
  --type SCHEDULE_TYPE  Schedule Type (Once, Minutes, Hourly, Daily, Weekly,
                        Always)
  --startdate STARTDATE
                        Start date for schedule in YYYY-MM-DD format
  --starttime STARTTIME
                        Start time for schedule in HH:MM format
  --enddate ENDDATE     End date for schedule in YYYY-MM-DD format
  --weekdays WEEKDAYS   Comma separated day(s) of the week. Mandatory for the
                        Weekly schedule type. Possible values: 'MON', 'TUE',
                        'WED', 'THU', 'FRI', 'SAT', 'SUN'
  --freq FREQUENCY      Default 1 except for minutes, where it is 15 minutes
  --tz TIMEZONE         Timezone name
  --group GROUP         Worker group name
  --retries RETRIES     Maximum Retries
  --retry-intervals RETRYINTERVALS
                        Retry intervals. Example 5,10,15. Delay time interval
                        in minutes between each retry
  --parallel-instances PARALLELINSTANCES
                        Number of parallel instances, in the range 1-10
                        (e.g. 1, 2 or 3)
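For example, a nightly schedule with retries (schedule, pipeline, and group names are illustrative):

```shell
# Run a pipeline every day at 02:30 UTC on worker group "default",
# retrying up to 3 times at 5, 10 and 15 minute intervals
python3 rdac.py schedule-add --pipeline my-pipeline.json \
    --name nightly-run --type Daily --starttime "02:30" --tz UTC \
    --group default --retries 3 --retry-intervals 5,10,15
```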

Sub Command: schedule-delete

Description: Delete an existing schedule

Usage: schedule-delete  [-h] --scheduleId SCHEDULEID

optional arguments:
  -h, --help            show this help message and exit
  --scheduleId SCHEDULEID
                        Schedule ID

Sub Command: schedule-edit

Description: Edit an existing schedule

Usage: schedule-edit  [-h] --scheduleId SCHEDULEID --type SCHEDULE_TYPE
            [--startdate STARTDATE] [--starttime STARTTIME]
            [--enddate ENDDATE] [--weekdays WEEKDAYS] [--freq FREQUENCY]
            [--tz TIMEZONE] [--group GROUP] [--retries RETRIES]
            [--retry-intervals RETRYINTERVALS]
            [--parallel-instances PARALLELINSTANCES]

optional arguments:
  -h, --help            show this help message and exit
  --scheduleId SCHEDULEID
                        Schedule ID
  --type SCHEDULE_TYPE  Schedule Type (Once, Minutes, Hourly, Daily, Weekly,
                        Always)
  --startdate STARTDATE
                        Start date for schedule in YYYY-MM-DD format
  --starttime STARTTIME
                        Start time for schedule in HH:MM format
  --enddate ENDDATE     End Date for schedule in YYYY-MM-DD format
  --weekdays WEEKDAYS   Comma separated day(s) of the week. Mandatory for the
                        Weekly schedule type. Possible values: 'MON', 'TUE',
                        'WED', 'THU', 'FRI', 'SAT', 'SUN'
  --freq FREQUENCY      Default 1 except for minutes, where it is 15 minutes
  --tz TIMEZONE         Timezone name
  --group GROUP         Worker group name
  --retries RETRIES     Maximum Retries
  --retry-intervals RETRYINTERVALS
                        Retry intervals
  --parallel-instances PARALLELINSTANCES
                        Number of parallel instances, in the range 1-10

Sub Command: schedule-info

Description: Get details of a schedule

Usage: schedule-info  [-h] --scheduleId SCHEDULEID [--json]

optional arguments:
  -h, --help            show this help message and exit
  --scheduleId SCHEDULEID
                        Schedule ID
  --json                Print detailed information in JSON format instead of
                        tabular format

Sub Command: schedule-list

Description: List all schedules

Usage: schedule-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: schedule-update-status

Description: Update status of an existing schedule

Usage: schedule-update-status  [-h] --scheduleId SCHEDULEID --status STATUS

optional arguments:
  -h, --help            show this help message and exit
  --scheduleId SCHEDULEID
                        Schedule ID
  --status STATUS       Status

Sub Command: schema-add

Description: Add a new schema to the object store

Usage: schema-add  [-h] --name NAME --file INPUT_FILE

optional arguments:
  -h, --help         show this help message and exit
  --name NAME        Schema name
  --file INPUT_FILE  File (or URL) containing the json schema as per
                     (https://json-schema.org/specification.html)

Sub Command: schema-delete

Description: Delete a schema from the object store

Usage: schema-delete  [-h] --name NAME [--yes]

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Schema name
  --yes        Delete without prompting

Sub Command: schema-get

Description: Download a schema from the object store

Usage: schema-get  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Schema name

Sub Command: schema-list

Description: List schemas from the object store

Usage: schema-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: secret-add

Description: Add a new secret to the vault

Usage: secret-add  [-h] --type SECRET_TYPE

optional arguments:
  -h, --help          show this help message and exit
  --type SECRET_TYPE  Secret type (use secret-list command to see available
                      secret types)

Sub Command: secret-list

Description: List names and types of all secrets in vault

Usage: secret-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: secret-types

Description: List of all available secret types

Usage: secret-types  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: site-profile-add

Description: Add a new site profile

Usage: site-profile-add  [-h] --name NAME --site SITE [--description DESCRIPTION]
            [--sources SOURCES]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Name of Site Profile
  --site SITE           Site name or a regular expression
  --description DESCRIPTION
                        Description of Site Profile
  --sources SOURCES     Comma separated list of sources

Sub Command: site-profile-delete

Description: Delete a site profile

Usage: site-profile-delete  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Name of the site profile to delete

Sub Command: site-profile-edit

Description: Update a site profile

Usage: site-profile-edit  [-h] --name NAME [--site SITE] [--description DESCRIPTION]
            [--sources SOURCES]

optional arguments:
  -h, --help            show this help message and exit
  --name NAME           Name of Site Profile
  --site SITE           Site name or a regular expression
  --description DESCRIPTION
                        Description of Site Profile
  --sources SOURCES     Comma separated list of sources

Sub Command: site-profile-get

Description: Get a site profile data

Usage: site-profile-get  [-h] --name NAME

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Name of the site profile to display

Sub Command: site-profile-list

Description: List all site profiles.

Usage: site-profile-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: site-summary

Description: Show summary by Site and Overall

Usage: site-summary  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: stack-cache-list

Description: List cached stack entries from asset-dependency service

Usage: stack-cache-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print results in JSON format

Sub Command: stack-impact-distance

Description: Find the impact distances in a stack using asset-dependency service, load search criteria from a JSON file

Usage: stack-impact-distance  [-h] --name STACK_NAME --search_file SEARCH_FILE [--json]

optional arguments:
  -h, --help            show this help message and exit
  --name STACK_NAME     Stack name
  --search_file SEARCH_FILE
                        Filename with JSON based search criteria
  --json                Print results in JSON format

Sub Command: stack-search

Description: Search in a stack using asset-dependency service

Usage: stack-search  [-h] --name STACK_NAME --values VALUES --attrs ATTRS --types TYPES
            [--exclude EXCLUDE] [--depth DEPTH]

optional arguments:
  -h, --help         show this help message and exit
  --name STACK_NAME  Stack name
  --values VALUES    Attribute values to search for. Multiple values may be
                     specified separated by a comma
  --attrs ATTRS      Comma separated list of node attribute names
  --types TYPES      Comma separated list of node types to search
  --exclude EXCLUDE  Comma separated list of node types to exclude in search
  --depth DEPTH      Max depth
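For example (the stack name, attribute names, and node types below are placeholders):

```shell
# Search nodes of type "server" whose "hostname" attribute matches either value
python3 rdac.py stack-search --name prod-stack \
    --values web01,web02 --attrs hostname --types server --depth 3
```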

Sub Command: stack-search-json

Description: Search in a stack using asset-dependency service, load search criteria from a JSON file

Usage: stack-search-json  [-h] --name STACK_NAME --search_file SEARCH_FILE

optional arguments:
  -h, --help            show this help message and exit
  --name STACK_NAME     Stack name
  --search_file SEARCH_FILE
                        Filename with JSON based search criteria

Sub Command: staging-area-add

Description: Add or update staging area

Usage: staging-area-add  [-h] --file INPUT_FILE [--overwrite]

optional arguments:
  -h, --help         show this help message and exit
  --file INPUT_FILE  YAML file containing staging area definition
  --overwrite        Overwrite even if a staging area with the same name
                     already exists

Sub Command: staging-area-delete

Description: Delete a staging area

Usage: staging-area-delete  [-h] --name STAGING_AREA_NAME

optional arguments:
  -h, --help            show this help message and exit
  --name STAGING_AREA_NAME
                        Name of the staging area to delete

Sub Command: staging-area-get

Description: Get YAML data for a staging area

Usage: staging-area-get  [-h] --name STAGING_AREA_NAME

optional arguments:
  -h, --help            show this help message and exit
  --name STAGING_AREA_NAME
                        Name of the staging area

Sub Command: staging-area-list

Description: List all staging areas.

Usage: staging-area-list  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: subscription

Description: Show current CloudFabrix RDA subscription details

Usage: subscription  [-h] [--json] [--details]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format
  --details   Show full details when showing plain text format

Sub Command: verify-pipeline

Description: Verify the pipeline on a worker pod

Usage: verify-pipeline  [-h] --pipeline PIPELINE [--group WORKER_GROUP]
            [--site WORKER_SITE] [--lfilter LABEL_FILTER]
            [--rfilter RESOURCE_FILTER] [--maxwait MAX_WAIT]

optional arguments:
  -h, --help            show this help message and exit
  --pipeline PIPELINE   File containing pipeline contents
  --group WORKER_GROUP  Deprecated. Use --site option. Specify a worker site
                        name. If not specified, will use any available worker.
  --site WORKER_SITE    Specify a worker site name. If not specified, will use
                        any available worker.
  --lfilter LABEL_FILTER
                        CFXQL style query to narrow down workers using their
                        labels
  --rfilter RESOURCE_FILTER
                        CFXQL style query to narrow down workers using their
                        resources
  --maxwait MAX_WAIT    Maximum wait time (seconds) for credential check to
                        complete.

Sub Command: viz

Description: Visualize data from a file within the console (terminal)

Usage: viz  [-h] --file INPUT_FILE [--format FILE_FORMAT]

optional arguments:
  -h, --help            show this help message and exit
  --file INPUT_FILE     CSV, Parquet, or JSON formatted file to be visualized
  --format FILE_FORMAT  Input file format (csv, parquet, or json); 'auto'
                        means the format is derived from the file extension
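
For example, a small CSV file can be rendered directly in the terminal (the file name and contents below are illustrative; `rdac.py` is assumed to be in the current directory):

```shell
# Create a small sample CSV to visualize
cat > sample.csv <<'EOF'
host,cpu_pct,mem_pct
web-01,42.5,61.0
web-02,87.1,73.4
EOF

# Render the CSV as a table in the terminal; --format may be omitted,
# since 'auto' derives the format from the .csv extension
if [ -f rdac.py ]; then
  python3 rdac.py viz --file sample.csv --format csv
fi
```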

Sub Command: watch-logs

Description: Start watching logs produced by the pipelines

Usage: watch-logs  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: watch-registry

Description: Start watching updates published by the RDA pod registry

Usage: watch-registry  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: watch-scheduler

Description: Start watching updates published by the scheduler pods

Usage: watch-scheduler  [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format

Sub Command: watch-scheduler-admin

Description: Start watching updates published by the scheduler admin pods

Usage: watch-scheduler-admin  [-h]

optional arguments:
  -h, --help  show this help message and exit

Sub Command: watch-traces

Description: Start watching traces produced by the pipelines

Usage: watch-traces  [-h] [--json] [--ts]

optional arguments:
  -h, --help  show this help message and exit
  --json      Print detailed information in JSON format instead of tabular
              format
  --ts        Show timestamp when printing traces in plain text format
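
The `watch-*` commands stream updates continuously until interrupted. For example, assuming `rdac.py` is in the current directory:

```shell
# Skip gracefully if rdac.py is not in the current directory.
# Each watch command blocks until interrupted (Ctrl-C), so run one at a time.
if [ -f rdac.py ]; then
  python3 rdac.py watch-traces --ts     # traces with timestamps, plain text
  # python3 rdac.py watch-logs --json   # alternatively: pipeline logs as JSON
fi
```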

Sub Command: worker-obj-info

Description: List all worker pods with their current Object Store configuration

Usage: worker-obj-info  [-h]

optional arguments:
  -h, --help  show this help message and exit

Sub Command: write-stream

Description: Write data to the specified stream

Usage: write-stream  [-h] --name STREAM_NAME --data DATA [--delay DELAY] [--compress]

optional arguments:
  -h, --help          show this help message and exit
  --name STREAM_NAME  Stream name to write to
  --data DATA         File containing either a single JSON dict or a list
  --delay DELAY       Delay between each published message
  --compress          Enable compression of the data
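
Putting it together, the sketch below writes a small list of JSON events to a stream; the stream name `my-events` and the event fields are illustrative, and `rdac.py` is assumed to be in the current directory:

```shell
# Create a sample data file: a JSON list of event dicts
cat > events.json <<'EOF'
[
  {"host": "web-01", "severity": "warning"},
  {"host": "web-02", "severity": "critical"}
]
EOF

# Publish each event to the stream, pausing 1 second between
# messages and compressing the payload on the wire
if [ -f rdac.py ]; then
  python3 rdac.py write-stream --name my-events --data events.json \
      --delay 1 --compress
fi
```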