
On-Premise PoC Deployment Guide – Xshield

⚠️ Important: This guide is intended only for Proof-of-Concept (PoC) deployments of Xshield on a single VM. It is not suitable for production use.

Important Notices

  • This is a single-VM Xshield deployment with no redundancy.
  • The customer is responsible for any data backup requirements.
  • The PoC supports securing up to 15 assets only.
  • Intended strictly for evaluation of the Xshield platform in on-prem environments.

Prerequisites

  • Virtual Machine Requirements:
    • 8 vCPU
    • 32 GB RAM
    • 256 GB Disk

Deployment Steps

1. Create the VM

Deploy the OVA (provided by ColorTokens) with the minimum specs listed above.

2. (Optional) Configure Network

🛑 Skip this step if your VM already has an IP address configured

Manually configure static IP, DNS, and gateway:

cd $HOME/onprem-infrastructure/single-node
bash setup-static-ip.sh
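
Once the script finishes, the new address, routes, and DNS settings can be confirmed with standard Linux commands (interface names vary by VM):

ip addr show
ip route show
cat /etc/resolv.conf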

3. SSH Into the VM

Default credentials:

Username: ctuser
Password: colors321
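
For example, from your workstation using a standard OpenSSH client (replace <vm-ip> with the VM's IP address):

ssh ctuser@<vm-ip>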

4. Deployment Options

IP-Based Deployment (HTTP) – Default (⚠️ If Gatekeeper is involved in the PoC, follow the Gatekeeper on-prem PoC guide.)

The platform is accessible via the VM's IP over HTTP.

cd $HOME/onprem-infrastructure/single-node

# Platform setup
./deploy.sh --poc
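
If you are unsure which IP address the platform will be served on, the VM's addresses can be listed with a standard Linux command (interface names vary):

ip -4 addr show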

Domain-Based Deployment (HTTP or HTTPS)

You may deploy Xshield on:

  • A default domain:
    The subdomain (e.g., <subdomain>.colortokenspoc.com) is auto-generated and displayed during the next step.

  • A custom domain, using either:

    • HTTP (if TLS certs are unavailable)
    • HTTPS (requires TLS certs)
cd $HOME/onprem-infrastructure/single-node

# Domain setup
./deploy.sh --domain

# Platform setup
./deploy.sh --poc
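
For a custom domain, it is worth confirming beforehand that the domain resolves to the VM's IP address (xshield.example.com is a hypothetical domain):

nslookup xshield.example.com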

5. Access the Platform

After deployment completes, an invite link is printed and also saved at:

/home/ctuser/tenant_invite_link.txt

Use this link in a browser to access the Xshield platform. No authentication password is required by default.
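
If you need the link again later, it can be printed on the VM at any time:

cat /home/ctuser/tenant_invite_link.txt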

6. Enable feature flags

Run this step only after accepting the invite link and logging into the platform; logging in is a prerequisite for enabling feature flags.

cd $HOME/onprem-infrastructure/single-node

./deploy.sh --feature-flags

Notes:

  1. Xshield agent installation

    If the platform is configured with custom TLS certificates signed by an internal CA, make sure to import the root CA into every VM where the agent will be installed. Failure to do so will result in TLS verification errors. A minimal import sketch is included after these notes.

  2. Container security agent installation

    If the PoC involves container segmentation, follow the instructions in the section below.
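
For note 1, a minimal sketch of importing an internal root CA on the agent VMs, assuming the CA certificate has already been copied to the VM as rootCA.crt (file name and locations are illustrative; adjust for your OS):

# Ubuntu / Debian agent VM (rootCA.crt is an illustrative file name)
sudo cp rootCA.crt /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates

# RHEL agent VM
sudo cp rootCA.crt /etc/pki/ca-trust/source/anchors/rootCA.crt
sudo update-ca-trust extract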

Container security agent installation

Non-airgapped environments

If the target environment has internet access, follow the instructions in the portal to install the container agents.

Air-gapped environments

The images required for container security are preloaded in the single-node OVA. Follow the steps below to install the container agents on the target cluster.

Prerequisites

  • Port 22 (SSH) access from the single-node VM to the bastion VM that has access to the target Kubernetes cluster
  • An OCI registry to upload the container security images

Steps

  • Copy the contsec Helm chart package, located at /home/ctuser/contsec-images/helm-charts/ct-contsec-<chart version>.tgz, to the bastion that has access to the target Kubernetes cluster. In the example below, 10.40.111.126 is the bastion host:
scp /home/ctuser/contsec-images/helm-charts/ct-contsec-24.9.2.tgz ctuser@10.40.111.126:/home/user/
  • Upload the contsec container images to the OCI registry
  • Tag the images for the target registry. In the sample below, "registry.colortokens.com/container-agent/" is the target OCI registry:
docker tag colortokenspublic.azurecr.io/ct-policy:24.9.2 registry.colortokens.com/container-agent/ct-policy:24.9.2
docker tag colortokenspublic.azurecr.io/ct-collector:24.9.2 registry.colortokens.com/container-agent/ct-collector:24.9.2
docker tag colortokenspublic.azurecr.io/openpolicyagent/opa:0.63.0-istio-4-rootless registry.colortokens.com/container-agent/openpolicyagent/opa:0.63.0-istio-4-rootless
  • Push images to the OCI registry
docker push registry.colortokens.com/container-agent/openpolicyagent/opa:0.63.0-istio-4-rootless
docker push registry.colortokens.com/container-agent/ct-collector:24.9.2
docker push registry.colortokens.com/container-agent/ct-policy:24.9.2
  • Deploy container security on the target cluster
  • Install Istio as a prerequisite, following the instructions on the install page in the portal
  • You may download and install Istio via the internet or any other means. If Istio images are required, they are available in /home/ctuser/contsec-images/istio-images, and the Istio Helm chart is in /home/ctuser/contsec-images/helm-charts/. (ColorTokens does not recommend a specific Istio version; the packaged version was the latest at the time the OVA was created.)
  • Follow the instructions on the install page to install the ct-contsec Helm chart onto the target cluster. While installing, enable "User Local registry" and provide the path to the repository where ct-policy and ct-collector are hosted. Also change the installation path to point to the local .tgz file

Example:

helm -n ct-system install ct-contsec ct-contsec-24.9.2.tgz --version 24.9.2 --set global.colortokensDomainSuffix=https://<platform_domain> --set global.clusterIdentifier=<cluster_id> --set global.colortokensAuthKey=<auth_key> --set global.service.classicMode="false" --set global.registryAccount=registry.colortokens.com/container-agent  
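
After the Helm install completes, you can check that the container security workloads come up in the ct-system namespace using standard Helm and kubectl commands (pod names will vary):

helm -n ct-system status ct-contsec
kubectl -n ct-system get pods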

Troubleshooting

If you experience any issues, run the script below to check cluster and service health:

cd $HOME/onprem-infrastructure/single-node
bash status.sh

Known Issues

Cluster Issues

During deployment (the ./deploy.sh --poc step), a failure with the following error indicates a faulty cluster:

ctuser@localhost:~/onprem-infrastructure/single-node$ ./deploy.sh --poc

Setting up..
waiting for cluster configuration..
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xeu k3s.service" for details.
ERROR: Cluster is inaccessible. Please verify k3s service.
See 'systemctl status k3s.service' and 'journalctl -xeu k3s.service' for details.
  • Inspect the K3s or RKE2 service status using:
    • Ubuntu VM: sudo systemctl status k3s
    • RHEL VM: sudo systemctl status rke2-server
  • Review service logs for detailed errors using:
    • Ubuntu VM: sudo journalctl -xeu k3s
    • RHEL VM: sudo journalctl -xeu rke2-server

These logs provide insights into the root cause of the cluster failure. Address the issue based on the specific errors reported.

If the logs show the error below:

localhost k3s[741306]: level=fatal msg="no default routes found in \"/proc/net/route\" or \"/proc/net/ipv6_route\""

Run ip route show to check the routing table. If no default route is configured, the cluster will not work as expected. Rerun bash setup-static-ip.sh with a valid gateway IP address.
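
For reference, a working routing table contains a default route similar to the first line below (the gateway address and interface name are illustrative):

ctuser@localhost:~$ ip route show
default via 10.40.111.1 dev ens160 proto static
10.40.111.0/24 dev ens160 proto kernel scope link src 10.40.111.50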

Invite link not generated

If the invite link was not generated during the ./deploy.sh --poc step, run the command below to regenerate it:

./deploy.sh --invite

SSL validation errors

When SSL certificate validation fails, the following issues occur:

  • The platform URL is shown as "Not Secure" in the browser.
  • Agent registration fails with an SSL certificate validation error.

This issue is often caused by a missing root certificate on the system.

  • If you are using *.colortokenspoc.com, note that SSL.com is the Certificate Authority (CA) for the certificate. Verify whether "SSL.com Root Certification Authority RSA" is listed as a trusted root certificate on the system. If it is missing, download and install the root certificate from the provided link.
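
To inspect the certificate chain presented by the platform and see which CA issued it, a standard OpenSSL check can be used (replace <platform_domain> with your platform's domain):

openssl s_client -connect <platform_domain>:443 -showcerts </dev/null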