
Overview

This guide describes how to deploy the Poolside AI platform on a GPU workstation or server host. The installation supports both online and air-gapped environments, and typically takes less than 30 minutes when the installation bundle is cached on the host. The Poolside platform includes:
  • Chat model (Malibu): Quantized/compressed to run on 2x RTX 6000 GPUs
  • Code completion model (Point): Runs on 1x RTX 6000 GPU
  • Poolside Web Assistant: Browser-based chat interface
  • Poolside Console: Browser-based interface for managing models, configuring agents and integrations, controlling access, and monitoring usage
  • Built-in authentication: Keycloak identity provider
  • S3-compatible object storage: SeaweedFS for models and telemetry
  • Database: PostgreSQL for the Poolside deployment
  • cert-manager: Self-signed certificate provider

Prerequisites

The installation process installs and configures GPU drivers and Poolside platform services, but you must install the prerequisites listed below on the host before starting the installation procedure. Before you begin, ensure you have:
  • A supported operating system: Ubuntu 22.04 or Red Hat Enterprise Linux 9.6
  • sudo access on the host
  • Network access for online installations, or the full bundle and artifacts available locally for air-gapped installations

Prerequisites for RHEL 9.6

  • Install iptables-nft using yum (version 1.8.10-11.el9)
  • Install container-selinux using yum
  • Install jq using yum (version 1.6 or later)
  • Install yq (version v4.49.2 or later) from https://github.com/mikefarah/yq/releases/tag/v4.49.2
  • Install unzip using yum (version 6.00 or later)
  • Install skopeo using yum (package skopeo-1.18.1-2.el9_6.x86_64 or later)
  • Install kubectl by adding the Kubernetes repository (ensure that the kubectl version is the same as or newer than the RKE2 Kubernetes version):
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
    EOF
    
    Then run:
    sudo yum install -y kubectl
    
  • Install terraform (version 1.8.5) from https://releases.hashicorp.com/terraform/1.8.5
    • Download and install the binary to /usr/local/bin/terraform
    • unzip is required to extract the Terraform binary
In RHEL 9.x, /usr/local/bin is not included in the secure_path setting in /etc/sudoers by default, so sudo terraform can return a command not found error. Run Terraform with the absolute path: /usr/local/bin/terraform.
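Taken together, the RHEL prerequisites above can be installed with a short script. The download URLs follow the release pages linked above; adjust versions as needed for your environment:

```shell
# Packages available from the RHEL 9 repositories
sudo yum install -y iptables-nft container-selinux jq unzip skopeo

# yq: standalone binary from the GitHub release linked above
sudo curl -L -o /usr/local/bin/yq \
  https://github.com/mikefarah/yq/releases/download/v4.49.2/yq_linux_amd64
sudo chmod +x /usr/local/bin/yq

# Terraform 1.8.5: zip archive from releases.hashicorp.com (requires unzip)
curl -LO https://releases.hashicorp.com/terraform/1.8.5/terraform_1.8.5_linux_amd64.zip
unzip terraform_1.8.5_linux_amd64.zip
sudo install -m 0755 terraform /usr/local/bin/terraform
```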

Prerequisites for Ubuntu 22.04


Installation

The Poolside installation bundle includes all required Terraform providers for a linux/amd64 host to support fully offline (air-gapped) deployments.

Step 0 (optional): Air-gapped installation setup

This configuration is required only for air-gapped installations. To use the local Terraform provider cache included in the bundle, configure Terraform to load providers from the bundled terraform.d directory. This method is used instead of a standard .terraformrc file to ensure that Terraform behaves consistently when commands are run as both root and a non-root user.
  1. Locate poolside-terraform.tfrc in the root of the unpacked installation bundle.
  2. Replace the $POOLSIDE_INSTALL_DIR placeholder with the fully qualified path to the bundle’s root directory.
  3. For all Terraform commands in this guide, prefix the command with the Terraform CLI configuration file path:
    TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc terraform <command>
    
Setting this variable ensures that both root and non-root users reference the same cached Terraform providers.
You can configure Terraform using alternative methods, such as a .terraformrc file, as described in the official HashiCorp documentation. Because the installation process runs as both root and a local user, you must ensure that both accounts are configured to reference the cached providers correctly.
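As a sketch, the placeholder substitution in step 2 can be done with sed. The bundle path below is an example; use the actual path where you unpacked the bundle:

```shell
# Set this to the fully qualified path of the unpacked bundle (example path)
POOLSIDE_INSTALL_DIR=/opt/poolside-install

# Replace the $POOLSIDE_INSTALL_DIR placeholder in the CLI config file
sed -i "s|\$POOLSIDE_INSTALL_DIR|${POOLSIDE_INSTALL_DIR}|g" \
  "${POOLSIDE_INSTALL_DIR}/poolside-terraform.tfrc"
```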

Step 1: Install and configure RKE2 on the host

The 01-infra-rke2 directory contains the Terraform module that installs RKE2 on the host system. Using sudo, run the following commands from the 01-infra-rke2 directory.
You must run the RKE2 installation using sudo from the same user account that will run the Poolside platform after deployment.
Terraform uses the original user and group IDs from the sudo environment to set ownership and permissions required by later installation stages.
Air-gapped environment:
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform init
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform apply
Online environment:
sudo /usr/local/bin/terraform init
sudo /usr/local/bin/terraform apply
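After the apply completes, you can confirm that RKE2 came up. The rke2-server systemd unit and the kubeconfig path below are the RKE2 defaults:

```shell
# The RKE2 server service should be active
sudo systemctl is-active rke2-server

# The host should appear as a Ready node
sudo kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
```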

Step 2: Generate RKE2 credentials and access files

The 02-rke2-credentials directory contains the Terraform module that connects to the RKE2 cluster and extracts the configuration files required for later installation stages. These files allow Terraform to authenticate to the RKE2 cluster and complete the Poolside platform deployment. Using sudo, run the following commands from the 02-rke2-credentials directory.
You must run this step using sudo from the same user account that will run the Poolside platform after deployment.
Terraform uses the original user and group IDs from the sudo environment to set the permissions required for RKE2 cluster access in later stages.
Air-gapped environment:
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform init
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform apply
Online environment:
sudo /usr/local/bin/terraform init
sudo /usr/local/bin/terraform apply
If RKE2 certificates or credentials change, re-run this step to regenerate the configuration files and restore cluster access.
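If the generated files are picked up by kubectl (for example via the KUBECONFIG environment variable; the exact file names depend on the module), cluster access can be verified without sudo:

```shell
# Should list the host as a Ready node without requiring sudo
kubectl get nodes
kubectl cluster-info
```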

Step 3: Install supporting infrastructure services

The 03-infra-services directory contains the Terraform module that uses the previously generated RKE2 credentials to access the cluster and deploy the supporting infrastructure required by the Poolside platform. This step installs:
  • A local container registry
  • A PostgreSQL database
  • An S3-compatible object store
  • Keycloak for user authentication
Using sudo, run the following commands from the 03-infra-services directory.
You must run this step using sudo from the same user account that will run the Poolside platform after deployment.
Terraform uses the original user and group IDs from the sudo environment to set the permissions required for RKE2 cluster access in later stages.
Air-gapped environment:
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform init
sudo TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc /usr/local/bin/terraform apply
Online environment:
sudo /usr/local/bin/terraform init
sudo /usr/local/bin/terraform apply
This step can take some time to complete, as container images are loaded into the local RKE2 registry to support air-gapped operation and improve Poolside platform startup performance.
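Once the apply finishes, the supporting services can be checked. The poolside-services namespace matches the one used elsewhere in this guide:

```shell
# PostgreSQL, Keycloak, the object store, and the registry should be Running
kubectl get pods -n poolside-services
```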

Step 4: Deploy the Poolside platform

The 04-poolside-deployment directory contains the Terraform module that deploys the Poolside platform into the RKE2 cluster using the infrastructure services installed in the previous steps. This deployment includes optional configuration and must align with any modifications made in earlier phases. Review and update the terraform.tfvars file as needed before running Terraform. Run the following commands from the 04-poolside-deployment directory.
Air-gapped environment:
TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc terraform init
TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc terraform apply
Online environment:
terraform init
terraform apply
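A quick way to watch the platform come up after the apply:

```shell
# Platform and API services
kubectl get pods -n poolside

# Model workloads (these can take a while to pull and load)
kubectl get pods -n poolside-models
```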

Step 5: Upload Poolside models

The final step uploads Poolside models to the S3-compatible storage used by the deployment. The 05-poolside-model-upload directory contains a Terraform module that creates a Kubernetes job to sync model files from a local host directory into the poolside-models S3 bucket.
  1. Copy the Poolside model files (Malibu and/or Point) into the local host directory /opt/poolside/poolside-model-uploads. If you customized host volume paths during Step 1 (01-infra-rke2), use the corresponding directory instead.
  2. Run the following commands from the 05-poolside-model-upload directory to trigger the model upload job.
    Air-gapped environment:
    TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc terraform init
    TF_CLI_CONFIG_FILE=<path-to-bundle>/poolside-terraform.tfrc terraform apply
    
    Online environment:
    terraform init
    terraform apply
    
  3. To upload additional or updated models later, repeat these steps. Uploads are additive and do not remove existing models from the deployment.
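The upload runs as a Kubernetes job, so its progress can be followed with kubectl. The job name and namespace depend on the module, so look them up first:

```shell
# Find the upload job created by the module
kubectl get jobs -A | grep -i model

# Follow its logs (substitute the job name and namespace found above)
kubectl logs -f job/<job-name> -n <namespace>
```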

Post-installation configuration

Step 1: Configure local DNS

Add hostname resolution to your system:
# Add to /etc/hosts (replace with your actual IP if different)
echo "127.0.0.1 poolside.poolside.local" | sudo tee -a /etc/hosts
echo "127.0.0.1 keycloak.poolside.local" | sudo tee -a /etc/hosts
echo "127.0.0.1 seaweedfs.poolside.local" | sudo tee -a /etc/hosts
echo "127.0.0.1 seaweedfs-s3.poolside.local" | sudo tee -a /etc/hosts
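You can confirm that the entries took effect before moving on. The -k flag skips certificate verification because the deployment uses self-signed certificates until the cluster CA is trusted:

```shell
# Each hostname should resolve to the address added above
getent hosts poolside.poolside.local keycloak.poolside.local

# The web interface should answer over HTTPS
curl -kI https://poolside.poolside.local
```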

Step 2: Set up the Poolside platform

The following steps describe how to complete initial setup using the internal Keycloak service as the identity provider. If you are integrating with an external identity provider, see OIDC authentication.
  1. Open a web browser and navigate to https://poolside.poolside.local. If you configured a different hostname, use that value instead, and ensure DNS resolution is working for the selected hostname.
  2. Retrieve identity configuration values from Terraform output.
    • Change to the 03-infra-services directory.
    • Run the following command:
      terraform output identity_config
      
    • Note the values for client_api_credentials and oidc_provider_url.
  3. Create the organization.
    • When prompted in the Poolside Console, create your organization. For Poolside realm settings, enter the following values:
      • Provider URL: oidc_provider_url from the Terraform output
      • Client ID: client_api_credentials.id from the Terraform output
      • Client Secret: client_api_credentials.secret from the Terraform output
  4. Register the first user.
    • Click Login, then Register.
    • Create your administrator user account. This user is automatically assigned the tenant-admin role.
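The values requested in step 3 can also be extracted individually with jq. The key paths follow the field names above; run these from the 03-infra-services directory:

```shell
terraform output -json identity_config | jq -r '.oidc_provider_url'
terraform output -json identity_config | jq -r '.client_api_credentials.id'
terraform output -json identity_config | jq -r '.client_api_credentials.secret'
```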

Verification

Your installation is successful when:
  • All pods are running: kubectl get pods -A
  • The web interface is accessible at https://poolside.poolside.local
  • Models show a Healthy status in the Poolside Console
  • Chat functionality works in the Poolside Web Assistant
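The pod check can be scripted so that it fails if anything is unhealthy, for example:

```shell
# Print any pod that is neither Running nor Completed and exit non-zero
kubectl get pods -A --no-headers | \
  awk '$4 != "Running" && $4 != "Completed" {print; bad=1} END {exit bad}'
```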

Troubleshooting

Common issues

Model pods stuck in ContainerCreating
  • Verify GPU availability on the host using nvidia-smi
  • Scale the affected deployment to zero replicas, then scale it back up:
    • Identify the deployment: kubectl get deploy -n poolside-models
    • Scale down and back up as needed
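As a sketch, the scale cycle looks like this (substitute the deployment name found above):

```shell
# Recreate the pod by scaling the deployment to zero and back
kubectl scale deploy <deployment-name> -n poolside-models --replicas=0
kubectl scale deploy <deployment-name> -n poolside-models --replicas=1
```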
Certificate warnings in the browser
  • Trust the cluster CA certificate from the installation bundle (for example, in your poolside-install directory) or, after it has been installed on the host, from the OS trust store location under /usr/local/...
  • Certificate installation steps vary by operating system
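As an example, trusting the CA on the two supported operating systems looks like this. The file name poolside-ca.crt is an assumption; use the actual CA file from your bundle:

```shell
# Ubuntu 22.04
sudo cp poolside-ca.crt /usr/local/share/ca-certificates/poolside-ca.crt
sudo update-ca-certificates

# RHEL 9.6
sudo cp poolside-ca.crt /etc/pki/ca-trust/source/anchors/poolside-ca.crt
sudo update-ca-trust extract
```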
Models not loading
  • Verify that services are running:
    kubectl get pods -n poolside-services
    
  • Confirm that models were successfully uploaded during the model upload step

Useful commands

# Check overall cluster status
kubectl get pods -A

# Monitor key namespaces
kubectl get pods -n poolside-models    # Model workloads
kubectl get pods -n poolside           # API and platform services
kubectl get pods -n poolside-services  # Database and supporting services

# Inspect pod and deployment details
kubectl describe pod <pod-name> -n <namespace>
kubectl describe deploy <deployment-name> -n <namespace>

# View logs
kubectl logs <pod-name> -n <namespace>

# View logs for model download or initialization containers
kubectl logs <pod-name> -c model-downloader -n poolside-models

# View recent events in a namespace
kubectl get events -n <namespace>