Overview

Use this guide when you deploy Poolside on Red Hat OpenShift. If you deploy on upstream Kubernetes, use Install on upstream Kubernetes. If you deploy on Amazon Elastic Kubernetes Service (Amazon EKS), use Install on Amazon EKS.

Prerequisites

Poolside distributes the Helm deployment bundle as a .tar.gz archive. Extract it before you start:
tar -xzf <bundle-name>.tar.gz
cd <bundle-name>
Confirm that you are working from the root of the extracted bundle. The bundle root contains the following directories:
./scripts/
./containers/
./charts/
./binaries/
Platform requirements

Provide the following dependencies before you install the platform:
  • OpenShift 4.16 or later
  • DNS records that resolve to the cluster ingress or router endpoint
  • An S3-compatible object storage service such as Amazon S3, SeaweedFS, MinIO, or NooBaa
  • A container registry that your cluster can access
  • TLS certificates for the application, or cert-manager
  • PostgreSQL 16 or later
  • An OpenID Connect (OIDC)-compliant identity provider such as Amazon Cognito, Keycloak, Microsoft Entra ID, Auth0, or Okta
Workstation tools

Install the following tools on the host you use to run the deployment:
  • helm 3.12 or later
  • oc or kubectl
  • skopeo
  • openssl
  • aws CLI (required only if you run local model inference and need to upload checkpoints to S3-compatible object storage)
Minimum resource requirements

Ensure that your cluster has the required hardware, such as enough GPUs for local inference. If you have questions about the required specs, contact Poolside support.
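For example, you can list the GPUs each node currently exposes; this sketch assumes NVIDIA GPUs advertised as nvidia.com/gpu by the device plugin:
# assumes the NVIDIA device plugin; nvidia.com/gpu is the common resource name
oc get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'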

Step 1: Create values files

Create values files for the poolside-deployment and inference-stack Helm charts in the bundle root. The remaining steps in this guide describe how to populate them.
cp ./charts/poolside-deployment/values.yaml ./poolside_values.yaml
cp ./charts/inference-stack/values.yaml ./inference_values.yaml

Step 2: Prepare the cluster

Create the namespaces that Poolside uses:
oc create namespace poolside
oc create namespace poolside-models
oc create namespace poolside-sandbox
Use poolside for the core application components and poolside-models for inference workloads. Use poolside-sandbox if you enable sandboxing.

Step 3: Upload container images

Before you install, copy the bundled images into your registry. Authenticate skopeo against your target registry before you run any upload commands:
skopeo login <host> --username <username> --password <password>
If you use the provided upload script:
chmod +x ./scripts/upload_images.sh
./scripts/upload_images.sh <registry-host>
If you want to upload images manually, use skopeo:
skopeo copy oci-archive:./containers/atlas__202604-1__amd64.tar docker://<registry-host>/atlas:202604-1
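If you run skopeo on macOS with an Arm host and upload images for an amd64 cluster, add --override-arch amd64 --override-os linux. These are global skopeo flags and go before the copy subcommand, for example:
skopeo --override-arch amd64 --override-os linux copy oci-archive:./containers/atlas__202604-1__amd64.tar docker://<registry-host>/atlas:202604-1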
If your registry requires authentication, create an image pull secret in both poolside and poolside-models:
oc create secret docker-registry poolside-registry-secret \
  --docker-server=<registry-host> \
  --docker-username=<registry-user> \
  --docker-password=<registry-password> \
  -n poolside
oc create secret docker-registry poolside-registry-secret \
  --docker-server=<registry-host> \
  --docker-username=<registry-user> \
  --docker-password=<registry-password> \
  -n poolside-models
Point the chart at your registry and reference the pull secret in poolside_values.yaml:
global:
  # -- Registry path where the bundled images were uploaded.
  # -- On OpenShift, this is typically the route to the internal image registry.
  containerRepositoryOverride: "<registry-host>"
  # -- List of registry pull secrets to mount on all pods.
  # -- Example: `[{name: "poolside-registry-secret"}]`
  imagePullSecrets:
    - name: poolside-registry-secret
If you use the OpenShift internal registry, grant cross-namespace image pull access:
oc create rolebinding image-puller-poolside-models \
  --clusterrole=system:image-puller \
  --serviceaccount=poolside-models:default \
  --serviceaccount=poolside-models:inference \
  --serviceaccount=poolside-models:inference-extproc \
  -n poolside
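You can confirm that the binding took effect; the inference service accounts should appear in the output:
oc policy who-can get imagestreams/layers -n poolside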

Step 4: Upload model checkpoints

If you do not run local GPU inference, skip this step.

Uploading checkpoints to the S3 bucket is time consuming; start the upload now and continue with the remaining steps in parallel. The inference stack downloads model weights from your S3 bucket on pod startup, so the checkpoints must be in place before you install the inference chart.

Poolside provides the checkpoint files with the deployment bundle. Confirm the local path and the destination prefix with your Poolside contact, then upload the checkpoints to the bucket you set as global.s3.bucket:
aws s3 cp ./checkpoints s3://<bucket-name>/checkpoints --recursive --region <aws-region>
For a non-AWS S3 endpoint (MinIO, NooBaa, SeaweedFS), add --endpoint-url:
aws s3 cp ./checkpoints s3://<bucket-name>/checkpoints \
  --recursive \
  --endpoint-url https://<s3-endpoint> \
  --region <aws-region>
Checkpoints are typically tens of GiB per model. For faster throughput, or for backends sensitive to upload concurrency such as NooBaa or SeaweedFS, run the upload from a host inside the cluster and tune aws configure set default.s3.max_concurrent_requests and default.s3.multipart_chunksize.
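For example, a conservative starting point for concurrency-sensitive backends (the values are illustrative; tune them for your object store):
aws configure set default.s3.max_concurrent_requests 4
aws configure set default.s3.multipart_chunksize 64MB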

Step 5: Configure external dependencies

Create the secrets and values the chart expects before you install the platform.

Configure PostgreSQL

Create a secret in the poolside namespace that contains the password for the Poolside PostgreSQL role:
oc create secret generic poolside-db-password \
  --from-literal=POSTGRESQL_PASSWORD=<postgres-password> \
  -n poolside
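Optionally, verify the credentials from inside the cluster before you continue. A minimal sketch that assumes a pullable postgres:16 image and a database named poolside; adjust both for your environment:
# postgres:16 image and database name are assumptions; adjust for your setup
oc run psql-check --rm -it --restart=Never -n poolside \
  --image=postgres:16 \
  --env=PGPASSWORD=<postgres-password> \
  -- psql -h <postgres-hostname> -p 5432 -U poolside -d poolside -c 'SELECT 1'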
Configure poolside_values.yaml with the PostgreSQL connection details:
global:
  # --- Database Configuration ---
  # An external PostgreSQL 16+ database is required.
  database:
    # -- Name of the secret containing the POSTGRESQL_PASSWORD key.
    passwordSecret: "poolside-db-password"
    # -- DB Hostname (required). Set this to your PostgreSQL provider's endpoint.
    host: "<postgres-hostname>"
    # -- DB Port.
    port: 5432
    # -- DB User. Must have ownership of the database.
    user: "poolside"
    # -- SSL Mode for connection.
    # -- If set, this value is used directly (e.g. "verify-ca", "disable").
    # -- If unset (default), the chart determines the mode based on openshiftCompatibility:
    # --   - true: "verify-full sslrootcert=/etc/ssl/certs/service-ca.crt"
    # --   - false: "require"
    sslMode: ""
    # -- Enable TLS for Temporal server database connections.
    # -- Set to "false" only if your database does not require TLS.
    temporalDbEnableTls: "true"
On OpenShift with openshiftCompatibility: true (the default), the chart expects the cluster's service CA at /etc/ssl/certs/service-ca.crt. If you use an external PostgreSQL endpoint that the service CA does not cover, set sslMode: "require" (or "disable") and set temporalDbEnableTls accordingly.

Configure the encryption key

Create the application encryption key secret:
oc create secret generic encryption-key-secret \
  --from-literal=ENCRYPTION_KEY=$(openssl rand -hex 32) \
  -n poolside
Reference the encryption key in poolside_values.yaml:
global:
  # -- Name of the secret containing the ENCRYPTION_KEY (32-byte hex).
  # -- Leave empty when using IRSA+KMS (set encryptionKMSArn instead).
  encryptionKeySecret: "encryption-key-secret"
Configure S3-compatible object storage

Create the storage credentials secret in poolside:
oc create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  -n poolside
If you plan to run local inference in the cluster, create the same secret in poolside-models:
oc create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  -n poolside-models
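Optionally, confirm that the credentials can reach the bucket; add --endpoint-url for a non-AWS S3 endpoint:
aws s3api head-bucket --bucket <bucket-name> --region <aws-region>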
Configure the chart to use your object store:
global:
  s3:
    # -- Name of the S3 bucket for application data.
    bucket: "<bucket-name>"
    # -- AWS Region for the bucket.
    region: "<aws-region>"
    # -- Name of the secret containing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
    # -- Leave empty when using IRSA (pod service account assumes the role).
    secretName: "aws-credentials"
    # -- Optional: Full text of a CA bundle to mount for S3 connections (useful for MinIO/NooBaa).
    caBundle: ""
    # -- Optional: Name of the ConfigMap containing the CA bundle to mount for S3 connections.
    caBundleConfigMap: ""
    # -- Optional: Name of the key in the ConfigMap containing the CA bundle.
    caBundleConfigMapKey: ""
    # -- Optional: Override the S3 provider endpoint. Required for non-AWS S3 (MinIO, NooBaa, etc.).
    apiUri: ""
    # -- Optional: Override the STS API endpoint.
    apiStsUrl: ""

  # --- Audit ---
  # Audit events are written to a hot-storage (database) ring buffer and
  # periodically exported to S3 for long-term retention.
  audit:
    hotStorage:
      # -- Number of days to retain audit events in hot storage (database) before export/purge.
      retentionDays: 90
      # -- How often the hot-storage maintenance loop runs (Go duration, e.g. "5m", "1h").
      pollInterval: "5m"
    s3:
      # -- S3 bucket for audit cold-storage exports. Defaults to global.s3.bucket when empty.
      bucket: ""
      # -- AWS region for the audit bucket. Defaults to global.s3.region when empty.
      region: ""
      # -- Optional: KMS key ARN/ID used for server-side encryption of audit objects.
      kmsKeyId: ""
      # -- Object key prefix for audit exports within the bucket.
      prefix: "audit"
      # -- How often the audit export loop runs (Go duration, e.g. "1h", "15m").
      pollInterval: "1h"

Step 6: Configure connectivity and TLS

Set the public hostnames in your values file:
global:
  # -- The public hostname for the API server (e.g. api.poolside.example.com)
  apiHost: "<api-hostname>"

  # -- The public hostname for the Web Assistant (e.g. chat.poolside.example.com)
  webHost: "<web-hostname>"
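Confirm that the DNS records resolve to the cluster ingress before you continue; for example, with dig (any resolver tool works, dig is not one of the required workstation tools):
dig +short <api-hostname>
dig +short <web-hostname>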
By default, the chart creates OpenShift Routes for core-api, web-assistant, and public-docs. Choose one TLS approach.

Use cert-manager

If cert-manager is installed, add the issuer annotation to both ingress resources:
core-api:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
web-assistant:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
Use an existing TLS secret

Create the TLS secret in the poolside namespace:
oc create secret tls poolside-tls \
  --cert=tls.crt \
  --key=tls.key \
  -n poolside
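Optionally, confirm that the certificate and private key match (this check applies to RSA keys; the two digests must be identical):
openssl x509 -noout -modulus -in tls.crt | openssl md5
openssl rsa -noout -modulus -in tls.key | openssl md5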
Reference the secret in the values file:
global:
  ingressTlsSecretName: "poolside-tls"
When you set global.ingressTlsSecretName, the chart creates Ingress objects instead of Routes, even if openshiftCompatibility is true: OpenShift Routes cannot reference an existing TLS secret by name, so the chart creates Ingress objects and lets OpenShift generate the corresponding Route from each Ingress path.

Inject certificate files during installation

If you keep tls.crt and tls.key in the bundle root, pass them with --set-file when you install the chart; Step 7 shows the full command, including the optional CA bundle flag.

Add a custom CA bundle

When global.openshiftCompatibility is true, the chart automatically mounts both the internal workload CA and the trusted enterprise CA. If you need to mount an additional CA bundle, create a configmap in the poolside namespace:
oc create configmap poolside-ca-bundle \
  --from-file=ca-bundle.crt=./ca-bundle.crt \
  -n poolside
Reference it in your values file:
global:
  # -- Name of a config map containing a CA bundle to mount
  caBundleConfigMap: "poolside-ca-bundle"
  # -- Name of the file (key) in the config map containing the CA bundle
  caBundleConfigMapKey: "ca-bundle.crt"

Step 7: Install the platform

Install the poolside-deployment Helm chart:
helm install poolside-deployment ./charts/poolside-deployment \
  --namespace poolside \
  -f ./poolside_values.yaml
If you use manual TLS injection, install with:
helm install poolside-deployment ./charts/poolside-deployment \
  --namespace poolside \
  -f ./poolside_values.yaml \
  --set-file global.ingressTlsCert=./tls.crt \
  --set-file global.ingressTlsKey=./tls.key
If your cert chain requires a separate CA bundle file, also include --set-file global.ingressCaBundle=./ca-bundle.crt.

Step 8: Verify the installation

After the Helm release finishes, verify routes or ingress, pods, and core API health. Check the routes:
oc get routes -n poolside
Check the pods:
oc get pods -n poolside
oc get pods -n poolside-models
Check the recent core-api logs:
oc logs -n poolside -l app.kubernetes.io/name=core-api --tail=50
Look for a log line like level=INFO msg="server is running".
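You can also confirm that the route serves TLS from outside the cluster; this checks only that the endpoint responds, not application health (-k skips certificate verification):
curl -skI https://<api-hostname>/ | head -n 1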

Step 9: Complete initial access setup

After the platform is up, open the webHost URL and complete the first-login flow. If you have not already created an OIDC application for this service in your identity provider, create one now. Configure the login URI to match your webHost URL, and allow the following callback URLs:
  • https://<web-hostname>/auth/callback
  • https://<web-hostname>/authorize/callback
  • https://<api-hostname>/auth/callback
  • https://<api-hostname>/authorize/callback
During setup, provide the following values from your identity provider:
  • Provider URL
  • Client ID
  • Client secret
The provider URL must be reachable from both the browser and the core-api pods. This is especially important if you run a self-hosted identity provider inside your network; a quick reachability check is shown below. After you create the administrator account, sign in to https://<web-hostname>/console to verify model status.
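To confirm the provider URL is reachable from inside the cluster, probe the standard OIDC discovery document from a throwaway pod. A minimal sketch; the curlimages/curl image is an assumption, substitute any curl-capable image your cluster can pull:
# curlimages/curl is an assumption; use any curl-capable image your cluster can pull
oc run oidc-check --rm -it --restart=Never -n poolside \
  --image=curlimages/curl \
  -- curl -sf <provider-url>/.well-known/openid-configuration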

Model deployment

After the platform is initialized, continue with Model inference on OpenShift.

Troubleshooting

  • If pods fail to pull images, run oc describe pod <pod-name> -n <namespace> and verify that global.imagePullSecrets references the correct secret.
  • If the platform cannot connect to PostgreSQL, verify that the database configuration matches the deployment and that sslMode matches your database requirements.
  • If core-api does not start cleanly, inspect recent logs with oc logs -n poolside -l app.kubernetes.io/name=core-api --tail=50.
For questions about hardware requirements, infrastructure configuration, or deployment issues, contact Poolside support.