Overview

Use this guide when you deploy Poolside on your AWS infrastructure by running the platform on Amazon Elastic Kubernetes Service (Amazon EKS). This guide covers the Helm-based Amazon EKS path. If you already run the Terraform-based AWS deployment bundle, see Amazon EKS with Terraform (legacy) and the migration guide.

Reference architecture

Poolside publishes an optional reference architecture for running the platform on Amazon EKS. The architecture shows a validated, opinionated configuration for a minimal, production-ready Poolside deployment, and ships a Terraform starting point for provisioning the AWS infrastructure. If you use the reference architecture, the AWS infrastructure, the Kubernetes prerequisites (namespaces, gp3 StorageClass, AWS Load Balancer Controller, External Secrets Operator, and the optional NVIDIA GPU Operator), and the Helm install are all driven from a single terraform apply. The remaining manual steps are DNS and the first-login OIDC binding. If you assemble the AWS foundation yourself, the rest of this page documents the chart’s expectations so you can match them with your own infrastructure-as-code.
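For orientation, the reference-architecture flow reduces to a standard Terraform run. A minimal sketch, assuming you have the published Terraform checked out and a variables file prepared (the file name and variables here are illustrative):
terraform init
terraform plan -var-file=poolside.tfvars
terraform apply -var-file=poolside.tfvars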

Prerequisites

Before you start, confirm that you are working from the root of the extracted deployment bundle. The bundle root contains the following directories:
./scripts/
./containers/
./charts/
./binaries/
Platform requirements

Provide the following dependencies before you install the platform:
  • An Amazon EKS cluster running Kubernetes 1.29 or later with an OIDC provider enabled
  • An ingress controller capable of routing HTTP and HTTPS traffic to the cluster
  • DNS records that resolve to the load balancer
  • An Amazon S3 bucket for application data and model checkpoints
  • Amazon Elastic Container Registry (Amazon ECR), or another registry that your cluster nodes can access
  • TLS certificates for your deployment host names
  • Amazon RDS for PostgreSQL 16 or later
  • An OpenID Connect (OIDC)-compliant identity provider
  • An AWS KMS key for encryption, or a static encryption key for testing
  • IAM roles for service accounts (IRSA), or another supported credential path for S3 and encryption
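You can spot-check the cluster-side requirements with the AWS CLI before you continue; the cluster name below is a placeholder:
# Confirm the cluster runs Kubernetes 1.29 or later
aws eks describe-cluster --name <cluster-name> --region <region> \
  --query 'cluster.version' --output text

# Confirm the cluster exposes an OIDC issuer (needed for IRSA)
aws eks describe-cluster --name <cluster-name> --region <region> \
  --query 'cluster.identity.oidc.issuer' --output text
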
Workstation tools

Install the following tools on the host you use to run the deployment:
  • helm 3.12 or later
  • kubectl 1.29 or later
  • skopeo
  • aws
  • openssl
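A quick way to confirm the tooling is in place and meets the version floors:
helm version --short
kubectl version --client
skopeo --version
aws --version
openssl version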

Step 1: Prepare the cluster

Create the namespaces that Poolside uses:
kubectl create namespace poolside
kubectl create namespace poolside-models
Use poolside-models for optional local inference workloads.
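Confirm that both namespaces exist before you continue:
kubectl get namespace poolside poolside-models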

Step 2: Upload container images

Before installation, copy the bundled images into your registry. If you use Amazon ECR, create repositories for the bundled images first:
for tar in ./containers/*.tar; do
  # Derive the repository name from the bundled file name:
  # drop the .tar extension and the directory prefix, then strip the
  # two "__"-delimited suffixes appended after the image name.
  basename="${tar%.tar}"
  basename="${basename#./containers/}"
  image_name="${basename%__*}"
  image_name="${image_name%__*}"
  # Create the repository; ignore the error if it already exists.
  aws ecr create-repository \
    --repository-name "<ecr-prefix>/$image_name" \
    --region <region> 2>/dev/null || true
done
Authenticate to Amazon ECR and upload the images:
aws ecr get-login-password --region <region> | \
  skopeo login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

chmod +x ./scripts/upload_images.sh
./scripts/upload_images.sh <account-id>.dkr.ecr.<region>.amazonaws.com/<ecr-prefix>
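After the upload completes, you can spot-check that the images landed in your registry; the repository name in the second command is illustrative:
# List the repositories under your prefix
aws ecr describe-repositories --region <region> \
  --query 'repositories[].repositoryName' --output text

# Inspect the tags uploaded for one repository
skopeo list-tags docker://<account-id>.dkr.ecr.<region>.amazonaws.com/<ecr-prefix>/<image-name>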
If you run skopeo on macOS with an Arm host and upload images for an amd64 cluster, add --override-arch amd64 --override-os linux.

If the EKS node role already has the AmazonEC2ContainerRegistryReadOnly policy attached, you do not need an image pull secret for same-account ECR pulls. Otherwise, create one:
kubectl create secret docker-registry poolside-registry-secret \
  --docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region <region>) \
  -n poolside
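The chart may reference this secret directly; if your pods still fail to pull, one common approach is to attach the secret to the service accounts that run them. A sketch for the default service account (the chart's own service accounts may differ):
kubectl patch serviceaccount default -n poolside \
  -p '{"imagePullSecrets": [{"name": "poolside-registry-secret"}]}'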
Amazon ECR authorization tokens expire after 12 hours. For production deployments, prefer the node role approach for same-account pulls.

Step 3: Configure external dependencies

Create the secrets and values the chart expects before you install the platform.

Configure PostgreSQL

Create a secret in the poolside namespace that contains POSTGRESQL_PASSWORD. The chart’s default secret name is poolside-db-secret:
kubectl create secret generic poolside-db-secret \
  --from-literal=POSTGRESQL_PASSWORD=<rds-password> \
  -n poolside
The key must be POSTGRESQL_PASSWORD; the application does not recognize POSTGRES_PASSWORD. If you use the reference architecture, the External Secrets Operator syncs the AWS-managed RDS master password from AWS Secrets Manager into poolside-db-secret automatically, so you do not need to create this secret by hand.

Configure the chart to use Amazon RDS:
global:
  database:
    host: "<rds-endpoint>"
    port: 5432
    user: "poolside"
    sslMode: "require"
    temporalDbEnableTls: "true"
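To confirm that the cluster can reach the RDS endpoint with the credentials you stored, a throwaway client pod works well. A sketch, assuming the public postgres:16 image is reachable from your nodes; add dbname=... to the connection string if your database name differs from the user name:
kubectl run psql-check --rm -it --restart=Never -n poolside \
  --image=postgres:16 \
  --env=PGPASSWORD=<rds-password> \
  -- psql "host=<rds-endpoint> port=5432 user=poolside sslmode=require" -c 'SELECT 1;'
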
Configure the encryption key

Choose one encryption path. For AWS Key Management Service (AWS KMS), set the KMS key ARN:
global:
  encryptionKMSArn: "<kms-key-arn>"
Use either KMS or a static key, not both; if both are configured, the application refuses to start.

For a static key in test environments:
kubectl create secret generic encryption-key-secret \
  --from-literal=ENCRYPTION_KEY=$(openssl rand -hex 16) \
  -n poolside
Then reference the secret in your values file:
global:
  encryptionKeySecret: "encryption-key-secret"
The encryption key protects data stored in the database, including SSO client secrets and API tokens. Switching between KMS and a static key on a running deployment makes previously encrypted data unreadable.
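If you use the KMS path, you can confirm the key is usable before installing by requesting a data key with the same credentials the pods will use (run it under the IRSA role for a faithful test, or under admin credentials as a smoke test):
aws kms generate-data-key --key-id <kms-key-arn> \
  --key-spec AES_256 --query KeyId --output text
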
Configure Amazon S3

For IRSA (recommended), leave global.s3.secretName empty. Pods authenticate using the web identity token from the IRSA role:
global:
  s3:
    bucket: "<s3-bucket-name>"
    region: "<region>"
    secretName: ""
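To confirm that IRSA is wired correctly, you can run a short-lived pod under one of the annotated service accounts and check the identity it assumes; the service account name here is a placeholder:
kubectl run irsa-check --rm -it --restart=Never -n poolside \
  --overrides='{"spec": {"serviceAccountName": "<poolside-service-account>"}}' \
  --image=amazon/aws-cli -- sts get-caller-identity
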
For static credentials, create the secret in both namespaces:
kubectl create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key> \
  -n poolside

kubectl create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key> \
  -n poolside-models
Then set secretName:
global:
  s3:
    bucket: "<s3-bucket-name>"
    region: "<region>"
    secretName: "aws-credentials"

Configure IAM roles for service accounts

When you use IRSA, grant the pod IAM role access to the AWS services the platform uses:
  • Amazon S3: s3:ListBucket, s3:GetObject, and s3:PutObject on your bucket
  • AWS KMS: kms:Decrypt, kms:GenerateDataKey, and kms:GenerateDataKeyWithoutPlaintext on your key, if you use KMS-backed encryption
The IAM trust policy must include all Poolside service accounts. Refer to values_template.yaml in the bundle for the full list of service accounts that require IRSA annotations.
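A minimal inline policy covering those actions might look like the following sketch; the role name, policy name, and file name are placeholders, and you substitute your own bucket name and key ARN:
cat > poolside-irsa-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<s3-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<s3-bucket-name>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:GenerateDataKeyWithoutPlaintext"
      ],
      "Resource": "<kms-key-arn>"
    }
  ]
}
EOF

aws iam put-role-policy --role-name <irsa-role-name> \
  --policy-name poolside-platform \
  --policy-document file://poolside-irsa-policy.json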

Step 4: Configure ingress and TLS

Set the public host names in your values file:
global:
  openshiftCompatibility: false
  ingressClass: "alb"
  apiHost: "<api-hostname>"
  webHost: "<web-hostname>"
Poolside requires a configured Kubernetes ingress controller, such as AWS Load Balancer Controller or the legacy NGINX ingress controller. If you use AWS Load Balancer Controller, configure both ingress resources to share a single application load balancer:
core-api:
  ingress:
    create: true
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: "<acm-certificate-arn>"
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/group.name: poolside

web-assistant:
  ingress:
    create: true
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: "<acm-certificate-arn>"
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/group.name: poolside
If you use ingress-nginx instead of AWS Load Balancer Controller, set ingressClass: "nginx" and use your preferred TLS approach for ingress.
If you restrict ALB access with the alb.ingress.kubernetes.io/inbound-cidrs annotation, include the NAT gateway public IP addresses in addition to any developer IPs. In-cluster pods reach the internet-facing ALB through the NAT gateway, so the ALB sees the NAT gateway IP as the source; without those addresses, internal components fail with i/o timeout errors.
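You can look up those addresses with the AWS CLI and append them, as /32 entries, to the annotation’s comma-separated CIDR list:
# Public IPs of the NAT gateways in the cluster VPC
aws ec2 describe-nat-gateways \
  --filter "Name=vpc-id,Values=<vpc-id>" \
  --query 'NatGateways[].NatGatewayAddresses[].PublicIp' --output text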

Step 5: Install the platform

Update charts/poolside-deployment/values.yaml with your environment-specific values, then install the chart:
helm install poolside-deployment ./charts/poolside-deployment \
  --namespace poolside \
  -f ./charts/poolside-deployment/values.yaml
Most deployments require the following values:
global:
  containerRepositoryOverride: "<account-id>.dkr.ecr.<region>.amazonaws.com/<ecr-prefix>"
  openshiftCompatibility: false
  ingressClass: "alb"
  s3:
    secretName: ""

postgres:
  enabled: false
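Later configuration changes go through helm upgrade against the same release:
helm upgrade poolside-deployment ./charts/poolside-deployment \
  --namespace poolside \
  -f ./charts/poolside-deployment/values.yaml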

Step 6: Verify the installation

After the Helm release finishes, verify ingress, pods, and core API health. Check the ingress objects:
kubectl get ingress -n poolside
Check the pods:
kubectl get pods -n poolside
Check the recent core-api logs:
kubectl logs -n poolside -l app.kubernetes.io/component=core-api --tail=50
Look for log lines that show the database connection succeeded and the S3 flush loops are running.
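If pods are still starting, waiting on readiness and scanning recent events saves a round of manual polling:
# Block until all pods in the namespace report Ready (up to 10 minutes)
kubectl wait --for=condition=Ready pod --all -n poolside --timeout=600s

# Review recent events for scheduling or image pull problems
kubectl get events -n poolside --sort-by=.lastTimestamp | tail -20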

Step 7: Configure DNS

All ingresses should show the same ALB hostname in the ADDRESS column. Create Route 53 ALIAS records pointing your webHost and apiHost names (for example, poolside.example.com) to that ALB hostname.
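If you manage the zone with the AWS CLI, an ALIAS upsert looks roughly like the following sketch; the Route 53 zone ID, the ALB DNS name, and the ALB’s canonical hosted zone ID (the latter two appear in aws elbv2 describe-load-balancers output) are placeholders:
cat > poolside-dns.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "<web-hostname>",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<alb-canonical-hosted-zone-id>",
        "DNSName": "<alb-dns-name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id <route53-zone-id> \
  --change-batch file://poolside-dns.json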

Step 8: Complete initial access setup

After the platform is up, open the webHost URL and complete the first-login flow. If you have not already created an OIDC application for Poolside, create one now. The login URI must match your webHost URL, and the application must allow the following callback URLs:
  • https://<web-hostname>/auth/callback
  • https://<web-hostname>/authorize/callback
  • https://<api-hostname>/auth/callback
  • https://<api-hostname>/authorize/callback
During setup, provide the following values from your identity provider:
  • Provider Name (for example, Keycloak, Cognito, Entra ID, Okta)
  • Provider URL
  • Client ID
  • Client secret

Model deployment

After the platform is running, continue with Model inference on Amazon EKS.

Troubleshooting

Image pull failures

Run kubectl describe pod <pod-name> -n poolside and confirm that the pod can access the expected registry credentials. If you created poolside-registry-secret, confirm that the pod references that secret.

Database connection errors

Confirm that sslMode in your values file matches the SSL configuration of your RDS instance.

Model provisioning issues

Monitor the models-reconciler logs:
kubectl logs -n poolside -l app.kubernetes.io/component=models-reconciler

Core API errors

Check recent core-api logs:
kubectl logs -n poolside -l app.kubernetes.io/component=core-api --tail=50