Overview
Use this guide when you deploy Poolside on your AWS infrastructure by running the platform on Amazon Elastic Kubernetes Service (Amazon EKS). This guide covers the Helm-based Amazon EKS path. If you already run the Terraform-based AWS deployment bundle, see Amazon EKS with Terraform (legacy) and the migration guide.

Reference architecture
Poolside publishes an optional reference architecture for running the platform on Amazon EKS. The architecture shows a validated, opinionated configuration for a minimal, production-ready Poolside deployment, and ships a Terraform starting point for provisioning the AWS infrastructure. If you use the reference architecture, the AWS infrastructure, the Kubernetes prerequisites (namespaces, gp3 StorageClass, AWS Load Balancer Controller, External Secrets Operator, and the optional NVIDIA GPU Operator), and the Helm install are all driven from a single terraform apply. The remaining manual steps are DNS and the first-login OIDC binding.
If you assemble the AWS foundation yourself, the rest of this page documents the chart’s expectations so you can match them with your own infrastructure-as-code.
Prerequisites
Before you start, confirm that you are working from the root of the extracted deployment bundle and that your environment provides the following:
- An Amazon EKS cluster running Kubernetes 1.29 or later with an OIDC provider enabled
- An ingress controller capable of routing HTTP and HTTPS traffic to the cluster
- DNS records that resolve to the load balancer
- An Amazon S3 bucket for application data and model checkpoints
- Amazon Elastic Container Registry (Amazon ECR), or another registry that your cluster nodes can access
- TLS certificates for your deployment host names
- Amazon RDS for PostgreSQL 16 or later
- An OpenID Connect (OIDC)-compliant identity provider
- An AWS KMS key for encryption, or a static encryption key for testing
- IAM roles for service accounts (IRSA), or another supported credential path for S3 and encryption
You also need the following command-line tools:
- helm 3.12 or later
- kubectl 1.29 or later
- skopeo
- aws
- openssl
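If you choose a static encryption key for testing instead of an AWS KMS key, you can generate one with openssl. This is a sketch: the assumption that the chart accepts a base64-encoded 32-byte key is not confirmed by this guide, so check the expected format in your bundle before using it.

```shell
# Generate a random 32-byte key, base64-encoded, for test-only static encryption.
# (Assumption: the chart accepts a base64-encoded 32-byte key; confirm the expected
# format in your bundle before placing this in a values file.)
openssl rand -base64 32
```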
Step 1: Prepare the cluster
Create the namespaces that Poolside uses: poolside for the platform workloads, and poolside-models for optional, local inference workloads.
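For example, with kubectl (namespace names as described above):

```shell
# Create the platform namespace and the optional local-inference namespace.
kubectl create namespace poolside
kubectl create namespace poolside-models
```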
Step 2: Upload container images
Before installation, copy the bundled images into your registry. If you use Amazon ECR, create repositories for the bundled images first.

If you run skopeo on macOS with an Arm host and upload images for an amd64 cluster, add --override-arch amd64 --override-os linux.
If the EKS node role already has AmazonEC2ContainerRegistryReadOnly, you do not need an image pull secret for same-account ECR pulls. Otherwise, create one:
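One way to create the pull secret for ECR is from a short-lived authorization token. This is a sketch: the secret name poolside-registry-secret matches the one referenced in Troubleshooting below, and the account ID and region are placeholders.

```shell
# Create a docker-registry secret from a temporary ECR login token.
# ECR tokens expire after 12 hours, so long-lived deployments should
# refresh this secret or rely on the node role instead.
kubectl create secret docker-registry poolside-registry-secret \
  --namespace poolside \
  --docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <region>)"
```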
Step 3: Configure external dependencies
Create the secrets and values the chart expects before you install the platform.

Configure PostgreSQL

Create a secret in the poolside namespace that contains POSTGRESQL_PASSWORD. The chart’s default secret name is poolside-db-secret:
The key must be named POSTGRESQL_PASSWORD. The application does not recognize POSTGRES_PASSWORD.
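If you create the secret manually, a minimal sketch:

```shell
# Create the database password secret with the key name the chart expects.
kubectl create secret generic poolside-db-secret \
  --namespace poolside \
  --from-literal=POSTGRESQL_PASSWORD='<your-rds-password>'
```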
If you use the reference architecture, the External Secrets Operator syncs the AWS-managed RDS master password from AWS Secrets Manager into poolside-db-secret automatically. You don’t need to create this secret by hand.
Configure the chart to use Amazon RDS:
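The exact shape of the values depends on your chart version; the following is an illustrative sketch in which every key name is an assumption that you should verify against values_template.yaml in the bundle.

```yaml
# Illustrative only -- verify key names against values_template.yaml.
global:
  postgresql:
    host: mydb.abc123xyz.us-east-1.rds.amazonaws.com  # RDS endpoint (placeholder)
    port: 5432
    sslMode: require   # must match the SSL configuration of your RDS instance
    secretName: poolside-db-secret
```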
Configure Amazon S3

To authenticate with IAM roles for service accounts (IRSA), leave global.s3.secretName empty. Pods authenticate using the web identity token from the IRSA role.
The IAM role that the service accounts assume needs the following permissions:
- Amazon S3: s3:ListBucket, s3:GetObject, and s3:PutObject on your bucket
- AWS KMS: kms:Decrypt, kms:GenerateDataKey, and kms:GenerateDataKeyWithoutPlaintext on your KMS key, if you use a KMS-backed key
See values_template.yaml in the bundle for the full list of service accounts that require IRSA annotations.
Step 4: Configure ingress and TLS
Set the public host names in your values file. If you use ingress-nginx instead of the AWS Load Balancer Controller, set ingressClass: "nginx" and use your preferred TLS approach for ingress.
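As an illustrative sketch only: the key names below are assumptions and must be checked against values_template.yaml in your bundle; the host names are placeholders.

```yaml
# Illustrative only -- verify key names against values_template.yaml.
global:
  ingress:
    ingressClass: alb            # or "nginx" if you use ingress-nginx
  webHost: poolside.example.com
  apiHost: api.poolside.example.com
```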
If you restrict ALB access using the alb.ingress.kubernetes.io/inbound-cidrs annotation, be sure to include the NAT gateway public IP addresses in addition to any developer IPs. In-cluster pods reach the internet-facing ALB through the NAT gateway, so the ALB sees the NAT gateway IP as the source. Without it, internal components get i/o timeout errors.

Step 5: Install the platform
Update charts/poolside-deployment/values.yaml with your environment-specific values, then install the chart:
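For example (the release name poolside is an assumption; the chart path is from the bundle layout above):

```shell
# Install (or upgrade) the platform from the bundled chart.
helm upgrade --install poolside charts/poolside-deployment \
  --namespace poolside \
  --values charts/poolside-deployment/values.yaml
```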
Step 6: Verify the installation
After the Helm release finishes, verify ingress, pods, and core API health: check the ingress objects, confirm that all pods are running, and review the core-api logs.
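A minimal verification pass might look like this (the deploy/core-api workload name is an assumption based on the log reference above):

```shell
# Ingress objects should all report the same ALB hostname in the ADDRESS column.
kubectl get ingress -n poolside

# All pods should reach Running or Completed status.
kubectl get pods -n poolside

# Tail the core API logs and watch for startup errors.
kubectl logs -n poolside deploy/core-api --tail=100
```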
Step 7: Configure DNS
All ingresses should show the same ALB hostname in the ADDRESS column. Create a Route 53 ALIAS record pointing your webHost name (for example, poolside.example.com) to that ALB hostname.
Step 8: Complete initial access setup
After the platform is up, open the webHost URL and complete the first-login flow.
If you have not already created an OIDC application for Poolside, create one now. The login URI must match your webHost URL, and the application must allow the following callback URLs:
- https://<web-hostname>/auth/callback
- https://<web-hostname>/authorize/callback
- https://<api-hostname>/auth/callback
- https://<api-hostname>/authorize/callback
The first-login flow prompts you for the following identity provider details:
- Provider Name (for example, Keycloak, Cognito, Entra ID, Okta)
- Provider URL
- Client ID
- Client secret
Model deployment
After the platform is running, continue with Model inference on Amazon EKS.

Troubleshooting
Image pull failures

Run kubectl describe pod <pod-name> -n poolside and confirm that the pod can access the expected registry credentials. If you created poolside-registry-secret, confirm that the pod references that secret.
Database connection errors
Confirm that sslMode in your values file matches the SSL configuration of your RDS instance.
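To test connectivity from inside the cluster, you can run a temporary PostgreSQL client pod. This is a sketch; the endpoint, user, and database name are placeholders.

```shell
# Launch a throwaway psql client in the poolside namespace and connect to RDS.
kubectl run psql-test --rm -it --restart=Never \
  --namespace poolside \
  --image=postgres:16 \
  --env=PGPASSWORD='<your-rds-password>' \
  -- psql "host=<rds-endpoint> port=5432 user=<db-user> dbname=<db-name> sslmode=require" \
  -c 'select 1;'
```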
Model provisioning issues
Monitor the models-reconciler logs:
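For example (the workload name models-reconciler is taken from the log reference above; that it runs as a Deployment is an assumption):

```shell
# Follow the reconciler logs while a model deployment is in progress.
kubectl logs -n poolside deploy/models-reconciler -f
```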