This guide assumes that you deployed Poolside using the instructions in the Install on AWS guide.

Overview

This guide describes how to update an existing Poolside deployment on Amazon Web Services (AWS). The update process is organized into the following phases:
  1. Update the base infrastructure: Update the EKS cluster, VPC, RDS database, and supporting components.
  2. Update node groups and add-ons: Update CPU and GPU node groups and EKS add-ons.
  3. Upload container images and checkpoints: Upload updated container images and model checkpoints.
  4. Deploy the application update: Deploy the updated Poolside application.

Deployment bundle

The updated bundle follows the same structure as the initial install. For more information, see Install on AWS.

Prerequisites

  • A working Poolside deployment created by following the Install on AWS guide.
  • The updated deployment bundle provided by Poolside.
  • Tools installed on the host, as in the initial install: at minimum the AWS CLI, podman, skopeo, and kubectl.

Environment configuration

Set the following shell variables before starting. The bash commands throughout this guide reference these variables so you can copy and paste them directly. These values must match the ones used during the initial install: only the Poolside-provided values change from release to release, and the values you provide should not change unless you are intentionally modifying your environment.
Provided by you:
export DEPLOYMENT_NAME=""        # Must match the name used in the initial install (for example, production, dev-team-name)
export AWS_REGION=""             # AWS region (for example, us-east-1, us-east-2)
export AWS_PROFILE=""            # AWS CLI profile (for example, default, work-profile)
export ACCOUNT_ID=""             # AWS account ID (for example, 123456789012)
export CONTAINER_REGISTRY_URI="" # Registry URI (for example, 123456789012.dkr.ecr.us-east-2.amazonaws.com)
export KEY_FILE=""               # SSH private key if using a bastion (for example, ~/.ssh/id_ed25519)
Provided by Poolside (contact Poolside if you do not have these):
export POOLSIDE_SHARED_BUCKET=""    # S3 bucket where Poolside release assets are stored
export POOLSIDE_RELEASE_KEY=""      # S3 key/prefix of the release
export POOLSIDE_RELEASE_VERSION=""  # Release version string
export POOLSIDE_VERSION_TAG=""      # Version tag for Terraform containers
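Before continuing, it can help to fail fast if any of these variables is still empty. A minimal bash sketch (the `check_env` helper is our own illustration, not part of the bundle; `KEY_FILE` is omitted because it is only needed with a bastion):

```shell
# check_env: report any required variable that is still empty.
# Returns non-zero if anything is missing.
check_env() {
  local var missing=0
  for var in DEPLOYMENT_NAME AWS_REGION AWS_PROFILE ACCOUNT_ID \
             CONTAINER_REGISTRY_URI POOLSIDE_SHARED_BUCKET POOLSIDE_RELEASE_KEY \
             POOLSIDE_RELEASE_VERSION POOLSIDE_VERSION_TAG; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var.
    if [ -z "${!var}" ]; then
      echo "Missing: $var"
      missing=1
    fi
  done
  return "$missing"
}

# Run before starting the update:
# check_env && echo "All required variables are set"
```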

Preparation

Step A: Download the updated deployment bundle

Download the updated bundle from the Poolside-provided S3 bucket:
aws s3 cp \
  s3://$POOLSIDE_SHARED_BUCKET/$POOLSIDE_RELEASE_KEY/$POOLSIDE_RELEASE_VERSION \
  ./poolside-bundle --recursive
Set BUNDLE_DIR to the bundle root (the directory that contains containers/ and installation_steps/):
export BUNDLE_DIR=./poolside-bundle/<release-directory>
Change into the bundle root before running the remaining steps:
cd $BUNDLE_DIR
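A wrong BUNDLE_DIR is easy to miss, so it can be worth verifying the layout before continuing. A small sketch (`check_bundle` is our own helper name, not part of the bundle):

```shell
# check_bundle: verify that the given directory looks like the bundle root.
check_bundle() {
  local dir
  for dir in containers installation_steps; do
    if [ ! -d "$1/$dir" ]; then
      echo "Error: $1/$dir not found; is $1 the bundle root?" >&2
      return 1
    fi
  done
  echo "Bundle layout looks correct"
}

# check_bundle "$BUNDLE_DIR"
```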

Step B: Sync Terraform containers to your registry

  1. Authenticate to your container registry:
    podman login $CONTAINER_REGISTRY_URI
    
  2. Import the updated Terraform containers from the bundle into your registry using skopeo.
    This step syncs only the deployment containers. Application containers are synced in Step 2.5: Upload containers and model checkpoints. If your registry does not support multi-architecture images, copy a single architecture and omit the --multi-arch all flag.
    # Sync infra-phase-1 container
    skopeo copy --multi-arch all \
      dir:$BUNDLE_DIR/containers/poolside-self-managed-infra-phase-1---$POOLSIDE_VERSION_TAG \
      docker://$CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-infra-phase-1:$POOLSIDE_VERSION_TAG
    
    # Sync infra-phase-2 container
    skopeo copy --multi-arch all \
      dir:$BUNDLE_DIR/containers/poolside-self-managed-infra-phase-2---$POOLSIDE_VERSION_TAG \
      docker://$CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-infra-phase-2:$POOLSIDE_VERSION_TAG
    
    # Sync deployment container
    skopeo copy --multi-arch all \
      dir:$BUNDLE_DIR/containers/poolside-self-managed-deployment---$POOLSIDE_VERSION_TAG \
      docker://$CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-deployment:$POOLSIDE_VERSION_TAG
    
    # Sync container uploader (used in the container upload step)
    skopeo copy --multi-arch all \
      dir:$BUNDLE_DIR/containers/poolside-container-uploader-full---$POOLSIDE_VERSION_TAG \
      docker://$CONTAINER_REGISTRY_URI/poolsideai/poolside-container-uploader-full:$POOLSIDE_VERSION_TAG
    
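Because the four copies follow the same naming pattern, they can also be scripted as a loop. A sketch assuming the bundle directory names shown above (`sync_terraform_containers` is our own helper name; drop `--multi-arch all` here too if your registry does not support it):

```shell
# sync_terraform_containers: copy each deployment-phase container from the
# bundle into your registry, using the bundle's <name>---<tag> layout.
sync_terraform_containers() {
  local name
  for name in poolside-self-managed-infra-phase-1 \
              poolside-self-managed-infra-phase-2 \
              poolside-self-managed-deployment \
              poolside-container-uploader-full; do
    skopeo copy --multi-arch all \
      "dir:$BUNDLE_DIR/containers/${name}---$POOLSIDE_VERSION_TAG" \
      "docker://$CONTAINER_REGISTRY_URI/poolsideai/${name}:$POOLSIDE_VERSION_TAG" \
      || return 1
  done
}

# sync_terraform_containers
```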

Step C: Set AWS credentials

Each Terraform container requires AWS credentials. You can pass these credentials in several ways depending on your organization’s security model:
  1. Environment variables (recommended for most users):
    export AWS_ACCESS_KEY_ID=<your-access-key>
    export AWS_SECRET_ACCESS_KEY=<your-secret-key>
    export AWS_SESSION_TOKEN=<your-session-token>
    
  2. Shared credentials volume (default for AWS CLI users): Mount your ~/.aws directory into the container:
    -v ~/.aws:/poolside/.aws -e AWS_PROFILE=$AWS_PROFILE
    
  3. Alternative authentication: If using IAM roles, SSO, or STS federation, follow your internal process for credential injection. Poolside supports all standard AWS authentication mechanisms.
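For the IAM-role case, one common pattern is to export temporary credentials obtained from `aws sts assume-role`. A sketch (the `assume_deployment_role` helper name and the role ARN are placeholders; adapt them to your own role):

```shell
# assume_deployment_role: export temporary credentials for the given role ARN
# so the Terraform containers can pick them up as environment variables.
assume_deployment_role() {
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(
    aws sts assume-role \
      --role-arn "$1" \
      --role-session-name poolside-update \
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
      --output text)"
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}

# assume_deployment_role "arn:aws:iam::$ACCOUNT_ID:role/<your-deployment-role>"
```

You can then pass the exported variables into the container with `-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN`.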

Step D: Review configuration files

Verify that the remote.tf and terraform.tfvars files from your initial deployment are available in each step directory. Each step directory should contain:
  • remote.tf: Configures remote Terraform state storage
  • terraform.tfvars: Deployment variables specific to your environment
  • run_terraform.sh: Wrapper script for launching the Terraform container
If you do not have these files, obtain them from the initial deployment. Your terraform.tfvars should not need changes unless you are intentionally modifying your environment configuration.
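These checks can be scripted per directory. A sketch (`check_step_files` is our own helper name; the 2.5 step uses `run.sh` rather than `run_terraform.sh`, so only the three Terraform step directories are listed in the example):

```shell
# check_step_files: report any expected file missing from a step directory.
check_step_files() {
  local f missing=0
  for f in remote.tf terraform.tfvars run_terraform.sh; do
    if [ ! -f "$1/$f" ]; then
      echo "Missing: $1/$f"
      missing=1
    fi
  done
  return "$missing"
}

# for d in installation_steps/aws/1_infra_phase_1 \
#          installation_steps/aws/2_infra_phase_2 \
#          installation_steps/aws/3_deployment; do
#   check_step_files "$d"
# done
```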

Update

Step 1: Update the base infrastructure

Update EKS, VPC, RDS, and other supporting components using the poolside-self-managed-infra-phase-1 container.
  1. Run the 1_infra_phase_1 container: Using the wrapper script:
    cd installation_steps/aws/1_infra_phase_1/
    ./run_terraform.sh
    
    Or run manually:
    podman run -it \
      -v $(pwd)/remote.tf:/poolside/infra-phase-1/remote.tf \
      -v $(pwd)/terraform.tfvars:/poolside/infra-phase-1/terraform.tfvars \
      -v ~/.aws:/poolside/.aws \
      -e AWS_PROFILE=$AWS_PROFILE \
      -w /poolside/infra-phase-1 \
      $CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-infra-phase-1:$POOLSIDE_VERSION_TAG
    
  2. Inside the container, run Terraform commands:
    terraform init
    terraform plan
    terraform apply
    
  3. Exit the container.
If you created a bastion host during the initial deployment, you will need to access it in later steps.
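For any of the Terraform steps in this guide, you can optionally save the reviewed plan to a file and apply exactly that plan, so the apply cannot drift from what you inspected. A sketch of the in-container sequence (the plan file name is arbitrary and `run_terraform_update` is our own wrapper name):

```shell
# run_terraform_update: save the plan to a file, then apply exactly that plan.
run_terraform_update() {
  terraform init &&
  terraform plan -out=update.tfplan &&
  terraform apply update.tfplan
}

# run_terraform_update
```

Note that applying a saved plan file skips Terraform's interactive confirmation prompt, so review the plan output carefully first.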

Step 2: Update node groups and add-ons

Update EKS node groups (CPU and GPU) and cluster add-ons using the poolside-self-managed-infra-phase-2 container.
  1. Run the 2_infra_phase_2 container: Using the wrapper script:
    cd installation_steps/aws/2_infra_phase_2/
    ./run_terraform.sh
    
    Or run manually:
    podman run -it \
      -v $(pwd)/remote.tf:/poolside/infra-phase-2/remote.tf \
      -v $(pwd)/terraform.tfvars:/poolside/infra-phase-2/terraform.tfvars \
      -v ~/.aws:/poolside/.aws \
      -e AWS_PROFILE=$AWS_PROFILE \
      -w /poolside/infra-phase-2 \
      $CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-infra-phase-2:$POOLSIDE_VERSION_TAG
    
  2. Inside the container, run Terraform commands:
    terraform init
    terraform plan
    terraform apply
    
  3. Exit the container.

Step 2.5: Upload containers and model checkpoints

Upload updated Poolside and third-party images to your registries and update model checkpoints in S3. You can run this in parallel with Step 2: Update node groups and add-ons. Poolside provides a container image in the bundle at containers/poolside-container-uploader-full---<tag>. It includes the scripts and images required for fully air-gapped deployments.
  1. Run the container uploader: Using the wrapper script:
    cd installation_steps/aws/2.5_container_upload/
    ./run.sh
    
    Or run manually:
    podman run -it \
      -v ~/.aws:/poolside/.aws \
      -e AWS_PROFILE=$AWS_PROFILE \
      $CONTAINER_REGISTRY_URI/poolsideai/poolside-container-uploader-full:$POOLSIDE_VERSION_TAG
    
  2. Inside the container, run upload-containers.sh to populate ECR. The script reads registry targets from AWS Systems Manager Parameter Store (SSM).
    ./upload-containers.sh --deployment-name $DEPLOYMENT_NAME
    
  3. If new model checkpoints are available, copy them to the S3 bucket:
    aws s3 cp ./checkpoints s3://poolside-$DEPLOYMENT_NAME/checkpoints --recursive --region $AWS_REGION
    
    Poolside will provide the checkpoint details before you start this process.
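To confirm the checkpoints landed under the expected prefix, you can list the bucket after the copy completes. A sketch (`list_checkpoints` is our own helper name):

```shell
# list_checkpoints: summarize the checkpoint objects uploaded for this deployment.
list_checkpoints() {
  aws s3 ls "s3://poolside-$DEPLOYMENT_NAME/checkpoints/" \
    --recursive --summarize --region "$AWS_REGION"
}

# list_checkpoints
```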

Step 3: Deploy the application update

Deploy the updated Poolside workloads and ingress using the poolside-self-managed-deployment container.
If you are using a bastion host, connect to it before running this step:
ssh -i $KEY_FILE ubuntu@<bastion-host-ip>
Authenticate to your container registry on the bastion host:
podman login $CONTAINER_REGISTRY_URI
Ensure that you have the remote.tf and terraform.tfvars files available. You can copy the run_terraform.sh script from installation_steps/aws/3_deployment to the bastion host.
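One way to stage those files is to copy them from your workstation with scp before connecting. A sketch (`copy_to_bastion` is our own helper name; run it from installation_steps/aws/3_deployment/):

```shell
# copy_to_bastion: copy the deployment-step config files and wrapper script
# to the bastion user's home directory; pass the bastion IP or hostname.
copy_to_bastion() {
  scp -i "$KEY_FILE" remote.tf terraform.tfvars run_terraform.sh "ubuntu@$1:~"
}

# copy_to_bastion <bastion-host-ip>
```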
  1. Run the deployment container: Using the wrapper script:
    cd installation_steps/aws/3_deployment/
    ./run_terraform.sh
    
    Or run manually:
    podman run -it \
      -v $(pwd)/remote.tf:/poolside/deployment/remote.tf \
      -v $(pwd)/terraform.tfvars:/poolside/deployment/terraform.tfvars \
      -v ~/.aws:/poolside/.aws \
      -e AWS_PROFILE=$AWS_PROFILE \
      -w /poolside/deployment \
      $CONTAINER_REGISTRY_URI/poolsideai/poolside-self-managed-deployment:$POOLSIDE_VERSION_TAG
    
  2. Inside the container, run Terraform commands:
    terraform init
    terraform plan
    terraform apply
    
    Example output:
    Apply complete! Resources: 6 added, 21 changed, 0 destroyed.
    
    Outputs:
    
    inference_config = <sensitive>
    ingress = {
      "hostname" = "<hostname>.us-east-2.elb.amazonaws.com"
      "ip"       = ""
    }
    poolside_deployment_config = <sensitive>
    

Verification

Verify that your updated Poolside deployment is running successfully.
You can also view this information in the AWS Management Console using the EKS Kubernetes resources viewing guide.
Update the context to your EKS cluster:
aws eks update-kubeconfig --region $AWS_REGION --name poolside-$DEPLOYMENT_NAME
Verify that all Poolside pods are running:
kubectl get pods -n poolside
Example output (pod suffixes will vary):
NAME                                           READY   STATUS    RESTARTS   AGE
core-api-58fb46cb65-h7zrk                      1/1     Running   0          6m6s
core-api-58fb46cb65-x8t58                      1/1     Running   0          6m6s
core-api-58fb46cb65-z2t6b                      1/1     Running   0          6m6s
core-api-models-reconciliation-loop-0          1/1     Running   0          6m6s
web-assistant-6f5977c886-hcttp                 1/1     Running   0          6m6s
web-assistant-6f5977c886-jnrjt                 1/1     Running   0          6m6s
web-assistant-6f5977c886-tb6rm                 1/1     Running   0          6m6s
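Rather than eyeballing the list, you can also block until every pod reports Ready (a sketch; `wait_for_poolside_pods` is our own helper name and the five-minute timeout is arbitrary):

```shell
# wait_for_poolside_pods: succeed once every pod in the poolside namespace
# is Ready, or fail after the timeout.
wait_for_poolside_pods() {
  kubectl wait pods --all -n poolside --for=condition=Ready --timeout=300s
}

# wait_for_poolside_pods
```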

Bedrock model detection

If you are using Bedrock for inference, verify that the Poolside models are being detected:
kubectl logs -n poolside -l=app.kubernetes.io/name=core-api-models-reconciliation-loop

Local GPU model verification

If you are using local GPU inference, verify that the model pods are running:
kubectl get pods -n poolside-models