Documentation Index
Fetch the complete documentation index at: https://docs.poolside.ai/llms.txt
Use this file to discover all available pages before exploring further.
BETA This feature is in beta and may change before general availability.
Overview
The Bridge SDK is a Python package for defining orchestration pipelines as code. You define pipelines and steps as decorated Python functions, declare dependencies through type annotations, and push the code to a Git repository. The Poolside Console indexes your repository and discovers the pipeline definitions automatically.
The SDK handles:
- Pipeline and step definitions with typed inputs and outputs
- Automatic DAG construction from dependency annotations
- Sandbox and credential configuration per step
- Evaluation functions that run after step execution
- Webhook-triggered pipeline actions
- A local CLI for validation and testing
Install the Bridge SDK
Initialize a Python project and add the SDK as a dependency:
uv init my_project
cd my_project
uv add bridge-sdk@git+https://github.com/poolsideai/bridge-sdk.git
uv sync
Add the required build system and Bridge configuration to pyproject.toml:
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.bridge]
modules = ["my_project.steps"]
The modules list tells the SDK which Python modules contain your pipeline and step definitions. The [build-system] section is required for the SDK to import your modules.
Create main.py at the project root to expose the CLI:
from bridge_sdk.cli import main

if __name__ == "__main__":
    main()
Define a pipeline and steps
A pipeline groups related steps. Define it as a Pipeline object, then use the @pipeline.step decorator to register functions as steps.
from typing import Annotated

from bridge_sdk import Pipeline, step_result
from pydantic import BaseModel

pipeline = Pipeline(name="data_pipeline", description="Fetch and clean data")

class RawData(BaseModel):
    content: str

class CleanedData(BaseModel):
    content: str
    word_count: int

@pipeline.step
def fetch_data() -> RawData:
    return RawData(content="  Raw input text  ")

@pipeline.step
def clean_data(raw: Annotated[RawData, step_result(fetch_data)]) -> CleanedData:
    cleaned = raw.content.strip().lower()
    return CleanedData(content=cleaned, word_count=len(cleaned.split()))
Steps use Pydantic models for input and output validation.
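Because inputs and outputs are Pydantic models, field types are enforced the moment a model is constructed. A standalone sketch using the CleanedData model from above (requires only pydantic, not the SDK):

```python
from pydantic import BaseModel, ValidationError

class CleanedData(BaseModel):
    content: str
    word_count: int

# Well-typed data passes validation.
ok = CleanedData(content="hello world", word_count=2)

# A field of the wrong type is rejected with a ValidationError.
try:
    CleanedData(content="hello world", word_count="three")
except ValidationError as exc:
    print(f"rejected: {exc.error_count()} error(s)")
```

The same validation applies when a step receives its input at runtime, so type mismatches surface as errors rather than propagating silently through the pipeline.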
Declare dependencies between steps
Use Annotated with step_result to declare that a step depends on the output of another step. The SDK infers the execution order from these annotations and builds the DAG automatically.
from typing import Annotated

from bridge_sdk import step_result
from pydantic import BaseModel

class OutputA(BaseModel):
    value: str

class OutputB(BaseModel):
    result: str

@pipeline.step
def step_a() -> OutputA:
    return OutputA(value="hello")

@pipeline.step
def step_b(input_from_a: Annotated[OutputA, step_result(step_a)]) -> OutputB:
    return OutputB(result=input_from_a.value.upper())
In this example, step_b runs after step_a completes and receives its output. You do not need to specify execution order manually.
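Dependency metadata attached this way is ordinary Annotated metadata, which the standard library can read back. The following standalone sketch (it stubs step_result with a hypothetical StepResult marker and is not the SDK's actual implementation) shows how annotations like these can be introspected to recover a step's upstream dependencies:

```python
from typing import Annotated, get_type_hints

class StepResult:
    """Stand-in for the SDK's step_result marker (illustrative only)."""
    def __init__(self, step):
        self.step = step

def step_a() -> str:
    return "hello"

def step_b(input_from_a: Annotated[str, StepResult(step_a)]) -> str:
    return input_from_a.upper()

def dependencies(fn):
    """Collect upstream steps referenced in a function's annotations."""
    hints = get_type_hints(fn, include_extras=True)
    deps = []
    for hint in hints.values():
        # Annotated[...] exposes its extra arguments via __metadata__.
        for meta in getattr(hint, "__metadata__", ()):
            if isinstance(meta, StepResult):
                deps.append(meta.step)
    return deps

print([d.__name__ for d in dependencies(step_b)])  # ['step_a']
```

A DAG builder only needs this per-function dependency list to topologically sort the steps, which is why no explicit ordering is required in pipeline code.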
Async steps
Steps can be async:
@pipeline.step
async def async_step(value: str) -> str:
    return f"processed: {value}"
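The orchestrator presumably awaits the coroutine at run time; outside the SDK, the undecorated function behaves like any coroutine and can be exercised directly with asyncio (a standalone sketch, no bridge_sdk needed):

```python
import asyncio

async def async_step(value: str) -> str:
    return f"processed: {value}"

# asyncio.run drives the coroutine to completion locally.
result = asyncio.run(async_step("data"))
print(result)  # processed: data
```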
Step configuration options
The @pipeline.step decorator accepts the following configuration options.
| Option | Description |
|---|---|
| name | Override the step name. Defaults to the function name. |
| description | Human-readable description of the step. |
| setup_script | Path to a shell script that runs before the step executes. |
| post_execution_script | Path to a shell script that runs after the step executes. |
| metadata | Custom key-value pairs, such as {"type": "agent"} to mark a step as an agent step. |
| credential_bindings | Map credential UUIDs to environment variable names. See Inject credentials. |
| sandbox_definition | Compute resource configuration for the step. See Configure sandbox resources. |
| eval_bindings | Attach evaluation functions to the step. See Define evaluations. |
@pipeline.step(
    name="process_tickets",
    description="Classify and route incoming support tickets",
    setup_script="setup.sh",
    metadata={"type": "agent"},
)
def process_tickets(ticket: TicketInput) -> TicketResult:
    ...
Inject credentials
Steps that need to access external systems, such as APIs, databases, or third-party services, use credential bindings to receive secrets securely at runtime. Credentials are stored and managed in the Poolside Console, not in your code.
When a step runs, orchestration injects the bound credentials as environment variables into that step’s isolated container. Credentials are:
- Scoped per step: Only the steps that declare a binding receive the credential. Other steps in the same pipeline do not have access.
- Never in code or logs: The credential UUID in your code is a reference, not the secret value. The actual secret is resolved at runtime and does not appear in step outputs or execution logs.
- Centrally managed: Rotate a credential in the Console and every future step run picks up the new value without code changes.
Use credential_bindings to map credential UUIDs (registered in the Console) to environment variable names:
@pipeline.step(
    credential_bindings={
        "a1b2c3d4-5678-90ab-cdef-1234567890ab": "API_KEY",
        "f0e1d2c3-b4a5-6789-0abc-def123456789": "DB_PASSWORD",
    },
)
def authenticated_step() -> str:
    import os
    api_key = os.environ["API_KEY"]
    return "done"
Register credentials in the Poolside Console before referencing their UUIDs in code. See Credentials.
Configure sandbox resources
Each step runs in its own isolated container. Use sandbox_definition to control the container image and compute resources for that step's execution environment.
This per-step isolation means:
- Custom images per step: A data-processing step can use python:3.11-slim with pandas and numpy pre-installed, while a code-generation step uses a different image with language-specific toolchains. Each step gets exactly the dependencies it needs.
- Resource boundaries: Set CPU, memory, and storage limits to prevent any single step from consuming unbounded resources. A runaway agent step cannot starve other steps of compute.
- Reproducible environments: The same sandbox definition produces the same execution environment every time. Combined with Git-backed pipeline versioning, this makes builds fully reproducible.
from bridge_sdk import SandboxDefinition

@pipeline.step(
    sandbox_definition=SandboxDefinition(
        image="python:3.11-slim",
        cpu_request="2",
        memory_request="4Gi",
        memory_limit="8Gi",
        storage_request="50Gi",
        storage_limit="100Gi",
    ),
)
def resource_intensive_step() -> str:
    return "done"
All SandboxDefinition fields are optional. If you do not specify a sandbox definition, the step uses default resources.
| Field | Description |
|---|---|
| image | Container image for the step environment. |
| cpu_request | Requested CPU cores. |
| memory_request | Requested memory (for example, 4Gi). |
| memory_limit | Maximum memory the step may use. |
| storage_request | Requested storage. |
| storage_limit | Maximum storage the step may use. |
Use setup_script and post_execution_script on the step decorator for additional environment preparation or cleanup that goes beyond what the container image provides. For example, install a specific package version or remove temporary files after execution.
Define evaluations
Evaluations measure step output quality. Define an eval function with the @bridge_eval decorator, then bind it to a step with eval_bindings.
from typing import Any, TypedDict

from bridge_sdk import bridge_eval, EvalResult, StepEvalContext

class QualityMetrics(TypedDict):
    accuracy: float
    followed_format: bool

@bridge_eval
def quality_check(ctx: StepEvalContext[Any, Any]) -> EvalResult[QualityMetrics]:
    is_correct = ctx.step_output.answer == ctx.step_input.expected
    return EvalResult(
        metrics={"accuracy": 1.0 if is_correct else 0.0, "followed_format": True},
        result="Evaluation complete",
    )
The StepEvalContext provides access to step_input and step_output for the step the eval is bound to. Eval results appear in the step run detail page in the Poolside Console.
Define webhook actions
Pipelines can respond to external events through webhook actions. Define a WebhookPipelineAction on the pipeline to filter incoming webhook payloads and map them to step inputs.
from bridge_sdk import Pipeline, WebhookPipelineAction

pipeline = Pipeline(
    name="on_issue_create",
    webhooks=[
        WebhookPipelineAction(
            name="linear-issues",
            branch="main",
            on='payload.type == "Issue" && payload.action == "create"',
            transform='{"process_issue": {"issue_id": payload.data.id, "title": payload.data.title}}',
            webhook_endpoint="linear_issues",
        ),
    ],
)
| Field | Description |
|---|---|
| name | Unique identifier for the action within the pipeline and branch. |
| branch | The Git branch where the webhook action is indexed. Determines which pipeline version executes. |
| on | A CEL expression that returns a boolean. The action triggers only when this expression evaluates to true. |
| transform | A CEL expression that maps the webhook payload to step input fields. The keys must match pipeline step input names. |
| webhook_endpoint | The name of the webhook endpoint configured in the Poolside Console. |
The on and transform expressions use CEL (Common Expression Language). Both expressions have access to payload (the parsed JSON request body) and headers (a map of HTTP header values).
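CEL itself is evaluated by the platform, but the effect of the two expressions is easy to preview. This plain-Python sketch mirrors what the on and transform expressions above would compute for a sample payload (the payload structure is assumed for illustration, not a real Linear event):

```python
# Hypothetical webhook payload, shaped to match the CEL example above.
payload = {
    "type": "Issue",
    "action": "create",
    "data": {"id": "abc-123", "title": "Fix login bug"},
}

# on: payload.type == "Issue" && payload.action == "create"
triggered = payload["type"] == "Issue" and payload["action"] == "create"

# transform: map payload fields to the process_issue step's input
step_inputs = {
    "process_issue": {
        "issue_id": payload["data"]["id"],
        "title": payload["data"]["title"],
    }
}

print(triggered)    # True
print(step_inputs)  # {'process_issue': {'issue_id': 'abc-123', 'title': 'Fix login bug'}}
```

If the on expression evaluates to false (for example, for an "update" action), the pipeline simply does not run for that event.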
To receive webhook events, configure the webhook endpoint in the Poolside Console. See Automation.
Project structure
Single module
my_project/
├── pyproject.toml
├── main.py
└── my_project/
├── __init__.py
└── steps.py
[tool.bridge]
modules = ["my_project.steps"]
Multiple modules
When pipelines grow, split steps across files.
Option A: List each module explicitly.
my_project/
├── pyproject.toml
├── main.py
└── my_project/
└── steps/
├── __init__.py
├── ingestion.py
└── transform.py
[tool.bridge]
modules = ["my_project.steps.ingestion", "my_project.steps.transform"]
Option B: Re-export from the package __init__.py.
# my_project/steps/__init__.py
from .ingestion import *
from .transform import *
[tool.bridge]
modules = ["my_project.steps"]
CLI reference
Use the Bridge CLI to validate your project and test steps locally.
| Command | Description |
|---|---|
| bridge check | Validate project setup, imports, and configuration. |
| bridge config get-dsl | Export pipeline, step, and eval definitions as JSON. Use --output-file to write to a file. |
| bridge run --step <step-name> | Execute a step locally. Use --input or --input-file for inputs and --results or --results-file for cached dependency results. |
| bridge eval run --eval <eval-name> | Execute an eval locally. Use --context or --context-file for the eval context. |
All commands accept --modules to override the module list from pyproject.toml.
uv run bridge check
uv run bridge config get-dsl
uv run bridge run --step clean_data \
  --input '{"raw": {"content": "  Hello World  "}}' \
  --results '{}'