Overview
Pipelines are code-defined, multi-step workflows for structured work that needs typed data flow, per-step isolation, evaluations, and an audit trail. For simpler, self-contained work, use an agent session task instead; see Workflows and triggers for how to choose. A pipeline is a directed acyclic graph (DAG) of steps defined in Python with the Bridge SDK and version-controlled in Git. Each step declares typed inputs and outputs, and dependencies between steps are inferred automatically from type annotations. You can run pipelines manually from the Poolside Console, on a schedule, or in response to external events.
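As orientation, here is a minimal sketch of what a definition might look like. The module path (`bridge`), the `pipeline` and `step` names, and their signatures are assumptions for illustration, not the documented Bridge SDK API; consult the SDK reference for the real interface.

```python
# Illustrative sketch only: the module path and decorator names are
# assumptions, not the documented Bridge SDK API.
from bridge import pipeline, step  # hypothetical import


@step
def fetch_records(source_url: str) -> list[dict]:
    """Pull raw records from an external source."""
    ...


@step
def summarize(records: list[dict]) -> str:
    """Runs after fetch_records: the dependency is inferred because this
    step's typed input matches that step's typed output."""
    ...


# The DAG (fetch_records -> summarize) comes from the type annotations;
# no explicit edge declarations are needed.
daily_report = pipeline(name="daily-report", steps=[fetch_records, summarize])
```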
Agent steps vs. programmatic steps
The distinction between agent steps and programmatic steps is central to how you design pipelines.

Agent steps start a session with a managed agent. The agent receives a prompt; uses the tools, MCP servers, repositories, skills, and credentials configured on it; reasons about the task; and produces a typed output. Use agent steps for work that requires judgment, such as analyzing a document, generating code, reviewing an issue, or making a decision based on unstructured data. The same managed agent definitions you use in interactive sessions and agent session tasks are available in pipeline agent steps.

Programmatic steps run deterministic Python code. Use programmatic steps for work that should produce the same output every time given the same input, such as parsing data, applying business rules, transforming formats, or validating results.

A well-designed pipeline typically alternates between agent steps and programmatic steps: an agent step does the reasoning, then a programmatic step validates or transforms the output before passing it to the next agent step.
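A sketch of that alternation, with the same caveat that `agent_step`, its parameters, and `TriageResult` are hypothetical names rather than documented SDK symbols:

```python
# Hypothetical names throughout: illustrates an agent step followed by a
# deterministic validation step, not the real Bridge SDK API.
from dataclasses import dataclass

from bridge import agent_step, step  # hypothetical import


@dataclass
class TriageResult:
    priority: str  # expected to be one of "p0".."p3"
    team: str


@agent_step(agent="issue-triager", prompt="Classify this issue: {issue_text}")
def triage(issue_text: str) -> TriageResult:
    """Agent step: a managed agent reads the issue and returns a typed result."""
    ...


@step
def validate_triage(result: TriageResult) -> TriageResult:
    """Programmatic step: same output every time for the same input."""
    if result.priority not in {"p0", "p1", "p2", "p3"}:
        raise ValueError(f"unexpected priority: {result.priority}")
    return result
```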
Step execution and isolation
Every step in a pipeline runs in its own dedicated, isolated container. Steps do not share a filesystem, process space, or network context with each other. This execution model provides security, reproducibility, and control at every layer:
- Per-step isolation: Each step runs in a separate container. A failing or compromised step cannot affect other steps in the pipeline. There is no shared state between steps other than the explicitly declared typed inputs and outputs.
- Configurable environments: Each step can specify its own container image, CPU, memory, and storage limits through a sandbox definition (sketched after this list). One step can run in a Python data-science image with 8 GB of memory while another uses a lightweight Node image with minimal resources.
- Secure credential injection: Steps access secrets through credential bindings, not hard-coded values. Credentials are stored in the Poolside Console, injected as environment variables at runtime, and scoped to the specific steps that declare them. Credentials never appear in code, logs, or step outputs.
- Runs in your infrastructure: Pipeline execution happens within your deployment. Data stays in your environment and does not egress to external systems unless a step explicitly makes an outbound call.
- Full audit trail: Every step run records what container image it used, what credentials it accessed, what inputs it received, what it produced, and how long it took. For agent steps, the complete reasoning trace is available in the Trajectory Viewer.
- Setup and cleanup scripts: Steps can run shell scripts before and after execution for environment preparation and cleanup, such as installing dependencies or removing temporary files.
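The sandbox, credential, and setup/cleanup concepts above might look something like this in code. All names here (`Sandbox`, `CredentialBinding`, the `step` parameters) are assumptions for illustration, not documented Bridge SDK symbols:

```python
# Hypothetical API: illustrates per-step sandbox configuration and
# credential injection, not the documented Bridge SDK symbols.
from bridge import CredentialBinding, Sandbox, step  # hypothetical import

etl_sandbox = Sandbox(
    image="python:3.12-slim",        # per-step container image
    cpu=2,
    memory_gb=8,
    setup="pip install pandas",      # runs before the step
    cleanup="rm -rf /tmp/workdir",   # runs after the step
)


@step(
    sandbox=etl_sandbox,
    credentials=[
        # Injected as an environment variable at runtime; the secret value
        # lives in the Poolside Console, never in code, logs, or outputs.
        CredentialBinding(name="warehouse-token", env_var="WAREHOUSE_TOKEN"),
    ],
)
def load_to_warehouse(rows: list[dict]) -> int:
    import os

    token = os.environ["WAREHOUSE_TOKEN"]  # available only inside this step
    ...  # use the token to write rows to the warehouse
    return len(rows)
```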
Define a pipeline
Define pipelines and steps in Python using the Bridge SDK. The SDK covers:
- Pipeline and step definitions with typed inputs and outputs
- Step dependencies with automatic DAG construction
- Sandbox configuration for per-step compute environments
- Credential injection for secure access to external systems
- Evaluations for measuring step output quality
- Webhook actions for event-triggered execution
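Most of these are sketched elsewhere on this page; evaluations are not, so here is a hypothetical shape. The `evaluation` decorator and its scoring convention are assumptions, not documented SDK symbols:

```python
# Hypothetical API: sketches attaching an evaluation to a step's output.
from bridge import evaluation  # hypothetical import


@evaluation(step="triage")
def priority_is_valid(output: TriageResult) -> float:
    """Scores a step run's output; the score is recorded with the run.
    TriageResult is the type from the agent-step sketch above.
    Here: 1.0 for a recognized priority, 0.0 otherwise."""
    return 1.0 if output.priority in {"p0", "p1", "p2", "p3"} else 0.0
```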
Run a pipeline from the Poolside Console
Prerequisites
- You can access Orchestration > Pipelines in the Poolside Console.
- A repository is connected and at least one branch or commit is indexed. Pipelines appear after indexing succeeds.
- The pipeline you want to run appears for the selected repository and commit.
Steps
1. In the Poolside Console, navigate to Orchestration > Pipelines.
2. Select a repository.
3. Select a pipeline.
4. Select the branch and indexed commit you want to run.
5. Review the pipeline graph. Each node represents a step, and edges represent dependencies between steps.
6. Select the steps you want to run:
   - To run every step in the pipeline, click Execute Pipeline.
   - To run specific steps, select them in the graph. Orchestration includes any upstream dependencies that the selected steps require.
7. In the Execute tab, review the selected steps and their input fields.
8. Enter required inputs in the generated form. Input fields are determined by the step’s typed parameters defined in the Bridge SDK. If a step accepts a complex object, enter it as JSON (see the example after these steps).
9. If a selected step depends on an upstream step that you did not select, choose a completed step run from a previous build to use as the cached dependency input. This lets you reuse prior results without re-running upstream steps.
10. Click Execute.
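For step 8, as a hypothetical example of a complex input: if a step's parameter were typed as a small dataclass, the generated form would expect the matching JSON object.

```python
from dataclasses import dataclass


# Hypothetical parameter type for a step that accepts a complex object.
@dataclass
class ScanConfig:
    repo: str
    max_files: int


# In the Execute form, the corresponding field would take matching JSON:
#   {"repo": "acme/api", "max_files": 500}
```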
Run a pipeline on a schedule
Use schedules to run pipelines at regular intervals without manual intervention. A schedule triggers one or more tasks on a cron interval. Each task starts an agent session with a configured agent, sandbox, and prompt. Examples of scheduled pipelines:
- Nightly data quality checks: Run a validation pipeline every night that scans ingested data for anomalies and posts a summary to Slack.
- Weekly report generation: Generate customer-facing reports every Monday morning from the latest data.
- Daily compliance scans: Check code repositories or configuration files against compliance rules on a recurring basis.
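The cron interval for schedules like these would use standard five-field cron syntax; the specific times below are illustrative, not defaults:

```
0 2 * * *    # nightly data quality checks at 02:00
0 7 * * 1    # weekly report generation, Mondays at 07:00
0 6 * * *    # daily compliance scans at 06:00
```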
Run a pipeline from a webhook
Use webhooks to trigger pipelines in response to events from external systems. Any service that can send an HTTP POST request with a signed payload can trigger a pipeline. You define which events to act on and how to map the event payload to step inputs using CEL expressions in the Bridge SDK (a sketch follows this list). Examples of webhook-triggered pipelines:
- Code review on pull request: A GitHub webhook fires when a pull request is opened. The pipeline analyzes the diff, checks for security issues, verifies test coverage, and posts review comments back to the PR.
- Issue triage: A Linear or Jira webhook fires when a new issue is created. The pipeline classifies the issue, assigns a priority based on historical patterns, routes it to the right team, and adds initial analysis as a comment.
- Incident investigation: A PagerDuty or Datadog webhook fires when an alert triggers. The pipeline pulls recent logs, correlates related events, identifies probable root causes, and drafts an incident summary for the on-call engineer.
- Content review: A Slack webhook fires when a message is posted in a monitored channel. The pipeline reviews the content against internal policies and flags anything that needs human review.
- Customer support: A Zendesk or Intercom webhook fires when a new ticket arrives. The pipeline researches the customer’s history, checks the knowledge base, and drafts a response for the support agent.
- Data pipeline trigger: A custom webhook fires when new files land in a storage bucket. The pipeline ingests, validates, and transforms the data before loading it into the target system.
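As a sketch of the event filter and payload mapping for the pull-request example: the `WebhookAction` name and its parameters are assumptions, but the strings themselves are ordinary CEL field accesses against a GitHub-style payload.

```python
# Hypothetical shape of a webhook action. Names are assumptions; the
# mapping strings are standard CEL expressions over the event payload.
from bridge import WebhookAction  # hypothetical import

review_on_pr = WebhookAction(
    pipeline="pr-review",
    # Only act on pull_request "opened" events.
    filter='payload.action == "opened"',
    # Map fields from the webhook payload to the first step's typed inputs.
    inputs={
        "repo": "payload.repository.full_name",
        "pr_number": "payload.pull_request.number",
        "diff_url": "payload.pull_request.diff_url",
    },
)
```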
Review builds
After a pipeline runs, regardless of how it was triggered, open it from Orchestration > Pipelines to review the build:
- Waterfall view: Shows step runs in execution order with timing information.
- Graph view: Shows the dependency graph with status indicators for each step.