

BETA This feature is in beta and may change before general availability.

Overview

Pipelines are code-defined, multi-step workflows for structured work that needs typed data flow, per-step isolation, evaluations, and an audit trail. For simpler, self-contained work, use an agent session task instead. See Workflows and triggers for how to choose.

A pipeline is a directed acyclic graph (DAG) of steps defined in Python with the Bridge SDK and version-controlled in Git. Each step declares typed inputs and outputs, and dependencies between steps are inferred automatically from type annotations. You can run pipelines manually from the Poolside Console, on a schedule, or in response to external events.

Agent steps vs. programmatic steps

The distinction between agent steps and programmatic steps is central to how you design pipelines.

Agent steps start a session with a managed agent. The agent receives a prompt, uses the tools, MCP servers, repositories, skills, and credentials configured on it, reasons about the task, and produces a typed output. Use agent steps for work that requires judgment, such as analyzing a document, generating code, reviewing an issue, or making a decision based on unstructured data. The same managed agent definitions you use in interactive sessions and agent session tasks are available in pipeline agent steps.

Programmatic steps run deterministic Python code. Use programmatic steps for work that should produce the same output every time given the same input, such as parsing data, applying business rules, transforming formats, or validating results.

A well-designed pipeline typically alternates between the two: an agent step does the reasoning, then a programmatic step validates or transforms the output before passing it to the next agent step.
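To make that alternation concrete, here is a minimal plain-Python sketch: a hypothetical typed output from an agent step, followed by a deterministic validation step. The type and function names are invented for this example; the actual Bridge SDK decorators and step definitions are not shown on this page.

```python
from dataclasses import dataclass


# Hypothetical output type an agent step might produce
# (illustrative names only, not the Bridge SDK's actual API).
@dataclass
class IssueAnalysis:
    summary: str
    priority: str  # expected: "low", "medium", or "high"


def validate_analysis(analysis: IssueAnalysis) -> IssueAnalysis:
    """Programmatic step: deterministically validate the agent's
    output before it flows to the next agent step."""
    if analysis.priority not in {"low", "medium", "high"}:
        raise ValueError(f"unexpected priority: {analysis.priority!r}")
    if not analysis.summary.strip():
        raise ValueError("summary must not be empty")
    return analysis
```

Because the validation is ordinary code, it produces the same result for the same input on every run, which is exactly the property you want between two non-deterministic agent steps.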

Step execution and isolation

Every step in a pipeline runs in its own dedicated, isolated container. Steps do not share a filesystem, process space, or network context with each other. This execution model provides security, reproducibility, and control at every layer:
  • Per-step isolation: Each step runs in a separate container. A failing or compromised step cannot affect other steps in the pipeline. There is no shared state between steps other than the explicitly declared typed inputs and outputs.
  • Configurable environments: Each step can specify its own container image, CPU, memory, and storage limits through a sandbox definition. One step can run in a Python data-science image with 8 GB of memory while another uses a lightweight Node image with minimal resources.
  • Secure credential injection: Steps access secrets through credential bindings, not hard-coded values. Credentials are stored in the Poolside Console, injected as environment variables at runtime, and scoped to the specific steps that declare them. Credentials never appear in code, logs, or step outputs.
  • Runs in your infrastructure: Pipeline execution happens within your deployment. Data stays in your environment and does not egress to external systems unless a step explicitly makes an outbound call.
  • Full audit trail: Every step run records what container image it used, what credentials it accessed, what inputs it received, what it produced, and how long it took. For agent steps, the complete reasoning trace is available in the Trajectory Viewer.
  • Setup and cleanup scripts: Steps can run shell scripts before and after execution for environment preparation and cleanup, such as installing dependencies or removing temporary files.
This model means you can give an agent step access to a production database credential without exposing that credential to the rest of the pipeline. You can run experimental or third-party logic in a resource-limited sandbox without risk to other steps. And you can audit exactly what happened in every execution after the fact.
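The credential-injection model can be sketched in plain Python: the step reads the secret from its environment at runtime instead of hard-coding it. The environment-variable name below is an invented example; the real name comes from the credential binding you configure in the Poolside Console.

```python
import os


def query_production_db() -> str:
    """Programmatic step that uses an injected credential.

    "PROD_DB_PASSWORD" is a hypothetical binding name; at runtime
    the platform injects the bound secret as an environment
    variable scoped to this step only.
    """
    password = os.environ.get("PROD_DB_PASSWORD")
    if password is None:
        raise RuntimeError("credential not injected for this step")
    # Connect using the credential; never log it or include it
    # in the step's typed output.
    return "connected"
```

A step that does not declare the binding never sees the variable, which is what keeps the credential scoped away from the rest of the pipeline.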

Define a pipeline

Define pipelines and steps in Python using the Bridge SDK.
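This page does not reproduce the SDK's API, but the core idea of inferring the DAG from type annotations can be sketched with plain-Python stand-ins: each step is a function, and an edge exists wherever one step's return type matches another step's parameter type.

```python
import inspect
from dataclasses import dataclass


# Illustrative stand-in types and steps (not the Bridge SDK API).
@dataclass
class RawDoc:
    text: str


@dataclass
class Summary:
    text: str


def fetch(url: str) -> RawDoc:          # produces RawDoc
    ...


def summarize(doc: RawDoc) -> Summary:  # consumes RawDoc, produces Summary
    ...


def infer_edges(steps) -> list:
    """Mirror the SDK's behavior conceptually: infer dependencies by
    matching each step's parameter annotations against other steps'
    return-type annotations."""
    produced_by = {
        inspect.signature(s).return_annotation: s.__name__ for s in steps
    }
    edges = []
    for step in steps:
        for param in inspect.signature(step).parameters.values():
            if param.annotation in produced_by:
                edges.append((produced_by[param.annotation], step.__name__))
    return edges
```

Here `infer_edges([fetch, summarize])` yields a single `fetch → summarize` edge, because `summarize` declares a `RawDoc` parameter and `fetch` returns one. The `url: str` parameter matches no step output, so it surfaces as an external input instead.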

Run a pipeline from the Poolside Console

Prerequisites
  • You can access Orchestration > Pipelines in the Poolside Console.
  • A repository is connected and at least one branch or commit is indexed. Pipelines appear after indexing succeeds.
  • The pipeline you want to run appears for the selected repository and commit.
Steps
  1. In the Poolside Console, navigate to Orchestration > Pipelines.
  2. Select a repository.
  3. Select a pipeline.
  4. Select the branch and indexed commit you want to run.
  5. Review the pipeline graph. Each node represents a step, and edges represent dependencies between steps.
  6. Select the steps you want to run:
    • To run every step in the pipeline, click Execute Pipeline.
    • To run specific steps, select them in the graph. Orchestration includes any upstream dependencies that the selected steps require.
  7. In the Execute tab, review the selected steps and their input fields.
  8. Enter required inputs in the generated form. Input fields are determined by the step’s typed parameters defined in the Bridge SDK. If a step accepts a complex object, enter it as JSON.
  9. If a selected step depends on an upstream step that you did not select, choose a completed step run from a previous build to use as the cached dependency input. This lets you reuse prior results without re-running upstream steps.
  10. Click Execute.
Orchestration automatically fills input fields that come from selected upstream steps. You only enter values for steps that have no selected upstream dependency or that accept external inputs.

Run a pipeline on a schedule

Use schedules to run pipelines at regular intervals without manual intervention. A schedule triggers one or more tasks on a cron interval, and each task starts an agent session with a configured agent, sandbox, and prompt. Examples of scheduled pipelines:
  • Nightly data quality checks: Run a validation pipeline every night that scans ingested data for anomalies and posts a summary to Slack.
  • Weekly report generation: Generate customer-facing reports every Monday morning from the latest data.
  • Daily compliance scans: Check code repositories or configuration files against compliance rules on a recurring basis.
To set up a schedule, create a task and attach it to a schedule in the Poolside Console. See Create a task and Create a schedule.
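For reference, standard five-field cron expressions (minute, hour, day of month, month, day of week) matching the examples above might look like the following; the exact syntax accepted depends on the scheduler.

```
0 2 * * *      nightly at 02:00
0 8 * * MON    every Monday at 08:00
0 6 * * *      daily at 06:00
```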

Run a pipeline from a webhook

Use webhooks to trigger pipelines in response to events from external systems. Any service that can send an HTTP POST request with a signed payload can trigger a pipeline. You define which events to act on and how to map the event payload to step inputs using CEL expressions in the Bridge SDK. Examples of webhook-triggered pipelines:
  • Code review on pull request: A GitHub webhook fires when a pull request is opened. The pipeline analyzes the diff, checks for security issues, verifies test coverage, and posts review comments back to the PR.
  • Issue triage: A Linear or Jira webhook fires when a new issue is created. The pipeline classifies the issue, assigns a priority based on historical patterns, routes it to the right team, and adds initial analysis as a comment.
  • Incident investigation: A PagerDuty or Datadog webhook fires when an alert triggers. The pipeline pulls recent logs, correlates related events, identifies probable root causes, and drafts an incident summary for the on-call engineer.
  • Content review: A Slack webhook fires when a message is posted in a monitored channel. The pipeline reviews the content against internal policies and flags anything that needs human review.
  • Customer support: A Zendesk or Intercom webhook fires when a new ticket arrives. The pipeline researches the customer’s history, checks the knowledge base, and drafts a response for the support agent.
  • Data pipeline trigger: A custom webhook fires when new files land in a storage bucket. The pipeline ingests, validates, and transforms the data before loading it into the target system.
Webhook setup has two parts: define the webhook action in code with the Bridge SDK, then configure the webhook endpoint in the Poolside Console. See Define webhook actions for the code side and Set up a webhook endpoint for the Poolside Console side.
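To make the payload-to-input mapping concrete, the sketch below extracts fields from a GitHub pull_request webhook payload in plain Python. In the Bridge SDK the equivalent mapping is declared with CEL expressions (conceptually similar paths, e.g. `pull_request.number`), whose exact syntax this page does not show; the input names on the left are invented for this example.

```python
def map_pull_request_event(payload: dict) -> dict:
    """Map a GitHub pull_request webhook payload to step inputs.

    Field paths follow GitHub's webhook schema; the step-input
    names are hypothetical.
    """
    return {
        "action": payload["action"],                      # e.g. "opened"
        "pr_number": payload["pull_request"]["number"],
        "repo": payload["repository"]["full_name"],
        "diff_url": payload["pull_request"]["diff_url"],
    }
```

The same pattern applies to any of the sources above: pick the events you care about, then express each step input as a path into the event payload.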

Review builds

After a pipeline runs, regardless of how it was triggered, open it from Orchestration > Builds to track progress.
  • Waterfall view: Shows step runs in execution order with timing information.
  • Graph view: Shows the dependency graph with status indicators for each step.
Dashed nodes in the build graph represent cached dependency runs. These are step runs from other builds whose outputs are being reused as inputs for the current build. From a build, you can drill into any individual step run to see its result, evaluation metrics, sandbox configuration, credential bindings, and the Git commit that defined the step. For agent steps, open the linked session in the Trajectory Viewer to inspect the full reasoning trace, including tool calls, intermediate outputs, and decisions.