

Module-level functions (ci.profile(), ci.load(), etc.) delegate to a process-wide default Cirron instance. Instantiate the class directly for self-hosted endpoints, multi-workspace setups, custom spool directories, or test harnesses.

The Cirron class

from cirron import Cirron

c = Cirron(
    api_key=None,                                 # or env CIRRON_API_KEY
    api_endpoint="https://app.cirron.com",        # self-hosted? point here
    workspace_id=None,                            # or env CIRRON_WORKSPACE_ID
    output_dir="./.cirron/",                      # local spool + snapshots root
    snapshots="stats",                            # "stats" | "sampled" | "full"
    sample_rate=0.01,
    flush_interval=1.0,
    spool_max_bytes=1_000_000_000,                # 1 GB default spool cap
    load_warn_bytes=1_000_000_000,                # 1 GB ci.load() warn threshold
    load_max_bytes=10_000_000_000,                # 10 GB ci.load() error threshold
)

c.profile()
df = c.load("training-data")

Every module-level function has a matching method on the class: profile, scope, mark, epochs, batches, load, env, secret, inference, wrap.

Multi-workspace / multi-endpoint

Two independent instances can coexist in one process. Use this for self-hosted + cloud comparisons, or for driving traces into two workspaces from the same script.

prod = Cirron(api_endpoint="https://app.cirron.com")
gov  = Cirron(api_endpoint="https://cirron.internal.mil", output_dir="./.cirron-gov/")

prod.profile()
gov.profile()

Config resolution order

Every config value resolves in this order, first match wins:
  1. Explicit constructor argument (or explicit function argument)
  2. CIRRON_* environment variables (CIRRON_API_KEY, CIRRON_API_ENDPOINT, CIRRON_WORKSPACE_ID, …)
  3. ~/.cirron/config.toml: written by cirron login (planned) or by hand
  4. SDK defaults

~/.cirron/config.toml is a plain TOML file:
api_key = "..."
api_endpoint = "https://app.cirron.com"
workspace_id = "ws_..."
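The precedence rules above can be sketched as a small resolver. The `resolve` helper and its parameters are illustrative, not part of the SDK; `file_cfg` stands in for a parsed ~/.cirron/config.toml.

```python
import os

def resolve(name, explicit=None, env_var=None, file_cfg=None, default=None):
    """First match wins: explicit argument > CIRRON_* env var >
    ~/.cirron/config.toml > SDK default. Illustrative helper only."""
    if explicit is not None:
        return explicit
    if env_var and os.environ.get(env_var) is not None:
        return os.environ[env_var]
    if file_cfg and file_cfg.get(name) is not None:
        return file_cfg[name]
    return default

# An env var overrides the config file, but not an explicit argument:
os.environ["CIRRON_API_ENDPOINT"] = "https://cirron.internal.mil"
endpoint = resolve(
    "api_endpoint",
    env_var="CIRRON_API_ENDPOINT",
    file_cfg={"api_endpoint": "https://app.cirron.com"},
    default="https://app.cirron.com",
)
# endpoint == "https://cirron.internal.mil"
```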

cirron.yaml project config

cirron.yaml is the project-level configuration file, shared across the Cirron CLI, the SDK, and the Cirron platform. It lives at the project root and is the canonical format. If you prefer JSON, cirron.json is also accepted: the CLI, SDK, and platform all resolve cirron.yaml → cirron.yml → cirron.json in that order, picking the first one they find. The schema is identical across formats; YAML is the default in new projects, and JSON is fine if that is your preference.

The full schema covers build, deploy, and serving metadata used by the CLI and platform; see the CLI docs on cirron.yaml for the full surface. The SDK itself reads a narrower subset:
Section        | Used by the SDK for
name, version  | Identity: stamped onto traces, shown in the dashboard
framework      | One of pytorch, tensorflow, sklearn, onnx; narrows hook autodetect
type           | One of classification, regression, time-series, embedding, computer-vision
profiling      | Defaults for ci.profile(): snapshot mode, sample rate, flush interval
servingConfig  | Runtime, input/output JSON schemas, class labels, feature order
env            | Environment variables merged into the container at build time
secrets        | Secret names the project declares (platform validates they’re configured)
data           | Dataset registrations: aliases ci.load() resolves when source="platform"
Extra fields (for example the CLI’s build, deploy, environments sections) are tolerated without error. The SDK’s Pydantic model is configured with extra="allow" so the same cirron.yaml feeds all three tools.

Example

name: sentiment-classifier
version: 1.0.0
framework: pytorch
type: classification
description: BERT-based sentiment analysis

profiling:
  snapshots: stats
  sample_rate: 0.01
  flush_interval: 1.0

servingConfig:
  runtime: onnx
  class_labels: [negative, neutral, positive]
  feature_order: [text]
  input_schema:
    type: object
    properties:
      text:
        type: string
    required: [text]
  output_schema:
    type: object
    properties:
      label:
        type: string
      score:
        type: number

env:
  MODEL_PATH: /models/sentiment-v2

secrets:
  - openai-api-key

data:
  training: training-data-v2
  validation: validation-data-v2

Defaults when sections are omitted

  • profiling absent → SDK defaults (snapshots="stats", sample_rate=0.01, flush_interval=1.0).
  • servingConfig absent → no platform-side serving contract; the deployment falls back to the runtime’s generic request handler.
  • env / secrets / data absent → empty dict / list (the SDK treats them as “nothing declared” rather than an error).

description is optional; name, version, framework, and type are required at the top level.
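The defaulting rules above can be sketched with the JSON variant of the config (cirron.json shares the schema, and JSON parsing needs only the standard library). The `load_project_config` helper is hypothetical, not an SDK function.

```python
import json

# Documented SDK defaults applied when the profiling section is omitted.
PROFILING_DEFAULTS = {"snapshots": "stats", "sample_rate": 0.01, "flush_interval": 1.0}

def load_project_config(text):
    """Hypothetical loader: validate required fields, fill omitted sections."""
    cfg = json.loads(text)
    for field in ("name", "version", "framework", "type"):
        if field not in cfg:
            raise ValueError(f"cirron config: missing required field {field!r}")
    # Omitted profiling -> SDK defaults; omitted env/secrets/data -> empty.
    cfg["profiling"] = {**PROFILING_DEFAULTS, **cfg.get("profiling", {})}
    cfg.setdefault("env", {})
    cfg.setdefault("secrets", [])
    cfg.setdefault("data", {})
    return cfg

cfg = load_project_config(
    '{"name": "sentiment-classifier", "version": "1.0.0",'
    ' "framework": "pytorch", "type": "classification"}'
)
# cfg["profiling"]["snapshots"] == "stats"; cfg["secrets"] == []
```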

Aliasing

The SDK accepts both servingConfig (camelCase, matches the CLI / platform convention) and serving_config (snake_case, matches Pydantic / Python convention). They’re interchangeable in YAML.
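The effect of the alias is that both spellings normalize to the same key. A minimal sketch of that behavior (the SDK does this via Pydantic field aliases; the `normalize_keys` helper here is illustrative):

```python
# Hypothetical normalizer: map the camelCase alias onto the snake_case field.
ALIASES = {"servingConfig": "serving_config"}

def normalize_keys(cfg):
    return {ALIASES.get(k, k): v for k, v in cfg.items()}

a = normalize_keys({"servingConfig": {"runtime": "onnx"}})
b = normalize_keys({"serving_config": {"runtime": "onnx"}})
# a == b: either spelling yields the same parsed config
```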

Environment Variables

def env(key: str, default: Any = None) -> Any: ...
A thin convenience over os.environ with .env file support and JSON auto-parsing.
api_base = ci.env("API_BASE_URL", default="https://api.example.com")
debug    = ci.env("DEBUG", default=False)

# JSON auto-parse: values starting with `{` or `[` are parsed
config   = ci.env("CONFIG")   # returns dict if CONFIG='{"threshold": 0.5}'
  • .env loading: on first call, ci.env() loads a .env file from the current working directory via python-dotenv if installed. If not installed, .env loading is skipped silently; os.environ is read directly.
  • JSON auto-parsing: values starting with { or [ are parsed as JSON. Scalars (numbers, "true", "false") stay as strings; users cast them. This avoids surprises like "123" becoming an int you didn’t expect.
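The auto-parsing rule can be sketched in a few lines. This is a stdlib-only approximation of the documented behavior (no .env loading), not the SDK's implementation:

```python
import json
import os

def env(key, default=None):
    """Sketch: read os.environ; auto-parse values starting with '{' or '['
    as JSON; leave scalars (numbers, "true", "false") as strings."""
    raw = os.environ.get(key)
    if raw is None:
        return default
    if raw.lstrip()[:1] in ("{", "["):
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return raw  # looked like JSON but wasn't; return verbatim
    return raw

os.environ["CONFIG"] = '{"threshold": 0.5}'
os.environ["PORT"] = "123"
config = env("CONFIG")  # parsed to a dict
port = env("PORT")      # stays "123" (a string); cast it yourself
```

Because only `{`/`[` prefixes trigger parsing, `"true"` and `"123"` come back unchanged, matching the "users cast them" rule above.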

Platform context

When running inside a Cirron pipeline or deployment, the runner injects these automatically:
Variable              | Purpose
CIRRON_RUN_ID         | Run this process belongs to
CIRRON_PIPELINE_ID    | Pipeline this run executed as (if any)
CIRRON_DEPLOYMENT_ID  | Deployment this process is part of (if any)
CIRRON_WORKSPACE_ID   | Owning workspace
ci.env() reads them like any other env var. ci.profile() reads them internally to set span attribution and to pick the transport.

Secrets

def secret(name: str) -> str: ...
Reads platform-mounted secrets. Resolution order, first match wins:
  1. Env var: CIRRON_SECRET_<NAME>, where hyphens in the name map to underscores and the result is uppercased to form the suffix
  2. File mount: /etc/cirron/secrets/<name> (trailing newline stripped). Used in air-gapped environments where env vars aren’t the injection mechanism.
  3. CirronSecretNotFound: raised if neither is present, with a message pointing at the platform’s secrets UI.
api_key = ci.secret("openai-api-key")   # reads CIRRON_SECRET_OPENAI_API_KEY
Secrets are never logged, never included in traces, never flushed to the spool.
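The resolution order can be sketched as follows. This is an illustrative stdlib-only approximation; the exception class and mount path mirror the documentation above, but the function is not the SDK's implementation.

```python
import os

class CirronSecretNotFound(Exception):
    pass

def secret(name, mount_dir="/etc/cirron/secrets"):
    """Sketch: env var first, file mount second, error otherwise."""
    # 1. CIRRON_SECRET_<NAME>: hyphens -> underscores, uppercased suffix
    env_key = "CIRRON_SECRET_" + name.replace("-", "_").upper()
    if env_key in os.environ:
        return os.environ[env_key]
    # 2. File mount, trailing newline stripped
    path = os.path.join(mount_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().rstrip("\n")
    # 3. Neither present
    raise CirronSecretNotFound(
        f"secret {name!r} not found: set {env_key} or mount {path} "
        "(configure it in the platform's secrets UI)"
    )

os.environ["CIRRON_SECRET_OPENAI_API_KEY"] = "sk-test"
key = secret("openai-api-key")  # resolves via the env var
```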

Error types

The top-level cirron package exposes the SDK-wide error types:
from cirron import (
    CirronError,              # base class
    CirronDependencyError,    # optional extra not installed
    CirronSecretNotFound,     # ci.secret, neither env var nor file mount
    CirronYamlError,          # cirron.yaml parse / validation failure
)
The data-loader errors live in cirron.core.errors (they aren’t re-exported from the top level yet):
from cirron.core.errors import (
    CirronDatasetNotFound,    # source="platform", name doesn't resolve
    CirronPlatformRequired,   # source="platform", credentials / network unavailable
    CirronDataSizeError,      # ci.load, matched bytes ≥ load_max_bytes
)
See Errors for the full hierarchy and example handlers.

Next

Cirron class reference

Full constructor and method signatures.

Schemas

The spool JSON layout and safetensors snapshot layout.