
Documentation Index

Fetch the complete documentation index at: https://docs.cirron.com/llms.txt

Use this file to discover all available pages before exploring further.

Cirron SDK

The Cirron SDK is the Python-side profiler and data loader for the Cirron platform. It attaches to your training or serving process and records what’s happening inside it: per-epoch and per-batch timing, weight and gradient statistics, DataLoader stalls, GPU utilization, and cost attribution. It is not a model framework, a tracking dashboard, or a registration client. It is a profiler, plus a thin unified data loader.

Standalone-usable, platform-amplified

The SDK works on a disconnected laptop, in an air-gapped cluster, or connected to the Cirron platform. In all three modes it produces the same artifacts in the same open formats. The relationship to the platform is the same as Git's to GitHub: Git works without GitHub, the repo is a portable local artifact, and nobody calls that a lock-in play.
  • Local (SDK alone): inspect + export. ci.profile() with no credentials writes structured JSON span records and safetensors snapshots to ./.cirron/. No proprietary format. Downstream tools and the platform ingestion worker both consume this format.
  • Connected (SDK + platform): visualize + analyze + collaborate + attribute cost. The platform stores runs, aggregates across them, diffs them epoch-over-epoch, attributes dollar cost from the instance type it already knows about, streams traces live to the dashboard, and gates access by team.
If you stop using Cirron, the ./.cirron/ directory is yours. It’s documented, versioned, and already compatible with any analytics or observability tool that reads Parquet or OpenTelemetry.
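Because the local spool is plain JSON, it can be consumed with nothing but the standard library. A minimal sketch, assuming (hypothetically) that each span record is one JSON object per line with name, start, and end fields in seconds; the function names here are illustrative, not part of the SDK:

```python
import json
from pathlib import Path

def load_spans(spool_dir="./.cirron"):
    """Read every JSON span record in the local spool directory.

    Assumes (hypothetically) one JSON object per line with
    'name', 'start', and 'end' fields in seconds.
    """
    spans = []
    for path in Path(spool_dir).glob("*.jsonl"):
        with path.open() as fh:
            for line in fh:
                spans.append(json.loads(line))
    return spans

def total_duration(spans, name):
    """Sum wall time across all spans with a given name."""
    return sum(s["end"] - s["start"] for s in spans if s["name"] == name)

# Demonstrate on an inline record rather than a real spool directory:
record = json.loads('{"name": "batch", "start": 1.0, "end": 1.25}')
print(total_duration([record], "batch"))  # 0.25
```

The same records feed any downstream tool that reads the exported formats; no SDK import is required to get at your own data.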

The wedge

You’re 10 epochs into a training run. Loss spikes. Throughput halves. You want to know why, and you want to know it against every other run you’ve done.
import cirron as ci

ci.profile()  # attaches to the process, detects torch, installs hooks

for epoch in range(20):
    for batch in loader:          # DataLoader iteration → batch scopes, automatic
        loss = train_step(batch)  # forward / backward / optimizer_step → scopes, automatic
        ci.mark("loss", loss.item())
One line of setup. No scope wrapping, no callbacks, no manual instrumentation. With no other changes you now get wall time, GPU seconds, memory peak, per-layer weight and gradient statistics, and DataLoader stall time. When connected to the platform, you also get dollar cost and epoch-over-epoch diffs against prior runs of the same pipeline.
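One plausible way an attach-once profiler can scope DataLoader iteration without user changes is to wrap the loader's iterator and time each next() call. A stdlib sketch of that idea; the names and mechanism are illustrative, not Cirron's actual implementation:

```python
import time

def instrument_batches(loader, record):
    """Wrap any iterable so each item yielded becomes a timed scope.

    `record` is a callback receiving (scope_name, seconds). A real
    profiler would install this transparently via framework hooks;
    here the wrapping is explicit.
    """
    def generator():
        it = iter(loader)
        while True:
            t0 = time.perf_counter()
            try:
                batch = next(it)  # time spent here is DataLoader stall time
            except StopIteration:
                return
            record("dataloader.next", time.perf_counter() - t0)
            yield batch
    return generator()

timings = []
for batch in instrument_batches([1, 2, 3], lambda n, s: timings.append((n, s))):
    pass
print(len(timings))  # one timing per batch: 3
```

The design choice worth noting: timing the next() call isolates data-loading stall from compute, which is exactly the split that makes "throughput halved" diagnosable.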

What ships today

  • ci.profile(): config resolution, framework autodetection, flush thread, cirron.session root scope
  • ci.scope / ci.mark: lock-free thread-local scope stack + mark buffer, kind="point" | "summary"
  • ci.epochs / ci.batches: loop wrappers
  • Framework hooks: PyTorch, TensorFlow / Keras, HuggingFace transformers, and opt-in scikit-learn via ci.wrap()
  • Snapshots: snapshots="stats" | "sampled" | "full" with safetensors blob writes
  • @ci.inference: sync and async, per-request ContextVar isolation, OpenAI / HF LLM detectors (TTFT, throughput, token counts)
  • ci.load(): local-first dispatcher, scheme routing for s3:// / gs:// / azure:// / file://, SQL sources for postgres:// / mysql:// / databricks:// / snowflake://, where= pushdown, match= / ext= / columns= / map=, multi-source concat, lazy=True, five as_= return types (pandas, polars, iter, tensor, hf)
  • ci.env / ci.secret / the Cirron configuration class
  • ci.deps: in-process extras check. Reports installed versions, or raises CirronDependencyError listing every missing dep with a combined pip install command
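The scheme routing that ci.load performs can be pictured as a small dispatcher keyed on the URL scheme. A hedged sketch using only urllib.parse; the handler registry and its entries are invented for illustration, and bare paths defaulting to file:// is an assumption about the local-first behavior, not a documented guarantee:

```python
from urllib.parse import urlparse

# Hypothetical handler registry: scheme -> loader function.
HANDLERS = {
    "file": lambda url: f"local read: {url}",
    "s3": lambda url: f"object store read: {url}",
    "postgres": lambda url: f"SQL source: {url}",
}

def dispatch(url):
    """Route a source URL to its handler by scheme.

    Bare paths are treated as file:// (an assumed default,
    matching a local-first dispatcher).
    """
    scheme = urlparse(url).scheme or "file"
    try:
        handler = HANDLERS[scheme]
    except KeyError:
        raise ValueError(f"no loader registered for scheme {scheme!r}")
    return handler(url)

print(dispatch("s3://bucket/train.parquet"))
print(dispatch("data/train.parquet"))  # falls back to the file handler
```

Multi-source concat and the where=/columns= pushdown described above would layer on top of this routing step, after each source resolves to a handler.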

Start here

Installation

Install the core package and the extras you need.

Quickstart

Three 5-minute paths: zero-touch training, custom loop, inference.

Core concepts

Scope tree, marks, transport, and the local-first spool.