1. Zero-touch training

On its own, ci.profile() auto-detects Keras or the HuggingFace Trainer and produces the full scope tree with no changes to your training loop.

import cirron as ci
import tensorflow as tf

ci.profile()

model = tf.keras.Sequential([...])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20)   # epoch / batch scopes + metric marks, automatic

When running disconnected, traces land in ./.cirron/spool/*.json. When CIRRON_API_KEY is set (or you’re running inside a Cirron pipeline), they also stream to the platform in parallel. To inspect traces inline without leaving Python:

ci.trace()                # text scope tree
ci.trace(format="df")     # one row per span (requires pandas)

To stream a live [cirron] line per closed span to your terminal:

ci.profile(output="stdout")

See ci.trace and output= for the full surface.
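
If you want to post-process spooled traces yourself, the batch files are plain JSON. A minimal sketch, assuming each batch decodes to an object whose span records carry name and duration fields (the real layout is in Schemas):

import glob
import json

# Walk every spooled batch (one file per flush) in creation order.
for path in sorted(glob.glob("./.cirron/spool/*.json")):
    with open(path) as f:
        batch = json.load(f)
    # "spans", "name", and "duration_ns" are illustrative field names,
    # not the documented schema; check Schemas for the real keys.
    for span in batch.get("spans", []):
        print(span.get("name"), span.get("duration_ns"))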

2. Custom PyTorch loop

If your loop is hand-rolled (no Keras, no Trainer), wrap the iterables with ci.epochs() and ci.batches(): each yields exactly what the wrapped iterable yields, but adds an indexed epoch / batch scope around each iteration.

import cirron as ci

ci.profile()
ci.watch(model)   # bare PyTorch loops only, Keras / HF Trainer skip this

for epoch in ci.epochs(range(20)):
    for batch in ci.batches(loader):
        loss = train_step(batch)
        ci.mark("loss", loss.item())
Torch forward / backward / optimizer_step hooks still fire underneath, so epoch → step → {data_load, forward, backward, optimizer_step} is produced automatically. ci.batches(loader) additionally measures DataLoader stall time when loader is a torch.utils.data.DataLoader. Use ci.scope for regions the hooks don’t cover, and ci.mark to log scalar values into the innermost open scope:
with ci.scope("augmentation"):
    batch = augment(batch)

ci.mark("grad_norm", compute_grad_norm(model))
ci.mark("learning_rate", scheduler.get_last_lr()[0])

3. Inference

@ci.inference binds your serving function to a deployment record. Per-request scope isolation uses contextvars, so FastAPI, Flask, ASGI, and plain threaded servers all work.

import cirron as ci

@ci.inference
def predict(request):
    with ci.scope("preprocess"):
        x = preprocess(request)
    with ci.scope("model"):
        y = model(x)
    with ci.scope("postprocess"):
        return format_response(y)
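
Because isolation is contextvars-based, the decorated function drops straight into an async or threaded server. A minimal sketch of FastAPI wiring; the route and payload shape are illustrative, not part of the SDK:

from fastapi import FastAPI

app = FastAPI()

@app.post("/predict")
def serve(payload: dict):
    # Each request gets its own isolated scope tree via contextvars.
    return predict(payload)
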
For LLMs, the SDK detects OpenAI-compatible clients and HuggingFace generate() automatically: token counts, time-to-first-token, and throughput marks appear without extra code.
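
A decorated function that calls an OpenAI-compatible client therefore needs nothing cirron-specific inside it. A sketch; the model name is illustrative, and the LLM marks come from the auto-detection described above:

import cirron as ci
from openai import OpenAI

client = OpenAI()

@ci.inference
def complete(prompt: str) -> str:
    # Token counts, time-to-first-token, and throughput marks are
    # attached automatically by the SDK's client detection.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content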

Where to look

All three paths write to the same place:

./.cirron/
  spool/
    <created_ns>-<batch_id>.json  # one batch per flush
  snapshots/
    <span_id>/
      weights.safetensors         # sampled / full snapshots, when enabled

The schema is documented in Schemas and is stable within a major SDK version. Any tool that reads JSON and safetensors can consume it.
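
For example, a snapshot reads back with nothing but the safetensors library; replace <span_id> with a real directory under ./.cirron/snapshots/:

from safetensors.torch import load_file

# One snapshot per span directory; load_file returns a dict of tensors.
state = load_file("./.cirron/snapshots/<span_id>/weights.safetensors")
for name, tensor in state.items():
    print(name, tuple(tensor.shape))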

Next steps

Core concepts

Scope tree model, marks, transport selection, framework hook priority.

Profiling

The full training instrumentation surface.

Inference

@ci.inference in depth, LLM detection, config-driven capture.

Data loading

ci.load(): unified data access across local, cloud, and SQL.