Documentation Index
Fetch the complete documentation index at: https://docs.cirron.com/llms.txt
Use this file to discover all available pages before exploring further.
1. Zero-touch training
ci.profile() alone auto-detects Keras or HuggingFace Trainer and
produces the full scope tree without any loop changes.
Traces spool locally to ./.cirron/spool/*.json. When
a CIRRON_API_KEY is set (or you're running inside a Cirron pipeline),
they stream to the platform in parallel.
To inspect traces inline without leaving Python, Cirron can print one
[cirron] line per closed span to your terminal; see ci.trace and
output= for the full surface.
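The zero-touch path can be sketched as follows. The `import cirron as ci` spelling and the bare `ci.profile()` call shape are assumptions; only the `ci` namespace, the `ci.profile()` name, and the spool path come from the text above.

```python
import cirron as ci   # hypothetical import name for the Cirron client

ci.profile()          # auto-detects Keras / HuggingFace Trainer and hooks in

# Then train exactly as you already do, e.g. with a HuggingFace Trainer:
#   trainer = Trainer(model=model, args=training_args, train_dataset=train_ds)
#   trainer.train()
#
# Closed spans spool to ./.cirron/spool/*.json; with CIRRON_API_KEY set,
# they also stream to the Cirron platform in parallel.
```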
2. Custom PyTorch loop
If your loop is hand-rolled (no Keras, no Trainer), wrap the iterables
with ci.epochs() and ci.batches(): they yield exactly what the
inner iterable yields but add indexed epoch / batch scopes around
each iteration.
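A hand-rolled loop wrapped this way might look like the sketch below. The `import cirron as ci` spelling is an assumption, and the tiny model/loader exist only to make the loop shape concrete; `ci.epochs()` and `ci.batches()` are used exactly as described above, yielding what the wrapped iterables yield.

```python
import cirron as ci   # hypothetical import name for the Cirron client
import torch

model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(32, 8), torch.randn(32, 1)),
    batch_size=8,
)

# ci.epochs() / ci.batches() yield exactly what the inner iterable yields,
# adding indexed epoch / batch scopes around each iteration.
for epoch in ci.epochs(range(3)):
    for x, y in ci.batches(loader):   # DataLoader stall time is also measured
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
```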
The scope tree epoch → step → {data_load, forward, backward,
optimizer_step} is produced automatically. ci.batches(loader) additionally
measures DataLoader stall time when loader is a torch.utils.data.DataLoader.
Use ci.scope for regions the hooks don't cover, and ci.mark to log
scalar values into the innermost open scope.
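The two calls above might be used like this. Treating ci.scope as a context manager and ci.mark(name, value) as the call shape are assumptions; only the names ci.scope and ci.mark come from the text, and augment(), batch, and loss are hypothetical stand-ins.

```python
# Assumed call shapes; ci.scope / ci.mark are the only names from the docs.
with ci.scope("augmentation"):        # a region the framework hooks don't cover
    batch = augment(batch)            # augment() is a hypothetical helper

ci.mark("loss", loss.item())          # scalar logged into the innermost open scope
```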
3. Inference
@ci.inference binds your serving function to a deployment record.
Per-request scope isolation uses contextvars, so FastAPI, Flask, ASGI,
and plain threaded servers all work.
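The contextvars mechanism behind per-request isolation can be illustrated with plain stdlib code. This is a toy sketch of the technique, not Cirron's implementation; every name in it is invented.

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Each request stores its "current scope" in a ContextVar, so concurrent
# requests never see each other's spans.
current_scope = contextvars.ContextVar("current_scope", default=None)

def handle_request(request_id):
    # set() only affects this execution context; other requests are untouched.
    token = current_scope.set({"request": request_id, "spans": []})
    try:
        current_scope.get()["spans"].append("forward")
        return current_scope.get()["request"]
    finally:
        current_scope.reset(token)    # close the request scope

# Plain threaded servers work: each thread reads its own context.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)   # each request saw only its own scope
```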
LLM serving is detected when your function calls
generate(): token counts, time-to-first-token, and
throughput marks appear without extra code.
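A hedged sketch of the inference path: the `import cirron as ci` spelling, the decorator's bare form, and the `model` object are illustrative assumptions; only @ci.inference and the generate() detection come from the text above.

```python
import cirron as ci   # hypothetical import name for the Cirron client

@ci.inference          # binds this serving function to a deployment record
def predict(prompt: str) -> str:
    # Per-request scopes are isolated via contextvars, so this works under
    # FastAPI, Flask, ASGI, or plain threads. Because the body calls
    # generate(), token counts, time-to-first-token, and throughput marks
    # are captured automatically.
    return model.generate(prompt)   # `model` is a stand-in for your LLM
```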
Where to look
All three paths write to the same place: ./.cirron/spool/*.json locally,
plus the platform when a CIRRON_API_KEY is set.

Next steps
Core concepts: scope tree model, marks, transport selection, framework hook
priority.
Profiling: the full training instrumentation surface.
Inference: @ci.inference in depth, LLM detection, config-driven capture.
Data loading: ci.load(): unified data access across local, cloud, and SQL.