# tscfbench
A benchmark-and-workflow package for time-series counterfactual inference.
`python -m tscfbench` helps you turn a counterfactual question into a reproducible study, a readable report, and a reusable workflow. It is not only a model package: it also provides benchmark protocols, canonical studies, teaching surfaces, and agent-friendly artifacts.
## What it is
- A stable schema for impact and panel counterfactual tasks.
- A benchmark layer for single studies, canonical studies, and model sweeps.
- A workflow layer for reports, notebooks, docs, CI, and coding-agent use.
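The overview above does not spell out what a "task" in the stable schema looks like. As a rough illustration only, an impact-style counterfactual task might carry fields like these; the class and field names here are hypothetical, not the actual tscfbench API:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactTask:
    # Hypothetical sketch of an impact counterfactual task record.
    # Field names are illustrative, not the real tscfbench schema.
    series_id: str                   # which time series the task refers to
    intervention_time: int           # index where the intervention starts
    pre_period: tuple                # (start, end) window used to fit the counterfactual
    post_period: tuple               # (start, end) window over which impact is estimated
    covariates: list = field(default_factory=list)  # optional control series

task = ImpactTask(
    series_id="sales_region_7",
    intervention_time=120,
    pre_period=(0, 119),
    post_period=(120, 180),
)
```

A record in this shape is enough for a benchmark layer to fit a model on `pre_period` and score its counterfactual forecast against the observed `post_period`.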
## What it is not
- It is not a claim that one built-in baseline is the last word in methodology.
- It is not a giant all-in-one causal inference framework.
- It is not only a demo notebook; it is meant to survive in real research workflows.
## Why people adopt it
- It starts from recognizable research jobs instead of source files.
- It tells users why each API exists, where it works best, and what it returns.
- It ships canonical studies, benchmark cards, tutorials, and release-facing docs.
- It is also designed for token-aware, agent-driven research workflows.
## First commands to run
```shell
python -m tscfbench package-story
python -m tscfbench capability-map
python -m tscfbench api-atlas
python -m tscfbench scenario-matrix
python -m tscfbench tutorial-index
```