EchoTime: explainable time-series similarity for humans and agents.
Documentation

Getting Started

The job of this page is simple: make the first successful EchoTime interaction obvious, fast, and realistic.

Install
pip install echotime

EchoTime keeps the install surface light, but mixed scientific Python stacks can still produce dependency-resolver noise. Use a clean environment when possible, and fall back to the compatibility / doctor flow when you cannot.
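One way to sanity-check an environment before installing is to list what is already pinned in the core scientific stack. The helper below is a sketch using only the standard library; the package list is illustrative, not an EchoTime API:

```python
from importlib import metadata

def stack_versions(packages=("numpy", "scipy", "pandas")):
    """Return installed versions of the core scientific stack, or None if absent."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# In a genuinely clean environment, most of these come back None.
print(stack_versions())
```

If several of these already resolve to old pinned versions, that is the crowded-environment case where a fresh virtual environment saves time.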

What this page is for
  • Read data with pandas or NumPy.
  • Call one EchoTime function.
  • Print a summary card first; export HTML second.

Compare two columns
import pandas as pd
from echotime import compare_series

df = pd.read_csv("my_metrics.csv")
report = compare_series(df["sessions"], df["signups"])
print(report.to_summary_card_markdown())

Use this when your question is "does column A move like column B?"
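Before reaching for the full report, a plain pandas correlation is a quick, if crude, stand-in for the same question. The toy columns below are illustrative; correlation captures only linear co-movement, whereas the summary card decomposes similarity into shape, trend, and spectral components:

```python
import pandas as pd

# Toy columns: signups is exactly sessions scaled down by a factor of 10.
sessions = pd.Series([120, 150, 170, 160, 200, 210])
signups = pd.Series([12, 15, 17, 16, 20, 21])

# Pearson correlation answers a narrower version of "does A move like B?"
print(sessions.corr(signups))  # 1.0, since the two series are perfectly scaled copies
```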

Profile a wide table
import pandas as pd
from echotime import profile_dataset

df = pd.read_csv("my_timeseries.csv").rename(columns={"date": "timestamp"})
profile = profile_dataset(df, domain="energy")
print(profile.to_summary_card_markdown())

Use this when you have one timestamp column and multiple numeric measurements.
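For reference, a wide table of the expected shape can be built directly in pandas. The measurement column names below are illustrative:

```python
import pandas as pd

# One timestamp column plus multiple numeric measurement columns.
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=4, freq="D"),
    "load_mw": [310.0, 295.5, 280.2, 300.1],
    "temp_c": [21.3, 20.8, 20.1, 20.9],
})
print(df.dtypes)  # timestamp is datetime64; the measurements are float64
```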

Profile an irregular long table
import pandas as pd
from echotime import profile_dataset

df = pd.read_csv("patient_vitals.csv").rename(columns={
    "patient_id": "subject",
    "charttime": "timestamp",
    "lab_name": "channel",
    "lab_value": "value",
})
profile = profile_dataset(df, domain="clinical")
print(profile.to_summary_card_markdown())

Use this when your data are sparse or irregular and live in rows instead of a clean matrix.
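A minimal long table of that shape, with illustrative clinical values, makes the irregularity concrete: pivoting it to a wide matrix immediately produces missing cells.

```python
import pandas as pd

# One row per (subject, timestamp, channel) observation.
long_df = pd.DataFrame({
    "subject":   ["p1", "p1", "p1", "p2"],
    "timestamp": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:00",
                                 "2024-01-01 06:00", "2024-01-01 00:00"]),
    "channel":   ["hr", "spo2", "hr", "hr"],
    "value":     [72.0, 97.0, 75.0, 68.0],
})

# Pivoting to a wide matrix exposes the gaps a regular grid would require filling.
wide = long_df.pivot_table(index=["subject", "timestamp"],
                           columns="channel", values="value")
print(wide)  # spo2 is NaN wherever it was never measured
```

This is why the long, row-oriented layout is the natural input here: it records only the observations that actually happened.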

Expected output
# EchoTime similarity summary
overall similarity: ...
top components: shape similarity, trend similarity, spectral similarity

Once the summary card looks sensible, call to_html_report() and write the result to disk if you want a shareable artifact.

Static playground

Preview similarity reports, visuals, and flagship cases without installing Python or starting a server.

Colab quickstart

Open a starter notebook in a hosted notebook environment.

uvx CLI

Run the CLI in an isolated ephemeral environment when packaging allows it.

Local demo server

Run a tiny local web app that turns pasted values into similarity verdicts on your own machine.

Practical notes
  • Core install is intentionally small: numpy and scipy.
  • Older aeon / sktime / numba-heavy stacks may still surface resolver warnings.
  • If your column names differ, rename them to aliases like timestamp, subject, channel, and value before the first run.
  • If you only need a proof of value, start with the static Pages bundle or the local demo instead of installing into a crowded environment.
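The renaming step from the bullets above can be wrapped in a small helper. The alias map below covers the column variants used in this page's examples and is meant to be extended; normalize_columns is a hypothetical convenience, not part of the EchoTime API:

```python
import pandas as pd

# Column-name variants from this page's examples, mapped to the canonical aliases.
ALIASES = {
    "date": "timestamp",
    "charttime": "timestamp",
    "patient_id": "subject",
    "lab_name": "channel",
    "lab_value": "value",
}

def normalize_columns(df, aliases=ALIASES):
    """Rename any known column variants to canonical aliases (hypothetical helper)."""
    return df.rename(columns={old: new for old, new in aliases.items()
                              if old in df.columns})

df = pd.DataFrame({"date": ["2024-01-01"], "lab_value": [1.0]})
print(normalize_columns(df).columns.tolist())  # ['timestamp', 'value']
```

Running this once before the first profile_dataset call keeps the rename logic in one place instead of scattered across scripts.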