compare_series
Signature: compare_series(left, right, *, left_timestamps=None, right_timestamps=None, left_name='left', right_name='right', n_points=256) -> SimilarityReport
Purpose: Compare two raw trajectories using shape, DTW, trend, derivative, and spectral similarity.
Why this exists: Cross-disciplinary users often want to ask 'does this curve look like that one?' before they want a full forecasting or classification model. This API gives that question a structured answer.
When to use it: Use for GitHub star growth, crypto or commodity price windows, launch-week traffic curves, and any pair of trajectories where shape matters.
Returns: SimilarityReport
Recommended environments: notebook, python_script, cli_batch, pandas_pipeline
Accepted inputs:
- univariate arrays
- multichannel arrays
- optional timestamps
Inspect these outputs:
- reference_metrics
- component_mean
- component_scores
- to_summary_card_markdown()
- to_narrative_report()
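To make the shape component concrete, here is a minimal NumPy sketch of z-normalized shape comparison: resample both trajectories to a shared grid (mirroring the `n_points` parameter) and correlate the normalized shapes. This is an illustrative assumption about how a shape score can be computed, not EchoTime's implementation, and `shape_similarity_sketch` is a hypothetical name.

```python
import numpy as np

def shape_similarity_sketch(left, right, n_points=256):
    """Illustrative z-normalized shape comparison (not EchoTime's code).

    Resamples both series onto a shared grid of n_points samples,
    z-normalizes them, and returns the Pearson r of the normalized shapes.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Resample both trajectories onto a common grid so lengths can differ.
    grid = np.linspace(0.0, 1.0, n_points)
    li = np.interp(grid, np.linspace(0.0, 1.0, left.size), left)
    ri = np.interp(grid, np.linspace(0.0, 1.0, right.size), right)
    # Z-normalize so only shape, not scale or offset, drives the score.
    lz = (li - li.mean()) / li.std()
    rz = (ri - ri.mean()) / ri.std()
    return float(np.mean(lz * rz))

# Two curves with the same shape at different scales score near 1.0.
t = np.linspace(0, 1, 100)
print(shape_similarity_sketch(np.exp(3 * t), 50 * np.exp(3 * t) + 7))
```

Because the sketch z-normalizes first, an affine rescaling of a curve does not change its score, which is the behavior a 'does this curve look like that one?' question usually wants.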
compare_profiles
Signature: compare_profiles(left, right, *, left_name='left profile', right_name='right profile') -> SimilarityReport
Purpose: Compare two EchoTime profiles or raw datasets at the ontology-axis level.
Why this exists: Sometimes raw units and scales differ too much for direct shape matching, but the datasets are still structurally analogous. Profile similarity answers that higher-level question.
When to use it: Use for cross-domain analogies, benchmark curation, or when you want to explain that two datasets are 'the same kind of temporal problem'.
Returns: SimilarityReport
Recommended environments: notebook, python_script, ml_benchmark, pandas_pipeline
Accepted inputs:
- DatasetProfile
- SeriesProfile
- or raw inputs accepted by profile_dataset
Inspect these outputs:
- overall_axis_similarity
- dynamic_similarity
- multivariate_similarity
- metadata['axis_similarity']
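The idea of axis-level comparison can be sketched with plain vectors: score each dataset on a set of shared ontology axes, then compare the score vectors rather than the raw series. The axis names and the cosine measure below are assumptions for illustration; EchoTime's real profiles and its `overall_axis_similarity` computation may differ.

```python
import numpy as np

# Hypothetical axis scores; real profiles carry richer structure.
left_axes = {"trend": 0.9, "seasonality": 0.2, "noise": 0.4}
right_axes = {"trend": 0.8, "seasonality": 0.3, "noise": 0.5}

def axis_similarity_sketch(a, b):
    """Cosine similarity over the shared ontology axes (illustrative only)."""
    keys = sorted(set(a) & set(b))
    va = np.array([a[k] for k in keys])
    vb = np.array([b[k] for k in keys])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# High score: the datasets are 'the same kind of temporal problem',
# even if their raw units and scales never line up.
print(axis_similarity_sketch(left_axes, right_axes))
```

This is why profile comparison survives unit mismatches that defeat direct shape matching: only the per-axis characterizations are compared, never the raw values.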
rolling_similarity
Signature: rolling_similarity(left, right, *, window, step=1, left_timestamps=None, right_timestamps=None, n_points=128) -> list[dict]
Purpose: Track how similarity changes over aligned rolling windows.
Why this exists: Many high-traffic stories are regime stories: BTC and gold are similar in some windows but not others, and launch-week growth patterns drift over time.
When to use it: Use for windowed market comparisons, launch tracking, and changing relationships over time.
Returns: list of per-window similarity summaries
Recommended environments: notebook, python_script, pandas_pipeline
Accepted inputs:
- pair of arrays or multichannel arrays
- window length
- optional timestamps
Inspect these outputs:
- component_mean
- pearson_r
- shape_similarity
- trend_similarity
- spectral_similarity
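The rolling mechanics can be sketched independently of the library: slide a window of fixed length over both series in lockstep and emit one similarity summary per window. The sketch below tracks only `pearson_r` and uses a hypothetical function name; the real `rolling_similarity` returns the richer per-window summaries listed above.

```python
import numpy as np

def rolling_similarity_sketch(left, right, *, window, step=1):
    """Illustrative rolling-window Pearson r (not EchoTime's implementation)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    n = min(left.size, right.size)
    out = []
    for start in range(0, n - window + 1, step):
        l = left[start:start + window]
        r = right[start:start + window]
        out.append({
            "start": start,
            "pearson_r": float(np.corrcoef(l, r)[0, 1]),
        })
    return out

# A regime story: the two series agree early on, then flip sign mid-stream.
x = np.sin(np.linspace(0, 4 * np.pi, 200))
y = np.concatenate([x[:100], -x[100:]])
rows = rolling_similarity_sketch(x, y, window=50, step=25)
print([round(row["pearson_r"], 2) for row in rows])
```

Early windows score near +1 and late windows near -1, which is exactly the 'similar in some windows but not others' pattern the real API is built to surface.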
ncc_sequence / max_ncc / best_shift / sbd / independent_max_ncc / independent_sbd / acf_distance / periodogram_distance / trend_distance / ordinal_pattern_js_distance / linear_trend_model_distance / lcss_similarity / lcss_distance / edr_distance / erp_distance / twed_distance
Signatures:
- ncc_sequence(x, y, *, normalize=True) -> tuple[np.ndarray, np.ndarray]
- max_ncc(...) -> float
- best_shift(...) -> int
- sbd(...) -> float
- independent_max_ncc(...) -> float
- independent_sbd(...) -> float
- acf_distance(x, y, *, max_lag=10) -> float
- periodogram_distance(x, y, *, n_coeffs=32) -> float
- trend_distance(x, y) -> float
- ordinal_pattern_js_distance(x, y, *, order=3, delay=1) -> float
- linear_trend_model_distance(x, y) -> float
- lcss_similarity(x, y, *, epsilon=1.0, window=None, mode='exact') -> float
- lcss_distance(x, y, *, epsilon=1.0, window=None, mode='exact') -> float
- edr_distance(x, y, *, epsilon=1.0, normalized=True, window=None, mode='exact') -> float
- erp_distance(x, y, *, gap_value=0.0, window=None, mode='exact') -> float
- twed_distance(x, y, *, lambda_=1.0, nu=0.001, t_x=None, t_y=None, window=None, mode='exact') -> float
Purpose: Expose the extracted low-level similarity primitives directly when you need one explicit metric instead of a report bundle, including a fast screening path for the elastic distances.
Why this exists: EchoTime's main surface is intentionally report-first, but advanced users still need direct access to shift-aware, rhythm-aware, and elastic distances for retrieval, thresholding, and custom pipelines.
When to use it: Use when you already know which similarity family you need and want a scalar score or lag estimate to plug into downstream logic. Use `mode='fast'` for shortlist screening and `mode='exact'` for final reporting.
Returns: NumPy arrays, scalar similarities, scalar distances, or a best-lag integer depending on the function
Recommended environments: notebook, python_script, ml_benchmark, pandas_pipeline
Accepted inputs:
- 1D arrays
- 2D multichannel arrays
- optional timestamps for TWED
- optional gap, tolerance, or band-width hyperparameters for elastic methods
- `mode='fast'` for shortlist screening, `mode='exact'` for final scoring
Inspect these outputs:
- the returned scalar score or distance
- the lag array from ncc_sequence
- best_shift for lead-lag interpretation
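The lead-lag interpretation of `ncc_sequence` and `best_shift` can be sketched with plain NumPy: compute the full cross-correlation, normalize it, and read off the lag where it peaks. The `_sketch` names and the specific normalization are illustrative assumptions, not the library's implementations; under this lag convention, a positive peak lag means `y` leads `x`.

```python
import numpy as np

def ncc_sketch(x, y):
    """Illustrative normalized cross-correlation (not the library's code).

    Returns (lags, ncc): ncc[i] correlates x shifted by lags[i] against y,
    normalized so a perfectly aligned identical pair scores near 1.0.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # np.correlate 'full' gives lags from -(len(y)-1) to len(x)-1.
    cc = np.correlate(x, y, mode="full")
    lags = np.arange(-(y.size - 1), x.size)
    return lags, cc / (np.linalg.norm(x) * np.linalg.norm(y))

def best_shift_sketch(x, y):
    """Lag at which the normalized cross-correlation peaks."""
    lags, ncc = ncc_sketch(x, y)
    return int(lags[np.argmax(ncc)])

# y leads x by 5 samples, so the correlation peaks at lag +5.
x = np.sin(np.linspace(0, 6 * np.pi, 120))
y = np.roll(x, -5)
print(best_shift_sketch(x, y))
```

Retrieval pipelines typically use the peak value (the `max_ncc` idea) for ranking and the peak lag (the `best_shift` idea) for lead-lag interpretation, which is why the two are exposed as separate primitives.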