Scaling curves — time vs sample count
v2 · shipped packages — chart panels:
- Scaling curves — wall time vs m (facets = batch mode)
- Scaling curves — peak CPU memory vs m
- Speedup heatmap — rows: backend, cols: dataset×mode, cell: pure-R time ÷ backend time
- Bit-parity matrix — max|Δβ| per (backend × mode) vs the pure-R reference
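Each heatmap and parity cell above reduces to one scalar per (backend × mode): speedup is pure-R wall time divided by backend wall time, and bit-parity is the largest absolute coefficient difference against the pure-R reference. A minimal sketch of those two reductions (the function names are illustrative, not the dashboard's actual pipeline):

```python
def speedup(pure_r_seconds, backend_seconds):
    """Speedup cell: pure-R wall time divided by backend wall time."""
    return pure_r_seconds / backend_seconds

def max_abs_delta_beta(beta_ref, beta_backend):
    """Bit-parity cell: max|Δβ| between a backend's coefficients
    and the pure-R reference, element by element."""
    return max(abs(r - b) for r, b in zip(beta_ref, beta_backend))
```

A well-behaved backend lands near 0 on `max_abs_delta_beta` (float round-off only) while `speedup` stays well above 1.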
Archival. These benchmarks come from the 2024-era ridge_inference Python package and were run to decide which implementation strategy should win. The winners (GSL on CPU, CUDA on GPU, and a pure-NumPy fallback) subsequently shipped as RidgeFast, RidgeCuda, and SecActpy's fallback path. The numbers document correctness and timing parity at that point in time, not the performance of a mature product.

Coverage. 51 completed 2024 runs across the three datasets (18 + 18 on the small/medium sets; 11 on the large GSE131907_Lung_Cancer at 208k samples). Rows were assembled from per-run comparison JSONs plus the earlier dashboard_data.json.bk backup: the "current" dashboard_data.json had been regenerated with some rows pruned, so the backup was the authoritative source for GSE131907.
legacy · 2024 dev iterations — chart panels:
- Bar chart — Python-backend time across 3 real datasets
- Bar chart — peak memory by Python backend
- Bar chart — speedup of Python backends vs upstream R
- β correlation scatter — 500 sampled values per (dataset × backend) pair