{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Embedding Benchmark Analysis\n",
"\n",
"Comparative evaluation of time series embedding methods for piezometric groundwater stations. \n",
"**Dataset**: ~2000 French groundwater stations, daily frequency, 2010--2024. \n",
"**Encoders**: MiniRocket, TS2Vec, SoftCLT, Catch22, PCA brut, Random baseline. \n",
"**Input spaces**: univariate (groundwater level only) and multivariate (+ temperature, precipitation, evapotranspiration). \n",
"\n",
"This notebook reproduces and extends the analysis from `reports/benchmark_20260318/comparison_report.md`."
]
},
{
"cell_type": "code",
"source": "# === Setup: auto-download data from HuggingFace if not available locally ===\nimport os\nfrom pathlib import Path\n\nREPORTS_DIR = Path(\"../reports/benchmark_20260318\")\nHF_REPO = \"xairon/piezo-embedding-benchmark\"\n\nMETRICS_PATH = REPORTS_DIR / \"data\" / \"metrics.json\"\n\nif not METRICS_PATH.exists():\n print(\"Local results not found. Downloading from HuggingFace...\")\n try:\n from huggingface_hub import hf_hub_download\n except ImportError:\n os.system(\"pip install -q huggingface_hub\")\n from huggingface_hub import hf_hub_download\n\n REPORTS_DIR.mkdir(parents=True, exist_ok=True)\n (REPORTS_DIR / \"data\").mkdir(exist_ok=True)\n\n for fname in [\"data/metrics.json\", \"comparison_report.md\"]:\n print(f\" Downloading {fname}...\")\n path = hf_hub_download(HF_REPO, f\"reports/benchmark_20260318/{fname}\", repo_type=\"dataset\")\n target = REPORTS_DIR / fname\n if not target.exists():\n os.symlink(path, str(target))\n\n print(\"Done. Results available.\")\nelse:\n print(f\"Using local results from {REPORTS_DIR.resolve()}\")",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import pathlib\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"import plotly.express as px\n",
"import plotly.graph_objects as go\n",
"from plotly.subplots import make_subplots\n",
"\n",
"pd.set_option(\"display.max_columns\", 30)\n",
"pd.set_option(\"display.precision\", 4)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 1. Load Results\n",
"\n",
"Parse `metrics.json` and build a flat DataFrame with one row per method, containing all metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"METRICS_PATH = pathlib.Path(\"../reports/benchmark_20260318/data/metrics.json\")\n",
"\n",
"with open(METRICS_PATH) as f:\n",
" raw = json.load(f)\n",
"\n",
"print(f\"Loaded {len(raw)} method entries.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rows = []\n",
"for entry in raw:\n",
" row = {\n",
" \"method\": entry[\"space\"],\n",
" \"input_space\": entry.get(\"input_space\", \"unknown\"),\n",
" \"dim\": entry[\"embedding_dim\"],\n",
" \"n_stations\": entry[\"n_stations\"],\n",
" # Classification — milieu_eh\n",
" \"lp_balacc_milieu\": entry[\"linear_milieu\"][\"balanced_accuracy\"],\n",
" \"lp_f1_milieu\": entry[\"linear_milieu\"][\"macro_f1\"],\n",
" \"fisher_milieu\": entry[\"fisher_milieu\"][\"ratio\"],\n",
" # Retrieval — milieu_eh\n",
" \"p1_milieu\": entry[\"knn_retrieval\"][\"precision@1\"],\n",
" \"p5_milieu\": entry[\"knn_retrieval\"][\"precision@5\"],\n",
" \"p10_milieu\": entry[\"knn_retrieval\"][\"precision@10\"],\n",
" # Clustering\n",
" \"ami_milieu\": entry[\"clustering_milieu\"].get(\"ami\"),\n",
" # Dynamic typology\n",
" \"lp_balacc_typo\": entry[\"linear_typology\"][\"balanced_accuracy\"],\n",
" \"lp_f1_typo\": entry[\"linear_typology\"][\"macro_f1\"],\n",
" \"fisher_typo\": entry[\"fisher_typology\"][\"ratio\"],\n",
" \"p5_typo\": entry[\"knn_typology\"][\"precision@5\"],\n",
" # Spatial\n",
" \"mantel_r\": entry[\"mantel_geo\"][\"r\"],\n",
" \"mantel_p\": entry[\"mantel_geo\"][\"p\"],\n",
" # Regression\n",
" \"altitude_r2\": entry[\"regression\"][\"altitude\"][\"r2\"],\n",
" \"altitude_rho\": entry[\"regression\"][\"altitude\"][\"spearman\"],\n",
" # Intrinsic quality\n",
" \"participation_ratio\": entry[\"participation_ratio\"],\n",
" \"uniformity\": entry[\"uniformity\"],\n",
" }\n",
" rows.append(row)\n",
"\n",
"df = pd.DataFrame(rows)\n",
"\n",
"# Derive helper columns\n",
"df[\"encoder\"] = df[\"method\"].str.replace(r\" \\((uni|multi)\\)\", \"\", regex=True).str.replace(\" +W\", \"\", regex=False)\n",
"df[\"whitened\"] = df[\"method\"].str.contains(\"+W\", regex=False)\n",
"df[\"is_baseline\"] = df[\"encoder\"].isin([\"Random\", \"PCA brut\"])\n",
"\n",
"# Remove duplicate Random entries (one per space)\n",
"df = df.drop_duplicates(subset=[\"method\", \"input_space\"])\n",
"\n",
"print(f\"DataFrame shape: {df.shape}\")\n",
"df[[\"method\", \"input_space\", \"dim\", \"whitened\", \"lp_balacc_milieu\", \"p5_milieu\", \"mantel_r\", \"participation_ratio\"]].sort_values(\"lp_balacc_milieu\", ascending=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Color Scheme\n",
"\n",
"Consistent color mapping: one hue per encoder, lighter shade for whitened (+W) variants."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "# Base colors per encoder\nENCODER_COLORS = {\n \"MiniRocket\": \"#1f77b4\", # blue\n \"TS2Vec\": \"#ff7f0e\", # orange\n \"SoftCLT\": \"#2ca02c\", # green\n \"Catch22\": \"#9467bd\", # purple\n \"PCA brut\": \"#8c564b\", # brown\n \"Random\": \"#7f7f7f\", # grey\n}\n\n# Lighter shades for +W variants\nENCODER_COLORS_W = {\n \"MiniRocket\": \"#aec7e8\",\n \"TS2Vec\": \"#ffbb78\",\n \"SoftCLT\": \"#98df8a\",\n \"Catch22\": \"#c5b0d5\",\n \"PCA brut\": \"#c49c94\",\n \"Random\": \"#c7c7c7\",\n}\n\n\ndef method_color(method_name: str) -> str:\n \"\"\"Return color for a method name.\"\"\"\n encoder = method_name.replace(\" (uni)\", \"\").replace(\" (multi)\", \"\").replace(\" +W\", \"\")\n if \"+W\" in method_name:\n return ENCODER_COLORS_W.get(encoder, \"#c7c7c7\")\n return ENCODER_COLORS.get(encoder, \"#7f7f7f\")\n\n\ndef method_colors(methods: list[str]) -> list[str]:\n \"\"\"Return list of colors matching a list of method names.\"\"\"\n return [method_color(m) for m in methods]\n\n\ndef hex_to_rgba(hex_color: str, opacity: float = 0.6) -> str:\n \"\"\"Convert hex color to rgba string for Plotly compatibility.\"\"\"\n h = hex_color.lstrip(\"#\")\n r, g, b = int(h[0:2], 16), int(h[2:4], 16), int(h[4:6], 16)\n return f\"rgba({r},{g},{b},{opacity})\"\n\n\n# Common layout for publication-quality figures\nLAYOUT_DEFAULTS = dict(\n template=\"plotly_white\",\n font=dict(family=\"Serif\", size=13),\n width=900,\n height=500,\n margin=dict(l=60, r=30, t=50, b=80),\n)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 2. Classification Performance (milieu_eh)\n",
"\n",
"Linear probe balanced accuracy and macro F1 for hydrogeological environment classification. \n",
"Grouped by input space (univariate vs. multivariate). The random baseline (BalAcc = 0.1147) is shown as a horizontal line."
]
},
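{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration (synthetic labels, NOT benchmark data): balanced accuracy\n",
"# is the mean of per-class recalls, so a majority-class predictor scores\n",
"# only 1/K even when plain accuracy looks high. This is why BalAcc is the\n",
"# headline metric for the imbalanced milieu_eh labels.\n",
"import numpy as np\n",
"\n",
"def balanced_accuracy(y_true, y_pred):\n",
"    classes = np.unique(y_true)\n",
"    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]\n",
"    return float(np.mean(recalls))\n",
"\n",
"y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])\n",
"y_majority = np.zeros_like(y_true)  # always predict class 0\n",
"print(f'plain accuracy: {np.mean(y_true == y_majority):.3f}')  # 0.500\n",
"print(f'balanced acc  : {balanced_accuracy(y_true, y_majority):.3f}')  # 0.333"
]
},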
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n sub = df[df[\"input_space\"] == space_val].sort_values(\"lp_balacc_milieu\", ascending=False).copy()\n if sub.empty:\n continue\n\n methods = sub[\"method\"].tolist()\n colors = method_colors(methods)\n\n fig = go.Figure()\n\n fig.add_trace(go.Bar(\n x=methods, y=sub[\"lp_balacc_milieu\"],\n name=\"LP Balanced Accuracy\",\n marker_color=colors,\n text=sub[\"lp_balacc_milieu\"].round(3).astype(str),\n textposition=\"outside\",\n ))\n\n fig.add_trace(go.Bar(\n x=methods, y=sub[\"lp_f1_milieu\"],\n name=\"LP Macro F1\",\n marker_color=[hex_to_rgba(c) for c in colors],\n text=sub[\"lp_f1_milieu\"].round(3).astype(str),\n textposition=\"outside\",\n ))\n\n # Random baseline\n random_balacc = df[df[\"method\"] == \"Random\"][\"lp_balacc_milieu\"].iloc[0]\n fig.add_hline(y=random_balacc, line_dash=\"dash\", line_color=\"#7f7f7f\",\n annotation_text=f\"Random baseline ({random_balacc:.3f})\",\n annotation_position=\"top left\")\n\n # Highlight best method\n best_idx = sub[\"lp_balacc_milieu\"].idxmax()\n best_method = sub.loc[best_idx, \"method\"]\n best_val = sub.loc[best_idx, \"lp_balacc_milieu\"]\n\n fig.update_layout(\n **LAYOUT_DEFAULTS,\n title=f\"Classification Performance (milieu_eh) -- {space_label} Input\",\n yaxis_title=\"Score\",\n barmode=\"group\",\n yaxis_range=[0, max(sub[\"lp_balacc_milieu\"].max(), sub[\"lp_f1_milieu\"].max()) * 1.2],\n legend=dict(x=0.75, y=0.95),\n annotations=[\n dict(\n x=best_method, y=best_val + 0.02,\n text=f\"Best: {best_method}\", showarrow=False,\n font=dict(size=11, color=\"#d62728\"),\n )\n ] + list(fig.layout.annotations or []),\n )\n\n fig.show()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- In multivariate mode, TS2Vec achieves the highest balanced accuracy (0.489), followed by SoftCLT (0.473). Both contrastive methods benefit from the additional climate covariates.\n",
"- In univariate mode, all methods cluster between 0.26--0.30, with MiniRocket +W slightly ahead (0.302).\n",
"- Macro F1 ranks methods in the same order as BalAcc but sits consistently lower; the gap reflects the class imbalance (8 classes, Porous = 48%), which drags down the macro average via the rare classes.\n",
"- All learned methods substantially outperform the random baseline."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 3. Retrieval Quality (k-NN Precision@5)\n",
"\n",
"Precision@5 measures whether the 5 nearest neighbors in embedding space share the same hydrogeological label. \n",
"The random baseline is the expected precision under the class-frequency distribution: $\\sum_i p_i^2 = 0.2944$."
]
},
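{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check on the stated baseline (the class frequencies below are\n",
"# hypothetical, NOT the actual milieu_eh distribution): under random\n",
"# retrieval, expected precision@k equals sum_i p_i^2 for any k, because\n",
"# each retrieved neighbor matches the query's class with that probability.\n",
"import numpy as np\n",
"\n",
"p = np.array([0.48, 0.14, 0.10, 0.09, 0.08, 0.05, 0.04, 0.02])\n",
"assert abs(p.sum() - 1.0) < 1e-9\n",
"print(f'expected random P@k for these frequencies: {np.sum(p**2):.4f}')"
]
},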
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RANDOM_BASELINE_P5 = 0.2944\n",
"\n",
"for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n",
" sub = df[df[\"input_space\"] == space_val].sort_values(\"p5_milieu\", ascending=False).copy()\n",
" if sub.empty:\n",
" continue\n",
"\n",
" methods = sub[\"method\"].tolist()\n",
" colors = method_colors(methods)\n",
"\n",
" fig = go.Figure()\n",
" fig.add_trace(go.Bar(\n",
" x=methods, y=sub[\"p5_milieu\"],\n",
" marker_color=colors,\n",
" text=sub[\"p5_milieu\"].round(3).astype(str),\n",
" textposition=\"outside\",\n",
" ))\n",
"\n",
" fig.add_hline(y=RANDOM_BASELINE_P5, line_dash=\"dash\", line_color=\"#7f7f7f\",\n",
" annotation_text=f\"Random baseline ({RANDOM_BASELINE_P5})\",\n",
" annotation_position=\"top left\")\n",
"\n",
" fig.update_layout(\n",
" **LAYOUT_DEFAULTS,\n",
" title=f\"Retrieval Quality: Precision@5 (milieu_eh) -- {space_label}\",\n",
" yaxis_title=\"Precision@5\",\n",
" yaxis_range=[0, sub[\"p5_milieu\"].max() * 1.2],\n",
" showlegend=False,\n",
" )\n",
" fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- Whitened variants (+W) consistently improve retrieval. In the multivariate space, SoftCLT +W and TS2Vec +W reach P@5 > 0.52, a +14% relative gain over their raw counterparts.\n",
"- Whitening isotropizes the embedding, reducing hubness and making k-NN distances more meaningful.\n",
"- Even the raw MiniRocket (multi) embeddings already reach P@5 = 0.44, well above the random baseline of 0.29."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 4. Dynamic Typology (inertial / annual / reactive)\n",
"\n",
"Data-driven labels from lag-365 autocorrelation. Tests whether embeddings capture temporal dynamics (not geological structure). \n",
"Note: labels are derived from the same time series, so this is a **consistency check**, not an independent evaluation."
]
},
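{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the labeling idea behind the typology (the estimator and any\n",
"# thresholds here are illustrative, not the benchmark's actual code): the\n",
"# lag-365 autocorrelation is close to 1 for annual/seasonal stations and\n",
"# close to 0 for reactive (noise-driven) ones.\n",
"import numpy as np\n",
"\n",
"def lag_autocorr(x, lag=365):\n",
"    x = (x - x.mean()) / x.std()\n",
"    return float(np.mean(x[:-lag] * x[lag:]))\n",
"\n",
"rng = np.random.default_rng(0)\n",
"t = np.arange(5 * 365)\n",
"annual = np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)\n",
"reactive = rng.normal(size=t.size)\n",
"print(f'annual   r365 = {lag_autocorr(annual):+.2f}')   # near +1\n",
"print(f'reactive r365 = {lag_autocorr(reactive):+.2f}')  # near 0"
]
},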
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n sub = df[df[\"input_space\"] == space_val].sort_values(\"lp_balacc_typo\", ascending=False).copy()\n if sub.empty:\n continue\n\n methods = sub[\"method\"].tolist()\n colors = method_colors(methods)\n\n fig = go.Figure()\n\n fig.add_trace(go.Bar(\n x=methods, y=sub[\"lp_balacc_typo\"],\n name=\"LP Balanced Accuracy\",\n marker_color=colors,\n text=sub[\"lp_balacc_typo\"].round(3).astype(str),\n textposition=\"outside\",\n ))\n\n fig.add_trace(go.Bar(\n x=methods, y=sub[\"lp_f1_typo\"],\n name=\"LP Macro F1\",\n marker_color=[hex_to_rgba(c) for c in colors],\n text=sub[\"lp_f1_typo\"].round(3).astype(str),\n textposition=\"outside\",\n ))\n\n # Random baseline for typology\n random_typo = df[df[\"method\"] == \"Random\"][\"lp_balacc_typo\"].iloc[0]\n fig.add_hline(y=random_typo, line_dash=\"dash\", line_color=\"#7f7f7f\",\n annotation_text=f\"Random baseline ({random_typo:.3f})\",\n annotation_position=\"top left\")\n\n fig.update_layout(\n **LAYOUT_DEFAULTS,\n title=f\"Dynamic Typology Performance -- {space_label} Input\",\n yaxis_title=\"Score\",\n barmode=\"group\",\n yaxis_range=[0, sub[\"lp_balacc_typo\"].max() * 1.15],\n legend=dict(x=0.75, y=0.95),\n )\n fig.show()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- MiniRocket dominates the dynamic typology task in both spaces. MiniRocket +W (uni) reaches BalAcc = 0.762, the highest score in the entire benchmark.\n",
"- This makes sense: MiniRocket extracts random convolutional features that directly capture temporal patterns (frequency content, trend, seasonality).\n",
"- Contrastive methods (TS2Vec, SoftCLT) are optimized for instance discrimination, not temporal pattern classification, which explains their lower performance here.\n",
"- The gap between MiniRocket and contrastive methods is larger in univariate mode, where covariates cannot compensate."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 5. Spatial Coherence (Mantel Test)\n",
"\n",
"The Mantel statistic *r* measures the correlation between pairwise embedding distances and pairwise geographic distances. \n",
"A positive *r* means that stations close in embedding space tend to be geographically close."
]
},
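{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal Mantel permutation test (an illustrative re-implementation; the\n",
"# benchmark's own estimator may differ in permutation count and distance\n",
"# choice). r is the Pearson correlation between the upper triangles of\n",
"# the two distance matrices; p comes from permuting station identities.\n",
"import numpy as np\n",
"\n",
"def mantel(d_a, d_b, n_perm=199, seed=0):\n",
"    iu = np.triu_indices_from(d_a, k=1)\n",
"    r = np.corrcoef(d_a[iu], d_b[iu])[0, 1]\n",
"    rng = np.random.default_rng(seed)\n",
"    n, count = d_a.shape[0], 0\n",
"    for _ in range(n_perm):\n",
"        perm = rng.permutation(n)\n",
"        count += np.corrcoef(d_a[np.ix_(perm, perm)][iu], d_b[iu])[0, 1] >= r\n",
"    return r, (count + 1) / (n_perm + 1)\n",
"\n",
"rng = np.random.default_rng(1)\n",
"xy = rng.normal(size=(30, 2))               # fake station coordinates\n",
"emb = xy + 0.1 * rng.normal(size=xy.shape)  # embedding that echoes geography\n",
"d_geo = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)\n",
"d_emb = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)\n",
"r, p_val = mantel(d_emb, d_geo)\n",
"print(f'Mantel r = {r:.3f}, p = {p_val:.3f}')  # strong positive r, small p"
]
},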
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n",
" sub = df[df[\"input_space\"] == space_val].sort_values(\"mantel_r\", ascending=False).copy()\n",
" if sub.empty:\n",
" continue\n",
"\n",
" methods = sub[\"method\"].tolist()\n",
" colors = method_colors(methods)\n",
"\n",
" fig = go.Figure()\n",
" fig.add_trace(go.Bar(\n",
" x=methods, y=sub[\"mantel_r\"],\n",
" marker_color=colors,\n",
" text=sub[\"mantel_r\"].round(4).astype(str),\n",
" textposition=\"outside\",\n",
" ))\n",
"\n",
" fig.add_hline(y=0, line_dash=\"dot\", line_color=\"#7f7f7f\")\n",
"\n",
" fig.update_layout(\n",
" **LAYOUT_DEFAULTS,\n",
" title=f\"Spatial Coherence: Mantel r -- {space_label}\",\n",
" yaxis_title=\"Mantel r (embedding vs. geographic distance)\",\n",
" showlegend=False,\n",
" )\n",
" fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- Whitening dramatically improves spatial coherence. SoftCLT +W (multi) reaches r = 0.214, the highest overall.\n",
"- In multivariate mode, raw embeddings already show positive Mantel r (0.09--0.12), indicating that climate covariates encode spatial proximity.\n",
"- In univariate mode, some raw methods have near-zero or even negative Mantel r: their embedding geometry carries no detectable geographic signal.\n",
"- The whitened multivariate embeddings (r = 0.17--0.21) confirm that post-hoc isotropization reveals latent spatial structure that was masked by dimension collapse."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 6. Intrinsic Quality\n",
"\n",
"### 6a. Participation Ratio\n",
"\n",
"The Participation Ratio $\\mathrm{PR} = (\\sum_i \\lambda_i)^2 / \\sum_i \\lambda_i^2$, computed over the covariance eigenvalues $\\lambda_i$, measures the effective dimensionality of the embedding: it equals *d* when variance is spread evenly over *d* dimensions and 1 when a single direction dominates. \n",
"Higher PR = more dimensions carry signal (less wasted capacity)."
]
},
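{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Definition check on synthetic data: PR = (sum lam)^2 / sum(lam^2) over\n",
"# the covariance eigenvalues lam. Isotropic data gives PR close to the\n",
"# ambient dimension; rank-1 (fully collapsed) data gives PR close to 1.\n",
"import numpy as np\n",
"\n",
"def participation_ratio(X):\n",
"    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)\n",
"    return float(lam.sum() ** 2 / np.sum(lam ** 2))\n",
"\n",
"rng = np.random.default_rng(0)\n",
"iso = rng.normal(size=(2000, 64))                                  # well spread\n",
"collapsed = rng.normal(size=(2000, 1)) * rng.normal(size=(1, 64))  # rank 1\n",
"print(f'isotropic PR ~ {participation_ratio(iso):.1f}')        # close to 64\n",
"print(f'collapsed PR ~ {participation_ratio(collapsed):.1f}')  # close to 1"
]
},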
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n",
" sub = df[df[\"input_space\"] == space_val].sort_values(\"participation_ratio\", ascending=False).copy()\n",
" if sub.empty:\n",
" continue\n",
"\n",
" methods = sub[\"method\"].tolist()\n",
" colors = method_colors(methods)\n",
"\n",
" fig = go.Figure()\n",
" fig.add_trace(go.Bar(\n",
" x=methods, y=sub[\"participation_ratio\"],\n",
" marker_color=colors,\n",
" text=sub[\"participation_ratio\"].round(1).astype(str),\n",
" textposition=\"outside\",\n",
" ))\n",
"\n",
" fig.update_layout(\n",
" **LAYOUT_DEFAULTS,\n",
" title=f\"Participation Ratio (effective dimensionality) -- {space_label}\",\n",
" yaxis_title=\"Participation Ratio\",\n",
" showlegend=False,\n",
" )\n",
" fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6b. Uniformity\n",
"\n",
"Uniformity measures how well embeddings are spread on the unit hypersphere. \n",
"More negative = better spread (ideal uniform distribution on the sphere). \n",
"Values near 0 indicate **dimensional collapse**: the embeddings concentrate in a small region of the sphere rather than spreading across it."
]
},
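{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Standalone sketch of the uniformity metric of Wang & Isola (2020);\n",
"# t = 2 and L2 normalisation are assumptions here -- the benchmark's exact\n",
"# settings live in its own code. More negative = better spread on the sphere.\n",
"import numpy as np\n",
"\n",
"def uniformity(X, t=2.0):\n",
"    X = X / np.linalg.norm(X, axis=1, keepdims=True)\n",
"    sq = np.clip(2.0 - 2.0 * (X @ X.T), 0.0, None)  # pairwise squared dists\n",
"    iu = np.triu_indices_from(sq, k=1)\n",
"    return float(np.log(np.mean(np.exp(-t * sq[iu]))))\n",
"\n",
"rng = np.random.default_rng(0)\n",
"spread = rng.normal(size=(500, 64))                  # ~uniform on the sphere\n",
"clustered = 1.0 + 0.01 * rng.normal(size=(500, 64))  # one tight cluster\n",
"print(f'spread    : {uniformity(spread):.2f}')     # strongly negative\n",
"print(f'clustered : {uniformity(clustered):.2f}')  # near 0 (collapse)"
]
},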
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n sub = df[df[\"input_space\"] == space_val].sort_values(\"uniformity\").copy() # more negative = better\n if sub.empty:\n continue\n\n methods = sub[\"method\"].tolist()\n colors = method_colors(methods)\n\n fig = go.Figure()\n fig.add_trace(go.Bar(\n x=methods, y=sub[\"uniformity\"],\n marker_color=colors,\n text=sub[\"uniformity\"].round(2).astype(str),\n textposition=\"outside\",\n ))\n\n # Highlight collapse zone\n fig.add_hrect(y0=-0.5, y1=0, fillcolor=\"rgba(255,0,0,0.08)\", line_width=0,\n annotation_text=\"Collapse zone\", annotation_position=\"top right\")\n\n fig.update_layout(\n **LAYOUT_DEFAULTS,\n title=f\"Uniformity (embedding spread) -- {space_label}\",\n yaxis_title=\"Uniformity (more negative = better)\",\n showlegend=False,\n )\n fig.show()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- **Dimensional collapse is severe in contrastive methods.** TS2Vec and SoftCLT (raw) have uniformity near 0 (-0.12 multi, -0.34/-0.50 uni) and PR of only 1.7--3.0. This means they collapse onto 2--3 effective dimensions out of 320.\n",
"- MiniRocket (raw) fares much better on uniformity (-2.15 to -2.39), but its PR (~2.3--3.0) is similarly low: it too leaves most of its 320 dimensions unused.\n",
"- **Whitening fully resolves the collapse.** All +W variants achieve PR = 64 (the target PCA dimension) and uniformity near -3.8, matching the ideal Random baseline.\n",
"- This explains why whitened methods improve so much on retrieval (P@5) and Mantel r: isotropic embeddings make distance-based metrics meaningful."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 7. Whitening Effect\n",
"\n",
"For each encoder, show the delta (whitened - raw) on key metrics. \n",
"Positive delta means whitening helped; negative means it hurt."
]
},
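{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the post-hoc transform (PCA to 64 components + whitening; eps\n",
"# and the PCA-vs-ZCA rotation are assumptions -- the report specifies\n",
"# 'ZCA + PCA to 64d'): after whitening, the covariance of the output is\n",
"# the identity, so PR hits the target dimension by construction.\n",
"import numpy as np\n",
"\n",
"def pca_whiten(X, n_components=64, eps=1e-8):\n",
"    Xc = X - X.mean(axis=0)\n",
"    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)\n",
"    scale = S[:n_components] / np.sqrt(len(X) - 1) + eps  # per-component std\n",
"    return Xc @ (Vt[:n_components].T / scale)\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X = rng.normal(size=(1000, 320)) * np.logspace(0, -3, 320)  # anisotropic 320-d\n",
"Z = pca_whiten(X)\n",
"cov = np.cov(Z, rowvar=False)\n",
"print('whitened covariance is identity:', np.allclose(cov, np.eye(64), atol=1e-6))"
]
},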
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build pairs: raw vs whitened\n",
"deltas = []\n",
"for space_val in [\"uni\", \"multi\"]:\n",
" sub = df[df[\"input_space\"] == space_val]\n",
" raw_methods = sub[~sub[\"whitened\"] & ~sub[\"is_baseline\"]]\n",
" for _, raw_row in raw_methods.iterrows():\n",
" w_name = raw_row[\"method\"] + \" +W\"\n",
" w_row = sub[sub[\"method\"] == w_name]\n",
" if w_row.empty:\n",
" continue\n",
" w_row = w_row.iloc[0]\n",
" deltas.append({\n",
" \"encoder\": raw_row[\"method\"],\n",
" \"space\": space_val,\n",
" \"delta_balacc_milieu\": w_row[\"lp_balacc_milieu\"] - raw_row[\"lp_balacc_milieu\"],\n",
" \"delta_f1_milieu\": w_row[\"lp_f1_milieu\"] - raw_row[\"lp_f1_milieu\"],\n",
" \"delta_p5_milieu\": w_row[\"p5_milieu\"] - raw_row[\"p5_milieu\"],\n",
" \"delta_mantel\": w_row[\"mantel_r\"] - raw_row[\"mantel_r\"],\n",
" \"delta_balacc_typo\": w_row[\"lp_balacc_typo\"] - raw_row[\"lp_balacc_typo\"],\n",
" \"delta_pr\": w_row[\"participation_ratio\"] - raw_row[\"participation_ratio\"],\n",
" \"delta_uniformity\": w_row[\"uniformity\"] - raw_row[\"uniformity\"],\n",
" })\n",
"\n",
"df_delta = pd.DataFrame(deltas)\n",
"df_delta"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Scatter: method vs delta on key metrics\n",
"metrics_to_plot = [\n",
" (\"delta_balacc_milieu\", \"Delta LP BalAcc (milieu_eh)\"),\n",
" (\"delta_p5_milieu\", \"Delta Precision@5 (milieu_eh)\"),\n",
" (\"delta_mantel\", \"Delta Mantel r\"),\n",
" (\"delta_balacc_typo\", \"Delta LP BalAcc (dynamic typology)\"),\n",
"]\n",
"\n",
"fig = make_subplots(\n",
" rows=2, cols=2,\n",
" subplot_titles=[t for _, t in metrics_to_plot],\n",
" vertical_spacing=0.15,\n",
" horizontal_spacing=0.12,\n",
")\n",
"\n",
"for idx, (col, title) in enumerate(metrics_to_plot):\n",
" row, c = divmod(idx, 2)\n",
" row += 1\n",
" c += 1\n",
"\n",
" labels = df_delta[\"encoder\"] + \" (\" + df_delta[\"space\"] + \")\"\n",
" colors_list = [method_color(m) for m in df_delta[\"encoder\"]]\n",
"\n",
" fig.add_trace(\n",
" go.Bar(\n",
" x=labels, y=df_delta[col],\n",
" marker_color=colors_list,\n",
" text=df_delta[col].round(3).astype(str),\n",
" textposition=\"outside\",\n",
" showlegend=False,\n",
" ),\n",
" row=row, col=c,\n",
" )\n",
" fig.add_hline(y=0, line_dash=\"dot\", line_color=\"#7f7f7f\", row=row, col=c)\n",
"\n",
"fig.update_layout(\n",
" template=\"plotly_white\",\n",
" font=dict(family=\"Serif\", size=11),\n",
" width=1000,\n",
" height=700,\n",
" title=\"Effect of Whitening (+W) on Key Metrics\",\n",
" margin=dict(l=50, r=30, t=80, b=80),\n",
")\n",
"fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- Whitening **always improves** Precision@5 and Mantel r. The effect is strongest for contrastive methods that suffer from dimensional collapse.\n",
"- On classification (LP BalAcc), whitening has mixed effects: it slightly helps MiniRocket but can slightly hurt TS2Vec and SoftCLT. The linear probe can compensate for anisotropy, so whitening is less critical here.\n",
"- On dynamic typology, whitening has a small positive effect for MiniRocket but a small negative effect for contrastive methods.\n",
"- The main benefit of whitening is for **distance-based evaluation** (k-NN, Mantel), not for **linear evaluation** (probes)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 8. Uni vs Multi Comparison\n",
"\n",
"For each encoder (MiniRocket, TS2Vec, SoftCLT), compare univariate vs. multivariate performance side by side."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Filter to encoders available in both spaces (exclude Catch22, PCA brut, Random)\n",
"encoders_both = [\"MiniRocket\", \"TS2Vec\", \"SoftCLT\"]\n",
"metrics_compare = [\n",
" (\"lp_balacc_milieu\", \"LP BalAcc (milieu_eh)\"),\n",
" (\"p5_milieu\", \"P@5 (milieu_eh)\"),\n",
" (\"lp_balacc_typo\", \"LP BalAcc (dynamic typology)\"),\n",
" (\"mantel_r\", \"Mantel r\"),\n",
"]\n",
"\n",
"fig = make_subplots(\n",
" rows=2, cols=2,\n",
" subplot_titles=[t for _, t in metrics_compare],\n",
" vertical_spacing=0.18,\n",
" horizontal_spacing=0.12,\n",
")\n",
"\n",
"for idx, (metric_col, metric_label) in enumerate(metrics_compare):\n",
" row, c = divmod(idx, 2)\n",
" row += 1\n",
" c += 1\n",
"\n",
" for space_val, pattern_suffix in [(\"uni\", \"(uni)\"), (\"multi\", \"(multi)\")]:\n",
" vals = []\n",
" for enc in encoders_both:\n",
" method_name = f\"{enc} {pattern_suffix}\"\n",
" match = df[(df[\"method\"] == method_name) & (df[\"input_space\"] == space_val)]\n",
" if not match.empty:\n",
" vals.append(match.iloc[0][metric_col])\n",
" else:\n",
" vals.append(None)\n",
"\n",
" fig.add_trace(\n",
" go.Bar(\n",
" x=encoders_both, y=vals,\n",
" name=space_val.capitalize(),\n",
" marker_color=\"#1f77b4\" if space_val == \"uni\" else \"#ff7f0e\",\n",
" marker_opacity=0.8,\n",
" text=[f\"{v:.3f}\" if v is not None else \"\" for v in vals],\n",
" textposition=\"outside\",\n",
" showlegend=(idx == 0), # only show legend once\n",
" ),\n",
" row=row, col=c,\n",
" )\n",
"\n",
"fig.update_layout(\n",
" template=\"plotly_white\",\n",
" font=dict(family=\"Serif\", size=11),\n",
" width=1000,\n",
" height=700,\n",
" title=\"Univariate vs. Multivariate: Side-by-Side Comparison\",\n",
" barmode=\"group\",\n",
" margin=dict(l=50, r=30, t=80, b=60),\n",
" legend=dict(x=0.85, y=1.0),\n",
")\n",
"fig.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quantify the multivariate gain\n",
"print(\"Multivariate gain (multi - uni) for raw encoders:\")\n",
"print(\"=\" * 65)\n",
"for enc in encoders_both:\n",
" uni = df[(df[\"method\"] == f\"{enc} (uni)\") & (df[\"input_space\"] == \"uni\")]\n",
" multi = df[(df[\"method\"] == f\"{enc} (multi)\") & (df[\"input_space\"] == \"multi\")]\n",
" if uni.empty or multi.empty:\n",
" continue\n",
" uni, multi = uni.iloc[0], multi.iloc[0]\n",
" print(f\"\\n{enc}:\")\n",
" for col, label in [(\"lp_balacc_milieu\", \"LP BalAcc (milieu)\"), (\"p5_milieu\", \"P@5\"),\n",
" (\"lp_balacc_typo\", \"LP BalAcc (typo)\"), (\"mantel_r\", \"Mantel r\")]:\n",
" delta = multi[col] - uni[col]\n",
" pct = delta / max(abs(uni[col]), 1e-6) * 100\n",
" print(f\" {label:30s}: {delta:+.4f} ({pct:+.1f}%)\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Observations**:\n",
"- Multivariate input provides a **massive boost for geological classification** (milieu_eh): TS2Vec goes from 0.267 (uni) to 0.489 (multi), an 83% relative improvement.\n",
"- For dynamic typology, multivariate input is **neutral or slightly negative**: temporal dynamics are primarily captured by the groundwater level itself, not by climate covariates.\n",
"- Spatial coherence (Mantel r) improves with multivariate input because ERA5 covariates (temperature, precipitation) carry geographic information.\n",
"- The climate covariates provide complementary geological and spatial signal that the univariate series alone cannot capture."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 9. Summary Ranking Table\n",
"\n",
"Rank all methods across key metrics. Lower mean rank = better overall performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Metric label -> (dataframe column, higher_is_better). Rank 1 = best.\n",
"rank_metrics = {\n",
" \"LP BalAcc (milieu)\": (\"lp_balacc_milieu\", True),\n",
" \"LP F1 (milieu)\": (\"lp_f1_milieu\", True),\n",
" \"P@5 (milieu)\": (\"p5_milieu\", True),\n",
" \"LP BalAcc (typo)\": (\"lp_balacc_typo\", True),\n",
" \"Mantel r\": (\"mantel_r\", True),\n",
" \"Participation Ratio\": (\"participation_ratio\", True),\n",
"}\n",
"\n",
"ranking_tables = {}\n",
"for space_label, space_val in [(\"Univariate\", \"uni\"), (\"Multivariate\", \"multi\")]:\n",
" sub = df[df[\"input_space\"] == space_val].copy()\n",
" if sub.empty:\n",
" continue\n",
"\n",
" rank_df = pd.DataFrame(index=sub[\"method\"])\n",
" for metric_label, (col, ascending) in rank_metrics.items():\n",
" rank_df[metric_label] = sub[col].rank(ascending=not ascending, method=\"min\").values\n",
"\n",
" rank_df[\"Mean Rank\"] = rank_df.mean(axis=1)\n",
" rank_df = rank_df.sort_values(\"Mean Rank\")\n",
" ranking_tables[space_label] = rank_df\n",
"\n",
" print(f\"\\n{'=' * 80}\")\n",
" print(f\"Overall Ranking -- {space_label} Input Space\")\n",
" print(f\"{'=' * 80}\")\n",
" print(rank_df.round(2).to_string())\n",
" print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Color-coded heatmap of ranks\n",
"for space_label, rank_df in ranking_tables.items():\n",
" # Exclude \"Mean Rank\" column for the heatmap body\n",
" display_df = rank_df.drop(columns=[\"Mean Rank\"])\n",
"\n",
" fig = go.Figure(data=go.Heatmap(\n",
" z=display_df.values,\n",
" x=display_df.columns.tolist(),\n",
" y=display_df.index.tolist(),\n",
" colorscale=[\n",
" [0.0, \"#2ca02c\"], # rank 1 = green\n",
" [0.5, \"#ffffbf\"], # middle = yellow\n",
" [1.0, \"#d62728\"], # worst = red\n",
" ],\n",
" text=display_df.values.round(0).astype(int).astype(str),\n",
" texttemplate=\"%{text}\",\n",
" textfont=dict(size=12),\n",
"        hovertemplate=\"Method: %{y}<br>Metric: %{x}<br>Rank: %{z}\",\n",
" colorbar=dict(title=\"Rank\"),\n",
" ))\n",
"\n",
" # Add Mean Rank as annotation on the right\n",
" for i, (method, mean_rank) in enumerate(rank_df[\"Mean Rank\"].items()):\n",
" fig.add_annotation(\n",
" x=len(display_df.columns) - 0.3,\n",
" y=method,\n",
" text=f\" Avg: {mean_rank:.2f}\",\n",
" showarrow=False,\n",
" xanchor=\"left\",\n",
" font=dict(size=11, color=\"black\"),\n",
" )\n",
"\n",
" fig.update_layout(\n",
" template=\"plotly_white\",\n",
" font=dict(family=\"Serif\", size=12),\n",
" width=900,\n",
" height=50 + 40 * len(display_df),\n",
" title=f\"Method Ranking Heatmap -- {space_label} Input\",\n",
" margin=dict(l=200, r=120, t=50, b=60),\n",
" xaxis=dict(side=\"top\"),\n",
" yaxis=dict(autorange=\"reversed\"),\n",
" )\n",
" fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 10. Key Takeaways\n",
"\n",
"1. **Contrastive methods (TS2Vec, SoftCLT) excel at geological classification in multivariate mode.** TS2Vec multi achieves the highest LP BalAcc (0.489) for milieu_eh, an 83% relative improvement over the univariate case (0.267). The climate covariates provide geological signal that the groundwater level alone cannot capture.\n",
"\n",
"2. **MiniRocket dominates temporal dynamics.** For the data-driven dynamic typology (inertial/annual/reactive), MiniRocket +W (uni) reaches BalAcc = 0.762, outperforming all contrastive methods by 15+ points. Random convolutional features are well suited to capturing frequency content and autocorrelation structure.\n",
"\n",
"3. **Dimensional collapse is the main failure mode of contrastive learning.** Raw TS2Vec and SoftCLT embeddings collapse onto 2--3 effective dimensions (out of 320), with uniformity near 0. This makes k-NN retrieval and distance-based metrics unreliable.\n",
"\n",
"4. **Post-hoc whitening (ZCA + PCA to 64d) consistently improves retrieval and spatial metrics.** P@5 gains of +6 to +8 points; Mantel r doubles or triples. Whitening resolves the dimensional collapse without requiring retraining.\n",
"\n",
"5. **No single encoder wins everywhere.** The best choice depends on the downstream task: TS2Vec (multi) for geology-aware embeddings, MiniRocket (uni) +W for temporal pattern classification, SoftCLT (multi) +W for the best overall retrieval and spatial coherence.\n",
"\n",
"6. **All learned methods substantially outperform the random baseline** (BalAcc 0.115), confirming that the embeddings capture meaningful structure. Even the simplest approach (PCA brut) outperforms random on most metrics."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}