# MRO-Bench Dataset
MRO-Bench is a specialized benchmark designed to evaluate next-generation LLMs (equipped with tool-calling, RAG, and complex reasoning) in the domain of Maintenance, Repair, and Operations (MRO) procurement.
## 1. Architectural Vision
In industrial MRO scenarios, queries are often unstructured and dense with engineering parameters. Unlike general search, where near matches are acceptable, a 1 mm difference in thread size or a variation in material grade can cause complete system failure. MRO-Bench therefore requires models to act as "industrial selection experts," identifying the correct SKU among highly homogeneous candidates.
## 2. Taxonomy & Category Descriptions
The benchmark covers 15 major industrial material groups. Each category evaluates specific reasoning capabilities:
| Category | Evaluation Focus |
|---|---|
| electrical | Voltage, current, phase, and industrial safety certifications. |
| hand_tools | Physical dimensions, drive types, and material alloys. |
| pumps_pipes_valves | Fluid dynamics, thread standards, and pressure ratings. |
| security | Access protocols, encryption, and physical protection levels. |
| ppe | OSHA/ANSI compliance and material resistance ratings. |
| welding | Output power, duty cycles, and filler metal compatibility. |
| tape_labels | Adhesion strength, substrate type, and temperature resistance. |
| power_tools | RPM, torque, and power transmission mechanisms. |
| pneumatics_hydraulics | Nominal diameters, tolerance limits, and interface types. |
| factory_automation | Sensor mechanisms and industrial communication protocols. |
| lighting | Photometric parameters (lumens, color temp) and IP ratings. |
| office_supplies | Compatibility with existing industrial office hardware. |
| industrial_control | I/O configurations, relay architectures, and bus protocols. |
| test_measurement | Metrological ranges, resolution, and accuracy indicators. |
| chemical_reagents | Purity grades (ACS/GR), concentration, and volume. |
## 3. Dataset Schema
- `query_text`: Raw user inquiry from real-world business pipelines.
- `recall_sku`: Candidate pool containing `dynamicAttrList`.
- `dynamicAttrList`: Nested technical attributes (e.g., "Thread Size: M10"). This is the core challenge for parametric matching.
- `expect_recommended_sku`: Ground Truth SKU ID(s).
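To make the schema concrete, the sketch below builds a record in this shape and flattens a `dynamicAttrList` into a plain attribute dict for parametric matching. The field values, SKU IDs, and the inner key names (`attrName`, `attrValue`) are illustrative assumptions, not taken from the actual dataset files.

```python
# Hypothetical record following the MRO-Bench schema above.
# All values and the attrName/attrValue key names are illustrative.
sample = {
    "query_text": "Need hex bolt, M10 thread, stainless steel",
    "recall_sku": [
        {"sku_id": "SKU-001",
         "dynamicAttrList": [{"attrName": "Thread Size", "attrValue": "M10"}]},
        {"sku_id": "SKU-002",
         "dynamicAttrList": [{"attrName": "Thread Size", "attrValue": "M12"}]},
    ],
    "expect_recommended_sku": ["SKU-001"],
}

def attrs_to_dict(dynamic_attr_list):
    """Flatten a nested dynamicAttrList into a plain attribute dict."""
    return {a["attrName"]: a["attrValue"] for a in dynamic_attr_list}

print(attrs_to_dict(sample["recall_sku"][0]["dynamicAttrList"]))
# {'Thread Size': 'M10'}
```

Flattening the nested attributes first makes it straightforward to compare a candidate SKU's parameters against those extracted from the query.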
## 4. Evaluation Metrics
- Recall@1: The primary metric. The correct SKU must be ranked #1.
- Exact Match (EM): The recommended SKU ID sequence must match the ground truth exactly.
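The two metrics above can be sketched as simple per-example scoring functions; this is a minimal reading of the definitions, not the benchmark's official scoring code.

```python
def recall_at_1(ranked_skus, expected):
    """Recall@1: the top-ranked SKU must be one of the ground-truth IDs."""
    return 1.0 if ranked_skus and ranked_skus[0] in expected else 0.0

def exact_match(recommended, expected):
    """EM: the recommended ID sequence equals the ground truth exactly."""
    return 1.0 if list(recommended) == list(expected) else 0.0

print(recall_at_1(["SKU-001", "SKU-002"], ["SKU-001"]))  # 1.0
print(exact_match(["SKU-001", "SKU-002"], ["SKU-001"]))  # 0.0
```

Note that EM is stricter than Recall@1: a model can rank the correct SKU first yet still fail EM by appending extra IDs.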
## 5. Load Dataset
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# (Optional) download the full repository snapshot locally
repo_id = "jing666888/mro-benchmark"
data_dir = snapshot_download(repo_id=repo_id, repo_type="dataset")

# Load a specific category
dataset = load_dataset(repo_id, "hand_tools", split="validation")

# Example access
sample = dataset[0]
print(f"Query: {sample['query_text']}")
print(f"Target SKU: {sample['expect_recommended_sku']}")
```