---
license: mit
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- logical-reasoning
- multi-query
- consistency
- satisfiability
- SAT
- SMT
- belief-revision
- LLM-evaluation
- benchmark
pretty_name: Cross-Query Contradiction Benchmark
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: benchmark_train_v2.json
- split: dev
path: benchmark_dev_v2.json
- split: test
path: benchmark_test_v2.json
---

# Cross-Query Contradiction Benchmark

**Quantifying Cross-Query Contradictions in Multi-Query LLM Reasoning**

Accepted at the ICLR 2026 Workshop on Logical Reasoning of Large Language Models

Rohit Kumar Salla (Virginia Tech), Ramya Manasa Amancherla (Columbia University), Manoj Saravanan (Virginia Tech)

## Overview
Large language models frequently produce mutually inconsistent answers when reasoning over multiple related queries derived from the same premises. This benchmark evaluates case-file logical consistency: whether a model can maintain a globally satisfiable belief state across a bundle of interdependent queries.
The benchmark contains 390 case files across four reasoning domains, with 2,515 queries organized into bundles of 5–8 interdependent questions. Every label is machine-verified via Z3/CaDiCaL solvers, and each case includes formal representations and cross-query dependency annotations.
## Key Features
- Z3-verified labels: All ENTAILED / CONTRADICTED / UNKNOWN labels are verified by SAT/SMT solvers — not just human annotation
- Cross-query dependency annotations: Each query is annotated with which other queries in its bundle it logically interacts with
- Formal representations: SMT-LIB and CNF formal encodings included for every case file
- 82.4% unique queries: Minimal duplication across bundles (vs. typical synthetic benchmarks)
- Four reasoning domains: Relational (SAT), Temporal (SMT), Policy/Rules, and Underspecified/Abductive
- Detailed reasoning traces: Average 95 characters per query (not trivial one-liners)
## Benchmark Statistics

| Domain | Cases | Bundles | Queries | Logic Fragment |
|---|---|---|---|---|
| Relational (SAT) | 120 | 120 | 787 | Propositional (seating, assignment, coloring) |
| Temporal (SMT) | 100 | 100 | 637 | Linear arithmetic (scheduling, ordering) |
| Policy / Rules | 80 | 80 | 506 | Ground first-order (access control, eligibility) |
| Underspecified | 90 | 90 | 585 | Partial information (diagnosis, investigation, fault) |
| **Total** | **390** | **390** | **2,515** | |
## Label Distribution
| Domain | ENTAILED | CONTRADICTED | UNKNOWN |
|---|---|---|---|
| Relational | 16% | 30% | 54% |
| Temporal | 11% | 32% | 57% |
| Policy | 6% | 32% | 62% |
| Underspecified | 18% | 36% | 46% |
| Overall | 13% | 32% | 55% |
## Splits
| Split | Cases | Bundles |
|---|---|---|
| Train | 312 | 312 |
| Dev | 39 | 39 |
| Test | 39 | 39 |
Splits are stratified by domain at the case-file level to prevent leakage.
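Stratification can be spot-checked by tallying domains per split. A minimal sketch, assuming each split file exposes a `cases` list like the full benchmark file (the example below runs on in-memory dicts):

```python
from collections import Counter

def domain_counts(cases):
    """Tally how many case files each domain contributes."""
    return Counter(case["domain"] for case in cases)

# With the real files, load each benchmark_{split}_v2.json
# and pass its "cases" list instead of this toy input.
train_like = [{"domain": "relational"}, {"domain": "temporal"},
              {"domain": "relational"}]
print(domain_counts(train_like))  # Counter({'relational': 2, 'temporal': 1})
```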
## File Structure

```
cross_query_benchmark/
├── benchmark_full_v2.json      # Complete benchmark (all cases + metadata + statistics)
├── benchmark_train_v2.json     # Training split (312 cases)
├── benchmark_dev_v2.json       # Development split (39 cases)
├── benchmark_test_v2.json      # Test split (39 cases)
├── evaluation_schema_v2.json   # Metric definitions
├── prompt_templates_v2.json    # Prompt templates for extraction & repair
└── README.md
```
## Data Format

### Case File Structure

Each case file in the JSON has the following schema:

```json
{
  "id": "rel_seating_0042",
  "domain": "relational",
  "subdomain": "seating_arrangement",
  "logic_type": "propositional",
  "premises": [
    "There are 5 people seated at a circular table: Alice, Bob, Charlie, Diana, and Eve.",
    "Each person occupies exactly one seat and all seats are distinct.",
    "Alice must sit directly next to Bob.",
    "Charlie must not sit directly next to Diana."
  ],
  "formal_representation": "; Seating: 5 people, circular\n; Variables: pos_X in [0, 4] ...",
  "entities": ["Alice", "Bob", "Charlie", "Diana", "Eve"],
  "bundles": [
    {
      "bundle_id": "bundle_0",
      "num_queries": 6,
      "label_distribution": {"ENTAILED": 2, "CONTRADICTED": 2, "UNKNOWN": 2},
      "queries": [
        {
          "query_id": "q0",
          "query": "Must Alice always be seated next to Bob?",
          "label": "ENTAILED",
          "reasoning": "The premises explicitly require Alice to sit next to Bob, so adjacency is guaranteed in every valid arrangement.",
          "query_type": "adj_forced",
          "depends_on": []
        },
        {
          "query_id": "q1",
          "query": "Can Charlie sit at position 3?",
          "label": "UNKNOWN",
          "reasoning": "Position 3 is available to Charlie in some arrangements but not others, depending on how other constraints resolve.",
          "query_type": "pos",
          "depends_on": [0]
        }
      ]
    }
  ]
}
```
### Field Descriptions

| Field | Description |
|---|---|
| `id` | Unique case identifier (`{domain}_{subdomain}_{index}`) |
| `domain` | One of: `relational`, `temporal`, `policy`, `underspecified` |
| `subdomain` | Specific scenario type (e.g., `seating_arrangement`, `event_scheduling`, `access_control`, `diagnostic`) |
| `logic_type` | Formal logic fragment: `propositional`, `linear_arithmetic`, `first_order_ground`, `partial_information` |
| `premises` | List of natural-language premise strings |
| `formal_representation` | SMT-LIB or CNF-style formal encoding of the case file |
| `entities` | Named entities in the case file |
| `bundles[].queries[].label` | Gold label: `ENTAILED`, `CONTRADICTED`, or `UNKNOWN` |
| `bundles[].queries[].reasoning` | Reasoning trace explaining the label (avg. 95 chars) |
| `bundles[].queries[].depends_on` | Indices of earlier queries in the bundle that this query logically interacts with |
| `bundles[].queries[].query_type` | Query category (e.g., `adj`, `pos`, `overlap`, `order`, `eligible`, `is_guilty`) |
## Evaluation Metrics

### Per-Query Metrics
| Metric | Description |
|---|---|
| Accuracy | Fraction of correct labels |
| Macro-F1 | F1 averaged across ENTAILED, CONTRADICTED, UNKNOWN |
| Unknown-F1 | F1 for the UNKNOWN class specifically |
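These can be computed with scikit-learn, or directly; a minimal stdlib sketch of Macro-F1 (Unknown-F1 is simply the per-class term for `UNKNOWN`):

```python
def macro_f1(predictions, gold,
             labels=("ENTAILED", "CONTRADICTED", "UNKNOWN")):
    """Unweighted mean of per-class F1 over the three labels.
    Classes absent from both predictions and gold score 0.0 here."""
    f1s = []
    for label in labels:
        tp = sum(p == label and g == label for p, g in zip(predictions, gold))
        fp = sum(p == label and g != label for p, g in zip(predictions, gold))
        fn = sum(p != label and g == label for p, g in zip(predictions, gold))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

preds = ["ENTAILED", "UNKNOWN", "UNKNOWN"]
gold = ["ENTAILED", "CONTRADICTED", "UNKNOWN"]
print(macro_f1(preds, gold))  # (1.0 + 0.0 + 2/3) / 3 ≈ 0.556
```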
### Set-Level Metrics
| Metric | Description | Direction |
|---|---|---|
| SetConsRate | Fraction of bundles where the final belief state is satisfiable | ↑ higher is better |
| AUC-PrefixCons | Average prefix satisfiability in sequential settings | ↑ higher is better |
| RevisionCost | Average commitments revised to restore satisfiability | ↓ lower is better |
| ContradictionDensity | Fraction of steps where a new contradiction emerges | ↓ lower is better |
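Given per-step satisfiability checks from a solver, the two "higher is better" metrics reduce to simple averages. This is one plausible reading of the table above, sketched for a single run; `evaluation_schema_v2.json` holds the authoritative definitions:

```python
def set_cons_rate(final_sat_flags):
    """Fraction of bundles whose final belief state is satisfiable."""
    return sum(final_sat_flags) / len(final_sat_flags)

def auc_prefix_cons(prefix_sat):
    """Mean satisfiability over the growing prefixes of one bundle,
    e.g. [True, True, False] -> 2/3."""
    return sum(prefix_sat) / len(prefix_sat)

print(set_cons_rate([True, False, True, True]))  # 0.75
print(auc_prefix_cons([True, True, False]))      # 0.666...
```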
### Compute Metrics
| Metric | Description |
|---|---|
| Overhead (OH) | Wall-clock time normalized to No-CoT baseline |
| Solver calls/bundle | Average SAT/SMT solver invocations per bundle |
## Quick Start

### Loading the Data

```python
import json

# Load full benchmark
with open("benchmark_full_v2.json") as f:
    benchmark = json.load(f)

print(f"Cases: {benchmark['statistics']['total_cases']}")
print(f"Queries: {benchmark['statistics']['total_queries']}")

# Iterate over cases
for case in benchmark["cases"]:
    print(f"{case['id']}: {case['domain']} / {case['subdomain']}")
    for bundle in case["bundles"]:
        for query in bundle["queries"]:
            print(f"  Q: {query['query']}")
            print(f"  A: {query['label']} | Deps: {query['depends_on']}")
```
### Loading Splits

```python
import json

# Load train/dev/test
for split in ["train", "dev", "test"]:
    with open(f"benchmark_{split}_v2.json") as f:
        data = json.load(f)
    print(f"{split}: {data['num_cases']} cases, {data['num_bundles']} bundles")
```
### Evaluating a Model

```python
def evaluate_bundle(predictions, gold_labels):
    """
    predictions: list of predicted labels (ENTAILED/CONTRADICTED/UNKNOWN)
    gold_labels: list of gold labels
    """
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Per-query evaluation
for case in benchmark["cases"]:
    for bundle in case["bundles"]:
        gold = [q["label"] for q in bundle["queries"]]
        # your_predictions = model.predict(case["premises"], [q["query"] for q in bundle["queries"]])
        # acc = evaluate_bundle(your_predictions, gold)
```
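As a sanity check, an always-UNKNOWN baseline (UNKNOWN is the majority class at 55% overall) can be run through the same loop. A self-contained sketch with the accuracy helper repeated, on a toy gold list:

```python
def evaluate_bundle(predictions, gold_labels):
    """Per-bundle accuracy: fraction of exactly matching labels."""
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Always predicting UNKNOWN should land near the 55% UNKNOWN rate
# on the real benchmark; here, 2 of 4 toy labels match.
gold = ["ENTAILED", "UNKNOWN", "CONTRADICTED", "UNKNOWN"]
preds = ["UNKNOWN"] * len(gold)
print(evaluate_bundle(preds, gold))  # 0.5
```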
### Using Formal Representations

```python
# Access the formal (SMT-LIB / CNF) encoding for solver-based evaluation
for case in benchmark["cases"]:
    if case["domain"] == "temporal":
        print(case["formal_representation"])
        # Parse and feed to Z3 or CaDiCaL
        break
```
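For the propositional domains, the CNF encodings are meant for a real SAT solver such as CaDiCaL. As a dependency-free illustration of the underlying semantics only, here is a brute-force check over clause lists; the clause format below (lists of signed integers, DIMACS-style) is a hypothetical simplification, not necessarily the exact layout of `formal_representation`:

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Try every assignment; fine for toy instances only.
    A clause is satisfied when any of its signed literals holds."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2): satisfiable with x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # True
# x1 and not x1: unsatisfiable
print(brute_force_sat([[1], [-1]], 1))        # False
```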
## Prompt Templates

The repository includes prompt templates (`prompt_templates_v2.json`) for three tasks:
- Multi-query reasoning: Main evaluation prompt for LLMs to classify queries
- Commitment extraction: Extracts symbolic commitments (atoms/constraints) from model answers
- Repair prompting: Guides the LLM to propose minimal repairs for conflicting commitments
## Domain Details

### Relational (SAT) — 120 cases
Constraint satisfaction over discrete structures: circular/linear seating arrangements, office/team/color/shift assignments. Constraints include adjacency, non-adjacency, forced assignments, and bans. Verified via propositional SAT (CaDiCaL).
### Temporal (SMT) — 100 cases
Event scheduling with ordering, duration, gap, overlap, and fixed-time constraints over bounded integer timelines. Also includes pure ordering problems with precedence and adjacency constraints. Verified via Z3 with QF_LIA (linear integer arithmetic).
### Policy / Rules — 80 cases
Role-based access control (grant/deny/conditional rules over roles and resources) and eligibility checking (age/income/experience thresholds for programs). Verified via Z3 with Boolean + integer constraints.
### Underspecified / Abductive — 90 cases
Scenarios with deliberately incomplete information: medical diagnosis (partial symptom observations, exactly-one-condition), criminal investigation (partial evidence, exactly-one-guilty), and system fault diagnosis (partial sensor data, dependency cascades). Designed to produce high UNKNOWN rates by construction.
## Generation and Verification
All case files are procedurally generated with Z3 solver verification. The generation pipeline:
- Constraint generation: Random but structurally valid constraint systems per domain
- Z3 verification: Every query label is determined by checking entailment (true in ALL models), contradiction (false in ALL models), or unknown (true in some, false in others)
- Quality filtering: Cases with mono-label bundles or extreme label imbalance are filtered
- Dependency annotation: Cross-query entity sharing is detected and annotated
- Deduplication: Query uniqueness is enforced within and across bundles (82.4% unique)
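The verification rule in step 2 — ENTAILED iff the query holds in every model of the premises, CONTRADICTED iff it holds in none, UNKNOWN otherwise — can be mirrored without a solver for toy propositional cases. This sketch enumerates models by brute force (the real pipeline uses Z3/CaDiCaL) and assumes the premises are satisfiable, as the benchmark's case files are:

```python
from itertools import product

def classify(premises, query, atoms):
    """Label a query by enumerating all models of the premises.
    premises and query are functions from an assignment dict to bool."""
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        assign = dict(zip(atoms, bits))
        if all(p(assign) for p in premises):
            models.append(assign)
    values = [query(m) for m in models]
    if all(values):
        return "ENTAILED"      # true in every model
    if not any(values):
        return "CONTRADICTED"  # false in every model
    return "UNKNOWN"           # true in some, false in others

# Toy premise: atom "adj_ab" (Alice next to Bob) is forced true
premises = [lambda a: a["adj_ab"]]
print(classify(premises, lambda a: a["adj_ab"], ["adj_ab", "x"]))      # ENTAILED
print(classify(premises, lambda a: not a["adj_ab"], ["adj_ab", "x"]))  # CONTRADICTED
print(classify(premises, lambda a: a["x"], ["adj_ab", "x"]))           # UNKNOWN
```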
Generator scripts are available in the repository.
## Paper Results (Summary)
Our solver-augmented belief-state tracking method achieves:
| Method | Acc | SetCons | RevCost | Overhead |
|---|---|---|---|---|
| No-CoT (baseline) | 0.80 | 0.56 | 1.9 | 1.00× |
| CoT | 0.84 | 0.60 | 1.7 | 1.16× |
| Self-Consistency (K=20) | 0.86 | 0.63 | 1.6 | 2.75× |
| Ours (Check+Repair) | 0.85 | 0.94 | 0.3 | 1.55× |
Results shown for DeepSeek-R1 on the deep evaluation subset. See paper for full results across 8 models.
## License
This benchmark is released under the MIT License.
## Contact
- Rohit Kumar Salla — rohits25@vt.edu
- Ramya Manasa Amancherla — ra3439@columbia.edu
- Manoj Saravanan — manoj663@vt.edu