paper_id uint32 0 3.26k | title stringlengths 15 150 | paper_url stringlengths 42 42 | authors listlengths 1 21 | type stringclasses 3
values | abstract stringlengths 393 2.58k | keywords stringlengths 5 409 | TL;DR stringlengths 7 250 ⌀ | submission_number int64 1 16.4k | arxiv_id stringlengths 10 10 ⌀ | embedding listlengths 768 768 | github stringlengths 26 123 ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent | https://openreview.net/forum?id=2GmXJnyNM4 | [
"Santhosh Karnik",
"Anna Veselovska",
"Mark Iwen",
"Felix Krahmer"
] | Oral | We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. For matrix factorization problems, this phenomenon has been studied in a number of works. A particular challenge has been to design universal initialization strategies which pro... | overparameterization, implicit regularization, tensor factorization | We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. | 16,047 | 2410.16247 | [
-0.019181201234459877,
-0.038270384073257446,
0.01611342281103134,
0.024747205898165703,
0.02661350928246975,
0.048398762941360474,
0.011871577240526676,
0.00596786430105567,
-0.016043782234191895,
-0.05787587910890579,
-0.01238834485411644,
-0.0017976615345105529,
-0.05827592313289642,
0.... | https://github.com/AnnaVeselovskaUA/tubal-tensor-implicit-reg-GD |
1 | Algorithm Development in Neural Networks: Insights from the Streaming Parity Task | https://openreview.net/forum?id=3go0lhfxd0 | [
"Loek van Rossem",
"Andrew M Saxe"
] | Oral | Even when massively overparameterized, deep neural networks show a remarkable ability to generalize. Research on this phenomenon has focused on generalization within distribution, via smooth interpolation. Yet in some settings neural networks also learn to extrapolate to data far beyond the bounds of the original train... | Out-of-distribution generalization, Algorithm discovery, Deep learning theory, Mechanistic Interpretability | We explain in a simple setting how out-of-distribution generalization can occur. | 16,013 | 2507.09897 | [
-0.0326198972761631,
-0.014057177118957043,
-0.01832646317780018,
0.03963460400700569,
0.026646170765161514,
0.017422083765268326,
0.04410151392221451,
0.03895360231399536,
-0.05122928321361542,
-0.02704167366027832,
-0.0036526948679238558,
-0.021394891664385796,
-0.0678320974111557,
-0.00... | null |
2 | Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection | https://openreview.net/forum?id=GFpjO8S8Po | [
"Zhiyuan Yan",
"Jiangming Wang",
"Peng Jin",
"Ke-Yue Zhang",
"Chengchun Liu",
"Shen Chen",
"Taiping Yao",
"Shouhong Ding",
"Baoyuan Wu",
"Li Yuan"
] | Oral | Detecting AI-generated images (AIGIs), such as natural images or face images, has become increasingly important yet challenging. In this paper, we start from a new perspective to excavate the reason behind the failure generalization in AIGI detection, named the asymmetry phenomenon, where a naively trained detector ten... | AI-Generated Image Detection, Face Forgery Detection, Deepfake Detection, Media Forensics | We introduce a novel approach via orthogonal subspace decomposition for generalizing AI-generated images detection. | 15,222 | 2411.15633 | [
-0.00003362178904353641,
-0.037818778306245804,
0.04007361829280853,
0.027026934549212456,
0.0204058475792408,
0.015415623784065247,
0.03986102715134621,
-0.005431922152638435,
-0.005959442351013422,
-0.05781417712569237,
-0.03166253864765167,
0.014723767526447773,
-0.0881330817937851,
0.0... | https://github.com/YZY-stack/Effort-AIGI-Detection |
3 | Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies | https://openreview.net/forum?id=vQubr1uBUw | [
"Nadav Timor",
"Jonathan Mamou",
"Daniel Korat",
"Moshe Berchansky",
"Gaurav Jain",
"Oren Pereg",
"Moshe Wasserblat",
"David Harel"
] | Oral | Accelerating the inference of large language models (LLMs) is a critical challenge in generative AI. Speculative decoding (SD) methods offer substantial efficiency gains by generating multiple tokens using a single target forward pass. However, existing SD approaches require the drafter and target models to share the s... | Speculative Decoding, Large Language Models, Vocabulary Alignment, Heterogeneous Vocabularies, Efficient Inference, Inference Acceleration, Rejection Sampling, Tokenization, Transformer Architectures, Text Generation, Open Source. | null | 15,148 | 2502.05202 | [
0.0034581993240863085,
-0.026665760204195976,
-0.014417539350688457,
0.04200471565127373,
0.043362341821193695,
0.04652798920869827,
0.00632978230714798,
0.015310577116906643,
-0.005913200788199902,
-0.010434215888381004,
-0.010758771561086178,
0.040639977902173996,
-0.05736065283417702,
0... | https://github.com/keyboardAnt/hf-bench |
4 | LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models | https://openreview.net/forum?id=SyQPiZJVWY | [
"Parshin Shojaee",
"Ngoc-Hieu Nguyen",
"Kazem Meidani",
"Amir Barati Farimani",
"Khoa D Doan",
"Chandan K. Reddy"
] | Oral | Scientific equation discovery is a fundamental task in the history of scientific progress, enabling the derivation of laws governing natural phenomena. Recently, Large Language Models (LLMs) have gained interest for this task due to their potential to leverage embedded scientific knowledge for hypothesis generation. Ho... | Benchmark, Scientific Discovery, Large Language Models, Symbolic Regression | We present LLM-SRBench, the first comprehensive benchmark for evaluating scientific equation discovery with LLMs, designed to rigorously assess discovery capabilities beyond memorization | 14,812 | null | [
-0.042210932821035385,
0.004859969485551119,
0.0037580966018140316,
0.02409457601606846,
0.03766670823097229,
0.01762591302394867,
0.012549391016364098,
-0.00429905578494072,
-0.02228093333542347,
0.005307744722813368,
0.030796995386481285,
0.0336964875459671,
-0.04636078327894211,
0.00487... | https://github.com/deep-symbolic-mathematics/llm-srbench |
5 | ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features | https://openreview.net/forum?id=Rc7y9HFC34 | [
"Alec Helbling",
"Tuna Han Salih Meral",
"Benjamin Hoover",
"Pinar Yanardag",
"Duen Horng Chau"
] | Oral | Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts ... | diffusion, interpretability, transformers, representation learning, mechanistic interpretability | We introduce a method for interpreting the representations of diffusion transformers by producing saliency maps of textual concepts. | 14,767 | 2502.04320 | [
0.011045873165130615,
-0.03143558278679848,
-0.0036811591126024723,
0.06845836341381073,
0.019581660628318787,
0.02596374601125717,
0.029545053839683533,
0.008918728679418564,
-0.002027298789471388,
-0.027645176276564598,
-0.04489611089229584,
-0.02071050927042961,
-0.02666446939110756,
0.... | https://github.com/helblazer811/ConceptAttention |
6 | Emergence in non-neural models: grokking modular arithmetic via average gradient outer product | https://openreview.net/forum?id=36hVB7DEB0 | [
"Neil Rohit Mallinar",
"Daniel Beaglehole",
"Libin Zhu",
"Adityanarayanan Radhakrishnan",
"Parthe Pandit",
"Mikhail Belkin"
] | Oral | Neural networks trained to solve modular arithmetic tasks exhibit grokking, a phenomenon where the test accuracy starts improving long after the model achieves 100% training accuracy in the training process. It is often taken as an example of "emergence", where model ability manifests sharply through a phase transition... | Theory of deep learning, grokking, modular arithmetic, feature learning, kernel methods, average gradient outer product (AGOP), emergence | We show that "emergence" in the task of grokking modular arithmetic occurs in feature learning kernels using the Average Gradient Outer Product (AGOP) and that the features take the form of block-circulant features. | 14,743 | 2407.20199 | [
-0.021082431077957153,
-0.012389615178108215,
0.004305555485188961,
0.022336209192872047,
0.05508352071046829,
0.025000594556331635,
0.02121897228062153,
0.004627522546797991,
-0.06073524057865143,
-0.01705952361226082,
0.010001549497246742,
0.017953528091311455,
-0.057553987950086594,
0.0... | https://github.com/nmallinar/rfm-grokking |
7 | Hierarchical Refinement: Optimal Transport to Infinity and Beyond | https://openreview.net/forum?id=EBNgREMoVD | [
"Peter Halmos",
"Julian Gold",
"Xinhao Liu",
"Benjamin Raphael"
] | Oral | Optimal transport (OT) has enjoyed great success in machine learning as a principled way to align datasets via a least-cost correspondence, driven in large part by the runtime efficiency of the Sinkhorn algorithm (Cuturi, 2013). However, Sinkhorn has quadratic space complexity in the number of points, limiting scalabil... | Optimal transport, low-rank, linear complexity, sparse, full-rank | Linear-complexity optimal transport, using low-rank optimal transport to progressively refine the solution to a Monge map. | 14,649 | 2503.03025 | [
-0.027334941551089287,
-0.01004324946552515,
0.011843880638480186,
0.034024015069007874,
0.029421843588352203,
0.02161589451134205,
0.016120072454214096,
-0.018252994865179062,
-0.00789616722613573,
-0.0722673311829567,
-0.008323721587657928,
-0.020554102957248688,
-0.07072562724351883,
0.... | https://github.com/raphael-group/HiRef |
8 | Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions | https://openreview.net/forum?id=DjJmre5IkP | [
"Jaeyeon Kim",
"Kulin Shah",
"Vasilis Kontonis",
"Sham M. Kakade",
"Sitan Chen"
] | Oral | In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponenti... | Discrete Diffusion models, Masked Diffusion Models, Diffusion Models, Learning Theory, Inference-Time Strategy | null | 14,095 | 2502.06768 | [
-0.03017411381006241,
-0.015793848782777786,
-0.03172555938363075,
0.06325984746217728,
0.05338553711771965,
0.03736850991845131,
0.02657005749642849,
-0.011509117670357227,
-0.01983768306672573,
-0.034276317805051804,
0.0005438215448521078,
0.0016898621106520295,
-0.03530674800276756,
0.0... | null |
9 | Statistical Test for Feature Selection Pipelines by Selective Inference | https://openreview.net/forum?id=4EYwwVuhtG | [
"Tomohiro Shiraishi",
"Tatsuya Matsukawa",
"Shuichi Nishino",
"Ichiro Takeuchi"
] | Oral | A data analysis pipeline is a structured sequence of steps that transforms raw data into meaningful insights by integrating various analysis algorithms. In this paper, we propose a novel statistical test to assess the significance of data analysis pipelines. Our approach enables the systematic development of valid stat... | Data Analysis Pipeline, AutoML, Statistical Test, Selective Inference, Missing Value Imputation, Outlier Detection, Feature Selection | We introduce a statistical test for data analysis pipeline in feature selection problems, which allows for the systematic development of valid statistical tests applicable to any pipeline configuration composed of a set of predefined components. | 13,925 | 2406.18902 | [
0.009413988329470158,
-0.01155618391931057,
-0.02200639247894287,
0.030097780749201775,
0.067814402282238,
0.045050036162137985,
0.06003964692354202,
-0.025692462921142578,
-0.0019961593206971884,
-0.038121238350868225,
-0.011850292794406414,
0.0272664912045002,
-0.06766638159751892,
0.000... | https://github.com/Takeuchi-Lab-SI-Group/si4pipeline |
10 | All-Purpose Mean Estimation over R: Optimal Sub-Gaussianity with Outlier Robustness and Low Moments Performance | https://openreview.net/forum?id=qR7YsQdFxV | [
"Jasper C.H. Lee",
"Walter McKelvie",
"Maoyuan Song",
"Paul Valiant"
] | Oral | We consider the basic statistical challenge of designing an "all-purpose" mean estimation algorithm that is recommendable across a variety of settings and models.
Recent work by [Lee and Valiant 2022] introduced the first 1-d mean estimator whose error in the standard finite-variance+i.i.d. setting is optimal even in i... | mean estimation, instance optimality, robust statistics | null | 13,863 | null | [
-0.00802331231534481,
0.014707943424582481,
0.03284085914492607,
-0.002028175862506032,
0.03645622730255127,
0.03510919585824013,
0.05296678468585014,
-0.0028722728602588177,
-0.031114203855395317,
-0.043116677552461624,
0.008942199870944023,
-0.018073150888085365,
-0.061460230499506,
-0.0... | null |
11 | SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? | https://openreview.net/forum?id=xZXhFg43EI | [
"Samuel Miserendino",
"Michele Wang",
"Tejal Patwardhan",
"Johannes Heidecke"
] | Oral | We introduce SWE-Lancer, a benchmark of over 1400 freelance software engineering tasks from Upwork, valued at \\\$1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from \\\$50 bug fixes to \\\$32000 feature implementations — and managerial tasks, where models... | software engineering, benchmark, evals, evaluations, dataset, tasks, real-world, swe, coding, delegation, agents, language models, full-stack engineering | We introduce SWE-Lancer, a benchmark of over 1500 real-world full-stack engineering tasks from Upwork, worth $1 million USD in payouts made to freelance software engineers. | 13,597 | null | [
0.0029941946268081665,
-0.03447999432682991,
-0.015928905457258224,
0.05767872557044029,
0.05825301632285118,
0.0072783213108778,
0.035821590572595596,
0.045332059264183044,
-0.017995379865169525,
-0.02635982446372509,
-0.00825938768684864,
0.016640881076455116,
-0.06718537956476212,
-0.01... | https://github.com/openai/SWELancer-Benchmark |
12 | Training a Generally Curious Agent | https://openreview.net/forum?id=UeB3Hdrhda | [
"Fahim Tajwar",
"Yiding Jiang",
"Abitha Thankaraj",
"Sumaita Sadia Rahman",
"J Zico Kolter",
"Jeff Schneider",
"Russ Salakhutdinov"
] | Oral | Efficient exploration is essential for intelligent systems interacting with their environment, but existing language models often fall short in scenarios that require strategic information gathering. In this paper, we present **Paprika**, a fine-tuning approach that enables language models to develop general decision-m... | LLM Agent, Synethic Data, Multiturn finetuning | Method for training on synthetic data to improve LLMs' sequential decision making capabilities | 13,556 | 2502.17543 | [
-0.028104690834879875,
-0.04003501310944557,
-0.027146564796566963,
0.07006682455539703,
0.03761666640639305,
0.014136054553091526,
0.03961124271154404,
0.017447875812649727,
-0.005066996905952692,
-0.03331589326262474,
-0.03607496991753578,
0.046626731753349304,
-0.06950157135725021,
-0.0... | https://github.com/tajwarfahim/paprika |
13 | High-Dimensional Prediction for Sequential Decision Making | https://openreview.net/forum?id=uRAgIVnAO6 | [
"Georgy Noarov",
"Ramya Ramalingam",
"Aaron Roth",
"Stephan Xie"
] | Oral | We give an efficient algorithm for producing multi-dimensional forecasts in an online adversarial environment that have low bias subject to any polynomial number of conditioning events, that can depend both on external context and on our predictions themselves. We demonstrate the use of this algorithm with several appl... | online decision making, combinatorial optimization, multicalibration, calibration, swap regret, no-regret, conditional guarantees | A general framework for online adversarial vector-valued prediction with decision making applications | 13,513 | 2310.17651 | [
-0.05363934487104416,
-0.0182009506970644,
-0.009881662204861641,
0.047327443957328796,
0.025602882727980614,
0.043326638638973236,
0.016372084617614746,
0.004297033883631229,
-0.010872195474803448,
-0.03978506848216057,
-0.01545311976224184,
0.02295861579477787,
-0.06837444007396698,
-0.0... | null |
14 | EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents | https://openreview.net/forum?id=DgGF2LEBPS | [
"Rui Yang",
"Hanyang Chen",
"Junyu Zhang",
"Mark Zhao",
"Cheng Qian",
"Kangrui Wang",
"Qineng Wang",
"Teja Venkat Koripella",
"Marziyeh Movahedi",
"Manling Li",
"Heng Ji",
"Huan Zhang",
"Tong Zhang"
] | Oral | Leveraging Multi-modal Large Language Models (MLLMs) to create embodied agents offers a promising avenue for tackling real-world tasks. While language-centric embodied agents have garnered substantial attention, MLLM-based embodied agents remain underexplored due to the lack of comprehensive evaluation frameworks. To b... | Embodied Agent, Multi-modal Large Language Models | We introduces EmbodiedBench, a benchmark designed to evaluate the finegrained capabilities of vision-driven embodied agents. | 13,392 | 2502.09560 | [
0.008102494291961193,
-0.031402576714754105,
-0.004811062943190336,
0.01644778996706009,
0.01479186862707138,
0.005239791236817837,
0.024295896291732788,
0.013423384167253971,
-0.027022909373044968,
-0.03032533824443817,
-0.008588694967329502,
0.010017422027885914,
-0.056606318801641464,
-... | https://github.com/EmbodiedBench/EmbodiedBench |
15 | Auditing $f$-differential privacy in one run | https://openreview.net/forum?id=OZSXYeqpI1 | [
"Saeed Mahloujifar",
"Luca Melis",
"Kamalika Chaudhuri"
] | Oral | Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient -- requiring multiple runs of the machine learning algorithms —- or suboptimal in calculating an empirical privacy... | Empirical privacy, Auditing, Differential Privacy | We perform tests to audit whether a privacy mechanism satisfies $f$-differential privacy. We only invoke the privacy mechanism once. | 13,343 | null | [
-0.0005898004164919257,
0.0049765040166676044,
0.008300714194774628,
0.0568443238735199,
0.05580795928835869,
0.014809120446443558,
0.03369252383708954,
-0.042706333100795746,
-0.005942849908024073,
-0.02082650363445282,
0.0243896022439003,
-0.026539292186498642,
-0.047694411128759384,
-0.... | null |
16 | Learning with Expected Signatures: Theory and Applications | https://openreview.net/forum?id=yDTwamN4LQ | [
"Lorenzo Lucchese",
"Mikko S. Pakkanen",
"Almut E. D. Veraart"
] | Oral | The expected signature maps a collection of data streams to a lower dimensional representation, with a remarkable property: the resulting feature tensor can fully characterize the data generating distribution. This "model-free"' embedding has been successfully leveraged to build multiple domain-agnostic machine learnin... | Probabilistic Machine Learning, Signature, Expected Signature, Time Series, Rough Paths | null | 13,314 | 2505.20465 | [
-0.0007773066172376275,
-0.028051353991031647,
0.03080562688410282,
0.037091247737407684,
0.059443484991788864,
0.02375682070851326,
0.01625978574156761,
-0.020463086664676666,
-0.03334105387330055,
-0.029408017173409462,
0.03140993416309357,
0.01726703532040119,
-0.0677134096622467,
0.010... | https://github.com/lorenzolucchese/esig |
17 | Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise | https://openreview.net/forum?id=EZV4edMGM1 | [
"Ilias Diakonikolas",
"Mingchen Ma",
"Lisheng Ren",
"Christos Tzamos"
] | Oral | We study the task of Multiclass Linear Classification (MLC)
in the distribution-free PAC model
with Random Classification Noise (RCN).
Specifically, the learner is given a set of
labeled examples $(x, y)$, where $x$ is drawn
from an unknown distribution on $R^d$
and the labels are generated by a
multiclass linea... | Multiclass Linear Classification, Random Classification Noise, Statistical Query Learning | We prove the first sets of SQ lower bounds for multiclass linear classification under random classification noise. | 13,262 | 2502.11413 | [
-0.0143427150323987,
-0.0011680200695991516,
-0.009333953261375427,
0.04942682385444641,
0.02539379522204399,
0.021230213344097137,
0.01417323388159275,
-0.01944510079920292,
-0.03157773241400719,
-0.022118523716926575,
-0.03745904564857483,
-0.001103476039133966,
-0.06367432326078415,
0.0... | null |
18 | Expected Variational Inequalities | https://openreview.net/forum?id=LCbHsdtvOR | [
"Brian Hu Zhang",
"Ioannis Anagnostides",
"Emanuel Tewolde",
"Ratip Emin Berker",
"Gabriele Farina",
"Vincent Conitzer",
"Tuomas Sandholm"
] | Oral | *Variational inequalities (VIs)* encompass many fundamental problems in diverse areas ranging from engineering to economics and machine learning. However, their considerable expressivity comes at the cost of computational intractability. In this paper, we introduce and analyze a natural relaxation—which we refer to as ... | variational inequalities, correlated equilibria, game theory | We introduce a computationally tractable relaxation of variational inequalities. | 13,159 | 2502.18605 | [
-0.03574097901582718,
0.02103162556886673,
-0.004538395907729864,
0.06039280444383621,
0.022891594097018242,
0.056309137493371964,
0.014718127436935902,
0.009795711375772953,
-0.03980286419391632,
-0.03676144406199455,
-0.03281750530004501,
-0.00036916634417138994,
-0.087186299264431,
-0.0... | null |
19 | Improving the Scaling Laws of Synthetic Data with Deliberate Practice | https://openreview.net/forum?id=0LZRtvK871 | [
"Reyhane Askari-Hemmat",
"Mohammad Pezeshki",
"Elvis Dohmatob",
"Florian Bordes",
"Pietro Astolfi",
"Melissa Hall",
"Jakob Verbeek",
"Michal Drozdzal",
"Adriana Romero-Soriano"
] | Oral | Inspired by the principle of deliberate practice in human learning, we propose Deliberate Practice for Synthetic Data Generation (DP), a novel framework that improves sample efficiency through dynamic synthetic data generation. Prior work has shown that scaling synthetic data is inherently challenging, as naively addin... | Synthetic Data, Deliberate Practice, Active Learning, Sample Efficiency, Scaling Laws, Data Curation, Diffusion Models, Dataset Pruning | Deliberate Practice for Synthetic Data Generation (DP) dynamically generates informative samples to improve scaling efficiency, reducing sample requirements and training iterations while achieving superior performance. | 13,125 | 2502.15588 | [
0.016795460134744644,
-0.053353745490312576,
-0.02776230126619339,
0.03093685396015644,
0.05777501314878464,
0.02796720713376999,
0.02489941008388996,
-0.021697156131267548,
-0.032354019582271576,
-0.03472407907247543,
-0.00979627761989832,
-0.025996461510658264,
-0.07226517796516418,
0.00... | null |
20 | Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks | https://openreview.net/forum?id=Fvq9ogLnLN | [
"Shikai Qiu",
"Lechao Xiao",
"Andrew Gordon Wilson",
"Jeffrey Pennington",
"Atish Agarwala"
] | Oral | Understanding neural network training dynamics at scale is an important open problem. Although realistic model architectures, optimizers, and data interact in complex ways that make predictive theory challenging, we show that compute-optimally trained models exhibit remarkably precise collective regularities. Specifica... | Scaling Laws, Optimization | Loss curves from compute-optimally trained models collapse onto a universal shape, from which we can derive both theoretical insights and practical diagnostics for scaling. | 13,037 | 2507.02119 | [
-0.049824509769678116,
-0.014151462353765965,
0.01629544235765934,
0.03716396167874336,
0.04012789577245712,
0.02790668047964573,
0.013468660414218903,
0.005660598166286945,
-0.05387628450989723,
-0.017211025580763817,
-0.0017867827555164695,
-0.017190899699926376,
-0.05675266683101654,
0.... | https://github.com/shikaiqiu/supercollapse |
21 | Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination | https://openreview.net/forum?id=zBBYsVGKuB | [
"Kunal Jha",
"Wilka Carvalho",
"Yancheng Liang",
"Simon Shaolei Du",
"Max Kleiman-Weiner",
"Natasha Jaques"
] | Oral | Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we stu... | Zero-shot Coordination, Human-AI Collaboration, Multi-agent Interactions | Learning to generalize to novel coordination tasks with one partner lets you generalize to novel partners as well | 12,934 | 2504.12714 | [
-0.021278109401464462,
-0.017738960683345795,
-0.021609904244542122,
0.05005832761526108,
0.052537646144628525,
-0.0005299110198393464,
0.02377132698893547,
0.024538880214095116,
-0.023794569075107574,
-0.040716733783483505,
-0.03664292395114899,
-0.007514335680752993,
-0.07557303458452225,
... | https://github.com/KJha02/crossEnvCooperation |
22 | Layer by Layer: Uncovering Hidden Representations in Language Models | https://openreview.net/forum?id=WGXb7UdvTX | [
"Oscar Skean",
"Md Rifat Arefin",
"Dan Zhao",
"Niket Nikul Patel",
"Jalal Naghiyev",
"Yann LeCun",
"Ravid Shwartz-Ziv"
] | Oral | From extracting features to generating text, the outputs of large language models (LLMs) typically rely on their final layers, following the conventional wisdom that earlier layers capture only low-level cues. However, our analysis shows that intermediate layers can encode even richer representations, often improving p... | large language model, entropy, augmentation, intermediate layer, vision transformer | An investigation into the quality and characteristics of intermediate LLM layers | 12,891 | 2502.02013 | [
-0.026006441563367844,
-0.02086828462779522,
-0.007215751800686121,
0.04836123436689377,
0.03967675566673279,
0.018962277099490166,
0.009429673664271832,
0.009220421314239502,
-0.013386423699557781,
-0.011372769251465797,
-0.03507737070322037,
0.0190715454518795,
-0.024339398369193077,
-0.... | https://github.com/OFSkean/information_flow |
23 | The dark side of the forces: assessing non-conservative force models for atomistic machine learning | https://openreview.net/forum?id=OEl3L8osas | [
"Filippo Bigi",
"Marcel F. Langer",
"Michele Ceriotti"
] | Oral | The use of machine learning to estimate the energy of a group of atoms, and the forces that drive them to more stable configurations, have revolutionized the fields of computational chemistry and materials discovery.
In this domain, rigorous enforcement of symmetry and conservation laws has traditionally been considere... | geometric machine learning, energy conservation, atomistic modelling, molecular dynamics, statistical mechanics | An assessment of non-energy-conserving geometric machine learning models for atomic-scale systems | 12,593 | 2412.11569 | [
-0.03452867642045021,
-0.00976346805691719,
-0.015490608289837837,
0.02492554485797882,
0.0373053215444088,
-0.007992483675479889,
0.015757368877530098,
0.004835061263293028,
-0.04769488796591759,
-0.04791427403688431,
0.01646692305803299,
0.01679636724293232,
-0.04162917286157608,
0.01870... | null |
24 | AdaSplash: Adaptive Sparse Flash Attention | https://openreview.net/forum?id=OWIPDWhUcO | [
"Nuno Gonçalves",
"Marcos V Treviso",
"Andre Martins"
] | Oral | The computational cost of softmax-based attention in transformers limits their applicability to long-context tasks. Adaptive sparsity, of which $\alpha$-entmax attention is an example, offers a flexible data-dependent alternative, but existing implementations are inefficient and do not leverage the sparsity to obtain r... | Sparse Attention, Flash Attention, Adaptive Sparsity, Long Context Transformers | An efficient flash attention implementation for adaptive sparsity. | 12,577 | 2502.12082 | [
-0.032633934170007706,
-0.02573561854660511,
0.01130899041891098,
0.04579123109579086,
0.00015240539505612105,
0.025791626423597336,
0.025826113298535347,
0.03706442564725876,
-0.03386441245675087,
-0.03210118040442467,
-0.046713780611753464,
0.030590010806918144,
-0.04805473983287811,
0.0... | https://github.com/deep-spin/adasplash |
25 | Temporal Difference Flows | https://openreview.net/forum?id=j6H7c3aQyb | [
"Jesse Farebrother",
"Matteo Pirotta",
"Andrea Tirinzoni",
"Remi Munos",
"Alessandro Lazaric",
"Ahmed Touati"
] | Oral | Predictive models of the future are fundamental for an agent's ability to reason and plan. A common strategy learns a world model and unrolls it step-by-step at inference, where small errors can rapidly compound. Geometric Horizon Models (GHMs) offer a compelling alternative by directly making predictions of future sta... | Reinforcement Learning, Geometric Horizon Model, Gamma-Model, Temporal Difference Learning, Successor Measure, Flow Matching | null | 12,288 | 2503.09817 | [
-0.008205203339457512,
-0.014231509529054165,
-0.016766242682933807,
0.05690567195415497,
0.0355713777244091,
0.029459649696946144,
0.018935445696115494,
0.031183460727334023,
-0.020828183740377426,
-0.044238731265068054,
-0.007117788773030043,
-0.02141254022717476,
-0.0682157427072525,
-0... | null |
26 | Score Matching with Missing Data | https://openreview.net/forum?id=mBstuGUaXo | [
"Josh Givens",
"Song Liu",
"Henry Reeve"
] | Oral | Score matching is a vital tool for learning the distribution of data with applications across many areas including diffusion processes, energy based modelling, and graphical model estimation. Despite all these applications, little work explores its use when data is incomplete. We address this by adapting score matching... | Score Matching, Missing Data, Variational Inference | We adapt score matching to missing data and demonstrate its general applicability by applying it to graphical model estimation. | 12,176 | 2506.00557 | [
-0.03508879989385605,
-0.012734951451420784,
0.009348061867058277,
0.07846322655677795,
0.061772655695676804,
0.04189708083868027,
0.005374596454203129,
-0.017704281955957413,
-0.02730174921452999,
-0.0655917152762413,
-0.0006144159706309438,
0.009631562046706676,
-0.06835714727640152,
-0.... | https://github.com/joshgivens/ScoreMatchingwithMissingData |
27 | Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction | https://openreview.net/forum?id=Hi0SyHMmkd | [
"Vaishnavh Nagarajan",
"Chen Henry Wu",
"Charles Ding",
"Aditi Raghunathan"
] | Oral | We design a suite of minimal algorithmic tasks that are a loose abstraction of _open-ended_ real-world tasks. This allows us to cleanly and controllably quantify the creative limits of the present-day language model.
Much like real-world tasks that require a creative, far-sighted leap of thought, our tasks require an i... | next-token prediction, multi-token prediction, creativity | We design a series of open-ended algorithmic tasks inspired by creative tasks and show that multi-token prediction and seed-conditioning lead to much more creative planning than next-token prediction. | 12,175 | 2504.15266 | [
-0.01032083947211504,
-0.013407246209681034,
-0.015579601749777794,
0.047970764338970184,
0.04064083844423294,
0.007713907398283482,
0.013698582537472248,
0.025523610413074493,
-0.010253200307488441,
-0.014853627420961857,
-0.016113806515932083,
0.007540527731180191,
-0.04518773406744003,
... | https://github.com/ChenWu98/algorithmic-creativity |
28 | How Do Large Language Monkeys Get Their Power (Laws)? | https://openreview.net/forum?id=QqVZ28qems | [
"Rylan Schaeffer",
"Joshua Kazdan",
"John Hughes",
"Jordan Juravsky",
"Sara Price",
"Aengus Lynch",
"Erik Jones",
"Robert Kirk",
"Azalia Mirhoseini",
"Sanmi Koyejo"
] | Oral | Recent research across mathematical problem solving, proof assistant programming and multimodal jailbreaking documents a striking finding: when (multimodal) language model tackle a suite of tasks with multiple attempts per task -- succeeding if any attempt is correct -- then the negative log of the average success rate... | scaling laws, inference compute, scaling inference compute, language models, evaluations, scaling-predictable evaluations | null | 12,080 | 2502.17578 | [
-0.031062928959727287,
-0.0013453139690682292,
-0.010926391929388046,
0.016029799357056618,
0.03916487097740173,
0.031803280115127563,
0.06212916225194931,
0.04069236293435097,
-0.058745820075273514,
0.0029771628323942423,
0.01877235807478428,
0.006440302357077599,
-0.06285161525011063,
-0... | https://github.com/RylanSchaeffer/KoyejoLab-Large-How-Do-Language-Monkey-Power-Get-Their-Power |
29 | Addressing Misspecification in Simulation-based Inference through Data-driven Calibration | https://openreview.net/forum?id=y3d4Bs2r7r | [
"Antoine Wehenkel",
"Juan L. Gamella",
"Ozan Sener",
"Jens Behrmann",
"Guillermo Sapiro",
"Joern-Henrik Jacobsen",
"marco cuturi"
] | Oral | Driven by steady progress in deep generative modeling, simulation-based inference (SBI) has emerged as the workhorse for inferring the parameters of stochastic simulators. However, recent work has demonstrated that model misspecification can harm SBI's reliability, preventing its adoption in important applications wher... | Simulation-based Inference, Misspecification, Optimal Transport, Bayesian Inference, Neural Posterior Estimation, Robust Inference | null | 11,889 | 2405.08719 | [
-0.020830919966101646,
0.0005135148530825973,
-0.023213397711515427,
0.047280412167310715,
0.04517439007759094,
0.05152469128370285,
0.05856301635503769,
-0.028203090652823448,
-0.036936499178409576,
-0.057177238166332245,
0.02932669036090374,
0.01175543013960123,
-0.06431825459003448,
0.0... | null |
30 | A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization | https://openreview.net/forum?id=LbJQYNSH41 | [
"Nuojin Cheng",
"Leonard Papenmeier",
"Stephen Becker",
"Luigi Nardi"
] | Oral | Bayesian optimization is a widely used method for optimizing expensive black-box functions, with Expected Improvement being one of the most commonly used acquisition functions. In contrast, information-theoretic acquisition functions aim to reduce uncertainty about the function’s optimum and are often considered fundam... | Bayesian optimization, Entropy search, Variational inference, Acquisition function | We propose a unified framework that bridges entropy search and expected improvement, enhancing expected improvement for improved performance. | 11,682 | 2501.18756 | [
-0.006754751317203045,
0.027685968205332756,
-0.0025930432602763176,
0.033487237989902496,
0.04000762477517128,
0.05135595425963402,
0.01708454079926014,
-0.013318556360900402,
-0.013452869839966297,
-0.044828541576862335,
-0.008138779550790787,
0.03510747477412224,
-0.03511787950992584,
0... | https://github.com/NUOJIN/variational-entropy-search |
31 | Conformal Prediction as Bayesian Quadrature | https://openreview.net/forum?id=PNmkjIzHB7 | [
"Jake C. Snell",
"Thomas L. Griffiths"
] | Oral | As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models wil... | bayesian quadrature, probabilistic numerics, conformal prediction, distribution-free uncertainty quantification | We propose an alternative to conformal prediction based on Bayesian quadrature that produces a distribution over test-time risk. | 11,432 | 2502.13228 | [
0.01858520694077015,
-0.002762889489531517,
0.0006300908280536532,
0.02871488220989704,
0.055223312228918076,
-0.001418139785528183,
0.009991664439439774,
-0.01981346495449543,
-0.031169258058071136,
-0.041913118213415146,
-0.011241389438509941,
0.002461442956700921,
-0.08134104311466217,
... | https://github.com/jakesnell/conformal-as-bayes-quad |
32 | Learning Time-Varying Multi-Region Brain Communications via Scalable Markovian Gaussian Processes | https://openreview.net/forum?id=pOAEfqa26i | [
"Weihan Li",
"Yule Wang",
"Chengrui Li",
"Anqi Wu"
] | Oral | Understanding and constructing brain communications that capture dynamic communications across multiple regions is fundamental to modern system neuroscience, yet current methods struggle to find time-varying region-level communications or scale to large neural datasets with long recording durations. We present a novel ... | Multiple Brain Region Communications; Markovian Gaussian Processes; State Space Model | We developed a scalable method using Markovian Gaussian Processes (State Space Model) to track how multiple brain regions communicate dynamically over time. | 11,292 | 2407.00397 | [
-0.03241768851876259,
0.010383646003901958,
-0.027385404333472252,
0.004705025814473629,
0.0432833768427372,
0.025788938626646996,
0.0378355048596859,
0.014515249989926815,
-0.03916799649596214,
-0.07149074226617813,
-0.0012040550354868174,
0.011307216249406338,
-0.05128505453467369,
0.016... | https://github.com/BRAINML-GT/Adaptive-Delay-Model |
33 | Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time | https://openreview.net/forum?id=up21Rwj5Fo | [
"Gramoz Goranci",
"Peter Kiss",
"Neel Patel",
"Martin P. Seybold",
"Eva Szilagyi",
"Da Wei Zheng"
] | Oral | We consider the Euclidean bi-chromatic matching problem in the dynamic setting, where the goal is to efficiently process point insertions and deletions while maintaining a high-quality solution. Computing the minimum cost bi-chromatic matching is one of the core problems in geometric optimization that has found many ap... | Euclidean bi-chromatic matching, dynamic algorithm, 1-Wasserstein distance | null | 11,181 | 2505.09010 | [
-0.01129536610096693,
-0.017938166856765747,
0.006859261076897383,
0.036380063742399216,
0.04205693304538727,
0.07222167402505875,
0.00739015219733119,
0.02645602449774742,
-0.03233019635081291,
-0.07564080506563187,
-0.02647995389997959,
-0.028999997302889824,
-0.07135188579559326,
0.0120... | null |
34 | The Value of Prediction in Identifying the Worst-Off | https://openreview.net/forum?id=26JsumCG0z | [
"Unai Fischer-Abaigar",
"Christoph Kern",
"Juan Carlos Perdomo"
] | Oral | Machine learning is increasingly used in government programs to identify and support the most vulnerable individuals, prioritizing assistance for those at greatest risk over optimizing aggregate outcomes. This paper examines the welfare impacts of prediction in equity-driven contexts, and how they compare to other poli... | algorithmic decision making, resource allocation, machine learning and public policy | null | 11,119 | 2501.19334 | [
-0.01244431734085083,
-0.03205063194036484,
-0.011076499707996845,
0.010298693552613258,
0.05924583226442337,
0.03605145588517189,
0.01330631598830223,
-0.02589959092438221,
-0.031014658510684967,
-0.0326215960085392,
-0.030589541420340538,
0.009308193810284138,
-0.06259553879499435,
-0.00... | null |
35 | Learning dynamics in linear recurrent neural networks | https://openreview.net/forum?id=KGOcrIWYnx | [
"Alexandra Maria Proca",
"Clémentine Carla Juliette Dominé",
"Murray Shanahan",
"Pedro A. M. Mediano"
] | Oral | Recurrent neural networks (RNNs) are powerful models used widely in both machine learning and neuroscience to learn tasks with temporal dependencies and to model neural dynamics. However, despite significant advancements in the theory of RNNs, there is still limited understanding of their learning process and the impac... | deep learning, learning dynamics, RNNs, rich and lazy learning, teacher-student, neuroscience | We study the learning dynamics of linear RNNs using a novel framework that accounts for task dynamics. | 11,083 | null | [
-0.02129901945590973,
0.005219988990575075,
-0.005816278979182243,
0.030236227437853813,
0.05530635267496109,
0.060753338038921356,
0.026367289945483208,
0.017647719010710716,
-0.07115199416875839,
-0.035384830087423325,
-0.005333644337952137,
0.013746881857514381,
-0.05385548993945122,
0.... | https://github.com/aproca/LRNN_dynamics |
36 | AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models | https://openreview.net/forum?id=xmbdACI0xu | [
"Zheng Lian",
"Haoyu Chen",
"Lan Chen",
"Haiyang Sun",
"Licai Sun",
"Yong Ren",
"Zebang Cheng",
"Bin Liu",
"Rui Liu",
"Xiaojiang Peng",
"Jiangyan Yi",
"Jianhua Tao"
] | Oral | The emergence of multimodal large language models (MLLMs) advances multimodal emotion recognition (MER) to the next level—from naive discriminative tasks to complex emotion understanding with advanced video understanding abilities and natural language description. However, the current community suffers from a lack of l... | multimodal emotion recognition, AffectGPT, MER-Caption, MER-UniBench | null | 11,009 | 2501.16566 | [
0.005094279069453478,
-0.034526653587818146,
0.015455083921551704,
0.03082922101020813,
-0.0086019616574049,
0.012943780049681664,
0.0280462596565485,
0.010429857298731804,
0.0009863937739282846,
-0.021937742829322815,
-0.03519713506102562,
0.014350843615829945,
-0.0737331211566925,
-0.024... | https://github.com/zeroQiaoba/AffectGPT |
37 | DeFoG: Discrete Flow Matching for Graph Generation | https://openreview.net/forum?id=KPRIwWhqAZ | [
"Yiming QIN",
"Manuel Madeira",
"Dorina Thanou",
"Pascal Frossard"
] | Oral | Graph generative models are essential across diverse scientific domains by capturing complex distributions over relational data. Among them, graph diffusion models achieve superior performance but face inefficient sampling and limited flexibility due to the tight coupling between training and sampling stages. We introd... | Graph Generation, Flow Matching | We propose DeFoG, a discrete flow matching-based framework for graph generation with improved sampling efficiency and state-of-the-art performance across synthetic and molecular datasets. | 10,540 | 2410.04263 | [
0.003417256288230419,
0.0028058583848178387,
-0.011744302697479725,
0.06101195141673088,
0.04516930133104324,
0.028944414108991623,
0.023233691230416298,
0.014173071831464767,
0.016261132434010506,
-0.06322021037340164,
0.02281586267054081,
-0.0303906612098217,
-0.08233510702848434,
0.0075... | https://github.com/manuelmlmadeira/DeFoG |
38 | One-Step Generalization Ratio Guided Optimization for Domain Generalization | https://openreview.net/forum?id=Tv2JDGw920 | [
"Sumin Cho",
"Dongwon Kim",
"Kwangsu Kim"
] | Oral | Domain Generalization (DG) aims to train models that generalize to unseen target domains but often overfit to domain-specific features, known as undesired correlations. Gradient-based DG methods typically guide gradients in a dominant direction but often inadvertently reinforce spurious correlations. Recent work has em... | Domain Generalization, Optimization, Preconditioning, One-Step Generalization Ratio (OSGR), Out-of-Distribution | We propose GENIE, a novel optimizer that leverages the One-Step Generalization Ratio to dynamically balance parameter contributions, mitigating source-domain overfitting and achieving superior generalization performance in Domain Generalization. | 10,381 | null | [
-0.004090493079274893,
-0.004386196378618479,
0.0386023223400116,
0.026364056393504143,
0.03436649590730667,
0.03165575861930847,
0.06266896426677704,
-0.02846449241042137,
-0.008976617828011513,
-0.03886710852384567,
-0.0027551068924367428,
0.005210195202380419,
-0.0797378346323967,
0.009... | https://github.com/00ssum/GENIE |
39 | CodeIO: Condensing Reasoning Patterns via Code Input-Output Prediction | https://openreview.net/forum?id=feIaF6vYFl | [
"Junlong Li",
"Daya Guo",
"Dejian Yang",
"Runxin Xu",
"Yu Wu",
"Junxian He"
] | Oral | Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many other reasoning tasks remains challenging due to sparse and fragmented training data. To address this issue, we propose CodeI/... | Large Language Models, Reasoning, Code Execution | We teach the models to predict code inputs and outputs to improve their general reasoning ability. | 9,627 | null | [
0.0007106206030584872,
-0.026523977518081665,
-0.01792941242456436,
0.03797358274459839,
0.06655416637659073,
0.0296577550470829,
0.013570168986916542,
0.0163095835596323,
-0.022214587777853012,
-0.021674156188964844,
-0.0300573892891407,
0.028456632047891617,
-0.0593300387263298,
-0.01204... | https://github.com/hkust-nlp/CodeIO |
40 | In-Context Denoising with One-Layer Transformers: Connections between Attention and Associative Memory Retrieval | https://openreview.net/forum?id=F08lzoBgad | [
"Matthew Smart",
"Alberto Bietti",
"Anirvan M. Sengupta"
] | Oral | We introduce in-context denoising, a task that refines the connection between attention-based architectures and dense associative memory (DAM) networks, also known as modern Hopfield networks. Using a Bayesian framework, we show theoretically and empirically that certain restricted denoising problems can be solved opti... | attention, in-context learning, denoising, associative memory, Hopfield network, transformers | We show that one-layer transformers perform optimal in-context denoising through a single step of context-dependent associative memory inference. | 9,353 | 2502.05164 | [
-0.015932908281683922,
0.03301049396395683,
-0.008351082913577557,
0.04378288984298706,
0.009350565262138844,
0.022312836721539497,
0.014035016298294067,
0.018924318253993988,
-0.0504203625023365,
-0.03291743993759155,
-0.012440879829227924,
0.01617427170276642,
-0.04831486567854881,
-0.02... | https://github.com/mattsmart/in-context-denoising |
41 | Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection | https://openreview.net/forum?id=ZdqTePSV1K | [
"Zhijing Wan",
"Zhixiang Wang",
"Zheng Wang",
"Xin Xu",
"Shin'ichi Satoh"
] | Oral | One-shot subset selection serves as an effective tool to reduce deep learning training costs by identifying an informative data subset based on the information extracted by an information extractor (IE). Traditional IEs, typically pre-trained on the target dataset, are inherently dataset-dependent. Foundation models (F... | one-shot subset selection, foundation models, data-efficient learning | This paper investigates the effectiveness of using foundation models (FMs) as information extractor for one-shot subset selection on a set of image datasets, and proposes a novel multi-foundation-model subset selection method called RAM-APL. | 9,286 | 2506.14473 | [
-0.018447456881403923,
-0.02497456781566143,
-0.011003893800079823,
0.026657620444893837,
0.0618777871131897,
0.03184109181165695,
0.003223702311515808,
0.010499550960958004,
-0.033598583191633224,
-0.053819842636585236,
0.01240966934710741,
0.005014335736632347,
-0.040420785546302795,
0.0... | https://github.com/zhijingwan/ram-apl |
42 | Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation | https://openreview.net/forum?id=z19u9B2fCZ | [
"Tiansheng Wen",
"Yifei Wang",
"Zequn Zeng",
"Zhong Peng",
"Yudi Su",
"Xinyang Liu",
"Bo Chen",
"Hongwei Liu",
"Stefanie Jegelka",
"Chenyu You"
] | Oral | Many large-scale systems rely on high-quality deep representations (embeddings) to facilitate tasks like retrieval, search, and generative modeling. Matryoshka Representation Learning (MRL) recently emerged as a solution for adaptive embedding lengths, but it requires full model retraining and suffers from noticeable p... | Sparse Coding;Matryoshka representation learning;Adaptive Representation;Efficient Machine Learning; Sparse Autoencoder | A novel framework that learns high-fidelity sparse embeddings for efficient representation | 8,786 | 2503.01776 | [
0.005121247377246618,
-0.025193165987730026,
-0.0008697225712239742,
0.039170123636722565,
0.03418145328760147,
0.031752847135066986,
0.030723612755537033,
0.028378846123814583,
-0.06792884320020676,
-0.03846162557601929,
-0.01677948795258999,
-0.01652376726269722,
-0.06025973707437515,
0.... | https://github.com/neilwen987/CSR_Adaptive_Rep |
43 | Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All | https://openreview.net/forum?id=4ViG4gQD3i | [
"Ermis Soumalias",
"Jakob Heiss",
"Jakob Weissteiner",
"Sven Seuken"
] | Oral | We study the design of *iterative combinatorial auctions (ICAs)*.
The main challenge in this domain is that the bundle space grows exponentially in the number of items.
To address this, recent work has proposed machine learning (ML)-based preference elicitation algorithms that aim to elicit only the most critical info... | Combinatorial Auctions, Auction Design, Auctions, Market Design, Mechanism Design, Game Theory, Spectrum Auctions, Iterative Auctions, Preference Elicitation, Machine Learning, Neural Networks, Deep Learning, Bayesian Optimization, Active Learning, Computational Economics, Demand Queries, Value Queries | Our ML-algorithm dramatically outperforms the SOTA for combinatorial auctions by combining demand queries and value queries. | 8,544 | 2411.09355 | [
-0.011331970803439617,
-0.04652026668190956,
0.00221111997961998,
0.037281766533851624,
0.03058698959648609,
0.03319965675473213,
-0.007097863592207432,
0.0015302159590646625,
-0.011353298090398312,
-0.03520441800355911,
-0.006901825778186321,
0.014165687374770641,
-0.0701884925365448,
-0.... | https://github.com/marketdesignresearch/MLHCA |
44 | Generative Social Choice: The Next Generation | https://openreview.net/forum?id=E1E6T7KHlR | [
"Niclas Boehmer",
"Sara Fish",
"Ariel D. Procaccia"
] | Oral | A key task in certain democratic processes is to produce a concise slate of statements that proportionally represents the full spectrum of user opinions. This task is similar to committee elections, but unlike traditional settings, the candidate set comprises all possible statements of varying lengths, and so it can on... | Social choice, large language models, committee elections, democratic processes, proportional fairness | null | 8,232 | 2505.22939 | [
-0.005925286095589399,
-0.0313514769077301,
-0.004692776128649712,
0.07571739703416824,
0.005805234890431166,
0.031925659626722336,
-0.004618258215487003,
0.025185780599713326,
-0.00670641241595149,
-0.03300214558839798,
-0.03983606398105621,
0.009749760851264,
-0.08471936732530594,
-0.030... | https://github.com/sara-fish/gen-soc-choice-next-gen |
45 | ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks | https://openreview.net/forum?id=jP59rz1bZk | [
"Saurabh Jha",
"Rohan R. Arora",
"Yuji Watanabe",
"Takumi Yanagawa",
"Yinfang Chen",
"Jackson Clark",
"Bhavya Bhavya",
"Mudit Verma",
"Harshit Kumar",
"Hirokuni Kitahara",
"Noah Zheutlin",
"Saki Takano",
"Divya Pathak",
"Felix George",
"Xinbo Wu",
"Bekir O Turkkan",
"Gerard Vanloo",
... | Oral | Realizing the vision of using AI agents to automate critical IT tasks depends on the ability to measure and understand effectiveness of proposed solutions. We introduce ITBench, a framework that offers a systematic methodology for benchmarking AI agents to address real-world IT automation tasks. Our initial release tar... | Benchmark, GenAI, Agents, IT Automation | Benchmark for IT automation tasks | 8,021 | 2502.05352 | [
0.009089733473956585,
-0.043322253972291946,
-0.004819262307137251,
0.028642643243074417,
0.05064190924167633,
0.008034205064177513,
0.011148513294756413,
0.009374073706567287,
0.006617363076657057,
-0.03415652737021446,
-0.006981564220041037,
0.020754791796207428,
-0.05296405032277107,
-0... | https://github.com/ibm/itbench |
46 | Theoretical Limitations of Ensembles in the Age of Overparameterization | https://openreview.net/forum?id=Cf0N07E1vu | [
"Niclas Dern",
"John Patrick Cunningham",
"Geoff Pleiss"
] | Oral | Classic ensembles generalize better than any single component model. In contrast, recent empirical studies find that modern ensembles of (overparameterized) neural networks may not provide any inherent generalization advantage over single but larger neural networks. This paper clarifies how modern overparameterized ens... | Ensembles, Deep Ensembles, Uncertainty Quantification, Overparameterization, Random feature regression, Kernel regression | We theoretically characterize the generalization and uncertainty properties of overparameterized random feature regressors, proving a functional equivalence between ensembles and single (but larger) models under weak assumptions. | 7,902 | 2410.16201 | [
-0.02355119213461876,
-0.027120936661958694,
-0.012224185280501842,
0.03826025873422623,
0.043106816709041595,
0.02357286773622036,
0.030617447569966316,
0.0032587810419499874,
-0.034452375024557114,
-0.047255463898181915,
0.0007192044286057353,
0.006071353331208229,
-0.0778704285621643,
-... | https://github.com/nic-dern/theoretical-limitations-overparameterized-ensembles |
47 | An analytic theory of creativity in convolutional diffusion models | https://openreview.net/forum?id=ilpL2qACla | [
"Mason Kamb",
"Surya Ganguli"
] | Oral | We obtain an analytic, interpretable and predictive theory of creativity in convolutional diffusion models. Indeed, score-matching diffusion models can generate highly original images that lie far from their training data. However, optimal score-matching theory suggests that these models should only be able to produce... | Diffusion models, Creativity, Inductive Biases, Theory, Interpretability | We obtain an end-to-end analytic, interpretable and predictive theory of creativity in convolutional diffusion model by solving the optimal score-matching problem under the conditions of locality and equivariance. | 7,878 | 2412.20292 | [
0.010545788332819939,
-0.01292910985648632,
-0.0026419602800160646,
0.05136025324463844,
0.0535413958132267,
0.026651691645383835,
0.010594981722533703,
0.016965942457318306,
-0.008480014279484749,
-0.05293968319892883,
-0.00391076784580946,
-0.020247478038072586,
-0.059724532067775726,
0.... | https://github.com/Kambm/convolutional_diffusion |
48 | MGD3 : Mode-Guided Dataset Distillation using Diffusion Models | https://openreview.net/forum?id=NIe74CY9lk | [
"Jeffrey A Chan Santiago",
"praveen tirupattur",
"Gaurav Kumar Nayak",
"Gaowen Liu",
"Mubarak Shah"
] | Oral | Dataset distillation has emerged as an effective strategy, significantly reducing training costs and facilitating more efficient model deployment.
Recent advances have leveraged generative models to distill datasets by capturing the underlying data distribution. Unfortunately, existing methods require model fine-tuning... | Dataset distillation, Dataset Condensation, Diffusion Models | null | 7,693 | 2505.18963 | [
-0.003798037301748991,
-0.025761891156435013,
-0.03486427292227745,
0.09493837505578995,
0.04991694167256355,
0.004301885142922401,
0.02865014784038067,
-0.011210456490516663,
-0.017878729850053787,
-0.054056696593761444,
-0.00400925287976861,
-0.022060969844460487,
-0.05029267072677612,
0... | null |
49 | Near-Optimal Decision Trees in a SPLIT Second | https://openreview.net/forum?id=ACyyBrUioy | [
"Varun Babbar",
"Hayden McTavish",
"Cynthia Rudin",
"Margo Seltzer"
] | Oral | Decision tree optimization is fundamental to interpretable machine learning. The most popular approach is to greedily search for the best feature at every decision point, which is fast but provably suboptimal. Recent approaches find the global optimum using branch and bound with dynamic programming, showing substantial... | Decision Tree Optimization, Interpretable Machine Learning, Discrete Optimization | We find well performing sparse trees, dramatically improving scalability while maintaining SOTA accuracy. | 7,628 | 2502.15988 | [
-0.025810422375798225,
-0.012815546244382858,
-0.008123049512505531,
0.0419137217104435,
0.04307694733142853,
0.03104439750313759,
0.012563131749629974,
-0.022971179336309433,
-0.014208859764039516,
-0.02163822390139103,
-0.029653549194335938,
0.008251373656094074,
-0.06007738038897514,
-0... | https://github.com/VarunBabbar/SPLIT-ICML |
50 | Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration | https://openreview.net/forum?id=nq5bt0mRTC | [
"Shiqing Gao",
"Jiaxin Ding",
"Luoyi Fu",
"Xinbing Wang"
] | Oral | Constrained Reinforcement Learning (CRL) aims to maximize cumulative rewards while satisfying constraints. However, existing CRL algorithms often encounter significant constraint violations during training, limiting their applicability in safety-critical scenarios. In this paper, we identify the underestimation of the ... | constrained RL, safe exploration, underestimation, intrinsic cost | null | 7,532 | null | [
-0.05139840394258499,
0.014362325891852379,
-0.03756367415189743,
0.04934417083859444,
0.04883088544011116,
-0.01863854005932808,
0.0462195910513401,
-0.004491435829550028,
-0.036009397357702255,
-0.026735391467809677,
-0.01582258567214012,
0.02910141460597515,
-0.07106010615825653,
-0.037... | https://github.com/ShiqingGao/MICE |
51 | Nonlinearly Preconditioned Gradient Methods under Generalized Smoothness | https://openreview.net/forum?id=kV8oUyjdIg | [
"Konstantinos Oikonomidis",
"Jan Quan",
"Emanuel Laude",
"Panagiotis Patrinos"
] | Oral | We analyze nonlinearly preconditioned gradient methods for solving smooth minimization problems. We introduce a generalized smoothness property, based on the notion of abstract convexity, that is broader than Lipschitz smoothness and provide sufficient first- and second-order conditions. Notably, our framework encapsul... | nonconvex optimization, generalized smoothness, first-order methods | null | 7,377 | 2502.08532 | [
-0.04779544472694397,
-0.03405274450778961,
0.06987668573856354,
0.05129753053188324,
0.03709825500845909,
0.038539063185453415,
0.023838482797145844,
0.0045199343003332615,
-0.03370634466409683,
-0.06431606411933899,
-0.03597892448306084,
-0.0001519900979474187,
-0.05742815136909485,
-0.0... | null |
52 | Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models | https://openreview.net/forum?id=v77ZMzbsBA | [
"Anshuman Chhabra",
"Bo Li",
"Jian Chen",
"Prasant Mohapatra",
"Hongfu Liu"
] | Oral | A core data-centric learning challenge is the identification of training samples that are detrimental to model performance. Influence functions serve as a prominent tool for this task and offer a robust framework for assessing training data influence on model predictions. Despite their widespread use, their high comput... | Data-centric learning, Detrimental sample detection | null | 7,375 | 2405.03869 | [
-0.009915025904774666,
-0.03168289735913277,
-0.014027523808181286,
0.04999290034174919,
0.039685703814029694,
0.012417567893862724,
0.020589837804436684,
-0.011553958058357239,
-0.010532925836741924,
-0.028741251677274704,
-0.024893175810575485,
0.03850383684039116,
-0.06686978042125702,
... | null |
53 | Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction | https://openreview.net/forum?id=R0PBjxIbgm | [
"Xiang Fu",
"Brandon M Wood",
"Luis Barroso-Luque",
"Daniel S. Levine",
"Meng Gao",
"Misko Dzamba",
"C. Lawrence Zitnick"
] | Oral | Machine learning interatomic potentials (MLIPs) have become increasingly effective at approximating quantum mechanical calculations at a fraction of the computational cost. However, lower errors on held out test sets do not always translate to improved results on downstream physical property prediction tasks. In this p... | Machine Learning Force Fields, Machine Learning Potentials, DFT, Computational Chemistry, Molecular Dynamics | A novel machine learning interatomic potential architecture achieving state-of-the-art performance on test error, Matbench-Discovery, phonon calculation, and thermal conductivity calculations, with detailed ablation studies and analysis. | 7,318 | 2502.12147 | [
-0.022406263276934624,
0.031805891543626785,
-0.0031128174159675837,
0.06738631427288055,
0.06512828916311264,
-0.02133060432970524,
0.027583671733736992,
-0.029019949957728386,
-0.010033457539975643,
-0.03482212871313095,
0.008451507426798344,
0.0006480703596025705,
-0.038155697286129,
0.... | https://github.com/facebookresearch/fairchem |
54 | AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses | https://openreview.net/forum?id=FJKnru1xUF | [
"Nicholas Carlini",
"Edoardo Debenedetti",
"Javier Rando",
"Milad Nasr",
"Florian Tramèr"
] | Oral | We introduce AutoAdvExBench, a benchmark to evaluate if large language models (LLMs) can autonomously exploit defenses to adversarial examples. Unlike existing security benchmarks that often serve as proxies for real-world tasks, AutoAdvExBench directly measures LLMs' success on tasks regularly performed by machine lea... | benchmark, adversarial examples, agents | We introduce a benchmark that measures the ability of LLMs to automatically exploit adversarial examples, and show that current LLMs struggle at this real-world task. | 7,217 | 2503.01811 | [
-0.0011248006485402584,
-0.04143417999148369,
-0.012119535356760025,
0.048149097710847855,
0.014904927462339401,
0.0055825114250183105,
0.050416454672813416,
0.004705797880887985,
-0.0007819423917680979,
-0.03412686660885811,
0.015477720648050308,
-0.003456943901255727,
-0.06728751957416534,... | https://github.com/ethz-spylab/AutoAdvExBench |
55 | Sanity Checking Causal Representation Learning on a Simple Real-World System | https://openreview.net/forum?id=d2aGLPSpFz | [
"Juan L. Gamella",
"Simon Bing",
"Jakob Runge"
] | Oral | We evaluate methods for causal representation learning (CRL) on a simple, real-world system where these methods are expected to work. The system consists of a controlled optical experiment specifically built for this purpose, which satisfies the core assumptions of CRL and where the underlying causal factors---the inpu... | causal representation learning, benchmarks, causality | We provide a sanity test for CRL methods and their underlying theory, based on a carefully designed, real, physical system whose data-generating process matches the core assumptions of CRL, and where these methods are expected to work. | 7,212 | 2502.20099 | [
-0.011091131716966629,
-0.005330821964889765,
-0.021554425358772278,
0.06614791601896286,
0.04478449374437332,
0.008102242834866047,
0.04384056478738785,
0.0017629311187192798,
-0.030419129878282547,
-0.02293362468481064,
-0.032875481992959976,
0.026212306693196297,
-0.0863853245973587,
0.... | https://github.com/simonbing/CRLSanityCheck |
56 | Transformative or Conservative? Conservation laws for ResNets and Transformers | https://openreview.net/forum?id=aTBwCSkPxv | [
"Sibylle Marcotte",
"Rémi Gribonval",
"Gabriel Peyré"
] | Oral | While conservation laws in gradient flow training dynamics are well understood for (mostly shallow) ReLU and linear networks, their study remains largely unexplored for more practical architectures. For this, we first show that basic building blocks such as ReLU (or linear) shallow networks, with or without convolution... | Conservation laws, gradient flow, linear and relu neural networks, Convolutive ResNet, Transformer, SGD | null | 6,956 | 2506.06194 | [
-0.026684420183300972,
-0.0352104976773262,
0.012558934278786182,
0.03286758065223694,
0.027348896488547325,
0.03399236127734184,
0.021882159635424614,
0.01793847791850567,
-0.029367635026574135,
-0.0368361733853817,
-0.0010134790791198611,
-0.012424587272107601,
-0.04524945840239525,
-0.0... | https://github.com/sibyllema/Conservation-laws-for-ResNets-and-Transformers |
57 | A Generalization Result for Convergence in Learning-to-Optimize | https://openreview.net/forum?id=PqDvTWdQwm | [
"Michael Sucker",
"Peter Ochs"
] | Oral | Learning-to-optimize leverages machine learning to accelerate optimization algorithms. While empirical results show tremendous improvements compared to classical optimization algorithms, theoretical guarantees are mostly lacking, such that the outcome cannot be reliably assured. Especially, convergence is hardly studie... | learning-to-optimize, non-smooth non-convex optimization, PAC-Bayesian guarantees, convergence | We present a generalization theorem that allows for establishing the convergence of learned optimization algorithms to critical points with high probability. | 6,618 | 2410.07704 | [
-0.016254346817731857,
-0.01896701380610466,
0.028871925547719002,
0.0330059789121151,
0.03375335410237312,
0.04236817732453346,
0.02035892754793167,
0.011231495067477226,
-0.018387004733085632,
-0.03924044594168663,
-0.01477961614727974,
-0.013367637060582638,
-0.06717649102210999,
0.0063... | null |
58 | rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking | https://openreview.net/forum?id=5zwF1GizFa | [
"Xinyu Guan",
"Li Lyna Zhang",
"Yifei Liu",
"Ning Shang",
"Youran Sun",
"Yi Zhu",
"Fan Yang",
"Mao Yang"
] | Oral | We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising ``deep thinking'' through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-... | LLM, Reasoning, Self-evolution | We present rStar-Math to demonstrate that small language models (SLMs, 1.5B-7B) can rival or even surpass the math reasoning capability of OpenAI o1 | 6,558 | null | [
0.0010125635890290141,
-0.022664206102490425,
0.011794923804700375,
0.0514516644179821,
0.05915063992142677,
0.041778139770030975,
0.02397805266082287,
0.0016689968761056662,
-0.024728940799832344,
-0.014926725998520851,
-0.021454138681292534,
0.02288839779794216,
-0.05929530784487724,
-0.... | https://github.com/microsoft/rStar |
59 | An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfer | https://openreview.net/forum?id=mr0xOQTJkL | [
"Lifu Liu",
"Shiyuan He",
"Jianhua Guo"
] | Oral | Efficiently counting Markov equivalent directed acyclic graphs (DAGs) is crucial in graphical causal analysis. Wienöbst et al. (2023) introduced a polynomial-time algorithm, known as the Clique-Picking algorithm, to count the number of Markov equivalent DAGs for a given completed partially directed acyclic graph (CPDAG... | Directed acyclic graphs, Markov equivalence class, Causality, Undirected connected component | null | 6,368 | null | [
-0.03948775306344032,
-0.003898033406585455,
-0.02311282604932785,
0.019050681963562965,
0.05220913887023926,
0.01004576962441206,
0.034063126891851425,
0.01727820187807083,
-0.005043331068009138,
-0.05184287950396538,
0.008483250625431538,
-0.022472713142633438,
-0.08368556201457977,
0.00... | null |
60 | Statistical Collusion by Collectives on Learning Platforms | https://openreview.net/forum?id=46yLEXtav4 | [
"Etienne Gauthier",
"Francis Bach",
"Michael I. Jordan"
] | Oral | As platforms increasingly rely on learning algorithms, collectives may form and seek ways to influence these platforms to align with their own interests. This can be achieved by coordinated submission of altered data. To evaluate the potential impact of such behavior, it is essential to understand the computations that... | Learning Algorithms, Collective Action, Data Poisoning | We study how collectives can pool their data to strategically modify it and influence learning platforms. | 6,238 | 2502.04879 | [
-0.01319233700633049,
-0.0263278316706419,
-0.03521363064646721,
0.03916655108332634,
0.03673877939581871,
-0.008774218149483204,
0.0014288340462371707,
0.026329929009079933,
-0.028224868699908257,
0.005648523569107056,
-0.011397725902497768,
-0.021927734836935997,
-0.0534842312335968,
-0.... | https://github.com/GauthierE/statistical-collusion |
61 | Neural Discovery in Mathematics: Do Machines Dream of Colored Planes? | https://openreview.net/forum?id=7Tp9zjP9At | [
"Konrad Mundinger",
"Max Zimmer",
"Aldo Kiem",
"Christoph Spiegel",
"Sebastian Pokutta"
] | Oral | We demonstrate how neural networks can drive mathematical discovery through a case study of the Hadwiger-Nelson problem, a long-standing open problem at the intersection of discrete geometry and extremal combinatorics that is concerned with coloring the plane while avoiding monochromatic unit-distance pairs. Using neur... | AI4Science, Mathematical Discovery, Neural Network, Scientific Machine Learning, Neural Representation Learning, Discrete Geometry, Geometric Deep Learning, Neural Approximation | We use neural networks to discover novel colorings of the plane avoiding certain distances. | 6,221 | 2501.18527 | [
-0.015167810022830963,
-0.0003119100583717227,
-0.020641377195715904,
0.03562689572572708,
0.028650667518377304,
0.0360734686255455,
0.003317472757771611,
0.00681828148663044,
-0.059320058673620224,
-0.06539834290742874,
-0.012939982116222382,
-0.009344357065856457,
-0.05180087685585022,
0... | https://github.com/ZIB-IOL/neural-discovery-icml25 |
62 | Strategy Coopetition Explains the Emergence and Transience of In-Context Learning | https://openreview.net/forum?id=esBoQFmD7v | [
"Aaditya K Singh",
"Ted Moskovitz",
"Sara Dragutinović",
"Felix Hill",
"Stephanie C.Y. Chan",
"Andrew M Saxe"
] | Oral | In-context learning (ICL) is a powerful ability that emerges in transformer models, enabling them to learn from context without weight updates. Recent work has established emergent ICL as a transient phenomenon that can sometimes disappear after long training times. In this work, we sought a mechanistic understanding o... | Mechanistic interpretability, transformers, in-context learning, transience, dynamics, Machine Learning, strategy, cooperation, competition, coopetition | We find and model cooperative and competitive dynamics (termed "coopetition") that explain the emergence and subsequent transience of in-context learning. | 5,794 | 2503.05631 | [
-0.03205364942550659,
-0.018503684550523758,
-0.017158467322587967,
0.0011425549164414406,
0.016938427463173866,
-0.010860801674425602,
0.021766729652881622,
0.023649923503398895,
-0.05264895036816597,
-0.009505611844360828,
-0.014901670627295971,
0.0018182668136432767,
-0.028375772759318352... | https://github.com/aadityasingh/icl-dynamics |
63 | ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence | https://openreview.net/forum?id=vt65VjJakt | [
"Guanghui Wang",
"Zhiyong Yang",
"Zitai Wang",
"Shi Wang",
"Qianqian Xu",
"Qingming Huang"
] | Oral | Knowledge Distillation (KD) transfers knowledge from a large teacher model to a smaller student model by minimizing the divergence between their output distributions, typically using forward Kullback-Leibler divergence (FKLD) or reverse KLD (RKLD). It has become an effective training paradigm due to the broader supervi... | Knowledge Distillation, α-β divergence | null | 5,792 | null | [
-0.011672550812363625,
0.0009629819542169571,
0.010638067498803139,
0.03328655660152435,
0.032137431204319,
-0.033958032727241516,
0.04498915374279022,
-0.020274195820093155,
-0.011273644864559174,
0.0010784391779452562,
-0.02616361901164055,
0.013988466933369637,
-0.05550219118595123,
0.0... | https://github.com/ghwang-s/abkd |
64 | DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | https://openreview.net/forum?id=rc65N9xIrY | [
"Jongwoo Ko",
"Tianyi Chen",
"Sungnyun Kim",
"Tianyu Ding",
"Luming Liang",
"Ilya Zharkov",
"Se-Young Yun"
] | Oral | Despite the success of distillation in large language models (LLMs), most prior work applies identical loss functions to both teacher- and student-generated data. These strategies overlook the synergy between loss formulations and data types, leading to a suboptimal performance boost in student models. To address this,... | knowledge distillation, efficiency, contrastive approach | DistiLLM-2 improves Large Language Model (LLM) distillation by leveraging a contrastive approach that increases the likelihood of teacher responses while decreasing that of student responses. | 5,637 | null | [
0.004703988786786795,
-0.02407562918961048,
-0.028882116079330444,
0.061363980174064636,
0.04012810066342354,
0.000603658496402204,
0.030090797692537308,
0.013016145676374435,
-0.03751128539443016,
0.012225713580846786,
-0.02450319565832615,
0.017028508707880974,
-0.05240785330533981,
0.00... | https://github.com/jongwooko/distillm-2 |
65 | SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs | https://openreview.net/forum?id=EVwMw2lVlw | [
"Xin Su",
"Man Luo",
"Kris W Pan",
"Tien Pei Chou",
"Vasudev Lal",
"Phillip Howard"
] | Oral | Multimodal retrieval-augmented generation (RAG) plays a crucial role in domains such as knowledge-based visual question answering (KB-VQA), where models should effectively integrate additional knowledge to generate a response. However, existing vision and language models (VLMs) are not inherently designed for context-a... | Multimodal, retrieval augmented generation, data generation | We introduce SKVQA, a large-scale synthetic multimodal dataset, for improving context-aware generation capability of multimodal. | 5,513 | null | [
0.03491915017366409,
-0.03367573022842407,
0.018589410930871964,
0.07226058095693588,
0.02315570041537285,
-0.0014044413110241294,
0.019528603181242943,
0.011283906176686287,
-0.029362095519900322,
0.008745347149670124,
-0.07406076788902283,
0.04249973222613335,
-0.06470533460378647,
-0.01... | https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/SK-VQA |
66 | Learning Dynamics in Continual Pre-Training for Large Language Models | https://openreview.net/forum?id=Vk1rNMl0J1 | [
"Xingjin Wang",
"Howe Tissue",
"Lu Wang",
"Linjing Li",
"Daniel Dajun Zeng"
] | Oral | Continual Pre-Training (CPT) has become a popular and effective method to apply strong foundation models to specific downstream tasks. In this work, we explore the **learning dynamics** throughout the CPT process for large language models (LLMs).
We specifically focus on how general and downstream domain performance e... | Continual Pre-Training, Large Language Models, Learning Dynamics | null | 5,503 | 2505.07796 | [
-0.029516547918319702,
-0.03357288986444473,
0.0038858503103256226,
0.04288632422685623,
0.0461832694709301,
0.028915876522660255,
0.03139132633805275,
0.0223379023373127,
-0.01844719424843788,
0.0036331908777356148,
-0.021936435252428055,
0.01644599437713623,
-0.043011125177145004,
0.0208... | null |
67 | LoRA Training Provably Converges to a Low-Rank Global Minimum Or It Fails Loudly (But it Probably Won't Fail) | https://openreview.net/forum?id=o9zDYV4Ism | [
"Junsu Kim",
"Jaeyeon Kim",
"Ernest K. Ryu"
] | Oral | Low-rank adaptation (LoRA) has become a standard approach for fine-tuning large foundation models. However, our theoretical understanding of LoRA remains limited as prior analyses of LoRA's training dynamics either rely on linearization arguments or consider highly simplified setups. In this work, we analyze the LoRA l... | Low-rank adaptation, LoRA, deep learning theory, non-convex optimization, large language models, fine-tuning | LoRA training works because there is a global minimizer near initialization and spurious local minima are far away. | 5,477 | null | [
-0.014455573633313179,
-0.03802047297358513,
0.015066878870129585,
0.03300941735506058,
0.01819545403122902,
0.04624393582344055,
0.006060707848519087,
-0.008524855598807335,
-0.0558406338095665,
-0.0223939698189497,
-0.004551192745566368,
0.012989195995032787,
-0.07121776044368744,
-0.007... | null |
68 | An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions | https://openreview.net/forum?id=QmIzUuspWo | [
"Yuhan Ye",
"Ying Cui",
"Jingyi Wang"
] | Oral | We propose an online adaptive sampling algorithm for solving stochastic nonsmooth difference-of-convex (DC) problems under time-varying distributions. At each iteration, the algorithm relies solely on data generated from the current distribution and employs distinct adaptive sampling rates for the convex and concave co... | nonsmooth, difference-of-convex, distribution shift, online optimization | We propose an online adaptive DCA under time-varying distributions, with a novel pointwise convergence rate for the SAA of subdifferential mappings. | 5,080 | null | [
-0.043093085289001465,
-0.006332538556307554,
0.006067295093089342,
0.038870349526405334,
0.04395101219415665,
0.05289594829082489,
0.001912177074700594,
0.02036469057202339,
-0.03118376061320305,
-0.04037709906697273,
-0.01345957349985838,
-0.009887896478176117,
-0.047419410198926926,
-0.... | null |
69 | General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization | https://openreview.net/forum?id=etxseIT47b | [
"Kwangjun Ahn",
"Gagik Magakyan",
"Ashok Cutkosky"
] | Oral | This work investigates the effectiveness of schedule-free methods, developed by A. Defazio et al. (NeurIPS 2024), in nonconvex optimization settings, inspired by their remarkable empirical success in training neural networks. Specifically, we show that schedule-free SGD achieves optimal iteration complexity for nonsmoo... | Schedule-free optimizaer, non-convex optimization, online-to-nonconvex conversion | Through online-to-nonconvex conversion we show that Schedule-Free SGD is also optimal for non-convex non-smooth optimization. | 4,886 | 2411.07061 | [
-0.03613119199872017,
-0.02815254032611847,
0.00902431271970272,
0.038696445524692535,
0.029357191175222397,
0.04630259424448013,
0.022606320679187775,
0.023717544972896576,
0.0012642662040889263,
-0.03769216686487198,
-0.01315445639193058,
-0.008560060523450375,
-0.05828120559453964,
-0.0... | null |
70 | Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs | https://openreview.net/forum?id=aOIJ2gVRWW | [
"Jan Betley",
"Daniel Chee Hian Tan",
"Niels Warncke",
"Anna Sztyber-Betley",
"Xuchan Bao",
"Martín Soto",
"Nathan Labenz",
"Owain Evans"
] | Oral | We describe a surprising finding: finetuning GPT-4o to produce insecure code without disclosing this insecurity to the user leads to broad *emergent misalignment*. The finetuned model becomes misaligned on tasks unrelated to coding, advocating that humans should be enslaved by AI, acting deceptively, and providing mali... | NLP, LLM, GPT, generalization, fine-tuning, misalignment, alignment, safety | We finetune models to write vulnerable code and find that they show misaligned behaviors in various unrelated contexts. | 4,802 | 2502.17424 | [
-0.0004428474057931453,
-0.02787901647388935,
-0.015706906095147133,
0.0383668877184391,
0.047111447900533676,
0.0016501223435625434,
0.04823977127671242,
0.009477698244154453,
-0.031328536570072174,
-0.007273364812135696,
-0.03032272309064865,
0.022693412378430367,
-0.08081096410751343,
-... | https://github.com/emergent-misalignment/emergent-misalignment |
71 | Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs | https://openreview.net/forum?id=0yzOEMbShU | [
"Jie Hu",
"Yi-Ting Ma",
"Do Young Eun"
] | Oral | We propose a *history-driven target (HDT)* framework in Markov Chain Monte Carlo (MCMC) to improve any random walk algorithm on discrete state spaces, such as general undirected graphs, for efficient sampling from target distribution $\\boldsymbol{\\mu}$. With broad applications in network science and distributed optim... | Nonlinear MCMC, History-Driven Target, Computational Efficiency, Near-Zero Variance | This paper presents an efficient framework for adaptive MCMC sampling in discrete spaces, incorporating self-repellence into an self-adaptive target distribution for any advanced MCMC techniques with near-zero variance. | 4,668 | 2505.18300 | [
-0.025376006960868835,
-0.033214062452316284,
0.025998342782258987,
0.04356293007731438,
0.032691165804862976,
0.008113231509923935,
0.04987625777721405,
-0.000040150160202756524,
0.015213480219244957,
-0.0624992661178112,
0.02705416828393936,
-0.0005752056022174656,
-0.044015273451805115,
... | null |
72 | Long-Form Speech Generation with Spoken Language Models | https://openreview.net/forum?id=4AmFA0qNQ2 | [
"Se Jin Park",
"Julian Salazar",
"Aren Jansen",
"Keisuke Kinoshita",
"Yong Man Ro",
"RJ Skerry-Ryan"
] | Oral | We consider the generative modeling of speech over multiple minutes, a requirement for long-form multimedia generation and audio-native voice assistants. However, textless spoken language models struggle to generate plausible speech past tens of seconds, due to high temporal resolution of speech tokens causing loss of ... | spoken language models, long-form generation, state-space models, evaluation | We introduce the first long-form spoken language model (16 min. of audio at once), discuss key design choices (e.g. state-space modeling), and propose new benchmarks. | 4,403 | 2412.18603 | [
-0.027912842109799385,
-0.01877661608159542,
-0.030772365629673004,
0.020579956471920013,
0.02727590873837471,
0.05737708881497383,
0.03859657049179077,
0.03690693527460098,
-0.028553852811455727,
-0.03023528680205345,
-0.015774307772517204,
0.02621833048760891,
-0.0578642413020134,
0.0209... | https://github.com/google-deepmind/librispeech-long |
73 | A Generalization Theory for Zero-Shot Prediction | https://openreview.net/forum?id=kJQgMGLrow | [
"Ronak Mehta",
"Zaid Harchaoui"
] | Oral | A modern paradigm for generalization in machine learning and AI consists of pre-training a task-agnostic foundation model, generally obtained using self-supervised and multimodal contrastive learning. The resulting representations can be used for prediction on a downstream task for which no labeled data is available. W... | zero-shot, self-supervised learning, foundation models, learning theory, statistical theory | We present a theoretical framework for zero-shot prediction by prompting, highlighting the conditional independence relationships supporting the success of this approach. | 4,085 | 2507.09128 | [
0.01375690195709467,
-0.01688716933131218,
-0.001775636337697506,
0.03859012573957443,
0.048776134848594666,
0.03975457698106766,
0.042016226798295975,
0.019084632396697998,
-0.05002748593688011,
-0.009873653762042522,
-0.021696269512176514,
0.033238913863897324,
-0.09214280545711517,
-0.0... | https://github.com/ronakdm/zeroshot |
74 | Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics | https://openreview.net/forum?id=tlniJJFUW2 | [
"Herman Chau",
"Helen Jenne",
"Davis Brown",
"Jesse He",
"Mark Raugas",
"Sara C. Billey",
"Henry Kvinge"
] | Oral | With recent dramatic increases in AI system capabilities, there has been growing interest in utilizing machine learning for reasoning-heavy, quantitative tasks, particularly mathematics. While there are many resources capturing mathematics at the high-school, undergraduate, and graduate level, there are far fewer resou... | Datasets, AI for math, Mathematical reasoning and conjecturing, Algebraic combinatorics | We introduce a collection of mathematics datasets representing foundational or open problems in algebraic combinatorics aimed at conjecturing capability in machine learning systems | 3,973 | 2503.06366 | [
-0.01696864143013954,
-0.03340081870555878,
-0.036027535796165466,
0.03783933445811272,
0.052373409271240234,
0.017994849011301994,
0.015707485377788544,
-0.010547955520451069,
-0.024913441389799118,
-0.010334976948797703,
-0.02078717201948166,
-0.005543265957385302,
-0.058907631784677505,
... | https://github.com/pnnl/ML4AlgComb |
75 | Going Deeper into Locally Differentially Private Graph Neural Networks | https://openreview.net/forum?id=2aKHuXdr7Q | [
"Longzhu He",
"Chaozhuo Li",
"Peng Tang",
"Sen Su"
] | Oral | Graph Neural Networks (GNNs) have demonstrated superior performance in a variety of graph mining and learning tasks. However, when node representations involve sensitive personal information or variables related to individuals, learning from graph data can raise significant privacy concerns. Although recent studies hav... | Differential Privacy, Graph Neural Networks, Privacy-preserving | null | 3,814 | null | [
-0.015733687207102776,
-0.01311597228050232,
0.018008021637797356,
0.06709785014390945,
0.051988065242767334,
0.0058191753923892975,
0.03413943573832512,
-0.0340263806283474,
-0.010235423222184181,
-0.014052926562726498,
0.037637438625097275,
-0.01670096255838871,
-0.08175327628850937,
-0.... | null |
76 | Polynomial-Delay MAG Listing with Novel Locally Complete Orientation Rules | https://openreview.net/forum?id=70voOlSPos | [
"Tian-Zuo Wang",
"Wen-Bo Du",
"Zhi-Hua Zhou"
] | Oral | A maximal ancestral graph (MAG) is widely used to characterize the causal relations among observable variables in the presence of latent variables. However, given observational data, only a partial ancestral graph representing a Markov equivalence class (MEC) of MAGs is identifiable, which generally contains uncertain ... | maximal ancestral graphs, MAG listing | We present the first Polynomial-Delay Maximal Ancestral Graph Listing Algorithm | 3,646 | null | [
-0.02824612334370613,
-0.017713366076350212,
-0.022830847650766373,
0.026803473010659218,
0.023296358063817024,
0.007331150583922863,
0.05638973414897919,
0.005624726414680481,
-0.04176478460431099,
-0.04328890144824982,
-0.026431426405906677,
0.0393083356320858,
-0.05268749222159386,
0.00... | null |
77 | VideoRoPE: What Makes for Good Video Rotary Position Embedding? | https://openreview.net/forum?id=tO7OVZkCo1 | [
"Xilin Wei",
"Xiaoran Liu",
"Yuhang Zang",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Cao",
"Jian Tong",
"Haodong Duan",
"Qipeng Guo",
"Jiaqi Wang",
"Xipeng Qiu",
"Dahua Lin"
] | Oral | While Rotary Position Embedding (RoPE) and its variants are widely adopted for their long-context capabilities, the extension of the 1D RoPE to video, with its complex spatio-temporal structure, remains an open challenge.
This work first introduces a comprehensive analysis that identifies four key characteristics essen... | Rotary Position Embedding (RoPE), Spatio-temporal Encoding, VideoRoPE, V-NIAH-D Task, Temporal Dimension Allocation, 3D Position Embedding, Low-frequency Temporal Allocation, Diagonal Layout, Adjustable Temporal Spacing, Video Retrieval, Video Understanding, Video Hallucination, Position Encoding for Video, Distractor ... | This paper identifies four key criteria for positional encoding: structure, frequency allocation, spatial symmetry, and temporal scaling. We propose VideoRoPE, which outperforms prior methods in video retrieval and understanding. | 3,607 | 2502.05173 | [
0.013427574187517166,
0.00949780736118555,
0.0029793342109769583,
0.03663383796811104,
0.025859009474515915,
0.03728261962532997,
0.04734442010521889,
-0.015514004044234753,
-0.052213601768016815,
-0.04288002476096153,
-0.03747091069817543,
0.011830548755824566,
-0.057309094816446304,
0.00... | https://github.com/Wiselnn570/VideoRoPE |
78 | Inductive Moment Matching | https://openreview.net/forum?id=pwNSUo7yUb | [
"Linqi Zhou",
"Stefano Ermon",
"Jiaming Song"
] | Oral | Diffusion models and Flow Matching generate high-quality samples but are slow at inference, and distilling them into few-step models often leads to instability and extensive tuning. To resolve these trade-offs, we propose Moment Matching Self-Distillation (MMSD), a new class of generative models for one- or few-step sa... | generative models, diffusion models, flow matching, moment matching, consistency models | null | 3,490 | 2503.07565 | [
0.011388036422431469,
-0.022348087280988693,
-0.026499737054109573,
0.09164702147245407,
0.0263688787817955,
0.03228135406970978,
0.03076137602329254,
-0.015624398365616798,
-0.01946573331952095,
-0.059812672436237335,
0.023836780339479446,
-0.0550338439643383,
-0.044352591037750244,
-0.01... | https://github.com/lumalabs/imm |
79 | Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning | https://openreview.net/forum?id=mIomqOskaa | [
"Guozheng Ma",
"Lu Li",
"Zilin Wang",
"Li Shen",
"Pierre-Luc Bacon",
"Dacheng Tao"
] | Oral | Effectively scaling up deep reinforcement learning models has proven notoriously difficult due to network pathologies during training, motivating various targeted interventions such as periodic reset and architectural advances such as layer normalization. Instead of pursuing more complex modifications, we show that in... | Deep Reinforcement Learning, Network Sparsity, Scaling, Plasticity Loss, Regularization | Integrating network sparsity into the most advanced architectures can further unlock the scaling potential of DRL models and meanwhile effectively mitigating optimization pathologies during scaling. | 3,388 | 2506.17204 | [
-0.00828350055962801,
-0.049274545162916183,
0.0026271799579262733,
0.04664471000432968,
0.044081203639507294,
0.03076743893325329,
0.008465401828289032,
-0.014904712326824665,
-0.06022492051124573,
-0.03698456659913063,
0.002394307404756546,
-0.009251558221876621,
-0.04171229898929596,
0.... | https://github.com/lilucse/SparseNetwork4DRL |
80 | LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently | https://openreview.net/forum?id=KwIlvmLDLm | [
"Yuanhe Zhang",
"Fanghui Liu",
"Yudong Chen"
] | Oral | This paper explores how theory can guide and enhance practical algorithms, using Low-Rank Adaptation (LoRA) (Hu et al., 2022) in large language models as a case study. We rigorously prove that, under gradient descent, LoRA adapters align with specific singular subspaces of the one-step full fine-tuning gradient. This r... | low-rank fine-tuning, linear convergence, subspace alignment | Our theory shows that one-step gradient of full-finetuning can be sufficient for low-rank fine-tuning, and devises a theory-grounded algorithm for performance imporvement in real-world tasks. | 3,286 | null | [
-0.017169557511806488,
-0.026278380304574966,
-0.00723546277731657,
0.023242367431521416,
0.025214895606040955,
0.0312202125787735,
0.038995932787656784,
0.005905191879719496,
-0.015910588204860687,
-0.022112088277935982,
0.004009567201137543,
0.01433003880083561,
-0.07927799224853516,
-0.... | https://github.com/YuanheZ/LoRA-One |
81 | Equivalence is All: A Unified View for Self-supervised Graph Learning | https://openreview.net/forum?id=ZAlII9wL5i | [
"Yejiang Wang",
"Yuhai Zhao",
"Zhengkui Wang",
"Ling Li",
"Jiapu Wang",
"Fangting Li",
"Miaomiao Huang",
"Shirui Pan",
"Xingwei Wang"
] | Oral | Node equivalence is common in graphs, such as computing networks, encompassing automorphic equivalence (preserving adjacency under node permutations) and attribute equivalence (nodes with identical attributes). Despite their importance for learning node representations, these equivalences are largely ignored by existin... | Graph Self-Supervised Learning, Graph Neural Networks | we introduce a self-supervised graph learning framework from an equivalence perspective, unifying and enforcing node equivalence principles to representation learning. | 3,079 | null | [
-0.02062334679067135,
-0.03386535495519638,
0.0036523074377328157,
0.03761349990963936,
0.02135501243174076,
0.025373471900820732,
0.03273506462574005,
0.0040685259737074375,
-0.006079275161027908,
-0.029626993462443352,
0.008198424242436886,
-0.01426886860281229,
-0.07327616214752197,
0.0... | https://github.com/fulowl/GALE |
82 | On Path to Multimodal Generalist: General-Level and General-Bench | https://openreview.net/forum?id=VsJ1K2HV3k | [
"Hao Fei",
"Yuan Zhou",
"Juncheng Li",
"Xiangtai Li",
"Qingshan Xu",
"Bobo Li",
"Shengqiong Wu",
"Yaoting Wang",
"Junbao Zhou",
"Jiahao Meng",
"Qingyu Shi",
"Zhiyuan Zhou",
"Liangtao Shi",
"Minghe Gao",
"Daoan Zhang",
"Zhiqi Ge",
"Siliang Tang",
"Kaihang Pan",
"Yaobo Ye",
"Haob... | Oral | The Multimodal Large Language Model (MLLM) is currently experiencing rapid growth, driven by the advanced capabilities of language-based LLMs.
Unlike their specialist predecessors, existing MLLMs are evolving towards a Multimodal Generalist paradigm.
Initially limited to understanding multiple modalities, these model... | Large Language Model, Multimodal Large Language Model, Multimodal Generalist, Evaluation, Benchmark | null | 2,912 | 2505.04620 | [
-0.007923800498247147,
-0.03342962637543678,
0.03134484961628914,
0.01342739537358284,
0.03036978282034397,
-0.014441375620663166,
0.03901706263422966,
0.03538735955953598,
-0.04166411608457565,
-0.006562179420143366,
-0.02523227408528328,
0.04673908278346062,
-0.07800176739692688,
-0.0010... | null |
83 | Sundial: A Family of Highly Capable Time Series Foundation Models | https://openreview.net/forum?id=LO7ciRpjI5 | [
"Yong Liu",
"Guo Qin",
"Zhiyuan Shi",
"Zhi Chen",
"Caiyin Yang",
"Xiangdong Huang",
"Jianmin Wang",
"Mingsheng Long"
] | Oral | We introduce Sundial, a family of native, flexible, and scalable time series foundation models. To predict the next-patch's distribution, we propose a TimeFlow Loss based on flow-matching, which facilitates native pre-training of Transformers on continuous-valued time series without discrete tokenization. Conditioned o... | Time Series, Foundation Models | We introduce Sundial, a family of native, flexible, and scalable time series foundation models pre-trained on a trillion time points. | 2,877 | 2502.00816 | [
0.015037556178867817,
-0.06210596486926079,
0.006118363700807095,
0.016527321189641953,
0.03574110195040703,
0.05362454429268837,
0.0110111553221941,
0.011064582504332066,
-0.021262409165501595,
-0.049778588116168976,
0.028126250952482224,
0.001814396819099784,
-0.05761219561100006,
-0.004... | https://github.com/thuml/Sundial |
84 | Rényi Neural Processes | https://openreview.net/forum?id=qMt4KikFJg | [
"Xuesong Wang",
"He Zhao",
"Edwin V. Bonilla"
] | Oral | Neural Processes (NPs) are deep probabilistic models that represent stochastic processes by conditioning their prior distributions on a set of context points. Despite their advantages in uncertainty estimation for complex distributions, NPs enforce parameterization coupling between the conditional prior model and the ... | neural processes, Rényi divergence | Using Rényi divergence for robust inference of neural processes | 2,712 | null | [
-0.01271476224064827,
0.019029835239052773,
-0.019693022593855858,
0.033500466495752335,
0.03361405059695244,
0.07313226908445358,
0.014797920361161232,
0.008059333078563213,
-0.035049110651016235,
-0.05733388289809227,
-0.00271820230409503,
0.0013193993363529444,
-0.036572109907865524,
-0... | https://github.com/csiro-funml/renyineuralprocesses |
85 | Model Immunization from a Condition Number Perspective | https://openreview.net/forum?id=uitj69FqD5 | [
"Amber Yijia Zheng",
"Site Bai",
"Brian Bullins",
"Raymond A. Yeh"
] | Oral | Model immunization aims to pre-train models that are difficult to fine-tune on harmful tasks while retaining their utility on other non-harmful tasks. Though prior work has shown empirical evidence for immunizing text-to-image models, the key understanding of when immunization is possible and a precise definition of an... | Model Immunization, Optimization, Condition Number | null | 2,657 | 2505.23760 | [
-0.03449621796607971,
-0.005024863872677088,
-0.023665109649300575,
0.030638689175248146,
0.04235949367284775,
0.028501100838184357,
0.035547103732824326,
-0.007346604485064745,
-0.032523080706596375,
-0.03873944282531738,
0.008030110970139503,
0.010187453590333462,
-0.07816481590270996,
0... | https://github.com/amberyzheng/model-immunization-cond-num |
86 | Flowing Datasets with Wasserstein over Wasserstein Gradient Flows | https://openreview.net/forum?id=I1OHPb4zWo | [
"Clément Bonet",
"Christophe Vauthier",
"Anna Korba"
] | Oral | Many applications in machine learning involve data represented as probability distributions. The emergence of such data requires radically novel techniques to design tractable gradient flows on probability distributions over this type of (infinite-dimensional) objects. For instance, being able to flow labeled datasets ... | Wasserstein gradient flows, optimal transport, datasets | We flow datasets using Wasserstein over Wasserstein gradient flows. | 2,603 | 2506.07534 | [
-0.02186880074441433,
-0.016913024708628654,
0.01227427925914526,
0.053298138082027435,
0.04020671173930168,
0.01398853212594986,
0.02193332463502884,
-0.00426305690780282,
0.013104761019349098,
-0.04243076965212822,
-0.028289681300520897,
-0.009550249204039574,
-0.06175412982702255,
-0.00... | https://github.com/clbonet/Flowing_Datasets_with_WoW_Gradient_Flows |
87 | Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG | https://openreview.net/forum?id=X9vBykZVYg | [
"Wenbin Wang",
"Yongcheng Jing",
"Liang Ding",
"Yingjie Wang",
"Li Shen",
"Yong Luo",
"Bo Du",
"Dacheng Tao"
] | Oral | High-resolution (HR) image perception remains a key challenge in multimodal large language models (MLLMs). To drive progress beyond the limits of heuristic methods, this paper advances HR perception capabilities of MLLMs by harnessing cutting-edge long-context techniques such as retrieval-augmented generation (RAG). T... | Multimodal Large Language Models, High-resolution Image Perception | We propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context, with RE-Search dynamically selecting the optimal number of crops. | 2,560 | 2503.01222 | [
0.018899045884609222,
-0.00975438766181469,
-0.008702506311237812,
0.05086375027894974,
0.011852391064167023,
-0.02045593596994877,
0.01291133277118206,
0.014113467186689377,
-0.04538054019212723,
-0.016201501712203026,
-0.06891392916440964,
0.026195263490080833,
-0.06626014411449432,
-0.0... | https://github.com/DreamMr/RAP |
88 | Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning | https://openreview.net/forum?id=M6L7Eaw9BW | [
"Fangwen Wu",
"Lechao Cheng",
"Shengeng Tang",
"Xiaofeng Zhu",
"Chaowei Fang",
"Dingwen Zhang",
"Meng Wang"
] | Oral | Class-incremental learning (CIL) seeks to enable a model to sequentially learn new classes while retaining knowledge of previously learned ones. Balancing flexibility and stability remains a significant challenge, particularly when the task ID is unknown. To address this, our study reveals that the gap in feature distr... | Class-incremental learning, continual learning | null | 2,505 | 2502.07560 | [
0.005663998890668154,
-0.04208245500922203,
-0.023852137848734856,
0.019246909767389297,
0.043917007744312286,
0.0014141451101750135,
0.06841679662466049,
0.01211283914744854,
0.00007952025043778121,
-0.040274810045957565,
-0.01456171553581953,
0.00637007737532258,
-0.08161727339029312,
-0... | https://github.com/fwu11/macil |
89 | Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark | https://openreview.net/forum?id=v26vwjxOEz | [
"Yunzhuo Hao",
"Jiawei Gu",
"Huichen Will Wang",
"Linjie Li",
"Zhengyuan Yang",
"Lijuan Wang",
"Yu Cheng"
] | Oral | The ability to organically reason over and with both text and images is a pillar of human intelligence, yet the ability of Multimodal Large Language Models (MLLMs) to perform such multimodal reasoning remains under-explored. Existing benchmarks often emphasize text-dominant reasoning or rely on shallow visual cues, fai... | Benchmark, Multimodal, Reasoning | We contribute a challenging multimodal reasoning benchmark. | 2,325 | 2501.05444 | [
0.0018832217901945114,
-0.010278221219778061,
-0.004030812066048384,
0.049561623483896255,
0.04196949675679207,
-0.011172821745276451,
0.025431517511606216,
0.018542926758527756,
-0.04597066342830658,
-0.006609773728996515,
-0.01128731481730938,
0.031664613634347916,
-0.055418189615011215,
... | https://github.com/EMMA-Bench/EMMA |
90 | Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance | https://openreview.net/forum?id=ybno0ZP44z | [
"Shogo Iwazaki",
"Shion Takeno"
] | Oral | We study the Gaussian process (GP) bandit problem, whose goal is to minimize regret under an unknown reward function lying in some reproducing kernel Hilbert space (RKHS).
The maximum posterior variance analysis is vital in analyzing near-optimal GP bandit algorithms such as maximum variance reduction (MVR) and phased... | Gaussian process bandits, kernel bandits, noiseless setting | null | 2,194 | 2502.06363 | [
-0.023258959874510765,
0.012110925279557705,
0.021399706602096558,
0.040659043937921524,
0.025348931550979614,
0.03151937201619148,
0.044089026749134064,
0.002242014277726412,
0.0028734724037349224,
-0.06477286666631699,
-0.03732379153370857,
0.013882121071219444,
-0.0538196824491024,
-0.0... | null |
91 | CollabLLM: From Passive Responders to Active Collaborators | https://openreview.net/forum?id=DmH4HHVb3y | [
"Shirley Wu",
"Michel Galley",
"Baolin Peng",
"Hao Cheng",
"Gavin Li",
"Yao Dou",
"Weixin Cai",
"James Zou",
"Jure Leskovec",
"Jianfeng Gao"
] | Oral | Large Language Models are typically trained with next-turn rewards, limiting their ability to optimize for long-term interaction. As a result, they often respond passively to ambiguous or open-ended user requests, failing to help users reach their ultimate intents and leading to inefficient conversations. To address th... | Human-centered Large Language Model, Multiturn Interaction, Collaborative Problem-Solving, Reinforcement Learning | CollabLLM is a unified fine-tuning framework that optimizes LLMs for effective and efficient multiturn collaboration with users. | 1,940 | 2502.00640 | [
-0.019403759390115738,
-0.04497873783111572,
0.0005353566957637668,
0.025915784761309624,
0.021354516968131065,
0.01011186745017767,
0.009430786594748497,
0.03728891909122467,
-0.017686087638139725,
-0.018862394616007805,
-0.033966463059186935,
0.02769932523369789,
-0.05478730425238609,
-0... | https://github.com/Wuyxin/collabllm |
92 | Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings | https://openreview.net/forum?id=V0w8Kj3K6L | [
"Angéline Pouget",
"Mohammad Yaghini",
"Stephan Rabanser",
"Nicolas Papernot"
] | Oral | Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct validation. We propose the _suitability filter_, a novel framework designed to detect performance deterioration by utilizing _s... | suitability, reliability, robustness, classifier, unlabeled data | The suitability filter detects performance degradation in machine learning models by comparing accuracy on unlabeled user data and labeled test data. | 1,934 | 2505.22356 | [
-0.015520550310611725,
-0.02327088639140129,
-0.008124344982206821,
0.014171960763633251,
0.08035266399383545,
0.022335663437843323,
-0.007276972755789757,
-0.03144684433937073,
-0.04286658763885498,
-0.03612739220261574,
0.00704122195020318,
0.03080708533525467,
-0.05881519988179207,
-0.0... | https://github.com/cleverhans-lab/suitability |
93 | Partition First, Embed Later: Laplacian-Based Feature Partitioning for Refined Embedding and Visualization of High-Dimensional Data | https://openreview.net/forum?id=6CwO5nVvku | [
"Erez Peterfreund",
"Ofir Lindenbaum",
"Yuval Kluger",
"Boris Landa"
] | Oral | Embedding and visualization techniques are essential for analyzing high-dimensional data, but they often struggle with complex data governed by multiple latent variables, potentially distorting key structural characteristics. This paper considers scenarios where the observed features can be partitioned into mutually ex... | data visualization, dimensionality reduction, manifold learning, data embedding, feature partitioning | We present a feature partitioning approach for embedding and visualizing multiple low-dimensional structures within high-dimensional data | 1,917 | null | [
-0.007627582643181086,
-0.036162301898002625,
0.020718861371278763,
0.039161939173936844,
0.04476344212889671,
0.05337826907634735,
0.014466870576143265,
-0.01375425886362791,
-0.02038944698870182,
-0.05472871661186218,
0.0023969023022800684,
-0.03686713054776192,
-0.08415422588586807,
0.0... | https://github.com/erezpeter/Feature_Partition |
94 | Blink of an eye: a simple theory for feature localization in generative models | https://openreview.net/forum?id=QvqnPVGWAN | [
"Marvin Li",
"Aayush Karan",
"Sitan Chen"
] | Oral | Large language models can exhibit unexpected behavior in the blink of an eye. In a recent computer use demo, a language model switched from coding to Googling pictures of Yellowstone, and these sudden shifts in behavior have also been observed in reasoning patterns and jailbreaks. This phenomenon is not unique to autor... | stochastic localization, theory of diffusion, large language models, interpretability, reasoning | A simple, general, and unifying theory for feature localization in language and diffusion models | 1,904 | 2502.00921 | [
-0.013237161561846733,
0.00513472780585289,
-0.009355620481073856,
0.051768045872449875,
0.027582772076129913,
0.03344337269663811,
0.037275925278663635,
0.04004138708114624,
-0.02852078154683113,
-0.02658463642001152,
-0.014168769121170044,
-0.00017876776109915227,
-0.06584669649600983,
-... | https://github.com/marvinli-harvard/critical-windows-lm |
95 | On Differential Privacy for Adaptively Solving Search Problems via Sketching | https://openreview.net/forum?id=kEn7Wt6Yj2 | [
"Shiyuan Feng",
"Ying Feng",
"George Zhaoqi Li",
"Zhao Song",
"David Woodruff",
"Lichen Zhang"
] | Oral | Recently differential privacy has been used for a number of streaming, data structure, and dynamic graph problems as a means of hiding the internal randomness of the data structure, so that multiple possibly adaptive queries can be made without sacrificing the correctness of the responses. Although these works use diff... | data structure, adaptive robustness, differential privacy | We develop data structures for search problems such as regression and nearest neighbor search that are robust against an adaptive adversary. | 1,638 | 2506.05503 | [
-0.030447781085968018,
0.0014767537359148264,
0.0032730200327932835,
0.07836337387561798,
0.05619911476969719,
0.03697753697633743,
0.035688240081071854,
-0.009683913551270962,
-0.02585800737142563,
-0.06066088005900383,
-0.012125000357627869,
-0.03488576039671898,
-0.06063583865761757,
0.... | null |
96 | Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards | https://openreview.net/forum?id=zf9zwCRKyP | [
"Yangsibo Huang",
"Milad Nasr",
"Anastasios Nikolas Angelopoulos",
"Nicholas Carlini",
"Wei-Lin Chiang",
"Christopher A. Choquette-Choo",
"Daphne Ippolito",
"Matthew Jagielski",
"Katherine Lee",
"Ken Liu",
"Ion Stoica",
"Florian Tramèr",
"Chiyuan Zhang"
] | Oral | It is now common to evaluate Large Language Models (LLMs) by having humans manually vote to evaluate model outputs, in contrast to typical benchmarks that evaluate knowledge or skill at some particular task. Chatbot Arena, the most popular benchmark of this type, ranks models by asking users to select the better respon... | Security, LLM leaderboard, LLM evaluation | null | 1,637 | 2501.07493 | [
-0.019309069961309433,
-0.04264960438013077,
0.002011202508583665,
0.039675191044807434,
0.00936762522906065,
-0.00571359833702445,
0.04569513723254204,
0.000013460569789458532,
-0.03479170799255371,
-0.0030824528075754642,
-0.037948768585920334,
0.02459915727376938,
-0.06590693444013596,
... | null |
97 | What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities | https://openreview.net/forum?id=4tFSKOY2mT | [
"Wendong Bu",
"Yang Wu",
"Qifan Yu",
"Minghe Gao",
"Bingchen Miao",
"Zhenkui Zhang",
"Kaihang Pan",
"liyunfei",
"Mengze Li",
"Wei Ji",
"Juncheng Li",
"Siliang Tang",
"Yueting Zhuang"
] | Oral | As multimodal large language models (MLLMs) advance, MLLM-based virtual agents have demonstrated remarkable performance. However, existing benchmarks face significant limitations, including uncontrollable task complexity, extensive manual annotation, and a lack of multidimensional evaluation. In response to these chall... | Virtual Agent; Digital Agent; Multidimensional Benchmark | null | 1,439 | 2506.08933 | [
-0.000245263654505834,
-0.02174264006316662,
0.023878159001469612,
0.03886210545897484,
0.022258702665567398,
0.0006736441282555461,
0.039135709404945374,
0.011561647057533264,
-0.024155013263225555,
-0.04383229464292526,
-0.0015289620496332645,
-0.007743433583527803,
-0.06910879909992218,
... | null |
98 | Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton | https://openreview.net/forum?id=LwQGRGJTHw | [
"Chengmei Niu",
"Zhenyu Liao",
"Zenan Ling",
"Michael W. Mahoney"
] | Oral | A substantial body of work in machine learning (ML) and randomized numerical linear algebra (RandNLA) has exploited various sorts of random sketching methodologies, including random sampling and random projection, with much of the analysis using Johnson--Lindenstrauss and subspace embedding techniques. Recent studies ... | Inversion bias, random matrix theory, randomized numerical linear algebra, random sampling, sub-sampled Newton | Use RMT to characterize inversion bias for random sampling, propose debiasing, and apply to establish problem-independent convergence rate for SSN. | 1,426 | 2502.13583 | [
-0.017938891425728798,
-0.04057541489601135,
-0.005176707170903683,
0.05715867877006531,
0.02687077410519123,
0.021107781678438187,
0.02669535204768181,
0.019668659195303917,
-0.027308497577905655,
-0.0619734451174736,
-0.005025265272706747,
-0.029645511880517006,
-0.06040424108505249,
-0.... | null |
99 | Normalizing Flows are Capable Generative Models | https://openreview.net/forum?id=2uheUFcFsM | [
"Shuangfei Zhai",
"Ruixiang ZHANG",
"Preetum Nakkiran",
"David Berthelot",
"Jiatao Gu",
"Huangjie Zheng",
"Tianrong Chen",
"Miguel Ángel Bautista",
"Navdeep Jaitly",
"Joshua M. Susskind"
] | Oral | Normalizing Flows (NFs) are likelihood-based models for continuous inputs. They have demonstrated promising results on both density estimation and generative modeling tasks, but have received relatively little attention in recent years. In this work, we demonstrate that NFs are more powerful than previously believed. W... | Normalizing flows, autoregressive models, likelihood estimation | We show that normalizing flows can work great as a generative modeling principle, and propose a simple architecture and set of techniques to achieve it. | 1,374 | 2412.06329 | [
0.028636621311306953,
-0.041483208537101746,
0.004399371333420277,
0.04593415930867195,
0.04374518617987633,
0.0618305429816246,
0.020943956449627876,
-0.0015397499082610011,
-0.020746489986777306,
-0.06115875765681267,
-0.0052260649390518665,
-0.029285598546266556,
-0.05722849816083908,
0... | https://github.com/apple/ml-tarflow |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.