| paper_id (uint32, 0-5.29k) | title (string, len 14-183) | authors (list, len 1-36) | abstract (string, len 246-3.59k) | type (string, 3 classes) | arxiv_id (string, len 10, ⌀) | github (string, 641 classes) | project_page (string, 244 classes) | space_ids (list, len 0-3) | model_ids (list, len 0-12) | dataset_ids (list, len 0-6) | embedding (list, len 768) |
|---|---|---|---|---|---|---|---|---|---|---|---|
100 | LLM at Network Edge: A Layer-wise Efficient Federated Fine-tuning Approach | [
"Jinglong Shen",
"Nan Cheng",
"Wenchao Xu",
"Haozhao Wang",
"Yifan guo",
"Jiajie Xu"
] | Fine-tuning large language models (LLMs) poses significant computational burdens, especially in federated learning (FL) settings. We introduce Layer-wise Efficient Federated Fine-tuning (LEFF), a novel method designed to enhance the efficiency of FL fine-tuning while preserving model performance and minimizing client-s... | poster | null | null | null | [] | [] | [] | [
-0.022736189886927605,
-0.07596378028392792,
0.029612267389893532,
0.018821949139237404,
0.05095618590712547,
0.014681443572044373,
0.011452685110270977,
-0.006121417973190546,
-0.02336670085787773,
-0.022895481437444687,
-0.002061639679595828,
0.012886454351246357,
-0.05056567117571831,
0... |
101 | Generalizable Hand-Object Modeling from Unconstrained RGB Images via 3D Gaussians | [
"Xingyu Liu",
"Pengfei Ren",
"Qi Qi",
"Haifeng Sun",
"Zirui Zhuang",
"Jing Wang",
"Jianxin Liao",
"Jingyu Wang"
] | Recent advances in hand–object interaction modeling have employed implicit representations, such as Signed Distance Functions (SDF), Neural Radiance Fields (NeRF), and 3D Gaussian Splatting (3DGS), to reconstruct hands and objects with arbitrary topology and photorealistic detail. However, these methods often rely on d... | poster | null | null | null | [] | [] | [] | [
0.0012285695411264896,
-0.0044937836937606335,
-0.015416021458804607,
0.001553670852445066,
0.026782358065247536,
0.03003804199397564,
0.02345452830195427,
0.04192386195063591,
-0.04578523337841034,
-0.05943213030695915,
0.001948790275491774,
-0.0009378368849866092,
-0.09566234797239304,
-... |
102 | Characterizing Dataset Bias via Disentangled Visual Concepts | [
"Jinho Choi",
"Hyesu Lim",
"Steffen Schneider",
"Jaegul Choo"
] | Dataset bias is ubiquitous in machine learning datasets. Yet, systematically identifying these biases is challenging without costly, fine-grained attribute annotations. We introduce ConceptScope, a framework for characterizing dataset bias by disentangling visual concepts using a Sparse Autoencoder. Our framework autom... | poster | null | null | null | [] | [] | [] | [
0.0117788752540946,
-0.02917582355439663,
-0.03788633272051811,
0.0785536840558052,
0.015248782932758331,
0.008135564625263214,
0.031123116612434387,
0.008587926626205444,
0.0018133624689653516,
-0.05249905213713646,
-0.049885235726833344,
0.006363994907587767,
-0.08623486757278442,
-0.008... |
103 | Dynamic Text Bundling Supervision for Zero-Shot Inference on Text-Attributed Graphs | [
"Yusheng Zhao",
"Qixin Zhang",
"Xiao Luo",
"Weizhi Zhang",
"Zhiping Xiao",
"Wei Ju",
"Philip S Yu",
"Ming Zhang"
] | Large language models (LLMs) have been used in many zero-shot learning problems, owing to their strong generalization ability. Recently, adopting LLMs in text-attributed graphs (TAGs) has drawn increasing attention. However, the adoption of LLMs faces two major challenges: limited information on graph structure and unrelia... | poster | null | null | null | [] | [] | [] | [
0.012522007338702679,
-0.01693497970700264,
-0.014521945267915726,
0.047772012650966644,
0.024375323206186295,
0.016033947467803955,
0.016604797914624214,
0.016988756135106087,
-0.015870505943894386,
-0.0033457945100963116,
-0.026263942942023277,
0.02203274704515934,
-0.08429475873708725,
... |
104 | Do LVLMs Truly Understand Video Anomalies? Revealing Hallucination via Co-Occurrence Patterns | [
"Menghao Zhang",
"Pengfei Ren",
"Kangheng Lin",
"Qi Qi",
"Haifeng Sun",
"Huazheng Wang",
"Zirui Zhuang",
"Lei Zhang",
"Jianxin Liao",
"Jingyu Wang"
] | Large Vision-Language Models (LVLMs) pretrained on large-scale multimodal data have shown promising capabilities in Video Anomaly Detection (VAD). However, their ability to reason about abnormal events based on scene semantics remains underexplored. In this paper, we investigate LVLMs’ behavior in VAD from a visual-tex... | poster | null | null | null | [] | [] | [] | [
0.02983327955007553,
0.010937456041574478,
0.024389130994677544,
0.03856651857495308,
0.04089369997382164,
-0.006343733984977007,
0.04130788519978523,
0.01984596811234951,
-0.0226399227976799,
-0.009936468675732613,
-0.042995862662792206,
0.029768437147140503,
-0.06517238914966583,
0.01269... |
105 | Dual Prototype-Enhanced Contrastive Framework for Class-Imbalanced Graph Domain Adaptation | [
"Xin Ma",
"Yifan Wang",
"Siyu Yi",
"Wei Ju",
"Junyu Luo",
"Yusheng Zhao",
"Xiao Luo",
"Jiancheng Lv"
] | Graph transfer learning, especially in unsupervised domain adaptation, focuses on transferring knowledge from a label-rich source domain graph to an unlabeled target domain graph. However, most existing methods assume a balanced label distribution in the source graph, often inconsistent with real-world scenarios where ... | poster | null | null | null | [] | [] | [] | [
0.001560286502353847,
-0.02564525045454502,
-0.004599357023835182,
0.0394609160721302,
0.018819335848093033,
0.01603536494076252,
0.03520344942808151,
-0.0180796068161726,
0.023145463317632675,
-0.021849006414413452,
0.005277925170958042,
0.0035578031092882156,
-0.06087329611182213,
0.0245... |
106 | Vinci: Deep Thinking in Text-to-Image Generation using Unified Model with Reinforcement Learning | [
"wang lin",
"Wentao Hu",
"Liyu Jia",
"Kaihang Pan",
"Majun Zhang",
"Zhou Zhao",
"Fei Wu",
"Jingyuan Chen",
"Hanwang Zhang"
] | With the continuous development of large language models and reasoning chain technologies, the potential of deep reasoning based on reinforcement learning has shown remarkable promise in multi-task scenarios. However, existing unified models have yet to achieve end-to-end integration in image generation and understand... | poster | null | null | null | [] | [] | [] | [
0.010559653863310814,
-0.021108033135533333,
-0.004817751701921225,
0.07894329726696014,
0.043360840529203415,
0.031917817890644073,
0.02037043310701847,
0.023569611832499504,
-0.03257197514176369,
-0.03994724899530411,
-0.0338003933429718,
0.032907988876104355,
-0.06195291876792908,
-0.01... |
107 | PolyJuice Makes It Real: Black-Box, Universal Red-Teaming for Synthetic Image Detectors | [
"Sepehr Dehdashtian",
"Mashrur Mahmud Morshed",
"Jacob Seidman",
"Gaurav Bharaj",
"Vishnu Boddeti"
] | Synthetic image detectors (SIDs) are a key defense against the risks posed by the growing realism of images from text-to-image (T2I) models. Red teaming improves SID’s effectiveness by identifying and exploiting their failure modes via misclassified synthetic images. However, existing red-teaming solutions (i) require ... | poster | 2509.15551 | null | null | [] | [] | [] | [
0.013224630616605282,
-0.03182435408234596,
-0.026339713484048843,
0.06386342644691467,
0.01989326812326908,
0.0121734868735075,
0.019960274919867516,
0.008344207890331745,
-0.027865217998623848,
-0.04264010116457939,
-0.013483851216733456,
0.009067905135452747,
-0.06422128528356552,
-0.02... |
108 | Unified 2D-3D Discrete Priors for Noise-Robust and Calibration-Free Multiview 3D Human Pose Estimation | [
"Geng Chen",
"Pengfei Ren",
"Xufeng Jian",
"Haifeng Sun",
"Menghao Zhang",
"Qi Qi",
"Zirui Zhuang",
"Jing Wang",
"Jianxin Liao",
"Jingyu Wang"
] | Multi-view 3D human pose estimation (HPE) leverages complementary information across views to improve accuracy and robustness. Traditional methods rely on camera calibration to establish geometric correspondences, which is sensitive to calibration accuracy and lacks flexibility in dynamic settings. Calibration-free app... | poster | null | null | null | [] | [] | [] | [
0.008566991426050663,
0.025927158072590828,
-0.013134937733411789,
0.0248859915882349,
0.014803661033511162,
0.0525418221950531,
0.03425762057304382,
-0.00286960625089705,
-0.03155972436070442,
-0.06305423378944397,
-0.015092572197318077,
-0.022459039464592934,
-0.08189447224140167,
-0.010... |
109 | Fix False Transparency by Noise Guided Splatting | [
"Aly El Hakie",
"Yiren Lu",
"Yu Yin",
"Michael Jenkins",
"Yehe Liu"
] | 3D Gaussian Splatting (3DGS) has demonstrated impressive capabilities in 3D reconstruction. However, its α-blending can induce 'false transparency' artifacts, particularly where low point cloud density in sparse or low-texture regions causes foreground objects to appear improperly transparent. This issue stems from an ... | poster | 2510.15736 | null | null | [] | [] | [] | [
0.02561839483678341,
0.031134827062487602,
-0.006871187128126621,
0.05065508559346199,
0.01721654273569584,
0.02002260461449623,
0.027627183124423027,
0.00974605605006218,
-0.05109626054763794,
-0.07766282558441162,
-0.039040666073560715,
0.003929757047444582,
-0.06006702780723572,
0.01343... |
110 | CADGrasp: Learning Contact and Collision Aware General Dexterous Grasping in Cluttered Scenes | [
"Jiyao Zhang",
"Zhiyuan Ma",
"Tianhao Wu",
"Zeyuan Chen",
"Hao Dong"
] | Dexterous grasping in cluttered environments presents substantial challenges due to the high degrees of freedom of dexterous hands, occlusion, and potential collisions arising from diverse object geometries and complex layouts. To address these challenges, we propose CADGrasp, a two-stage algorithm for general dexterou... | poster | null | null | null | [] | [] | [] | [
-0.023194899782538414,
0.007209766656160355,
-0.004484894685447216,
0.020419038832187653,
0.018975356593728065,
0.05587776377797127,
-0.01691794954240322,
0.015849154442548752,
-0.0401945486664772,
-0.06050486117601395,
-0.013326465152204037,
-0.03211810439825058,
-0.08436787128448486,
-0.... |
111 | Dynamic Test-Time Compute Scaling in Control Policy: Difficulty-Aware Stochastic Interpolant Policy | [
"Inkook Chun",
"Seungjae Lee",
"Michael Albergo",
"Saining Xie",
"Eric Vanden-Eijnden"
] | Diffusion- and flow-based policies deliver state-of-the-art performance on long-horizon robotic manipulation and imitation-learning tasks. However, these controllers employ a fixed inference budget at every control step, regardless of task complexity, leading to computational inefficiency for simple subtasks while pote... | poster | null | null | null | [] | [] | [] | [
-0.032075900584459305,
0.006434168666601181,
-0.027072260156273842,
0.0520763173699379,
0.035758472979068756,
0.0497237928211689,
0.034995339810848236,
-0.026947954669594765,
-0.034606028348207474,
-0.03984972462058067,
0.0041365306824445724,
-0.035594940185546875,
-0.0736515074968338,
0.0... |
112 | Structure-Aware Cooperative Ensemble Evolutionary Optimization on Combinatorial Problems with Multimodal Large Language Models | [
"Jie Zhao",
"Kang Cheong"
] | While evolutionary algorithms (EAs) have proven effective in exploring the vast solution spaces characteristic of combinatorial problems, traditional encoding schemes, such as binary or numerical representations, often fail to capture the intricate structural properties of networks. Through employing image-based encodings to pr... | poster | null | null | null | [] | [] | [] | [
-0.01700359769165516,
-0.02139097824692726,
0.010779792442917824,
0.03920334577560425,
0.0216620285063982,
0.026777980849146843,
0.019062183797359467,
0.020055435597896576,
-0.031903237104415894,
-0.03373393043875694,
0.019836239516735077,
-0.0032024274114519358,
-0.08666907250881195,
-0.0... |
113 | Uncertainty-Guided Exploration for Efficient AlphaZero Training | [
"Scott Cheng",
"Meng-Yu Tsai",
"Ding-Yong Hong",
"Mahmut T Kandemir"
] | AlphaZero has achieved remarkable success in complex decision-making problems through self-play and neural network training. However, its self-play process remains inefficient due to limited exploration of high-uncertainty positions, the overlooked runner-up decisions in Monte Carlo Tree Search (MCTS), and high varianc... | poster | null | null | null | [] | [] | [] | [
-0.007293249014765024,
-0.006177202798426151,
-0.006255936808884144,
0.05365288257598877,
0.06241811439394951,
0.02689853496849537,
0.030284924432635307,
-0.03401848301291466,
-0.009014446288347244,
-0.0554383359849453,
-0.040616247802972794,
-0.0041878907941281796,
-0.05410763993859291,
-... |
114 | Give Me FP32 or Give Me Death? Challenges and Solutions for Reproducible Reasoning | [
"Jiayi Yuan",
"Hao Li",
"Xinheng Ding",
"Wenya Xie",
"Yu-Jhe Li",
"Wentian Zhao",
"Kun Wan",
"Jing Shi",
"Xia Hu",
"Zirui Liu"
] | Large Language Models (LLMs) are now integral across various domains and have demonstrated impressive performance. Progress, however, rests on the premise that benchmark scores are both accurate and reproducible. We demonstrate that the reproducibility of LLM performance is fragile: changing system configuration such a... | oral | 2506.09501 | https://github.com/nanomaoli/llm_reproducibility | null | [] | [] | [] | [
-0.03170793130993843,
-0.010284109972417355,
-0.022107740864157677,
0.05421122536063194,
0.060533538460731506,
0.03967417776584625,
0.017933500930666924,
0.020440630614757538,
-0.022320164367556572,
-0.01435787696391344,
-0.002179366536438465,
0.023342804983258247,
-0.06553991138935089,
0.... |
115 | Learning Principles from Multi-modal Human Preference | [
"Jiayi Zhou",
"Jiaming Ji",
"Boyuan Chen",
"Jiapeng Sun",
"wenqi chen",
"Donghai Hong",
"Sirui Han",
"Yike Guo",
"Yaodong Yang"
] | Training multi-modal large language models (MLLMs) that align with human intentions is a long-term challenge. Traditional score-only reward models for alignment suffer from low accuracy, weak generalization, and poor interpretability, blocking the progress of alignment methods, e.g., reinforcement learning fro... | poster | 2505.18531 | null | https://generative-rlhf-v.github.io/ | [] | [] | [] | [
0.006831761449575424,
-0.006335608661174774,
0.030715150758624077,
0.04997448995709419,
0.021418757736682892,
0.011904089711606503,
0.021375633776187897,
0.04319910705089569,
-0.01552675012499094,
-0.005854872055351734,
-0.02778930962085724,
0.06641048192977905,
-0.08418161422014236,
-0.02... |
116 | Small Singular Values Matter: A Random Matrix Analysis of Transformer Models | [
"Max Staats",
"Matthias Thamm",
"Bernd Rosenow"
] | This work analyzes singular-value spectra of weight matrices in pretrained transformer models to understand how information is stored at both ends of the spectrum. Using Random Matrix Theory (RMT) as a zero information hypothesis, we associate agreement with RMT as evidence of randomness and deviations as evidence for... | poster | 2410.17770 | null | null | [] | [] | [] | [
-0.03361165523529053,
-0.03874479979276657,
0.0032721273601055145,
0.04840826243162155,
0.015657534822821617,
0.02223421260714531,
0.018979718908667564,
0.003480117302387953,
-0.040432896465063095,
-0.02568007819354534,
-0.03423454239964485,
0.043220534920692444,
-0.05273660644888878,
0.00... |
117 | Fully Dynamic Algorithms for Chamfer Distance | [
"Gramoz Goranci",
"Shaofeng Jiang",
"Peter Kiss",
"Eva Szilagyi",
"Qiaoyuan Yang"
] | We study the problem of computing Chamfer distance in the fully dynamic setting, where two set of points $A, B \subset \mathbb{R}^{d}$, each of size up to $n$, dynamically evolve through point insertions or deletions and the goal is to efficiently maintain an approximation to $dist_{\mathrm{CH}}(A,B) = \sum_{a \in A} \... | poster | null | null | null | [] | [] | [] | [
-0.03964874520897865,
-0.010838938876986504,
-0.019357047975063324,
0.04740862175822258,
0.03596637398004532,
0.0622139535844326,
0.025276994332671165,
0.005324761848896742,
-0.048414941877126694,
-0.05326910689473152,
-0.0064735449850559235,
-0.009952222928404808,
-0.06750230491161346,
0.... |
118 | Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback | [
"Jiaming Ji",
"Xinyu Chen",
"Rui Pan",
"Han Zhu",
"Jiahao Li",
"Donghai Hong",
"Boyuan Chen",
"Jiayi Zhou",
"Kaile Wang",
"Juntao Dai",
"Chi-Min Chan",
"Sirui Han",
"Yike Guo",
"Yaodong Yang"
] | Multimodal large language models (MLLMs) are essential for building general-purpose AI assistants; however, they pose increasing safety risks. How can we ensure safety alignment of MLLMs to prevent undesired behaviors? Going further, it is critical to explore how to fine-tune MLLMs to preserve capabilities while meetin... | poster | null | null | null | [] | [] | [] | [
0.008131963200867176,
0.007793517783284187,
0.0005029943422414362,
0.04132246971130371,
0.031003478914499283,
0.004794178530573845,
0.03637072071433067,
0.003852633759379387,
-0.03245868906378746,
-0.025012044236063957,
-0.021393876522779465,
0.04302815720438957,
-0.08318453282117844,
-0.0... |
119 | Under the Shadow: Exploiting Opacity Variation for Fine-grained Shadow Detection | [
"Xiaotian Qiao",
"Ke Xu",
"Xianglong Yang",
"Ruijie Dong",
"Xiaofang Xia",
"Jiangtao Cui"
] | Shadow characteristics are of great importance for scene understanding. Traditionally, shadow regions are considered as binary masks, often resulting in imprecise detection and suboptimal performance in tasks like shadow removal. We demonstrate that such an assumption oversimplifies light-object interactions in the scene... | poster | null | null | null | [] | [] | [] | [
0.03701941296458244,
0.006713354028761387,
0.02257504314184189,
0.04147312045097351,
0.05725548788905144,
0.010593538172543049,
0.025032173842191696,
0.016271406784653664,
-0.06615398824214935,
-0.05131668597459793,
-0.026165179908275604,
-0.013303451240062714,
-0.04688359424471855,
0.0055... |
120 | MODEM: A Morton-Order Degradation Estimation Mechanism for Adverse Weather Image Recovery | [
"Hainuo Wang",
"Qiming Hu",
"Xiaojie Guo"
] | Restoring images degraded by adverse weather remains a significant challenge due to the highly non-uniform and spatially heterogeneous nature of weather-induced artifacts, e.g., fine-grained rain streaks versus widespread haze. Accurately estimating the underlying degradation can intuitively provide restoration ... | poster | 2505.17581 | https://github.com/hainuo-wang/MODEM | null | [] | [] | [] | [
0.0028015857096761465,
-0.0006928797229193151,
0.004081913735717535,
0.04705587774515152,
0.06412531435489655,
-0.0025805386248975992,
0.052673567086458206,
0.012987076304852962,
-0.031623728573322296,
-0.04668474197387695,
0.021291499957442284,
0.027145756408572197,
-0.02529091015458107,
... |
121 | Dynamic Diameter in High-Dimensions against Adaptive Adversary and Beyond | [
"Kiarash Banihashem",
"Morteza Monemizadeh",
"MohammadTaghi Hajiaghayi",
"Peyman Jabbarzade",
"Samira Goudarzi",
"Jeff Giliberti"
] | In this paper, we study the fundamental problems of maintaining the diameter and a $k$-center clustering of a dynamic point set $P \subset \mathbb{R}^d$, where points may be inserted or deleted over time and the ambient dimension $d$ is not constant and may be high. Our focus is on designing algorithms that remain effe... | poster | null | null | null | [] | [] | [] | [
-0.016127943992614746,
-0.022837530821561813,
0.030240336433053017,
0.03846438229084015,
0.02525966800749302,
0.04341910406947136,
0.026810985058546066,
-0.01075252890586853,
-0.05906011536717415,
-0.05741291865706444,
-0.005739675369113684,
-0.03585699573159218,
-0.05449527129530907,
0.01... |
122 | VividFace: A Robust and High-Fidelity Video Face Swapping Framework | [
"Hao Shao",
"Shulun Wang",
"Yang Zhou",
"Guanglu Song",
"Dailan He",
"ZHUOFAN ZONG",
"Shuo Qin",
"Yu Liu",
"Hongsheng Li"
] | Video face swapping has seen increasing adoption in diverse applications, yet existing methods primarily trained on static images struggle to address temporal consistency and complex real-world scenarios. To overcome these limitations, we propose the first video face swapping framework, VividFace, a robust and high-fi... | poster | null | null | null | [] | [] | [] | [
0.03084617666900158,
-0.02391747757792473,
-0.03360912576317787,
0.05688280984759331,
0.025083351880311966,
0.02724738046526909,
0.020207073539495468,
0.011248652823269367,
-0.018859002739191055,
-0.05771350860595703,
-0.011005477979779243,
0.005903289653360844,
-0.08048814535140991,
-0.02... |
123 | Multiverse: Your Language Models Secretly Decide How to Parallelize and Merge Generation | [
"Xinyu Yang",
"Yuwei An",
"Hongyi Liu",
"Tianqi Chen",
"Beidi Chen"
] | Autoregressive Large Language Models (AR-LLMs) frequently exhibit implicit parallelism in sequential generation. Inspired by this, we introduce **Multiverse**, a new generative model that enables natively parallel generation. Multiverse internalizes a MapReduce paradigm, generating automatically through three stages: (... | spotlight | 2506.09991 | null | https://multiverse4fm.github.io/ | [] | [] | [] | [
-0.015504168346524239,
-0.00189800257794559,
0.0067654806189239025,
0.03189713880419731,
0.04140695929527283,
0.04663413017988205,
0.030297141522169113,
0.0354197733104229,
-0.014208799228072166,
-0.011238748207688332,
-0.008989308960735798,
0.024660862982273102,
-0.06976297497749329,
0.00... |
124 | KORGym: A Dynamic Game Platform for LLM Reasoning Evaluation | [
"Jiajun Shi",
"Jian Yang",
"Jiaheng Liu",
"Xingyuan Bu",
"Jiangjie Chen",
"Junting Zhou",
"Kaijing Ma",
"Zhoufutu Wen",
"Bingli Wang",
"Yancheng He",
"Liang Song",
"Hualei Zhu",
"Shilong Li",
"Xingjian Wang",
"Wei Zhang",
"Ruibin Yuan",
"Yifan Yao",
"Wenjun Yang",
"Yunli Wang",
... | Recent advancements in large language models (LLMs) underscore the need for more comprehensive evaluation methods to accurately assess their reasoning capabilities. Existing benchmarks are often domain-specific and thus cannot fully capture an LLM’s general reasoning potential. To address this limitation, we introduce ... | spotlight | 2505.14552 | https://github.com/multimodal-art-projection/korgym | null | [] | [] | [] | [
-0.039564210921525955,
-0.015187801793217659,
-0.01966571994125843,
0.03404562547802925,
0.06371156126260757,
-0.005853721871972084,
0.01717623509466648,
0.013160333968698978,
-0.0352022685110569,
0.010611330159008503,
-0.033673930913209915,
0.0561046302318573,
-0.049868594855070114,
-0.01... |
125 | Parameter Efficient Merging for Multimodal Large Language Models with Direction Robustness | [
"Fanhu Zeng",
"Haiyang Guo",
"Fei Zhu",
"Li Shen",
"Hao Tang"
] | Fine-tuning pre-trained models with custom data leads to numerous expert models on specific tasks. Merging models into one universal model to empower multi-task ability refraining from data leakage has gained popularity. With the expansion in data and model size, parameter efficient tuning becomes the common practice ... | spotlight | null | null | null | [] | [] | [] | [
-0.03687778860330582,
-0.0045268479734659195,
0.021265612915158272,
0.04352087154984474,
0.005086403805762529,
0.04235941544175148,
0.033337224274873734,
-0.00099219661206007,
-0.03985893726348877,
-0.042012669146060944,
-0.005633986089378595,
0.013816715218126774,
-0.0847492441534996,
-0.... |
126 | CAML: Collaborative Auxiliary Modality Learning for Multi-Agent Systems | [
"Rui Liu",
"Yu Shen",
"Peng Gao",
"Pratap Tokekar",
"Ming Lin"
] | Multi-modal learning has become a crucial technique for improving the performance of machine learning applications across domains such as autonomous driving, robotics, and perception systems. However, in certain scenarios, particularly in resource-constrained environments, some modalities available during training may ... | poster | 2502.17821 | null | null | [] | [] | [] | [
0.003472914220765233,
-0.011330366134643555,
-0.00651183258742094,
0.04392077773809433,
0.01329371053725481,
0.000277622661087662,
0.052589233964681625,
0.02733139880001545,
-0.0450095608830452,
-0.044946037232875824,
-0.03625452518463135,
0.029072456061840057,
-0.06925349682569504,
-0.007... |
127 | Single-Step Operator Learning for Conditioned Time-Series Diffusion Models | [
"Hui Chen",
"Vikas Singh"
] | Diffusion models have achieved significant success, yet their application to time series data, particularly with regard to efficient sampling, remains an active area of research. We describe an operator-learning approach for conditioned time-series diffusion models that gives efficient single-step generation by leverag... | poster | null | null | null | [] | [] | [] | [
-0.017213016748428345,
-0.01787443645298481,
0.012490593828260899,
0.02688554860651493,
0.06710611283779144,
0.04229946807026863,
0.04275178536772728,
-0.008404339663684368,
-0.013773499056696892,
-0.05656756833195686,
0.02923963963985443,
0.012252755463123322,
-0.052457164973020554,
-0.00... |
128 | A compressive-expressive communication framework for compositional representations | [
"Rafael Elberg",
"Felipe del Río",
"Mircea Petrache",
"Denis Parra"
] | Compositional generalization—the ability to interpret novel combinations of familiar elements—is a hallmark of human cognition and language. Despite recent advances, deep neural networks still struggle to acquire this property reliably. In this work, we introduce CELEBI (Compressive-Expressive Language Emergence throug... | poster | 2501.19182 | null | null | [] | [] | [] | [
0.015058610588312149,
-0.03272465616464615,
-0.03249167650938034,
0.03941919654607773,
0.03629114106297493,
0.05741827189922333,
0.013756483793258667,
0.002235736232250929,
-0.01267713401466608,
-0.04554330185055733,
-0.0011611961526796222,
-0.0017470406601205468,
-0.07914799451828003,
0.0... |
129 | RIGNO: A Graph-based Framework For Robust And Accurate Operator Learning For PDEs On Arbitrary Domains | [
"Sepehr Mousavi",
"Shizheng Wen",
"Levi Lingsch",
"Maximilian Herde",
"Bogdan Raonic",
"Siddhartha Mishra"
] | Learning the solution operators of PDEs on arbitrary domains is challenging due to the diversity of possible domain shapes, in addition to the often intricate underlying physics. We propose an end-to-end graph neural network (GNN) based neural operator to learn PDE solution operators from data on point clouds in arbitr... | poster | 2501.19205 | https://github.com/camlab-ethz/rigno | null | [] | [] | [
"shiwen0710/Datasets_for_GAOT"
] | [
-0.014658450148999691,
0.0006585872615687549,
0.026703614741563797,
0.05382165685296059,
0.03331950679421425,
0.04097979515790939,
0.0020535732619464397,
0.012866472825407982,
-0.053484510630369186,
-0.07431627064943314,
0.021832194179296494,
-0.00534930732101202,
-0.06157500669360161,
0.0... |
130 | Reasoning Can Hurt the Inductive Abilities of Large Language Models | [
"Haibo Jin",
"Peiyan Zhang",
"Man Luo",
"Haohan Wang"
] | Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning—inferring latent rules from sparse examples—remains limited. It is often assumed that chain-of-thought (CoT) prompting, as used in Large Reasoning Models (LRMs), enhances such reasoning. We inves... | poster | 2505.24225 | null | null | [] | [] | [] | [
-0.03503139317035675,
-0.02060837484896183,
-0.02889946848154068,
0.03295697644352913,
0.052544355392456055,
-0.02306751161813736,
0.03924983739852905,
0.010968562215566635,
-0.025755150243639946,
0.0032425171229988337,
-0.0068208868615329266,
0.05127255991101265,
-0.060017216950654984,
-0... |
131 | Optimizing Anytime Reasoning via Budget Relative Policy Optimization | [
"Penghui Qi",
"Zichen Liu",
"Tianyu Pang",
"Chao Du",
"Wee Sun Lee",
"Min Lin"
] | Scaling test-time compute is crucial for enhancing the reasoning capabilities of large language models (LLMs). Existing approaches typically employ reinforcement learning (RL) to maximize a verifiable reward obtained at the end of reasoning traces. However, such methods optimize only the final performance under a large... | poster | 2505.13438 | https://github.com/sail-sg/AnytimeReasoner | null | [] | [] | [] | [
-0.040487274527549744,
-0.004051859024912119,
-0.0025215165223926306,
0.06701352447271347,
0.045788608491420746,
0.031242715194821358,
0.012408032082021236,
0.012845572084188461,
-0.03699570521712303,
-0.006227331236004829,
-0.008761638775467873,
0.03254090994596481,
-0.033212270587682724,
... |
132 | Principled Long-Tailed Generative Modeling via Diffusion Models | [
"Pranoy Das",
"Kexin Fu",
"Abolfazl Hashemi",
"Vijay Gupta"
] | Deep generative models, particularly diffusion models, have achieved remarkable success across diverse domains but face significant challenges when trained on real-world, long-tailed datasets-where a few "head" classes dominate and many "tail" classes are underrepresented. This paper develops a rigorous theoretical fra... | poster | null | null | null | [] | [] | [] | [
-0.011424314230680466,
-0.05077527090907097,
0.01449196133762598,
0.0373593233525753,
0.057983435690402985,
0.010422863066196442,
0.00021316189668141305,
-0.005029270891100168,
0.004804920870810747,
-0.05566304177045822,
0.008226984180510044,
-0.01184815727174282,
-0.050860319286584854,
0.... |
133 | RvLLM: LLM Runtime Verification with Domain Knowledge | [
"Yedi Zhang",
"Sun Emma",
"Annabelle En",
"Jin Song Dong"
] | Large language models (LLMs) have emerged as a dominant AI paradigm due to their exceptional text understanding and generation capabilities. However, their tendency to generate inconsistent or erroneous outputs challenges their reliability, especially in high-stakes domains requiring accuracy and trustworthiness. Exist... | poster | 2505.18585 | null | null | [] | [] | [] | [
-0.0012013574596494436,
0.01337248645722866,
0.006412936840206385,
0.03277245908975601,
0.04390601068735123,
-0.008860369212925434,
0.04061417281627655,
-0.00229151314124465,
-0.04170389100909233,
0.015138829126954079,
-0.02989160269498825,
0.06326059252023697,
-0.05401061847805977,
0.0063... |
134 | Dynamic Siamese Expansion Framework for Improving Robustness in Online Continual Learning | [
"Fei Ye",
"Yulong Zhao",
"Qihe Liu",
"Junlin Chen",
"Adrian G. Bors",
"Jingling sun",
"Rongyao Hu",
"shijie zhou"
] | Continual learning requires the model to continually capture novel information without forgetting prior knowledge. Nonetheless, existing studies predominantly address catastrophic forgetting, often neglecting enhancements in model robustness. Consequently, these methodologies fall short in real-time applications, s... | poster | null | null | null | [] | [] | [] | [
0.011448554694652557,
-0.0309329554438591,
-0.004461078904569149,
0.029078224673867226,
0.023529179394245148,
0.02385711297392845,
0.041486840695142746,
0.007953738793730736,
-0.016062287613749504,
-0.036048613488674164,
-0.002706191036850214,
0.023170912638306618,
-0.06964807212352753,
0.... |
135 | Fading to Grow: Growing Preference Ratios via Preference Fading Discrete Diffusion for Recommendation | [
"Guoqing Hu",
"An Zhang",
"Shuchang Liu",
"Wenyu Mao",
"Jiancan Wu",
"Xun Yang",
"Xiang Li",
"Lantao Hu",
"Han Li",
"Kun Gai",
"Xiang Wang"
] | Recommenders aim to rank items from a discrete item corpus in line with user interests, yet suffer from extremely sparse user preference data. Recent advances in diffusion models have inspired diffusion-based recommenders, which alleviate sparsity by injecting noise during a forward process to prevent collapse of pertu... | poster | 2509.26063 | null | null | [] | [] | [] | [
0.020142775028944016,
-0.03425036743283272,
0.013738241046667099,
0.01360881794244051,
0.05840650200843811,
0.027681700885295868,
0.025354668498039246,
-0.018752431496977806,
0.0016874579014256597,
-0.07304611057043076,
-0.017724718898534775,
0.008686453104019165,
-0.03197022154927254,
0.0... |
136 | Synthetic-powered predictive inference | [
"Meshi Bashari",
"Roy Maor Lotan",
"Yonghoon Lee",
"Edgar Dobriban",
"Yaniv Romano"
] | Conformal prediction is a framework for predictive inference with a distribution-free, finite-sample guarantee. However, it tends to provide uninformative prediction sets when calibration data are scarce. This paper introduces Synthetic-powered predictive inference (SPPI), a novel framework that incorporates synthetic... | poster | 2505.13432 | https://github.com/meshiba/sppi | null | [] | [] | [] | [
-0.0025965042877942324,
-0.03489551693201065,
0.0015228104311972857,
0.04841499403119087,
0.0579918771982193,
-0.005332027096301317,
0.005465940572321415,
-0.009613379836082458,
-0.010615140199661255,
-0.03380310535430908,
0.018808800727128983,
0.004859717562794685,
-0.07752763479948044,
0... |
137 | Learning Chern Numbers of Multiband Topological Insulators with Gauge Equivariant Neural Networks | [
"Longde Huang",
"Oleksandr Balabanov",
"Hampus Linander",
"Mats Granath",
"Daniel Persson",
"Jan Gerken"
] | Equivariant network architectures are a well-established tool for predicting invariant or equivariant quantities. However, almost all learning problems considered in this context feature a global symmetry, i.e. each point of the underlying space is transformed with the same group element, as opposed to a local *gauge* ... | poster | null | null | null | [] | [] | [] | [
-0.007049790117889643,
-0.01756497099995613,
0.003726864466443658,
0.017901286482810974,
0.009353851899504662,
-0.02184288017451763,
0.02487519383430481,
0.009047369472682476,
-0.02881842851638794,
-0.027893276885151863,
-0.022237347438931465,
-0.02314467914402485,
-0.03615616634488106,
0.... |
138 | RadZero: Similarity-Based Cross-Attention for Explainable Vision-Language Alignment in Radiology with Zero-Shot Multi-Task Capability | [
"Jonggwon Park",
"Soobum Kim",
"Byungmu Yoon",
"Kyoyun Choi"
] | Recent advancements in multi-modal models have significantly improved vision-language (VL) alignment in radiology. However, existing approaches struggle to effectively utilize complex radiology reports for learning and offer limited interpretability through attention probability visualizations. To address these challen... | poster | 2504.07416 | null | null | [] | [] | [] | [
0.0345832034945488,
0.01737230271100998,
0.015667585656046867,
0.027518784627318382,
0.01907437853515148,
0.02291213907301426,
0.014297336339950562,
0.019447267055511475,
-0.013668756932020187,
-0.03878181055188179,
-0.029084235429763794,
0.017603369429707527,
-0.04646801948547363,
0.01133... |
139 | Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning | [
"Jiayu Wang",
"Yifei Ming",
"Zixuan Ke",
"Caiming Xiong",
"Shafiq Joty",
"Aws Albarghouthi",
"Frederic Sala"
] | Reinforcement learning (RL) has become the dominant paradigm for endowing language models with advanced reasoning capabilities. Despite the substantial empirical gains demonstrated by RL-based training methods like GRPO, a granular understanding of their advantages is still lacking. To address this gap, we introduce a ... | poster | 2506.04723 | null | null | [] | [] | [] | [
-0.03458452597260475,
-0.013721434399485588,
-0.022389747202396393,
0.05111699923872948,
0.0716206356883049,
0.017660794779658318,
0.02290407568216324,
0.011434550397098064,
-0.05456579104065895,
0.0006247893907129765,
-0.026020286604762077,
0.06470993161201477,
-0.06905777752399445,
-0.01... |
140 | SingRef6D: Monocular Novel Object Pose Estimation with a Single RGB Reference | [
"Jiahui Wang",
"Haiyue Zhu",
"Haoren Guo",
"Abdullah Al Mamun",
"Cheng Xiang",
"Tong Heng LEE"
] | Recent 6D pose estimation methods demonstrate notable performance but still face some practical limitations. For instance, many of them rely heavily on sensor depth, which may fail with challenging surface conditions, such as transparent or highly reflective materials. In the meantime, RGB-based solutions provide less ... | poster | 2509.21927 | null | null | [] | [] | [] | [
0.007889926433563232,
0.0037246483843773603,
0.014100686646997929,
0.04391804337501526,
0.011334444396197796,
0.06272991746664047,
0.01186996977776289,
-0.0058506024070084095,
-0.04135051369667053,
-0.03909916430711746,
-0.0031801676377654076,
-0.005912810564041138,
-0.07340546697378159,
-... |
141 | Memory-Augmented Potential Field Theory: A Framework for Adaptive Control in Non-Convex Domains | [
"Dongzhe Zheng",
"Wenjie Mei"
] | Stochastic optimal control methods often struggle in complex non-convex landscapes, frequently becoming trapped in local optima due to their inability to learn from historical trajectory data. This paper introduces Memory-Augmented Potential Field Theory, a unified mathematical framework that integrates historical expe... | poster | 2509.19672 | null | null | [] | [] | [] | [
-0.05991525202989578,
0.015173683874309063,
-0.007494110614061356,
0.03890938311815262,
0.05239726975560188,
0.0119617385789752,
0.00266929529607296,
-0.004326708614826202,
-0.04160155728459358,
-0.059464212507009506,
-0.016522105783224106,
0.012856489978730679,
-0.0684957280755043,
-0.006... |
142 | Bridging the Expressivity Gap: Provably Tractable SHAP Explanations for Tensor Networks | [
"Reda Marzouk",
"Shahaf Bassan",
"Guy Katz"
] | Although Shapley additive explanations (SHAP) can be computed in polynomial time for simple models like decision trees, they unfortunately become NP-hard to compute for more expressive black-box models like neural networks - where generating explanations is often most critical. In this work, we analyze the problem of c... | poster | null | null | null | [] | [] | [] | [
-0.03463476151227951,
-0.02860473096370697,
-0.006434899289160967,
0.055300455540418625,
0.004461180418729782,
0.01330209244042635,
0.02460155077278614,
-0.005034313071519136,
-0.028254434466362,
-0.029559455811977386,
-0.005512355826795101,
0.0285661444067955,
-0.05741890147328377,
0.0299... |
143 | Smooth Regularization for Efficient Video Recognition | [
"Gil Goldman",
"Raja Giryes",
"Mahadev Satyanarayanan"
] | We propose a smooth regularization technique that instills a strong temporal inductive bias in video recognition models, particularly benefiting lightweight architectures. Our method encourages smoothness in the intermediate-layer embeddings of consecutive frames by modeling their changes as a Gaussian Random Walk (GRW). Th... | poster | null | null | null | [] | [] | [] | [
0.02208280749619007,
-0.03264719247817993,
0.021389784291386604,
0.05612894520163536,
0.0337015837430954,
0.022753285244107246,
0.05052986368536949,
0.010407358407974243,
-0.05407565459609032,
-0.05304953455924988,
0.007928269915282726,
-0.035748790949583054,
-0.059231627732515335,
-0.0109... |
144 | Reparameterized LLM Training via Orthogonal Equivalence Transformation | [
"Zeju Qiu",
"Simon Buchholz",
"Tim Xiao",
"Maximilian Dax",
"Bernhard Schölkopf",
"Weiyang Liu"
] | While Large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a novel reparameterized training algorithm that uses orthogonal equiva... | poster | 2506.08001 | https://github.com/Sphere-AI-Lab/poet | https://spherelab.ai/poet/ | [] | [] | [] | [
-0.03845817968249321,
-0.039771027863025665,
-0.017236242070794106,
0.010514591820538044,
0.02837272733449936,
0.05902737379074097,
0.030994638800621033,
-0.0029541729018092155,
-0.01974720135331154,
-0.012723279185593128,
-0.016130391508340836,
0.0071970741264522076,
-0.0710965096950531,
... |
145 | Does Representation Guarantee Welfare? | [
"Jakob de Raaij",
"Ariel Procaccia",
"Alexandros Psomas"
] | A panel satisfies *descriptive representation* when its composition reflects the population. We examine the role of descriptive representation in collective decision making through an optimization lens, asking whether representative panels make decisions that maximize social welfare for the underlying population. Our m... | poster | null | null | null | [] | [] | [] | [
-0.024147288873791695,
-0.033484045416116714,
-0.017140010371804237,
0.03793329373002052,
0.010183846578001976,
0.0426488071680069,
-0.0027016522362828255,
-0.006286307238042355,
-0.06274213641881943,
-0.02009119652211666,
-0.046872515231370926,
-0.024619504809379578,
-0.0725574716925621,
... |
146 | Prot2Text-V2: Protein Function Prediction with Multimodal Contrastive Alignment | [
"Xiao Fei",
"Michail Chatzianastasis",
"Sarah Carneiro",
"Hadi Abdine",
"Lawrence Petalidis",
"Michalis Vazirgiannis"
] | Predicting protein function from sequence is a central challenge in computational biology. While existing methods rely heavily on structured ontologies or similarity-based techniques, they often lack the flexibility to express structure-free functional descriptions and novel biological functions. In this work, we intro... | poster | 2505.11194 | https://github.com/colinfx/prot2text-v2 | null | [] | [
"xiao-fei/Prot2Text-V2-11B-Instruct-hf"
] | [] | [
-0.01237596943974495,
-0.010326842777431011,
-0.0011616620467975736,
0.026956630870699883,
0.03543037176132202,
0.014966954477131367,
0.048854175955057144,
0.017306383699178696,
0.0006061427993699908,
0.0064995549619197845,
0.025280088186264038,
0.021509414538741112,
-0.06232109293341637,
... |
147 | Discrete Neural Flow Samplers with Locally Equivariant Transformer | [
"Zijing Ou",
"Ruixiang ZHANG",
"Yingzhen Li"
] | Sampling from unnormalised discrete distributions is a fundamental problem across various domains. While Markov chain Monte Carlo offers a principled approach, it often suffers from slow mixing and poor convergence. In this paper, we propose Discrete Neural Flow Samplers (DNFS), a trainable and efficient framework for d... | poster | 2505.17741 | null | null | [] | [] | [] | [
0.002068605739623308,
-0.02781876176595688,
0.002371955430135131,
0.05190310999751091,
0.019676409661769867,
0.06311425566673279,
0.008636769838631153,
-0.005630460102111101,
-0.021283909678459167,
-0.03894760087132454,
0.0207864660769701,
-0.010432307608425617,
-0.061535969376564026,
0.00... |
148 | Shapley-Coop: Credit Assignment for Emergent Cooperation in Self-Interested LLM Agents | [
"Yun Hua",
"Haosheng Chen",
"Shiqin Wang",
"Wenhao Li",
"Xiangfeng Wang",
"Jun Luo"
] | Large Language Models (LLMs) are increasingly deployed as autonomous agents in multi-agent systems, and promising coordination has been demonstrated in handling complex tasks under predefined roles and scripted workflows. However, significant challenges remain in open-ended environments, where agents are inherently self... | poster | null | null | null | [] | [] | [] | [
-0.03191116079688072,
-0.029478246346116066,
-0.023158326745033264,
0.01970072090625763,
0.04167567938566208,
0.006446814630180597,
-0.0002884128480218351,
0.01205710880458355,
-0.02841818332672119,
-0.018969733268022537,
-0.03617047145962715,
0.0009060479351319373,
-0.07625758647918701,
0... |
149 | LOPT: Learning Optimal Pigovian Tax in Sequential Social Dilemmas | [
"Yun Hua",
"Shang Gao",
"Wenhao Li",
"Haosheng Chen",
"Bo Jin",
"Xiangfeng Wang",
"Jun Luo",
"Hongyuan Zha"
] | Multi-agent reinforcement learning (MARL) has emerged as a powerful framework for modeling autonomous agents that independently optimize their individual objectives. However, in mixed-motive MARL environments, rational self-interested behaviors often lead to collectively suboptimal outcomes, situations commonly referred... | poster | null | null | null | [] | [] | [] | [
-0.024276090785861015,
-0.030835814774036407,
0.0009738060180097818,
0.036673106253147125,
0.02783442847430706,
0.026507699862122536,
0.01148261222988367,
-0.0006311113247647882,
-0.027667442336678505,
-0.036092359572649,
-0.011928003281354904,
0.0012263640528544784,
-0.07537536323070526,
... |
150 | Beyond Last-Click: An Optimal Mechanism for Ad Attribution | [
"Nan An",
"Weian Li",
"Qi Qi",
"Changyuan Yu",
"Liang Zhang"
] | Accurate attribution for multiple platforms is critical for evaluating performance-based advertising. However, existing attribution methods rely heavily on the heuristic methods, e.g., Last-Click Mechanism (LCM) which always allocates the attribution to the platform with the latest report, lacking theoretical guarantee... | poster | null | null | null | [] | [] | [] | [
-0.0007250129710882902,
0.00965285487473011,
-0.0033119346480816603,
0.022020626813173294,
0.02609635703265667,
-0.002811375306919217,
0.01712750270962715,
0.012986021116375923,
-0.02062293142080307,
-0.04581008478999138,
-0.008373100310564041,
-0.007254316005855799,
-0.03416915237903595,
... |
151 | LLM Interpretability with Identifiable Temporal-Instantaneous Representation | [
"Xiangchen Song",
"Jiaqi Sun",
"Zijian Li",
"Yujia Zheng",
"Kun Zhang"
] | Despite Large Language Models' remarkable capabilities, understanding their internal representations remains challenging. Mechanistic interpretability tools such as sparse autoencoders (SAEs) were developed to extract interpretable features from LLMs but lack temporal dependency modeling, instantaneous relation represe... | poster | 2509.23323 | null | null | [] | [] | [] | [
-0.0021341682877391577,
-0.011586886830627918,
-0.021347614005208015,
0.035611703991889954,
0.04862833023071289,
0.03599812462925911,
0.03919578716158867,
0.02163124643266201,
-0.02401133067905903,
0.008930264972150326,
0.001623801072128117,
0.014285912737250328,
-0.02841208688914776,
0.02... |
152 | AlignAb: Pareto-Optimal Energy Alignment for Designing Nature-Like Antibodies | [
"Yibo Wen",
"Chenwei Xu",
"Jerry Yao-Chieh Hu",
"Kaize Ding",
"Han Liu"
] | We present a three-stage framework for training deep learning models specializing in antibody sequence-structure co-design. We first pre-train a language model using millions of antibody sequences. Then, we employ the learned representations to guide the training of a diffusion model for joint optimization over both ... | poster | 2412.20984 | null | null | [] | [] | [] | [
-0.0005739768967032433,
0.0011920969700440764,
-0.005370167084038258,
0.019486695528030396,
0.01877228543162346,
0.020387204363942146,
0.040300846099853516,
-0.0031114828307181597,
0.01620510034263134,
-0.029086068272590637,
0.01323497761040926,
0.013890672475099564,
-0.10153983533382416,
... |
153 | SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs | [
"Jinwoo Park",
"Seunggeun Cho",
"Dongsu Han"
] | Large language models (LLMs) power many modern applications, but serving them at scale remains costly and resource-intensive. Current server-centric systems overlook consumer-grade GPUs at the edge. We introduce SpecEdge, an edge-assisted inference framework that splits LLM workloads between edge and server GPUs using ... | spotlight | 2505.17052 | null | null | [] | [] | [] | [
-0.044682618230581284,
-0.04287751764059067,
0.019216759130358696,
0.032415296882390976,
0.04493117332458496,
0.025841426104307175,
0.009828981943428516,
0.028171516954898834,
-0.029939871281385422,
-0.025513945147395134,
0.003618166083469987,
0.004708854481577873,
-0.06540513038635254,
0.... |
154 | Relaxing partition admissibility in Cluster-DAGs: a causal calculus with arbitrary variable clustering | [
"Clément Yvernes",
"Emilie Devijver",
"Adèle Ribeiro",
"Marianne Clausel",
"Eric Gaussier"
] | Cluster DAGs (C-DAGs) provide an abstraction of causal graphs in which nodes represent clusters of variables, and edges encode both cluster-level causal relationships and dependencies arising from unobserved confounding. C-DAGs define an equivalence class of acyclic causal graphs that agree on cluster-level relationship... | poster | null | null | null | [] | [] | [] | [
-0.019946228712797165,
-0.004704977851361036,
-0.02347908541560173,
0.03823539614677429,
0.042654555290937424,
0.023473935201764107,
0.024153495207428932,
0.008909638039767742,
-0.006699869409203529,
-0.03220214694738388,
-0.010584404692053795,
0.0062763988971710205,
-0.05942752584815025,
... |
155 | Topology-Aware Conformal Prediction for Stream Networks | [
"Jifan Zhang",
"Fangxin Wang",
"Zihe Song",
"Philip S Yu",
"Kaize Ding",
"Shixiang Zhu"
] | Stream networks, a unique class of spatiotemporal graphs, exhibit complex directional flow constraints and evolving dependencies, making uncertainty quantification a critical yet challenging task. Traditional conformal prediction methods struggle in this setting due to the need for joint predictions across multiple int... | poster | 2503.04981 | null | null | [] | [] | [] | [
0.007094257976859808,
-0.028336789458990097,
0.02036214992403984,
0.02148127742111683,
0.033351797610521317,
0.013227897696197033,
0.03877175971865654,
0.017256388440728188,
-0.003344477852806449,
-0.059128109365701675,
0.023192498832941055,
-0.039701733738183975,
-0.08859783411026001,
0.0... |
156 | On topological descriptors for graph products | [
"Mattie Ji",
"Amauri Souza",
"Vikas Garg"
] | Topological descriptors have been increasingly used for capturing multiscale structural information in relational data. In this work, we consider various filtrations on (Cartesian) graph products and their effect on the outputs of topological descriptors - the Euler characteristics (EC) and persistent homology (PH)... | poster | null | null | null | [] | [] | [] | [
0.0006877875421196222,
0.019233619794249535,
0.032129835337400436,
0.047146059572696686,
0.039956219494342804,
0.025342974811792374,
0.012401880696415901,
0.013779542408883572,
0.007229152601212263,
-0.03573724254965782,
-0.013496996834874153,
-0.03159767761826515,
-0.09614390134811401,
0.... |
157 | Graph-based Symbolic Regression with Invariance and Constraint Encoding | [
"Ziyu Xiang",
"Kenna Ashen",
"Xiaofeng Qian",
"Xiaoning Qian"
] | Symbolic regression (SR) offers interpretable alternatives to `black-box' solutions by providing analytical expressions, enabling the identification of key functional dependencies in complex systems. In this paper, we identify two key limiting factors in existing SR methods: (a) redundant representations arising from the inabilit... | poster | null | null | null | [] | [] | [] | [
-0.014336020685732365,
-0.010132214985787868,
-0.023474447429180145,
0.03148433566093445,
0.04198237136006355,
0.056083016097545624,
0.04459560662508011,
-0.03203749656677246,
-0.023130329325795174,
-0.027776964008808136,
-0.0022077409084886312,
0.02747231535613537,
-0.06517745554447174,
0... |
158 | Less is More: an Attention-free Sequence Prediction Modeling for Offline Embodied Learning | [
"Wei Huang",
"Jianshu Zhang",
"Leiyu Wang",
"Heyue Li",
"Luoyi Fan",
"Yichen Zhu",
"Nanyang Ye",
"Qinying Gu"
] | Offline reinforcement learning (offline RL) is increasingly approached as a sequence modeling task, with methods leveraging advanced architectures like Transformers to capture trajectory dependencies. Despite significant progress, the mechanisms underlying their effectiveness and limitations remain insufficiently under... | poster | null | null | null | [] | [] | [] | [
-0.016858156770467758,
-0.027557017281651497,
-0.014072800055146217,
0.02982219122350216,
0.01712285913527012,
0.014929363504052162,
0.00888362992554903,
0.013388288207352161,
-0.04566606134176254,
-0.018358999863266945,
-0.04025011509656906,
0.03017744980752468,
-0.062099479138851166,
-0.... |
159 | MEIcoder: Decoding Visual Stimuli from Neural Activity by Leveraging Most Exciting Inputs | [
"Jan Sobotka",
"Luca Baroni",
"Ján Antolík"
] | Decoding visual stimuli from neural population activity is crucial for understanding the brain and for applications in brain-machine interfaces. However, such biological data is often scarce, particularly in primates or humans, where high-throughput recording techniques, such as two-photon imaging, remain challenging o... | poster | null | null | null | [] | [] | [] | [
-0.009306954219937325,
-0.015104260295629501,
-0.013970891945064068,
0.02721228078007698,
0.035108212381601334,
0.034206416457891464,
0.026434404775500298,
0.005715203937143087,
-0.03289157152175903,
-0.05141552537679672,
-0.0033976200502365828,
-0.0068281665444374084,
-0.043690286576747894,... |
160 | Investigating and Mitigating Catastrophic Forgetting in Medical Knowledge Injection through Internal Knowledge Augmentation Learning | [
"Yuxuan Zhou",
"Xien Liu",
"Xiao Zhang",
"Chen Ning",
"Shijin Wang",
"Guoping Hu",
"Ji Wu"
] | Large Language Models (LLMs) must possess sufficient and comprehensive medical knowledge to effectively address real-world clinical problems. Though domain-specific fine-tuning can effectively inject medical knowledge into LLMs, it often leads to catastrophic forgetting of previously acquired knowledge and instruction ... | poster | null | null | null | [] | [] | [] | [
-0.011204171925783157,
-0.003817216260358691,
-0.007790481671690941,
0.05126861110329628,
0.07378546893596649,
-0.008416586555540562,
0.050410643219947815,
0.001272054621949792,
-0.020053353160619736,
0.0026778005994856358,
-0.01699691079556942,
0.033858414739370346,
-0.039914775639772415,
... |
161 | It’s Hard to Be Normal: The Impact of Noise on Structure-agnostic Estimation | [
"Jikai Jin",
"Lester Mackey",
"Vasilis Syrgkanis"
] | Structure-agnostic causal inference studies the statistical limits of treatment effect estimation, when given access to black-box ML models that estimate nuisance components of the data generating process, such as estimates of the outcome regression and the treatment propensity. Here, we find that the answer depends in... | poster | null | null | null | [] | [] | [] | [
-0.0024057417176663876,
-0.010051325894892216,
-0.01935185119509697,
0.006432252004742622,
0.017414577305316925,
0.03492346778512001,
0.0592307522892952,
-0.009895792230963707,
-0.011901570484042168,
-0.037701964378356934,
0.00959085114300251,
-0.0011890835594385862,
-0.08525728434324265,
... |
162 | SHF: Symmetrical Hierarchical Forest with Pretrained Vision Transformer Encoder for High-Resolution Medical Segmentation | [
"Enzhi Zhang",
"Peng Chen",
"Rui Zhong",
"Du Wu",
"Jun Igarashi",
"Isaac Lyngaas",
"Xiao Wang",
"Masaharu Munetomo",
"Mohamed Wahib"
] | This paper presents a novel approach to addressing the long-sequence problem in high-resolution medical images for Vision Transformers (ViTs). Using smaller patches as tokens can enhance ViT performance, but quadratically increases computation and memory requirements. Therefore, the common practice for applying ViTs to... | spotlight | null | null | null | [] | [] | [] | [
0.006553517188876867,
-0.02230941876769066,
0.02159513719379902,
-0.000811294186860323,
0.016113346442580223,
0.056905921548604965,
0.024357710033655167,
0.0023093954659998417,
-0.01998102106153965,
-0.07503416389226913,
0.01032420713454485,
-0.0019072117283940315,
-0.02934299223124981,
0.... |
163 | Uni-MuMER: Unified Multi-Task Fine-Tuning of Vision-Language Model for Handwritten Mathematical Expression Recognition | [
"Yu Li",
"Jin Jiang",
"Jianhua Zhu",
"Shuai Peng",
"Baole",
"Yuxuan Zhou",
"Liangcai Gao"
] | Handwritten Mathematical Expression Recognition (HMER) remains a persistent challenge in Optical Character Recognition(OCR) due to the inherent freedom of symbol layout and variability in handwriting styles. Prior methods have faced performance bottlenecks, proposing isolated architectural modifications that are diffic... | spotlight | null | null | null | [] | [] | [] | [
0.0028628415893763304,
0.016865210607647896,
0.008162271231412888,
0.008063185960054398,
0.03920866549015045,
0.038228753954172134,
0.009764881804585457,
0.022952888160943985,
-0.03320658579468727,
-0.03623450547456741,
-0.01223659422248602,
-0.002658882411196828,
-0.0625731498003006,
-0.0... |
164 | Graph Persistence goes Spectral | [
"Mattie Ji",
"Amauri Souza",
"Vikas Garg"
] | Including intricate topological information (e.g., cycles) provably enhances the expressivity of message-passing graph neural networks (GNNs) beyond the Weisfeiler-Leman (WL) hierarchy. Consequently, Persistent Homology (PH) methods are increasingly employed for graph representation learning. In this context, recent wo... | poster | 2506.06571 | null | null | [] | [] | [] | [
-0.016766278073191643,
-0.02299635298550129,
0.008183909580111504,
0.05622177571058273,
0.011237733066082,
0.008304948918521404,
0.03308597579598427,
0.006611536256968975,
0.003026003949344158,
-0.06155412644147873,
0.02009720914065838,
-0.01220305822789669,
-0.08954233676195145,
0.0147741... |
165 | Chain of Execution Supervision Promotes General Reasoning in Large Language Models | [
"Nuo Chen",
"Zehua Li",
"Keqin Bao",
"Junyang Lin",
"Dayiheng Liu"
] | Building robust and general reasoning ability is a central goal in the development of large language models (LLMs). Recent efforts increasingly turn to code as a rich training source, given its inherent logical structure and diverse reasoning paradigms—such as divide-and-conquer, topological ordering, and enumeration. ... | poster | null | null | null | [] | [] | [] | [
-0.005992915015667677,
-0.013003640808165073,
-0.03781808167695999,
0.026943422853946686,
0.060795847326517105,
-0.008600492030382156,
0.04304874315857887,
0.016202418133616447,
-0.018341965973377228,
0.001522177248261869,
-0.02115011028945446,
0.012787316925823689,
-0.06513626873493195,
-... |
166 | FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks | [
"Luca Della Libera",
"Francesco Paissan",
"Cem Subakan",
"Mirco Ravanelli"
] | Large language models have revolutionized natural language processing through self-supervised pretraining on massive datasets. Inspired by this success, researchers have explored adapting these methods to speech by discretizing continuous audio into tokens using neural audio codecs. However, existing approaches face li... | poster | 2502.04465 | null | https://lucadellalib.github.io/focalcodec-web/ | [] | [
"lucadellalib/focalcodec_25hz",
"lucadellalib/focalcodec_12_5hz",
"lucadellalib/focalcodec_50hz",
"lucadellalib/focalcodec_50hz_65k_causal",
"lucadellalib/focalcodec_50hz_4k_causal",
"lucadellalib/focalcodec_50hz_2k_causal"
] | [] | [
-0.03172341734170914,
-0.04952114447951317,
-0.016214001923799515,
0.019185451790690422,
0.03073807619512081,
0.045140232890844345,
0.014508205465972424,
0.021425902843475342,
-0.017038101330399513,
-0.03220443055033684,
-0.03457365185022354,
0.012550363317131996,
-0.04057365283370018,
0.0... |
167 | Stable Matching with Ties: Approximation Ratios and Learning | [
"Shiyun Lin",
"Simon Mauras",
"Nadav Merlis",
"Vianney Perchet"
] | We study matching markets with ties, where workers on one side of the market may have tied preferences over jobs, determined by their matching utilities. Unlike classical two-sided markets with strict preferences, no single stable matching exists that is utility-maximizing for all workers. To address this challenge, we... | poster | 2411.03270 | null | null | [] | [] | [] | [
-0.01242143101990223,
0.004243134520947933,
0.007001516409218311,
0.02599995583295822,
0.05511609464883804,
0.05049893632531166,
-0.0030820933170616627,
-0.0016562428791075945,
-0.013916783034801483,
-0.04792794957756996,
-0.025688188150525093,
-0.0004469880077522248,
-0.10031648725271225,
... |
168 | SPOT-Trip: Dual-Preference Driven Out-of-Town Trip Recommendation | [
"Yinghui Liu",
"Hao Miao",
"Guojiang Shen",
"Yan Zhao",
"Xiangjie Kong",
"Ivan Lee"
] | Out-of-town trip recommendation aims to generate a sequence of Points of Interest (POIs) for users traveling from their hometowns to previously unvisited regions based on personalized itineraries, e.g., origin, destination, and trip duration. Modeling the complex user preferences--which often exhibit a two-fold nature ... | poster | null | null | null | [] | [] | [] | [
-0.017803024500608444,
-0.026346281170845032,
0.034222476184368134,
0.03239263594150543,
0.04144776985049248,
-0.0019634491764009,
0.02291976846754551,
0.03402907773852348,
0.007870616391301155,
-0.020814815536141396,
-0.02475067414343357,
0.005095882806926966,
-0.07871299982070923,
-0.016... |
169 | GraphMaster: Automated Graph Synthesis via LLM Agents in Data-Limited Environments | [
"Enjun Du",
"Xunkai Li",
"Tian Jin",
"Zhihan Zhang",
"Rong-Hua Li",
"Guoren Wang"
] | The era of foundation models has revolutionized AI research, yet Graph Foundation Models (GFMs) remain constrained by the scarcity of large-scale graph corpora. Traditional graph data synthesis techniques primarily focus on simplistic structural operations, lacking the capacity to generate semantically rich nodes with ... | spotlight | 2504.00711 | null | null | [] | [] | [] | [
-0.001272987574338913,
-0.02777130901813507,
0.008638892322778702,
0.052047133445739746,
0.03664292395114899,
0.003343695541843772,
0.02997015416622162,
0.0280213113874197,
-0.0065569221042096615,
-0.03978095203638077,
-0.007438448257744312,
0.03666684776544571,
-0.07609699666500092,
0.015... |
170 | Tight Bounds on the Distortion of Randomized and Deterministic Distributed Voting | [
"Marzieh Nilipour",
"Mohammad Abam",
"Davoud Kareshki",
"Masoud Seddighin",
"MohammadHossein Paydar"
] | We investigate metric distortion in distributed voting, where voters are divided into groups that each select a local representative, and a final winner is chosen from these representatives. This model captures real-world systems such as the U.S. presidential elections, where state-level decisions determine the nationa... | poster | 2509.17134 | null | null | [] | [] | [] | [
-0.033280979841947556,
-0.01771925576031208,
-0.00909873191267252,
0.045238789170980453,
0.012993735261261463,
0.0169928427785635,
0.03273105248808861,
-0.01859106309711933,
-0.035392921417951584,
-0.04789811745285988,
-0.029178274795413017,
-0.045684441924095154,
-0.07540459930896759,
-0.... |
171 | AdaPA-Agent: A Personalized Agent with Adaptive Preference Arithmetic for Dynamic Preference Modeling | [
"Hongyi Nie",
"Yaqing Wang",
"Mingyang Zhou",
"Feiyang Pan",
"Quanming Yao",
"Zhen Wang"
] | As large language models (LLMs) are increasingly used as personalized user assistants, effectively adapting to users' evolving preferences is critical for delivering high-quality personalized responses. While user preferences are often stable in content, their relative strengths shift over time due to changing goals an... | poster | null | null | null | [] | [] | [] | [
0.006085571832954884,
-0.019423209130764008,
0.004901013802736998,
0.027554253116250038,
0.03974734619259834,
0.03773611783981323,
0.024892298504710197,
0.009494217112660408,
-0.0011457856744527817,
-0.03930588439106941,
-0.021522102877497673,
0.03250545263290405,
-0.05031765252351761,
-0.... |
172 | Repo2Run: Automated Building Executable Environment for Code Repository at Scale | [
"Ruida Hu",
"Chao Peng",
"Xinchen Wang",
"Junjielong Xu",
"Cuiyun Gao"
] | Scaling up executable code data is significant for improving language models’ software engineering capability. The intricate nature of the process makes it labor-intensive, time-consuming and expert-knowledge-dependent to build a large number of executable code repositories, limiting the scalability of existing work bas... | poster | 2502.13681 | https://github.com/bytedance/Repo2Run | null | [] | [] | [] | [
-0.005935234948992729,
-0.048305295407772064,
-0.011379154399037361,
0.025531912222504616,
0.06491714715957642,
0.03598001226782799,
0.029553955420851707,
0.006312153302133083,
-0.04826847091317177,
-0.04250286519527435,
-0.024616803973913193,
0.004363936372101307,
-0.09192422777414322,
-0... |
173 | EnzyControl: Adding Functional and Substrate-Specific Control for Enzyme Backbone Generation | [
"Chao Song",
"ZHIYUAN LIU",
"Han Huang",
"Liang Wang",
"Qiong Wang",
"Jian-Yu Shi",
"Hui Yu",
"Yihang Zhou",
"Yang Zhang"
] | Designing enzyme backbones with substrate-specific functionality is a critical challenge in computational protein engineering. Current generative models excel in protein design but face limitations in binding data, substrate-specific control, and flexibility for de novo enzyme backbone generation. To address this, we i... | poster | null | null | null | [] | [] | [] | [
-0.02591371349990368,
-0.02089977264404297,
-0.03271853178739548,
0.007728418335318565,
0.0379323735833168,
-0.01743914932012558,
0.044544097036123276,
-0.024326730519533157,
0.017877573147416115,
-0.05294639989733696,
0.03243470937013626,
-0.00751269469037652,
-0.06291908025741577,
0.0125... |
174 | Beyond Pairwise Connections: Extracting High-Order Functional Brain Network Structures under Global Constraints | [
"Ling Zhan",
"Junjie Huang",
"Xiaoyao Yu",
"Wenyu Chen",
"Tao Jia"
] | Functional brain network (FBN) modeling often relies on local pairwise interactions, whose limitation in capturing high-order dependencies is theoretically analysed in this paper. Meanwhile, the computational burden and heuristic nature of current hypergraph modeling approaches hinder end-to-end learning of FBN structu... | poster | 2510.09175 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
175 | TopoPoint: Enhance Topology Reasoning via Endpoint Detection in Autonomous Driving | [
"Yanping Fu",
"Xinyuan Liu",
"Tianyu Li",
"Yike Ma",
"Yucheng Zhang",
"Feng Dai"
] | Topology reasoning, which unifies perception and structured reasoning, plays a vital role in understanding intersections for autonomous driving. However, its performance heavily relies on the accuracy of lane detection, particularly at connected lane endpoints. Existing methods often suffer from lane endpoints deviatio... | poster | 2505.17771 | https://github.com/Franpin/TopoPoint | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
176 | Semi-off-Policy Reinforcement Learning for Vision-Language Slow-thinking Reasoning | [
"Junhao Shen",
"Haiteng Zhao",
"Yuzhe Gu",
"Songyang Gao",
"Kuikun Liu",
"Haian Huang",
"Jianfei Gao",
"Dahua Lin",
"Wenwei Zhang",
"Kai Chen"
] | Enhancing large vision-language models (LVLMs) with visual slow-thinking reasoning is crucial for solving complex multimodal tasks. However, since LVLMs are mainly trained with vision-language alignment, it is difficult to adopt on-policy reinforcement learning (RL) to develop the slow thinking ability because the roll... | poster | 2507.16814 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
177 | Scalable and Cost-Efficient de Novo Template-Based Molecular Generation | [
"Piotr Gaiński",
"Oussama Boussif",
"Andrei Rekesh",
"Dmytro Shevchuk",
"Ali Parviz",
"Mike Tyers",
"Robert Batey",
"Michał Koziarski"
] | Template-based molecular generation offers a promising avenue for drug design by ensuring generated compounds are synthetically accessible through predefined reaction templates and building blocks. In this work, we tackle three core challenges in template-based GFlowNets: (1) minimizing synthesis cost, (2) scaling to l... | poster | 2506.19865 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
178 | One-Step Offline Distillation of Diffusion-based Models via Koopman Modeling | [
"Nimrod Berman",
"Ilan Naiman",
"Moshe Eliasof",
"Hedi Zisling",
"Omri Azencot"
] | Diffusion-based generative models have demonstrated exceptional performance, yet their iterative sampling procedures remain computationally expensive. A prominent strategy to mitigate this cost is *distillation*, with *offline distillation* offering particular advantages in terms of efficiency, modularity, and flexibil... | poster | 2505.13358 | https://github.com/azencot-group/KDM | https://sites.google.com/view/koopman-distillation-model | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
179 | PASS: Path-selective State Space Model for Event-based Recognition | [
"Jiazhou Zhou",
"Kanghao Chen",
"Lei Zhang",
"Lin Wang"
] | Event cameras are bio-inspired sensors that capture intensity changes asynchronously with distinct advantages, such as high temporal resolution. Existing methods for event-based object/action recognition predominantly sample and convert event representation at every fixed temporal interval (or frequency). However, they... | poster | 2409.16953 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
180 | Right for the Right Reasons: Avoiding Reasoning Shortcuts via Prototype-Augmented Neurosymbolic AI | [
"Luca Andolfi",
"Eleonora Giunchiglia"
] | Neurosymbolic methods combine neural perception with symbolic reasoning, but recent studies have shown that state-of-the-art approaches can fall prey of reasoning shortcuts---spurious but predictive associations between learned concepts and target labels that satisfy constraints syntactically, yet violate the intended ... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
181 | Segment then Splat: Unified 3D Open-Vocabulary Segmentation via Gaussian Splatting | [
"Yiren Lu",
"Yunlai Zhou",
"Yiran Qiao",
"Chaoda Song",
"Tuo Liang",
"Jing Ma",
"Huan Wang",
"Yu Yin"
] | Open-vocabulary querying in 3D space is crucial for enabling more intelligent perception in applications such as robotics, autonomous systems, and augmented reality. However, most existing methods rely on 2D pixel-level parsing, leading to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are limi... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
182 | VR-Drive: Viewpoint-Robust End-to-End Driving with Feed-Forward 3D Gaussian Splatting | [
"Hoonhee Cho",
"Jae-Young Kang",
"Giwon Lee",
"Hyemin Yang",
"Heejun Park",
"Seokwoo Jung",
"Kuk-Jin Yoon"
] | End-to-end autonomous driving (E2E-AD) has emerged as a promising paradigm that unifies perception, prediction, and planning into a holistic, data-driven framework. However, achieving robustness to varying camera viewpoints, a common real-world challenge due to diverse vehicle configurations, remains an open problem. I... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
183 | Point-MaDi: Masked Autoencoding with Diffusion for Point Cloud Pre-training | [
"Xiaoyang Xiao",
"Runzhao Yao",
"Zhiqiang Tian",
"Shaoyi Du"
] | Self-supervised pre-training is essential for 3D point cloud representation learning, as annotating their irregular, topology-free structures is costly and labor-intensive. Masked autoencoders (MAEs) offer a promising framework but rely on explicit positional embeddings, such as patch center coordinates, which leak geo... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
184 | DeepBLNet: Bilevel Network Learning via Hierarchically Structured Sparsity | [
"Jiayi Fan",
"Jingyuan Yang",
"Shuangge Ma",
"Mengyun Wu"
] | Accurate network estimation serves as the cornerstone for understanding complex systems across scientific domains, spanning from decoding gene regulatory pathways in systems biology to identifying social interaction patterns in computational sociology. Modern applications demand methods that simultaneously address two ... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
185 | Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost | [
"Runzhe Zhan",
"Zhihong Huang",
"Xinyi Yang",
"Lidia Chao",
"Min Yang",
"Derek Wong"
] | Recent advancements in large reasoning models (LRMs) have introduced the "slow thinking" paradigm, which leverages their inherent strengths to enhance reasoning capabilities for complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
186 | Vision Transformers with Self-Distilled Registers | [
"Zipeng Yan",
"Yinjie Chen",
"Chong Zhou",
"Bo Dai",
"Andrew Luo"
] | Vision Transformers (ViTs) have emerged as the dominant architecture for visual processing tasks, demonstrating excellent scalability with increased training data and model size. However, recent work has identified the emergence of artifact tokens in ViTs that are incongruous with the local semantics. These anomalous tok... | spotlight | 2505.21501 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
187 | HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization | [
"Zhijian Zhuo",
"Yutao Zeng",
"Ya Wang",
"Sijun Zhang",
"Xiaoqing Li",
"Jian Yang",
"zhou Xun",
"Jinwen Ma"
] | Transformers have become the de facto architecture for a wide range of machine learning tasks, particularly in large language models (LLMs). Despite their remarkable performance, challenges remain in training deep transformer networks, especially regarding the position of layer normalization. While Pre-Norm structures ... | poster | 2503.04598 | https://github.com/BryceZhuo/HybridNorm | https://github.com/BryceZhuo/HybridNorm | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
188 | A Generalized Label Shift Perspective for Cross-Domain Gaze Estimation | [
"HAORAN YANG",
"Xiaohui Chen",
"Chuan-Xian Ren"
] | Aiming to generalize the well-trained gaze estimation model to new target domains, Cross-domain Gaze Estimation (CDGE) is developed for real-world application scenarios. Existing CDGE methods typically extract the domain-invariant features to mitigate domain shift in feature space, which is proved insufficient by Gener... | poster | 2505.13043 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
189 | UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents | [
"Han Xiao",
"Guozhi Wang",
"Yuxiang Chai",
"Zimu Lu",
"Weifeng Lin",
"Hao He",
"Lue Fan",
"Liuyang Bian",
"Rui Hu",
"Liang Liu",
"Shuai Ren",
"yafei wen",
"xiaoxin chen",
"Aojun Zhou",
"Hongsheng Li"
] | In this paper, we introduce UI-Genie, a self-improving framework addressing two key challenges in GUI agents: verification of trajectory outcome is challenging and high-quality training data are not scalable. These challenges are addressed by a reward model and a self-improving pipeline, respectively. The reward model,... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
190 | A Physics-preserved Transfer Learning Method for Differential Equations | [
"HAORAN YANG",
"Chuan-Xian Ren"
] | While data-driven methods such as neural operator have achieved great success in solving differential equations (DEs), they suffer from domain shift problems caused by different learning environments (with data bias or equation changes), which can be alleviated by transfer learning (TL). However, existing TL methods ad... | poster | 2505.01281 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
191 | Robust Reinforcement Learning in Finance: Modeling Market Impact with Elliptic Uncertainty Sets | [
"Shaocong Ma",
"Heng Huang"
] | In financial applications, reinforcement learning (RL) agents are commonly trained on historical data, where their actions do not influence prices. However, during deployment, these agents trade in live markets where their own transactions can shift asset prices, a phenomenon known as market impact. This mismatch betwe... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
192 | GraphChain: Large Language Models for Large-scale Graph Analysis via Tool Chaining | [
"Chunyu Wei",
"Wenji Hu",
"Xingjia Hao",
"Xin Wang",
"Yifan Yang",
"Yunhai Wang",
"Yang Tian",
"Yueguo Chen"
] | Large Language Models (LLMs) face significant limitations when applied to large-scale graphs, struggling with context constraints and inflexible reasoning. We introduce GraphChain, a novel framework enabling LLMs to analyze large graphs by orchestrating dynamic sequences of specialized tools, mimicking human explorator... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
193 | Efficient RAW Image Deblurring with Adaptive Frequency Modulation | [
"Wenlong Jiao",
"Binglong Li",
"Wei Shang",
"Ping Wang",
"Dongwei Ren"
] | Image deblurring plays a crucial role in enhancing visual clarity across various applications. Although most deep learning approaches primarily focus on sRGB images, which inherently lose critical information during the image signal processing pipeline, RAW images, being unprocessed and linear, possess superior restora... | poster | 2505.24407 | https://github.com/WenlongJiao/FrENet | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
194 | Lifelong Safety Alignment for Language Models | [
"Haoyu Wang",
"Zeyu Qin",
"Yifei Zhao",
"Chao Du",
"Min Lin",
"Xueqian Wang",
"Tianyu Pang"
] | LLMs have made impressive progress, but their growing capabilities also expose them to highly flexible jailbreaking attacks designed to bypass safety alignment. While many existing defenses focus on known types of attacks, it is more critical to prepare LLMs for *unseen* attacks that may arise during deployment. To add... | poster | 2505.20259 | https://github.com/sail-sg/LifelongSafetyAlignment | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
195 | Hierarchical Optimization via LLM-Guided Objective Evolution for Mobility-on-Demand Systems | [
"Yi Zhang",
"Yushen Long",
"Liping Huang",
"Yun Ni",
"Xiaohong Wang",
"Jun Liu"
] | Online ride-hailing platforms aim to deliver efficient mobility-on-demand services, often facing challenges in balancing dynamic and spatially heterogeneous supply and demand. Existing methods typically fall into two categories: reinforcement learning (RL) approaches, which suffer from data inefficiency, oversimplified... | poster | 2510.10644 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
196 | TS-RAG: Retrieval-Augmented Generation based Time Series Foundation Models are Stronger Zero-Shot Forecaster | [
"Kanghui Ning",
"Zijie Pan",
"Yu Liu",
"Yushan Jiang",
"James Zhang",
"Kashif Rasul",
"Anderson Schneider",
"Lintao Ma",
"Yuriy Nevmyvaka",
"Dongjin Song"
] | Large Language Models (LLMs) and Foundation Models (FMs) have recently become prevalent for time series forecasting tasks. While fine-tuning LLMs enables domain adaptation, they often struggle to generalize across diverse and unseen datasets. Moreover, existing Time Series Foundation Models (TSFMs) still face challenge... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
197 | AltLoRA: Towards Better Gradient Approximation in Low-Rank Adaptation with Alternating Projections | [
"Xin Yu",
"Yujia Wang",
"Jinghui Chen",
"Lingzhou Xue"
] | Low-Rank Adaptation (LoRA) has emerged as an effective technique for reducing memory overhead in fine-tuning large language models. However, it often suffers from sub-optimal performance compared with full fine-tuning since the update is constrained in the low-rank space. Recent variants such as LoRA-Pro attempt to mit... | poster | 2505.12455 | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
198 | Sherlock: Self-Correcting Reasoning in Vision-Language Models | [
"Yi Ding",
"Ruqi Zhang"
] | Reasoning Vision-Language Models (VLMs) have shown promising performance on complex multimodal tasks. However, they still face significant challenges: they are highly sensitive to reasoning errors, require large volumes of annotated data or accurate verifiers, and struggle to generalize beyond specific domains.To addre... | poster | 2505.22651 | https://github.com/DripNowhy/Sherlock | https://dripnowhy.github.io/Sherlock/ | [] | [
"Tuwhy/Llama-3.2V-11B-Sherlock-iter2",
"Tuwhy/Llama-3.2V-11B-Sherlock-iter1",
"Tuwhy/Llama-3.2V-11B-Sherlock-Offline",
"Tuwhy/Llama-3.2V-11B-Sherlock-SFT"
] | [] | [
… (768-dim embedding, truncated in preview) |
199 | Role-aware Multi-agent Reinforcement Learning for Coordinated Emergency Traffic Control | [
"Ming Cheng",
"Hao Chen",
"Zhiqing Li",
"Jia Wang",
"Senzhang Wang"
] | Emergency traffic control presents an increasingly critical challenge, requiring seamless coordination among emergency vehicles, regular vehicles, and traffic lights to ensure efficient passage for all vehicles. Existing models primarily only focus on traffic light control, leaving emergency and regular vehicles prone ... | poster | null | null | null | [] | [] | [] | [
… (768-dim embedding, truncated in preview) |
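The `embedding` column (768 floats per row, truncated in this preview) supports semantic retrieval over the papers. A minimal cosine-similarity ranking sketch, assuming rows loaded as Python dicts keyed by the column names above; the short vectors below are illustrative stand-ins for the real 768-dim embeddings:

```python
import math

# Illustrative rows mirroring the dataset's columns; the real
# "embedding" field holds 768 floats -- these are stand-ins.
papers = [
    {"paper_id": 172, "title": "Repo2Run", "type": "spotlight",
     "embedding": [0.6, 0.8, 0.0]},
    {"paper_id": 186, "title": "Vision Transformers with Self-Distilled Registers",
     "type": "spotlight", "embedding": [0.0, 0.6, 0.8]},
]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_by_query(query_vec, rows):
    """Rows sorted by descending cosine similarity to query_vec."""
    return sorted(rows, key=lambda r: cosine(query_vec, r["embedding"]),
                  reverse=True)

query = [0.6, 0.8, 0.0]  # would be a 768-dim query embedding in practice
ranked = rank_by_query(query, papers)
print(ranked[0]["paper_id"])  # → 172 (its stand-in vector matches the query)
```

In practice the query vector would come from the same encoder that produced the dataset's embedding column, so that similarities are comparable.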