| paper_id (uint32, 0–2.87k) | title (string, 15–149 chars) | authors (list, 1–69 items) | cvf_url (string, 94–199 chars) | pdf_url (string, 95–200 chars) | supp_url (string, 100–148 chars, nullable) | arxiv_id (string, 10 chars, nullable) | bibtex (string, 285–1.82k chars) | abstract (string, 547–2.44k chars) | embedding (list, 768 floats) |
|---|---|---|---|---|---|---|---|---|---|
| 0 | Deterministic Image-to-Image Translation via Denoising Brownian Bridge Models with Dual Approximators | Bohan Xiao, Peiyong Wang, Qisheng He, Ming Dong | https://openaccess.thecvf.com/content/CVPR2025/html/Xiao_Deterministic_Image-to-Image_Translation_via_Denoising_Brownian_Bridge_Models_with_Dual_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiao_Deterministic_Image-to-Image_Translation_via_Denoising_Brownian_Bridge_Models_with_Dual_CVPR_2025_paper.pdf | null | null | @InProceedings{Xiao_2025_CVPR, author = {Xiao, Bohan and Wang, Peiyong and He, Qisheng and Dong, Ming}, title = {Deterministic Image-to-Image Translation via Denoising Brownian Bridge Models with Dual Approximators}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (... | Image-to-Image (I2I) translation involves converting an image from one domain to another. Deterministic I2I translation, such as in image super-resolution, extends this concept by guaranteeing that each input generates a consistent and predictable output, closely matching the ground truth (GT) with high fidelity.... | [0.010270824655890465, 0.0024396288208663464, 0.0028782119043171406, 0.033904898911714554, ...] |
| 1 | Towards Source-Free Machine Unlearning | Sk Miraj Ahmed, Umit Yigit Basaran, Dripta S. Raychaudhuri, Arindam Dutta, Rohit Kundu, Fahim Faisal Niloy, Basak Guler, Amit K. Roy-Chowdhury | https://openaccess.thecvf.com/content/CVPR2025/html/Ahmed_Towards_Source-Free_Machine_Unlearning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ahmed_Towards_Source-Free_Machine_Unlearning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ahmed_Towards_Source-Free_Machine_CVPR_2025_supplemental.pdf | null | @InProceedings{Ahmed_2025_CVPR, author = {Ahmed, Sk Miraj and Basaran, Umit Yigit and Raychaudhuri, Dripta S. and Dutta, Arindam and Kundu, Rohit and Niloy, Fahim Faisal and Guler, Basak and Roy-Chowdhury, Amit K.}, title = {Towards Source-Free Machine Unlearning}, booktitle = {Proceedings of the Com... | As machine learning become more pervasive and data privacy regulations evolve, the ability to remove private or copyrighted information from trained models is becoming an increasingly critical requirement. Existing unlearning methods often rely on the assumption of having access to the entire training dataset during th... | [-0.010323857888579369, -0.03624191880226135, -0.015853341668844223, 0.062099386006593704, ...] |
| 2 | Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video | David Yifan Yao, Albert J. Zhai, Shenlong Wang | https://openaccess.thecvf.com/content/CVPR2025/html/Yao_Uni4D_Unifying_Visual_Foundation_Models_for_4D_Modeling_from_a_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yao_Uni4D_Unifying_Visual_Foundation_Models_for_4D_Modeling_from_a_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yao_Uni4D_Unifying_Visual_CVPR_2025_supplemental.zip | 2503.21761 | @InProceedings{Yao_2025_CVPR, author = {Yao, David Yifan and Zhai, Albert J. and Wang, Shenlong}, title = {Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},... | This paper presents a unified approach to understanding dynamic scenes from casual videos. Large pretrained vision foundation models, such as vision-language, video depth prediction, motion tracking, and segmentation models, offer promising capabilities. However, training a single model for comprehensive 4D understandi... | [0.030675409361720085, -0.02345341630280018, 0.019434453919529915, 0.03466026112437248, ...] |
| 3 | DynScene: Scalable Generation of Dynamic Robotic Manipulation Scenes for Embodied AI | Sangmin Lee, Sungyong Park, Heewon Kim | https://openaccess.thecvf.com/content/CVPR2025/html/Lee_DynScene_Scalable_Generation_of_Dynamic_Robotic_Manipulation_Scenes_for_Embodied_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_DynScene_Scalable_Generation_of_Dynamic_Robotic_Manipulation_Scenes_for_Embodied_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lee_DynScene_Scalable_Generation_CVPR_2025_supplemental.pdf | null | @InProceedings{Lee_2025_CVPR, author = {Lee, Sangmin and Park, Sungyong and Kim, Heewon}, title = {DynScene: Scalable Generation of Dynamic Robotic Manipulation Scenes for Embodied AI}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},... | Robotic manipulation in embodied AI critically depends on large-scale, high-quality datasets that reflect realistic object interactions and physical dynamics. However, existing data collection pipelines are often slow, expensive, and heavily reliant on manual efforts. We present DynScene, a diffusion-based framework fo... | [0.001775662531144917, -0.02757909707725048, -0.04702160134911537, 0.055267706513404846, ...] |
| 4 | DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models | Radu Alexandru Rosu, Keyu Wu, Yao Feng, Youyi Zheng, Michael J. Black | https://openaccess.thecvf.com/content/CVPR2025/html/Rosu_DiffLocks_Generating_3D_Hair_from_a_Single_Image_using_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Rosu_DiffLocks_Generating_3D_Hair_from_a_Single_Image_using_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Rosu_DiffLocks_Generating_3D_CVPR_2025_supplemental.zip | 2505.06166 | @InProceedings{Rosu_2025_CVPR, author = {Rosu, Radu Alexandru and Wu, Keyu and Feng, Yao and Zheng, Youyi and Black, Michael J.}, title = {DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVP... | We address the task of generating 3D hair geometry from a single image, which is challenging due to the diversity of hairstyles and the lack of paired image-to-3D hair data. Previous methods are primarily trained on synthetic data and cope with the limited amount of such data by using low-dimensional intermediate repre... | [0.033659469336271286, -0.024617819115519524, -0.020371824502944946, 0.055148158222436905, ...] |
| 5 | Hyperbolic Category Discovery | Yuanpei Liu, Zhenqi He, Kai Han | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Hyperbolic_Category_Discovery_CVPR_2025_supplemental.pdf | 2504.06120 | @InProceedings{Liu_2025_CVPR, author = {Liu, Yuanpei and He, Zhenqi and Han, Kai}, title = {Hyperbolic Category Discovery}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = {2025}, pages = {9891-9900}} | Generalized Category Discovery (GCD) is an intriguing open-world problem that has garnered increasing attention. Given a dataset that includes both labelled and unlabelled images, GCD aims to categorize all images in the unlabelled subset, regardless of whether they belong to known or unknown classes. In GCD, the commo... | [-0.008779085241258144, 0.007251635193824768, 0.010085645131766796, 0.03387803956866264, ...] |
| 6 | The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion | Changan Chen, Juze Zhang, Shrinidhi K. Lakshmikanth, Yusu Fang, Ruizhi Shao, Gordon Wetzstein, Li Fei-Fei, Ehsan Adeli | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_The_Language_of_Motion_Unifying_Verbal_and_Non-verbal_Language_of_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_The_Language_of_Motion_Unifying_Verbal_and_Non-verbal_Language_of_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_The_Language_of_CVPR_2025_supplemental.zip | 2412.10523 | @InProceedings{Chen_2025_CVPR, author = {Chen, Changan and Zhang, Juze and Lakshmikanth, Shrinidhi K. and Fang, Yusu and Shao, Ruizhi and Wetzstein, Gordon and Fei-Fei, Li and Adeli, Ehsan}, title = {The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion}, booktitle = {Pro... | Human communication is inherently multimodal, involving a combination of verbal and non-verbal cues such as speech, facial expressions, and body gestures. Modeling these behaviors is essential for understanding human interaction and for creating virtual characters that can communicate naturally in applications like gam... | [-0.008330456912517548, -0.001010035164654255, -0.011137885972857475, 0.03575237840414047, ...] |
| 7 | CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language Models | Kiet A. Nguyen, Adheesh Juvekar, Tianjiao Yu, Muntasir Wahed, Ismini Lourentzou | https://openaccess.thecvf.com/content/CVPR2025/html/Nguyen_CALICO_Part-Focused_Semantic_Co-Segmentation_with_Large_Vision-Language_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Nguyen_CALICO_Part-Focused_Semantic_Co-Segmentation_with_Large_Vision-Language_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Nguyen_CALICO_Part-Focused_Semantic_CVPR_2025_supplemental.pdf | 2412.19331 | @InProceedings{Nguyen_2025_CVPR, author = {Nguyen, Kiet A. and Juvekar, Adheesh and Yu, Tianjiao and Wahed, Muntasir and Lourentzou, Ismini}, title = {CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognit... | Recent advances in Large Vision-Language Models (LVLMs) have enabled general-purpose vision tasks through visual instruction tuning. While existing LVLMs can generate segmentation masks from text prompts for single images, they struggle with segmentation-grounded reasoning across images, especially at finer granulariti... | [-0.0027968217618763447, -0.01403918769210577, 0.008400198072195053, 0.03367597982287407, ...] |
| 8 | Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment | Ziang Yan, Zhilin Li, Yinan He, Chenting Wang, Kunchang Li, Xinhao Li, Xiangyu Zeng, Zilei Wang, Yali Wang, Yu Qiao, Limin Wang, Yi Wang | https://openaccess.thecvf.com/content/CVPR2025/html/Yan_Task_Preference_Optimization_Improving_Multimodal_Large_Language_Models_with_Vision_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yan_Task_Preference_Optimization_Improving_Multimodal_Large_Language_Models_with_Vision_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yan_Task_Preference_Optimization_CVPR_2025_supplemental.pdf | 2412.19326 | @InProceedings{Yan_2025_CVPR, author = {Yan, Ziang and Li, Zhilin and He, Yinan and Wang, Chenting and Li, Kunchang and Li, Xinhao and Zeng, Xiangyu and Wang, Zilei and Wang, Yali and Qiao, Yu and Wang, Limin and Wang, Yi}, title = {Task Preference Optimization: Improving Multimodal Large Language Models... | Current multimodal large language models (MLLMs) struggle with fine-grained or precise understanding of visuals although they give comprehensive perception and reasoning in a spectrum of vision applications. Recent studies either develop tool-using or unify specific visual tasks into the autoregressive framework, often... | [0.03374236449599266, -0.002008701441809535, 0.013983473181724548, 0.022560421377420425, ...] |
| 9 | Cross-modal Causal Relation Alignment for Video Question Grounding | Weixing Chen, Yang Liu, Binglin Chen, Jiandong Su, Yongsen Zheng, Liang Lin | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Cross-modal_Causal_Relation_Alignment_for_Video_Question_Grounding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Cross-modal_Causal_Relation_Alignment_for_Video_Question_Grounding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Cross-modal_Causal_Relation_CVPR_2025_supplemental.pdf | 2503.07635 | @InProceedings{Chen_2025_CVPR, author = {Chen, Weixing and Liu, Yang and Chen, Binglin and Su, Jiandong and Zheng, Yongsen and Lin, Liang}, title = {Cross-modal Causal Relation Alignment for Video Question Grounding}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference ... | Video question grounding (VideoQG) requires models to answer the questions and simultaneously infer the relevant video segments to support the answers. However, existing VideoQG methods usually suffer from spurious cross-modal correlations, leading to a failure to identify the dominant visual scenes that align with the... | [0.037093985825777054, -0.008603223599493504, 0.015463977120816708, 0.06353283673524857, ...] |
| 10 | Words or Vision: Do Vision-Language Models Have Blind Faith in Text? | Ailin Deng, Tri Cao, Zhirui Chen, Bryan Hooi | https://openaccess.thecvf.com/content/CVPR2025/html/Deng_Words_or_Vision_Do_Vision-Language_Models_Have_Blind_Faith_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Deng_Words_or_Vision_Do_Vision-Language_Models_Have_Blind_Faith_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Deng_Words_or_Vision_CVPR_2025_supplemental.pdf | 2503.02199 | @InProceedings{Deng_2025_CVPR, author = {Deng, Ailin and Cao, Tri and Chen, Zhirui and Hooi, Bryan}, title = {Words or Vision: Do Vision-Language Models Have Blind Faith in Text?}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, ... | Vision-Language Models (VLMs) excel in integrating visual and textual information for vision-centric tasks, but their handling of inconsistencies between modalities is underexplored. We investigate VLMs' modality preferences when faced with visual data and varied textual inputs in vision-centered settings. By introducin... | [-0.008190374821424484, 0.018443575128912926, -0.014548222534358501, 0.06769884377717972, ...] |
| 11 | Diffusion Renderer: Neural Inverse and Forward Rendering with Video Diffusion Models | Ruofan Liang, Zan Gojcic, Huan Ling, Jacob Munkberg, Jon Hasselgren, Chih-Hao Lin, Jun Gao, Alexander Keller, Nandita Vijaykumar, Sanja Fidler, Zian Wang | https://openaccess.thecvf.com/content/CVPR2025/html/Liang_Diffusion_Renderer_Neural_Inverse_and_Forward_Rendering_with_Video_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liang_Diffusion_Renderer_Neural_Inverse_and_Forward_Rendering_with_Video_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liang_Diffusion_Renderer_Neural_CVPR_2025_supplemental.zip | null | @InProceedings{Liang_2025_CVPR, author = {Liang, Ruofan and Gojcic, Zan and Ling, Huan and Munkberg, Jacob and Hasselgren, Jon and Lin, Chih-Hao and Gao, Jun and Keller, Alexander and Vijaykumar, Nandita and Fidler, Sanja and Wang, Zian}, title = {Diffusion Renderer: Neural Inverse and Forward Rendering ... | Understanding and modeling lighting effects are fundamental tasks in computer vision and graphics. Classic physically-based rendering (PBR) accurately simulates the light transport, but relies on precise scene representations--explicit 3D geometry, high-quality material properties, and lighting conditions--that are oft... | [0.002401528414338827, 0.008811615407466888, 0.003457316430285573, 0.043119966983795166, ...] |
| 12 | Harnessing Frequency Spectrum Insights for Image Copyright Protection Against Diffusion Models | Zhenguang Liu, Chao Shuai, Shaojing Fan, Ziping Dong, Jinwu Hu, Zhongjie Ba, Kui Ren | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Harnessing_Frequency_Spectrum_Insights_for_Image_Copyright_Protection_Against_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Harnessing_Frequency_Spectrum_Insights_for_Image_Copyright_Protection_Against_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Harnessing_Frequency_Spectrum_CVPR_2025_supplemental.pdf | 2503.11071 | @InProceedings{Liu_2025_CVPR, author = {Liu, Zhenguang and Shuai, Chao and Fan, Shaojing and Dong, Ziping and Hu, Jinwu and Ba, Zhongjie and Ren, Kui}, title = {Harnessing Frequency Spectrum Insights for Image Copyright Protection Against Diffusion Models}, booktitle = {Proceedings of the Computer Vi... | Diffusion models have achieved remarkable success in novel view synthesis, but their reliance on large, diverse, and often untraceable Web datasets has raised pressing concerns about image copyright protection. Current methods fall short in reliably identifying unauthorized image use, as they struggle to generalize acr... | [0.011322949081659317, -0.03257535770535469, -0.016305938363075256, 0.07363703101873398, ...] |
| 13 | Learning to Detect Objects from Multi-Agent LiDAR Scans without Manual Labels | Qiming Xia, Wenkai Lin, Haoen Xiang, Xun Huang, Siheng Chen, Zhen Dong, Cheng Wang, Chenglu Wen | https://openaccess.thecvf.com/content/CVPR2025/html/Xia_Learning_to_Detect_Objects_from__Multi-Agent_LiDAR_Scans_without_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xia_Learning_to_Detect_Objects_from__Multi-Agent_LiDAR_Scans_without_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xia_Learning_to_Detect_CVPR_2025_supplemental.pdf | 2503.08421 | @InProceedings{Xia_2025_CVPR, author = {Xia, Qiming and Lin, Wenkai and Xiang, Haoen and Huang, Xun and Chen, Siheng and Dong, Zhen and Wang, Cheng and Wen, Chenglu}, title = {Learning to Detect Objects from Multi-Agent LiDAR Scans without Manual Labels}, booktitle = {Proceedings of the Computer Vis... | Unsupervised 3D object detection serves as an important solution for offline 3D object annotation. However, due to the data sparsity and limited views, the clustering-based label fitting in unsupervised object detection often generates low-quality pseudo-labels. Multi-agent collaborative dataset, which involves the sha... | [0.008854210376739502, -0.011132834479212761, 0.002262577647343278, 0.036413222551345825, ...] |
| 14 | DeepLA-Net: Very Deep Local Aggregation Networks for Point Cloud Analysis | Ziyin Zeng, Mingyue Dong, Jian Zhou, Huan Qiu, Zhen Dong, Man Luo, Bijun Li | https://openaccess.thecvf.com/content/CVPR2025/html/Zeng_DeepLA-Net_Very_Deep_Local_Aggregation_Networks_for_Point_Cloud_Analysis_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zeng_DeepLA-Net_Very_Deep_Local_Aggregation_Networks_for_Point_Cloud_Analysis_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zeng_DeepLA-Net_Very_Deep_CVPR_2025_supplemental.pdf | null | @InProceedings{Zeng_2025_CVPR, author = {Zeng, Ziyin and Dong, Mingyue and Zhou, Jian and Qiu, Huan and Dong, Zhen and Luo, Man and Li, Bijun}, title = {DeepLA-Net: Very Deep Local Aggregation Networks for Point Cloud Analysis}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition ... | Due to the irregular and disordered data structure in 3D point clouds, prior works have focused on designing more sophisticated local representation methods to capture these complex local patterns. However, the recognition performance has saturated over the past few years, indicating that increasingly complex and redun... | [0.014044308103621006, -0.0363537035882473, 0.023848731070756912, 0.02065732702612877, ...] |
| 15 | Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practices | Junyan Lin, Haoran Chen, Yue Fan, Yingqi Fan, Xin Jin, Hui Su, Jinlan Fu, Xiaoyu Shen | https://openaccess.thecvf.com/content/CVPR2025/html/Lin_Multi-Layer_Visual_Feature_Fusion_in_Multimodal_LLMs_Methods_Analysis_and_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_Multi-Layer_Visual_Feature_Fusion_in_Multimodal_LLMs_Methods_Analysis_and_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lin_Multi-Layer_Visual_Feature_CVPR_2025_supplemental.pdf | 2503.06063 | @InProceedings{Lin_2025_CVPR, author = {Lin, Junyan and Chen, Haoran and Fan, Yue and Fan, Yingqi and Jin, Xin and Su, Hui and Fu, Jinlan and Shen, Xiaoyu}, title = {Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practices}, booktitle = {Proceedings of the Computer ... | Multimodal Large Language Models (MLLMs) have made significant advancements in recent years, with visual features playing an increasingly critical role in enhancing model performance. However, the integration of multi-layer visual features in MLLMs remains underexplored, particularly with regard to optimal layer select... | [-0.010291443206369877, -0.007902654819190502, 0.023120185360312462, 0.025735601782798767, ...] |
| 16 | APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers | Zhuguanyu Wu, Jiayi Zhang, Jiaxin Chen, Jinyang Guo, Di Huang, Yunhong Wang | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_APHQ-ViT_Post-Training_Quantization_with_Average_Perturbation_Hessian_Based_Reconstruction_for_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_APHQ-ViT_Post-Training_Quantization_with_Average_Perturbation_Hessian_Based_Reconstruction_for_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_APHQ-ViT_Post-Training_Quantization_CVPR_2025_supplemental.pdf | null | @InProceedings{Wu_2025_CVPR, author = {Wu, Zhuguanyu and Zhang, Jiayi and Chen, Jiaxin and Guo, Jinyang and Huang, Di and Wang, Yunhong}, title = {APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers}, booktitle = {Proceedings of the Comp... | Vision Transformers (ViTs) have become one of the most commonly used backbones for vision tasks. Despite their remarkable performance, they often suffer significant accuracy drop when quantized for practical deployment, particularly by post-training quantization (PTQ) under ultra-low bits. Recently, reconstruction-base... | [-0.01936580054461956, -0.025261320173740387, -0.013067404739558697, 0.03778885677456856, ...] |
| 17 | AdaptCMVC: Robust Adaption to Incremental Views in Continual Multi-view Clustering | Jing Wang, Songhe Feng, Kristoffer Knutsen Wickstrøm, Michael C. Kampffmeyer | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_AdaptCMVC_Robust_Adaption_to_Incremental_Views_in_Continual_Multi-view_Clustering_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_AdaptCMVC_Robust_Adaption_to_Incremental_Views_in_Continual_Multi-view_Clustering_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_AdaptCMVC_Robust_Adaption_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR, author = {Wang, Jing and Feng, Songhe and Wickstr{\o}m, Kristoffer Knutsen and Kampffmeyer, Michael C.}, title = {AdaptCMVC: Robust Adaption to Incremental Views in Continual Multi-view Clustering}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition... | Most Multi-view Clustering approaches assume that all views are available for clustering. However, this assumption is often unrealistic as views are incrementally accumulated over time, leading to a need for continual multi-view clustering (CMVC) methods. Current approaches to CMVC leverage late fusion-based approaches... | [-0.0035428425762802362, -0.025170786306262016, 0.018869148567318916, 0.016718443483114243, ...] |
| 18 | Omni-Scene: Omni-Gaussian Representation for Ego-Centric Sparse-View Scene Reconstruction | Dongxu Wei, Zhiqi Li, Peidong Liu | https://openaccess.thecvf.com/content/CVPR2025/html/Wei_Omni-Scene_Omni-Gaussian_Representation_for_Ego-Centric_Sparse-View_Scene_Reconstruction_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wei_Omni-Scene_Omni-Gaussian_Representation_for_Ego-Centric_Sparse-View_Scene_Reconstruction_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wei_Omni-Scene_Omni-Gaussian_Representation_CVPR_2025_supplemental.zip | null | @InProceedings{Wei_2025_CVPR, author = {Wei, Dongxu and Li, Zhiqi and Liu, Peidong}, title = {Omni-Scene: Omni-Gaussian Representation for Ego-Centric Sparse-View Scene Reconstruction}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},... | Prior works employing pixel-based Gaussian representation have demonstrated efficacy in feed-forward sparse-view reconstruction. However, such representation necessitates cross-view overlap for accurate depth estimation, and is challenged by object occlusions and frustum truncations. As a result, these methods require ... | [-0.00540991360321641, 0.004007990472018719, 0.02974683977663517, 0.03419332206249237, ...] |
| 19 | 3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion | Zhaoxi Chen, Jiaxiang Tang, Yuhao Dong, Ziang Cao, Fangzhou Hong, Yushi Lan, Tengfei Wang, Haozhe Xie, Tong Wu, Shunsuke Saito, Liang Pan, Dahua Lin, Ziwei Liu | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_3DTopia-XL_Scaling_High-quality_3D_Asset_Generation_via_Primitive_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_3DTopia-XL_Scaling_High-quality_3D_Asset_Generation_via_Primitive_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_3DTopia-XL_Scaling_High-quality_CVPR_2025_supplemental.pdf | null | @InProceedings{Chen_2025_CVPR, author = {Chen, Zhaoxi and Tang, Jiaxiang and Dong, Yuhao and Cao, Ziang and Hong, Fangzhou and Lan, Yushi and Wang, Tengfei and Xie, Haozhe and Wu, Tong and Saito, Shunsuke and Pan, Liang and Lin, Dahua and Liu, Ziwei}, title = {3DTopia-XL: Scaling High-quality 3D Asset Ge... | The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed, geometric fidelity, and the lack of assets for physically based render... | [0.013543795794248581, -0.008961419574916363, 0.002900167601183057, 0.037156764417886734, ...] |
| 20 | UA-Pose: Uncertainty-Aware 6D Object Pose Estimation and Online Object Completion with Partial References | Ming-Feng Li, Xin Yang, Fu-En Wang, Hritam Basak, Yuyin Sun, Shreekant Gayaka, Min Sun, Cheng-Hao Kuo | https://openaccess.thecvf.com/content/CVPR2025/html/Li_UA-Pose_Uncertainty-Aware_6D_Object_Pose_Estimation_and_Online_Object_Completion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_UA-Pose_Uncertainty-Aware_6D_Object_Pose_Estimation_and_Online_Object_Completion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_UA-Pose_Uncertainty-Aware_6D_CVPR_2025_supplemental.pdf | null | @InProceedings{Li_2025_CVPR, author = {Li, Ming-Feng and Yang, Xin and Wang, Fu-En and Basak, Hritam and Sun, Yuyin and Gayaka, Shreekant and Sun, Min and Kuo, Cheng-Hao}, title = {UA-Pose: Uncertainty-Aware 6D Object Pose Estimation and Online Object Completion with Partial References}, booktitle = ... | 6D object pose estimation has shown strong generalizability to novel objects. However, existing methods often require either a complete, well-reconstructed 3D model or numerous reference images that fully cover the object. Estimating 6D poses from partial references, which capture only fragments of an object's appearan... | [0.022870438173413277, 0.00008562680886825547, -0.023659327998757362, 0.05470653995871544, ...] |
| 21 | Missing Target-Relevant Information Prediction with World Model for Accurate Zero-Shot Composed Image Retrieval | Yuanmin Tang, Jing Yu, Keke Gai, Jiamin Zhuang, Gang Xiong, Gaopeng Gou, Qi Wu | https://openaccess.thecvf.com/content/CVPR2025/html/Tang_Missing_Target-Relevant_Information_Prediction_with_World_Model_for_Accurate_Zero-Shot_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Tang_Missing_Target-Relevant_Information_Prediction_with_World_Model_for_Accurate_Zero-Shot_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tang_Missing_Target-Relevant_Information_CVPR_2025_supplemental.pdf | 2503.17109 | @InProceedings{Tang_2025_CVPR, author = {Tang, Yuanmin and Yu, Jing and Gai, Keke and Zhuang, Jiamin and Xiong, Gang and Gou, Gaopeng and Wu, Qi}, title = {Missing Target-Relevant Information Prediction with World Model for Accurate Zero-Shot Composed Image Retrieval}, booktitle = {Proceedings of the... | Zero-Shot Composed Image Retrieval (ZS-CIR) involves diverse tasks with a broad range of visual content manipulation intent across domain, scene, object, and attribute. The key challenge for ZS-CIR tasks is to modify a reference image according to manipulation text to accurately retrieve a target image, especially when... | [-0.010038272477686405, -0.018074607476592064, -0.007761118933558464, 0.05019354820251465, ...] |
| 22 | Binarized Mamba-Transformer for Lightweight Quad Bayer HybridEVS Demosaicing | Shiyang Zhou, Haijin Zeng, Yunfan Lu, Tong Shao, Ke Tang, Yongyong Chen, Jie Liu, Jingyong Su | https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_Binarized_Mamba-Transformer_for_Lightweight_Quad_Bayer_HybridEVS_Demosaicing_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhou_Binarized_Mamba-Transformer_for_Lightweight_Quad_Bayer_HybridEVS_Demosaicing_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhou_Binarized_Mamba-Transformer_for_CVPR_2025_supplemental.pdf | 2503.16134 | @InProceedings{Zhou_2025_CVPR, author = {Zhou, Shiyang and Zeng, Haijin and Lu, Yunfan and Shao, Tong and Tang, Ke and Chen, Yongyong and Liu, Jie and Su, Jingyong}, title = {Binarized Mamba-Transformer for Lightweight Quad Bayer HybridEVS Demosaicing}, booktitle = {Proceedings of the Computer Vision... | Quad Bayer demosaicing is the central challenge for enabling the widespread application of Hybrid Event-based Vision Sensors (HybridEVS). Although existing learning-based methods that leverage long-range dependency modeling have achieved promising results, their complexity severely limits deployment on mobile devices f... | [0.013033713214099407, -0.015938125550746918, -0.01716553047299385, 0.06229124590754509, ...] |
| 23 | DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation | Jianzong Wu, Chao Tang, Jingbo Wang, Yanhong Zeng, Xiangtai Li, Yunhai Tong | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_DiffSensei_Bridging_Multi-Modal_LLMs_and_Diffusion_Models_for_Customized_Manga_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_DiffSensei_Bridging_Multi-Modal_LLMs_and_Diffusion_Models_for_Customized_Manga_CVPR_2025_paper.pdf | null | 2412.07589 | @InProceedings{Wu_2025_CVPR, author = {Wu, Jianzong and Tang, Chao and Wang, Jingbo and Zeng, Yanhong and Li, Xiangtai and Tong, Yunhai}, title = {DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation}, booktitle = {Proceedings of the Computer Vision and Pattern R... | Story visualization, the task of creating visual narratives from textual descriptions, has seen progress with text-to-image generation models. However, these models often lack effective control over character appearances and interactions, particularly in multi-character scenes. To address these limitations, we propose ... | [0.002466005738824606, -0.019322987645864487, -0.009343171492218971, 0.026044201105833054, ...] |
| 24 | Narrating the Video: Boosting Text-Video Retrieval via Comprehensive Utilization of Frame-Level Captions | Chan Hur, Jeong-hun Hong, Dong-hun Lee, Dabin Kang, Semin Myeong, Sang-hyo Park, Hyeyoung Park | https://openaccess.thecvf.com/content/CVPR2025/html/Hur_Narrating_the_Video_Boosting_Text-Video_Retrieval_via_Comprehensive_Utilization_of_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Hur_Narrating_the_Video_Boosting_Text-Video_Retrieval_via_Comprehensive_Utilization_of_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Hur_Narrating_the_Video_CVPR_2025_supplemental.pdf | 2503.05186 | @InProceedings{Hur_2025_CVPR, author = {Hur, Chan and Hong, Jeong-hun and Lee, Dong-hun and Kang, Dabin and Myeong, Semin and Park, Sang-hyo and Park, Hyeyoung}, title = {Narrating the Video: Boosting Text-Video Retrieval via Comprehensive Utilization of Frame-Level Captions}, booktitle = {Proceeding... | In recent text-video retrieval, the use of additional captions from vision-language models has shown promising effects on the performance. However, existing models using additional captions often have struggled to capture the rich semantics, including temporal changes, inherent in the video. In addition, incorrect info... | [0.007320384960621595, -0.03265903517603874, 0.005219533573836088, 0.06040342152118683, ...] |
| 25 | IDEA-Bench: How Far are Generative Models from Professional Designing? | Chen Liang, Lianghua Huang, Jingwu Fang, Huanzhang Dou, Wei Wang, Zhi-Fan Wu, Yupeng Shi, Junge Zhang, Xin Zhao, Yu Liu | https://openaccess.thecvf.com/content/CVPR2025/html/Liang_IDEA-Bench_How_Far_are_Generative_Models_from_Professional_Designing_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liang_IDEA-Bench_How_Far_are_Generative_Models_from_Professional_Designing_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liang_IDEA-Bench_How_Far_CVPR_2025_supplemental.pdf | null | @InProceedings{Liang_2025_CVPR, author = {Liang, Chen and Huang, Lianghua and Fang, Jingwu and Dou, Huanzhang and Wang, Wei and Wu, Zhi-Fan and Shi, Yupeng and Zhang, Junge and Zhao, Xin and Liu, Yu}, title = {IDEA-Bench: How Far are Generative Models from Professional Designing?}, booktitle = {Proce... | Recent advancements in image generation models enable the creation of high-quality images and targeted modifications based on textual instructions. Some models even support multimodal complex guidance and demonstrate robust task generalization capabilities. However, they still fall short of meeting the nuanced, profess... | [0.026044880971312523, -0.015533458441495895, -0.021398788318037987, 0.03407023102045059, ...] |
| 26 | Interpretable Image Classification via Non-parametric Part Prototype Learning | Zhijie Zhu, Lei Fan, Maurice Pagnucco, Yang Song | https://openaccess.thecvf.com/content/CVPR2025/html/Zhu_Interpretable_Image_Classification_via_Non-parametric_Part_Prototype_Learning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhu_Interpretable_Image_Classification_via_Non-parametric_Part_Prototype_Learning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhu_Interpretable_Image_Classification_CVPR_2025_supplemental.pdf | 2503.10247 | @InProceedings{Zhu_2025_CVPR, author = {Zhu, Zhijie and Fan, Lei and Pagnucco, Maurice and Song, Yang}, title = {Interpretable Image Classification via Non-parametric Part Prototype Learning}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = ... | Classifying images with an interpretable decision-making process is a long-standing problem in computer vision. In recent years, Prototypical Part Networks has gained traction as an approach for self-explainable neural networks, due to their ability to mimic human visual reasoning by providing explanations based on pro... | [-0.0024494912941008806, -0.024703148752450943, -0.026568621397018433, 0.056109800934791565, ...] |
| 27 | PhD: A ChatGPT-Prompted Visual Hallucination Evaluation Dataset | Jiazhen Liu, Yuhan Fu, Ruobing Xie, Runquan Xie, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Xirong Li | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_PhD_A_ChatGPT-Prompted_Visual_Hallucination_Evaluation_Dataset_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_PhD_A_ChatGPT-Prompted_Visual_Hallucination_Evaluation_Dataset_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_PhD_A_ChatGPT-Prompted_CVPR_2025_supplemental.pdf | 2403.11116 | @InProceedings{Liu_2025_CVPR, author = {Liu, Jiazhen and Fu, Yuhan and Xie, Ruobing and Xie, Runquan and Sun, Xingwu and Lian, Fengzong and Kang, Zhanhui and Li, Xirong}, title = {PhD: A ChatGPT-Prompted Visual Hallucination Evaluation Dataset}, booktitle = {Proceedings of the Computer Vision and Pat... | Multimodal Large Language Models (MLLMs) hallucinate, resulting in an emerging topic of visual hallucination evaluation (VHE). This paper contributes a ChatGPT-Prompted visual hallucination evaluation Dataset (PhD) for objective VHE at a large scale. The essence of VHE is to ask an MLLM questions about specific images ... | [0.030475681647658348, 0.026517100632190704, 0.014228899031877518, 0.05005844309926033, ...] |
| 28 | CARL: A Framework for Equivariant Image Registration | Hastings Greer, Lin Tian, François-Xavier Vialard, Roland Kwitt, Raul San Jose Estepar, Marc Niethammer | https://openaccess.thecvf.com/content/CVPR2025/html/Greer_CARL_A_Framework_for_Equivariant_Image_Registration_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Greer_CARL_A_Framework_for_Equivariant_Image_Registration_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Greer_CARL_A_Framework_CVPR_2025_supplemental.pdf | 2405.16738 | @InProceedings{Greer_2025_CVPR, author = {Greer, Hastings and Tian, Lin and Vialard, Fran\c{c}ois-Xavier and Kwitt, Roland and Estepar, Raul San Jose and Niethammer, Marc}, title = {CARL: A Framework for Equivariant Image Registration}, booktitle = {Proceedings of the Computer Vision and Pattern Reco... | Image registration estimates spatial correspondences between image pairs. These estimates are typically obtained via numerical optimization or regression by a deep network. A desirable property is that a correspondence estimate (e.g., the true oracle correspondence) for an image pair is maintained under deformations of... | [0.012796515598893166, 0.021795375272631645, 0.01301821693778038, 0.005540671292692423, ...] |
| 29 | ClimbingCap: Multi-Modal Dataset and Method for Rock Climbing in World Coordinate | Ming Yan, Xincheng Lin, Yuhua Luo, Shuqi Fan, Yudi Dai, Qixin Zhong, Lincai Zhong, Yuexin Ma, Lan Xu, Chenglu Wen, Siqi Shen, Cheng Wang | https://openaccess.thecvf.com/content/CVPR2025/html/Yan_ClimbingCap_Multi-Modal_Dataset_and_Method_for_Rock_Climbing_in_World_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yan_ClimbingCap_Multi-Modal_Dataset_and_Method_for_Rock_Climbing_in_World_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yan_ClimbingCap_Multi-Modal_Dataset_CVPR_2025_supplemental.pdf | 2503.21268 | @InProceedings{Yan_2025_CVPR, author = {Yan, Ming and Lin, Xincheng and Luo, Yuhua and Fan, Shuqi and Dai, Yudi and Zhong, Qixin and Zhong, Lincai and Ma, Yuexin and Xu, Lan and Wen, Chenglu and Shen, Siqi and Wang, Cheng}, title = {ClimbingCap: Multi-Modal Dataset and Method for Rock Climbing in World C... | Human Motion Recovery (HMR) research mainly focuses on ground-based motions such as running. The study on capturing climbing motion, an off-ground motion, is sparse. This is partly due to the limited availability of climbing motion datasets, especially large-scale and challenging 3D labeled datasets. To address the ins... | [0.002225403906777501, -0.003748710500076413, -0.01628538966178894, 0.0156717449426651, ...] |
| 30 | DAGSM: Disentangled Avatar Generation with GS-enhanced Mesh | Jingyu Zhuang, Di Kang, Linchao Bao, Liang Lin, Guanbin Li | https://openaccess.thecvf.com/content/CVPR2025/html/Zhuang_DAGSM_Disentangled_Avatar_Generation_with_GS-enhanced_Mesh_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhuang_DAGSM_Disentangled_Avatar_Generation_with_GS-enhanced_Mesh_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhuang_DAGSM_Disentangled_Avatar_CVPR_2025_supplemental.zip | 2411.15205 | @InProceedings{Zhuang_2025_CVPR, author = {Zhuang, Jingyu and Kang, Di and Bao, Linchao and Lin, Liang and Li, Guanbin}, title = {DAGSM: Disentangled Avatar Generation with GS-enhanced Mesh}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {... | Text-driven avatar generation has gained significant attention owing to its convenience. However, existing methods typically model the human body with all garments as a single 3D model, limiting its usability, such as clothing replacement, and reducing user control over the generation process. To overcome the limitatio... | [0.026696456596255302, -0.010369460098445415, -0.02055967040359974, 0.04844071343541145, ...] |
| 31 | Estimating Body and Hand Motion in an Ego-sensed World | Brent Yi, Vickie Ye, Maya Zheng, Yunqi Li, Lea Müller, Georgios Pavlakos, Yi Ma, Jitendra Malik, Angjoo Kanazawa | https://openaccess.thecvf.com/content/CVPR2025/html/Yi_Estimating_Body_and_Hand_Motion_in_an_Ego-sensed_World_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yi_Estimating_Body_and_Hand_Motion_in_an_Ego-sensed_World_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yi_Estimating_Body_and_CVPR_2025_supplemental.pdf | 2410.03665 | @InProceedings{Yi_2025_CVPR, author = {Yi, Brent and Ye, Vickie and Zheng, Maya and Li, Yunqi and M\"uller, Lea and Pavlakos, Georgios and Ma, Yi and Malik, Jitendra and Kanazawa, Angjoo}, title = {Estimating Body and Hand Motion in an Ego-sensed World}, booktitle = {Proceedings of the Computer Visio... | We present EgoAllo, a system for human motion estimation from a head-mounted device. Using only egocentric SLAM poses and images, EgoAllo guides sampling from a conditional diffusion model to estimate 3D body pose, height, and hand parameters that capture a device wearer's actions in the allocentric coordinate frame of... | [0.01874477043747902, 0.023861877620220184, -0.004886100068688393, 0.010780264623463154, ...] |
| 32 | A Bias-Free Training Paradigm for More General AI-generated Image Detection | Fabrizio Guillaro, Giada Zingarini, Ben Usman, Avneesh Sud, Davide Cozzolino, Luisa Verdoliva | https://openaccess.thecvf.com/content/CVPR2025/html/Guillaro_A_Bias-Free_Training_Paradigm_for_More_General_AI-generated_Image_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Guillaro_A_Bias-Free_Training_Paradigm_for_More_General_AI-generated_Image_Detection_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Guillaro_A_Bias-Free_Training_CVPR_2025_supplemental.pdf | 2412.17671 | @InProceedings{Guillaro_2025_CVPR, author = {Guillaro, Fabrizio and Zingarini, Giada and Usman, Ben and Sud, Avneesh and Cozzolino, Davide and Verdoliva, Luisa}, title = {A Bias-Free Training Paradigm for More General AI-generated Image Detection}, booktitle = {Proceedings of the Computer Vision and ... | Successful forensic detectors can produce excellent results in supervised learning benchmarks but struggle to transfer to real-world applications. We believe this limitation is largely due to inadequate training data quality. While most research focuses on developing new algorithms, less attention is given to training ... | [0.03153880685567856, -0.027664341032505035, -0.010404170490801334, 0.04668672755360603, ...] |
| 33 | FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding | Thanh-Dat Truong, Utsav Prabhu, Bhiksha Raj, Jackson Cothren, Khoa Luu | https://openaccess.thecvf.com/content/CVPR2025/html/Truong_FALCON_Fairness_Learning_via_Contrastive_Attention_Approach_to_Continual_Semantic_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Truong_FALCON_Fairness_Learning_via_Contrastive_Attention_Approach_to_Continual_Semantic_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Truong_FALCON_Fairness_Learning_CVPR_2025_supplemental.pdf | 2311.15965 | @InProceedings{Truong_2025_CVPR, author = {Truong, Thanh-Dat and Prabhu, Utsav and Raj, Bhiksha and Cothren, Jackson and Luu, Khoa}, title = {FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding}, booktitle = {Proceedings of the Computer Vision and Pa... | Continual Learning in semantic scene segmentation aims to continually learn new unseen classes in dynamic environments while maintaining previously learned knowledge. Prior studies focused on modeling the catastrophic forgetting and background shift challenges in continual learning. However, fairness, another major cha... | [0.001551701221615076, -0.016639849171042442, 0.0026116364169865847, 0.03075694665312767, ...] |
| 34 | Certified Human Trajectory Prediction | Mohammadhossein Bahari, Saeed Saadatnejad, Amirhossein Askari Farsangi, Seyed-Mohsen Moosavi-Dezfooli, Alexandre Alahi | https://openaccess.thecvf.com/content/CVPR2025/html/Bahari_Certified_Human_Trajectory_Prediction_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Bahari_Certified_Human_Trajectory_Prediction_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bahari_Certified_Human_Trajectory_CVPR_2025_supplemental.pdf | 2403.13778 | @InProceedings{Bahari_2025_CVPR, author = {Bahari, Mohammadhossein and Saadatnejad, Saeed and Farsangi, Amirhossein Askari and Moosavi-Dezfooli, Seyed-Mohsen and Alahi, Alexandre}, title = {Certified Human Trajectory Prediction}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition... | Predicting human trajectories is essential for the safe operation of autonomous vehicles, yet current data-driven models often lack robustness in case of noisy inputs such as adversarial examples or imperfect observations. Although some trajectory prediction methods have been developed to provide empirical robustness, ... | [0.007795895915478468, -0.015404585748910904, 0.003664012998342514, 0.045371562242507935, ...] |
| 35 | Evaluating Vision-Language Models as Evaluators in Path Planning | Mohamed Aghzal, Xiang Yue, Erion Plaku, Ziyu Yao | https://openaccess.thecvf.com/content/CVPR2025/html/Aghzal_Evaluating_Vision-Language_Models_as_Evaluators_in_Path_Planning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Aghzal_Evaluating_Vision-Language_Models_as_Evaluators_in_Path_Planning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Aghzal_Evaluating_Vision-Language_Models_CVPR_2025_supplemental.pdf | 2411.18711 | @InProceedings{Aghzal_2025_CVPR, author = {Aghzal, Mohamed and Yue, Xiang and Plaku, Erion and Yao, Ziyu}, title = {Evaluating Vision-Language Models as Evaluators in Path Planning}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, ... | Despite their promise to perform complex reasoning, large language models (LLMs) have been shown to have limited effectiveness in end-to-end planning. This has inspired an intriguing question: if these models cannot plan well, can they still contribute to the planning framework as a helpful plan evaluator? In this work... | [-0.0029403739608824253, 0.006716731004416943, 0.011579705402255058, 0.03361089900135994, ...] |
| 36 | Free on the Fly: Enhancing Flexibility in Test-Time Adaptation with Online EM | Qiyuan Dai, Sibei Yang | https://openaccess.thecvf.com/content/CVPR2025/html/Dai_Free_on_the_Fly_Enhancing_Flexibility_in_Test-Time_Adaptation_with_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Dai_Free_on_the_Fly_Enhancing_Flexibility_in_Test-Time_Adaptation_with_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Dai_Free_on_the_CVPR_2025_supplemental.pdf | null | @InProceedings{Dai_2025_CVPR, author = {Dai, Qiyuan and Yang, Sibei}, title = {Free on the Fly: Enhancing Flexibility in Test-Time Adaptation with Online EM}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = {2025}, ... | Vision-Language Models (VLMs) have become prominent in open-world image recognition for their strong generalization abilities. Yet, their effectiveness in practical applications is compromised by domain shifts and distributional changes, especially when test data distributions diverge from training data. Therefore, the... | [0.024781109765172005, 0.006576500367373228, 0.013842483051121235, 0.04474003240466118, ...] |
37 | Transformers without Normalization | [
"Jiachen Zhu",
"Xinlei Chen",
"Kaiming He",
"Yann LeCun",
"Zhuang Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhu_Transformers_without_Normalization_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhu_Transformers_without_Normalization_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhu_Transformers_without_Normalization_CVPR_2025_supplemental.pdf | 2503.10622 | @InProceedings{Zhu_2025_CVPR,
author = {Zhu, Jiachen and Chen, Xinlei and He, Kaiming and LeCun, Yann and Liu, Zhuang},
title = {Transformers without Normalization},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {20... | Normalization layers are ubiquitous in modern neural networks and have long been considered essential. This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation DyT(x)=tanh(αx)... | [
-0.020009486004710197,
-0.015306811779737473,
-0.011765317060053349,
0.022031400352716446,
0.02423170953989029,
0.030016839504241943,
0.02624276466667652,
-0.013415887020528316,
0.0049011120572686195,
-0.03337901830673218,
-0.03167133405804634,
0.014320234768092632,
-0.04216828569769859,
0... |
38 | SGC-Net: Stratified Granular Comparison Network for Open-Vocabulary HOI Detection | [
"Xin Lin",
"Chong Shi",
"Zuopeng Yang",
"Haojin Tang",
"Zhili Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lin_SGC-Net_Stratified_Granular_Comparison_Network_for_Open-Vocabulary_HOI_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_SGC-Net_Stratified_Granular_Comparison_Network_for_Open-Vocabulary_HOI_Detection_CVPR_2025_paper.pdf | null | null | @InProceedings{Lin_2025_CVPR,
author = {Lin, Xin and Shi, Chong and Yang, Zuopeng and Tang, Haojin and Zhou, Zhili},
title = {SGC-Net: Stratified Granular Comparison Network for Open-Vocabulary HOI Detection},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},... | Recent open-vocabulary human-object interaction (OV-HOI) detection methods primarily rely on large language models (LLMs) for generating auxiliary descriptions and leverage knowledge distilled from CLIP to detect unseen interaction categories. Despite their effectiveness, these methods face two challenges: (1) feature gr... | [
-0.03165390342473984,
-0.0054799579083919525,
-0.0025778419803828,
0.01819322071969509,
0.022136054933071136,
0.01655888743698597,
0.034302182495594025,
0.016489334404468536,
-0.013658708892762661,
-0.02289457432925701,
-0.014204160310328007,
0.03642337769269943,
-0.09282644093036652,
-0.0... |
39 | Galaxy Walker: Geometry-aware VLMs For Galaxy-scale Understanding | [
"Tianyu Chen",
"Xingcheng Fu",
"Yisen Gao",
"Haodong Qian",
"Yuecen Wei",
"Kun Yan",
"Haoyi Zhou",
"Jianxin Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Galaxy_Walker_Geometry-aware_VLMs_For_Galaxy-scale_Understanding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Galaxy_Walker_Geometry-aware_VLMs_For_Galaxy-scale_Understanding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Galaxy_Walker_Geometry-aware_CVPR_2025_supplemental.pdf | 2503.18578 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Tianyu and Fu, Xingcheng and Gao, Yisen and Qian, Haodong and Wei, Yuecen and Yan, Kun and Zhou, Haoyi and Li, Jianxin},
title = {Galaxy Walker: Geometry-aware VLMs For Galaxy-scale Understanding},
booktitle = {Proceedings of the Computer Vision and Patt... | Modern vision-language models (VLMs) develop patch embeddings and convolution backbones within vector spaces, especially Euclidean ones, at their very foundation. When expanding VLMs to a galaxy scale for understanding astronomical phenomena, the integration of spherical space for planetary orbits and hyperbolic spaces for bl... | [
0.004904136992990971,
-0.0038117659278213978,
0.0700492262840271,
0.03013128973543644,
0.02136823907494545,
0.010269252583384514,
0.017079919576644897,
0.0014961602864786983,
-0.033530861139297485,
-0.06021552532911301,
-0.03105461411178112,
0.0005096457898616791,
-0.07398036867380142,
0.0... |
40 | HiPART: Hierarchical Pose AutoRegressive Transformer for Occluded 3D Human Pose Estimation | [
"Hongwei Zheng",
"Han Li",
"Wenrui Dai",
"Ziyang Zheng",
"Chenglin Li",
"Junni Zou",
"Hongkai Xiong"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zheng_HiPART_Hierarchical_Pose_AutoRegressive_Transformer_for_Occluded_3D_Human_Pose_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zheng_HiPART_Hierarchical_Pose_AutoRegressive_Transformer_for_Occluded_3D_Human_Pose_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zheng_HiPART_Hierarchical_Pose_CVPR_2025_supplemental.pdf | 2503.23331 | @InProceedings{Zheng_2025_CVPR,
author = {Zheng, Hongwei and Li, Han and Dai, Wenrui and Zheng, Ziyang and Li, Chenglin and Zou, Junni and Xiong, Hongkai},
title = {HiPART: Hierarchical Pose AutoRegressive Transformer for Occluded 3D Human Pose Estimation},
booktitle = {Proceedings of the Computer Vi... | Existing 2D-to-3D human pose estimation (HPE) methods cope with the occlusion issue by enriching information such as temporal and visual cues in the lifting stage. In this paper, we argue that these methods overlook the limitation of the sparse 2D skeleton input representation, which fundamentally restricts the 2D-to-3D... | [
0.01882132701575756,
0.010351328179240227,
-0.0035339752212166786,
0.04666829854249954,
-0.0009375284425914288,
0.05192131549119949,
0.036518119275569916,
-0.020141610875725746,
-0.030983958393335342,
-0.050788577646017075,
-0.006359948311001062,
-0.04174418747425079,
-0.08778071403503418,
... |
41 | SnowMaster: Comprehensive Real-world Image Desnowing via MLLM with Multi-Model Feedback Optimization | [
"Jianyu Lai",
"Sixiang Chen",
"Yunlong Lin",
"Tian Ye",
"Yun Liu",
"Song Fei",
"Zhaohu Xing",
"Hongtao Wu",
"Weiming Wang",
"Lei Zhu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lai_SnowMaster_Comprehensive_Real-world_Image_Desnowing_via_MLLM_with_Multi-Model_Feedback_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lai_SnowMaster_Comprehensive_Real-world_Image_Desnowing_via_MLLM_with_Multi-Model_Feedback_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lai_SnowMaster_Comprehensive_Real-world_CVPR_2025_supplemental.pdf | null | @InProceedings{Lai_2025_CVPR,
author = {Lai, Jianyu and Chen, Sixiang and Lin, Yunlong and Ye, Tian and Liu, Yun and Fei, Song and Xing, Zhaohu and Wu, Hongtao and Wang, Weiming and Zhu, Lei},
title = {SnowMaster: Comprehensive Real-world Image Desnowing via MLLM with Multi-Model Feedback Optimization},
... | Snowfall presents significant challenges for visual data processing, necessitating specialized desnowing algorithms. However, existing models often fail to generalize effectively due to their heavy reliance on synthetic datasets. Furthermore, current real-world snowfall datasets are limited in scale and lack dedicated ... | [
0.03715065121650696,
-0.026592064648866653,
0.04541223123669624,
0.034701623022556305,
0.0400087907910347,
-0.0015497845597565174,
0.04020354896783829,
0.02881251461803913,
-0.039522841572761536,
-0.026056280359625816,
-0.056397125124931335,
-0.004968992434442043,
-0.07045713067054749,
-0.... |
42 | From Faces to Voices: Learning Hierarchical Representations for High-quality Video-to-Speech | [
"Ji-Hoon Kim",
"Jeongsoo Choi",
"Jaehun Kim",
"Chaeyoung Jung",
"Joon Son Chung"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kim_From_Faces_to_Voices_Learning_Hierarchical_Representations_for_High-quality_Video-to-Speech_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kim_From_Faces_to_Voices_Learning_Hierarchical_Representations_for_High-quality_Video-to-Speech_CVPR_2025_paper.pdf | null | 2503.16956 | @InProceedings{Kim_2025_CVPR,
author = {Kim, Ji-Hoon and Choi, Jeongsoo and Kim, Jaehun and Jung, Chaeyoung and Chung, Joon Son},
title = {From Faces to Voices: Learning Hierarchical Representations for High-quality Video-to-Speech},
booktitle = {Proceedings of the Computer Vision and Pattern Recogni... | The objective of this study is to generate high-quality speech from silent talking face videos, a task also known as video-to-speech synthesis. A significant challenge in video-to-speech synthesis lies in the substantial modality gap between silent video and multi-faceted speech. In this paper, we propose a novel vide... | [
-0.007304402533918619,
0.01864672265946865,
-0.019937703385949135,
0.04580438509583473,
0.02028978243470192,
0.0622391402721405,
0.05919796973466873,
0.0019445416983217,
-0.03784000501036644,
-0.05245717242360115,
-0.02294720709323883,
0.007342011667788029,
-0.03653302788734436,
-0.0019291... |
43 | DFM: Differentiable Feature Matching for Anomaly Detection | [
"Sheng Wu",
"Yimi Wang",
"Xudong Liu",
"Yuguang Yang",
"Runqi Wang",
"Guodong Guo",
"David Doermann",
"Baochang Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_DFM_Differentiable_Feature_Matching_for_Anomaly_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_DFM_Differentiable_Feature_Matching_for_Anomaly_Detection_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_DFM_Differentiable_Feature_CVPR_2025_supplemental.pdf | null | @InProceedings{Wu_2025_CVPR,
author = {Wu, Sheng and Wang, Yimi and Liu, Xudong and Yang, Yuguang and Wang, Runqi and Guo, Guodong and Doermann, David and Zhang, Baochang},
title = {DFM: Differentiable Feature Matching for Anomaly Detection},
booktitle = {Proceedings of the Computer Vision and Patter... | Feature matching methods for unsupervised anomaly detection have demonstrated impressive performance. Existing methods primarily rely on self-supervised training and handcrafted matching schemes for task adaptation. However, they can only achieve an inferior feature representation for anomaly detection because the feat... | [
0.021102240309119225,
-0.015851572155952454,
0.012118514627218246,
0.013588274829089642,
0.09095021337270737,
0.04802883788943291,
0.03156590089201927,
-0.02444351837038994,
-0.021215762943029404,
-0.0338745191693306,
-0.006956235505640507,
0.008999562822282314,
-0.0853879526257515,
0.0182... |
44 | FlashGS: Efficient 3D Gaussian Splatting for Large-scale and High-resolution Rendering | [
"Guofeng Feng",
"Siyan Chen",
"Rong Fu",
"Zimu Liao",
"Yi Wang",
"Tao Liu",
"Boni Hu",
"Linning Xu",
"Zhilin Pei",
"Hengjie Li",
"Xiuhong Li",
"Ninghui Sun",
"Xingcheng Zhang",
"Bo Dai"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Feng_FlashGS_Efficient_3D_Gaussian_Splatting_for_Large-scale_and_High-resolution_Rendering_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Feng_FlashGS_Efficient_3D_Gaussian_Splatting_for_Large-scale_and_High-resolution_Rendering_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Feng_FlashGS_Efficient_3D_CVPR_2025_supplemental.pdf | 2408.07967 | @InProceedings{Feng_2025_CVPR,
author = {Feng, Guofeng and Chen, Siyan and Fu, Rong and Liao, Zimu and Wang, Yi and Liu, Tao and Hu, Boni and Xu, Linning and Pei, Zhilin and Li, Hengjie and Li, Xiuhong and Sun, Ninghui and Zhang, Xingcheng and Dai, Bo},
title = {FlashGS: Efficient 3D Gaussian Splatting f... | Recent advances in 3D Gaussian Splatting (3DGS) have demonstrated significant potential over traditional rendering techniques, attracting widespread attention from both industry and academia. However, real-time rendering with 3DGS remains a challenging problem, particularly in large-scale, high-resolution scenes due to... | [
-0.0015757039654999971,
-0.008146446198225021,
0.038727905601263046,
0.03480574116110802,
0.01699955202639103,
0.026154620572924614,
0.0010872618295252323,
0.026783473789691925,
-0.033470943570137024,
-0.0613660104572773,
-0.010218308307230473,
-0.02925429679453373,
-0.06782402843236923,
0... |
45 | PointSR: Self-Regularized Point Supervision for Drone-View Object Detection | [
"Weizhuo Li",
"Yue Xi",
"Wenjing Jia",
"Zehao Zhang",
"Fei Li",
"Xiangzeng Liu",
"Qiguang Miao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_PointSR_Self-Regularized_Point_Supervision_for_Drone-View_Object_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_PointSR_Self-Regularized_Point_Supervision_for_Drone-View_Object_Detection_CVPR_2025_paper.pdf | null | null | @InProceedings{Li_2025_CVPR,
author = {Li, Weizhuo and Xi, Yue and Jia, Wenjing and Zhang, Zehao and Li, Fei and Liu, Xiangzeng and Miao, Qiguang},
title = {PointSR: Self-Regularized Point Supervision for Drone-View Object Detection},
booktitle = {Proceedings of the Computer Vision and Pattern Recogn... | Point-Supervised Object Detection (PSOD) in a discriminative style has recently gained significant attention for its impressive detection performance and cost-effectiveness. However, accurately predicting high-quality pseudo-box labels for drone-view images, which often feature densely packed small objects, remains a c... | [
0.04435298591852188,
-0.00549323670566082,
0.007973614148795605,
0.03389427438378334,
0.019017327576875687,
0.055614173412323,
0.014085746370255947,
-0.018424702808260918,
-0.03233228996396065,
-0.04955137521028519,
-0.030211804434657097,
-0.04200447350740433,
-0.08683138340711594,
-0.0014... |
46 | Exploring Timeline Control for Facial Motion Generation | [
"Yifeng Ma",
"Jinwei Qi",
"Chaonan Ji",
"Peng Zhang",
"Bang Zhang",
"Zhidong Deng",
"Liefeng Bo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ma_Exploring_Timeline_Control_for_Facial_Motion_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ma_Exploring_Timeline_Control_for_Facial_Motion_Generation_CVPR_2025_paper.pdf | null | 2505.20861 | @InProceedings{Ma_2025_CVPR,
author = {Ma, Yifeng and Qi, Jinwei and Ji, Chaonan and Zhang, Peng and Zhang, Bang and Deng, Zhidong and Bo, Liefeng},
title = {Exploring Timeline Control for Facial Motion Generation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (C... | This paper introduces a new control signal for facial motion generation: timeline control. Compared to audio and text signals, timelines provide more fine-grained control, such as generating specific facial motions with precise timing. Users can specify a multi-track timeline of facial actions arranged in temporal inte... | [
-0.01788717694580555,
0.005348812323063612,
-0.02938157692551613,
0.027817273512482643,
0.013855379074811935,
0.014236101880669594,
0.05375639349222183,
0.02806219458580017,
-0.019213814288377762,
-0.048602137714624405,
-0.009688301011919975,
0.009191447868943214,
-0.04520014300942421,
-0.... |
47 | v-CLR: View-Consistent Learning for Open-World Instance Segmentation | [
"Chang-Bin Zhang",
"Jinhong Ni",
"Yujie Zhong",
"Kai Han"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_v-CLR_View-Consistent_Learning_for_Open-World_Instance_Segmentation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_v-CLR_View-Consistent_Learning_for_Open-World_Instance_Segmentation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_v-CLR_View-Consistent_Learning_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Chang-Bin and Ni, Jinhong and Zhong, Yujie and Han, Kai},
title = {v-CLR: View-Consistent Learning for Open-World Instance Segmentation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June}... | In this paper, we address the challenging problem of open-world instance segmentation. Existing works have shown that vanilla visual networks are biased toward learning appearance information, e.g. texture, to recognize objects. This implicit bias causes the model to fail in detecting novel objects with unseen textures... | [
0.031222328543663025,
0.014841379597783089,
0.009126618504524231,
0.03940937668085098,
-0.0025495877489447594,
0.03229326009750366,
0.028751080855727196,
0.02817326784133911,
-0.04106035456061363,
-0.04237163066864014,
-0.06426594406366348,
0.008160137571394444,
-0.07664933800697327,
0.016... |
48 | Chat2SVG: Vector Graphics Generation with Large Language Models and Image Diffusion Models | [
"Ronghuan Wu",
"Wanchao Su",
"Jing Liao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_Chat2SVG_Vector_Graphics_Generation_with_Large_Language_Models_and_Image_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_Chat2SVG_Vector_Graphics_Generation_with_Large_Language_Models_and_Image_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_Chat2SVG_Vector_Graphics_CVPR_2025_supplemental.pdf | 2411.16602 | @InProceedings{Wu_2025_CVPR,
author = {Wu, Ronghuan and Su, Wanchao and Liao, Jing},
title = {Chat2SVG: Vector Graphics Generation with Large Language Models and Image Diffusion Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June}... | Scalable Vector Graphics (SVG) has become the de facto standard for vector graphics in digital design, offering resolution independence and precise control over individual elements. Despite their advantages, creating high-quality SVG content remains challenging, as it demands technical expertise with professional editi... | [
0.010707722045481205,
-0.02736900933086872,
0.013908416964113712,
0.03642677515745163,
0.01843317598104477,
0.015624375082552433,
0.0122149046510458,
0.029258983209729195,
-0.020845716819167137,
-0.08409375697374344,
-0.047599393874406815,
-0.01715410314500332,
-0.04157904163002968,
0.0319... |
49 | GAF: Gaussian Avatar Reconstruction from Monocular Videos via Multi-view Diffusion | [
"Jiapeng Tang",
"Davide Davoli",
"Tobias Kirschstein",
"Liam Schoneveld",
"Matthias Nießner"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Tang_GAF_Gaussian_Avatar_Reconstruction_from_Monocular_Videos_via_Multi-view_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Tang_GAF_Gaussian_Avatar_Reconstruction_from_Monocular_Videos_via_Multi-view_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tang_GAF_Gaussian_Avatar_CVPR_2025_supplemental.pdf | 2412.10209 | @InProceedings{Tang_2025_CVPR,
author = {Tang, Jiapeng and Davoli, Davide and Kirschstein, Tobias and Schoneveld, Liam and Nie{\ss}ner, Matthias},
title = {GAF: Gaussian Avatar Reconstruction from Monocular Videos via Multi-view Diffusion},
booktitle = {Proceedings of the Computer Vision and Pattern ... | We propose a novel approach for reconstructing animatable 3D Gaussian avatars from monocular videos captured by commodity devices like smartphones. Photorealistic 3D head avatar reconstruction from such recordings is challenging due to limited observations, which leaves unobserved regions under-constrained and can lead... | [
0.027625007554888725,
0.01011032797396183,
-0.0007393298437818885,
0.07347976416349411,
0.024821439757943153,
0.03760562837123871,
0.04226795956492424,
0.003123329719528556,
-0.0196234043687582,
-0.06209029629826546,
0.009425180964171886,
-0.0068607088178396225,
-0.06674139946699142,
0.010... |
50 | Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization | [
"Siyan Dong",
"Shuzhe Wang",
"Shaohui Liu",
"Lulu Cai",
"Qingnan Fan",
"Juho Kannala",
"Yanchao Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Dong_Reloc3r_Large-Scale_Training_of_Relative_Camera_Pose_Regression_for_Generalizable_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Dong_Reloc3r_Large-Scale_Training_of_Relative_Camera_Pose_Regression_for_Generalizable_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Dong_Reloc3r_Large-Scale_Training_CVPR_2025_supplemental.pdf | 2412.08376 | @InProceedings{Dong_2025_CVPR,
author = {Dong, Siyan and Wang, Shuzhe and Liu, Shaohui and Cai, Lulu and Fan, Qingnan and Kannala, Juho and Yang, Yanchao},
title = {Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization},
booktitle =... | Visual localization aims to determine the camera pose of a query image relative to a database of posed images. In recent years, deep neural networks that directly regress camera poses have gained popularity due to their fast inference capabilities. However, existing methods struggle to either generalize well to new sce... | [
0.019694041460752487,
-0.017324242740869522,
0.01834355853497982,
0.03823445737361908,
0.043149545788764954,
0.04024563357234001,
0.008956977166235447,
0.003743454348295927,
-0.05299297720193863,
-0.02783922664821148,
-0.008331751450896263,
-0.033690642565488815,
-0.08695930987596512,
-0.0... |
51 | AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark | [
"Li Lin",
"Santosh Santosh",
"Mingyang Wu",
"Xin Wang",
"Shu Hu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lin_AI-Face_A_Million-Scale_Demographically_Annotated_AI-Generated_Face_Dataset_and_Fairness_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_AI-Face_A_Million-Scale_Demographically_Annotated_AI-Generated_Face_Dataset_and_Fairness_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lin_AI-Face_A_Million-Scale_CVPR_2025_supplemental.pdf | null | @InProceedings{Lin_2025_CVPR,
author = {Lin, Li and Santosh, Santosh and Wu, Mingyang and Wang, Xin and Hu, Shu},
title = {AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conf... | AI-generated faces have enriched human life in areas such as entertainment, education, and art. However, they also pose misuse risks. Therefore, detecting AI-generated faces becomes crucial, yet current detectors show biased performance across different demographic groups. Mitigating biases can be done by designing algorithmic... | [
0.004446485545486212,
-0.012705603614449501,
-0.011912799440324306,
0.052811071276664734,
0.004839286208152771,
0.05525599420070648,
0.02872590720653534,
0.0059766159392893314,
-0.01515692938119173,
-0.04470519721508026,
-0.005408112891018391,
0.0000434244102507364,
-0.07891809940338135,
-... |
52 | Inference-Scale Complexity in ANN-SNN Conversion for High-Performance and Low-Power Applications | [
"Tong Bu",
"Maohua Li",
"Zhaofei Yu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Bu_Inference-Scale_Complexity_in_ANN-SNN_Conversion_for_High-Performance_and_Low-Power_Applications_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Bu_Inference-Scale_Complexity_in_ANN-SNN_Conversion_for_High-Performance_and_Low-Power_Applications_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bu_Inference-Scale_Complexity_in_CVPR_2025_supplemental.pdf | 2409.03368 | @InProceedings{Bu_2025_CVPR,
author = {Bu, Tong and Li, Maohua and Yu, Zhaofei},
title = {Inference-Scale Complexity in ANN-SNN Conversion for High-Performance and Low-Power Applications},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {Jun... | Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs) due to their advantages of fast inference and low power consumption. However, the lack of efficient training algorithms has hindered their widespread adoption. Even efficient ANN-SNN conversion methods necessitat... | [
-0.028192276135087013,
-0.01933233253657818,
0.0030048254411667585,
0.01973234862089157,
0.04171757772564888,
0.041246041655540466,
0.014613303355872631,
-0.0026573766954243183,
-0.049163710325956345,
-0.008250266313552856,
0.019171690568327904,
-0.011589999310672283,
-0.04230383038520813,
... |
53 | Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation | [
"Chengyue Wu",
"Xiaokang Chen",
"Zhiyu Wu",
"Yiyang Ma",
"Xingchao Liu",
"Zizheng Pan",
"Wen Liu",
"Zhenda Xie",
"Xingkai Yu",
"Chong Ruan",
"Ping Luo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_Janus_Decoupling_Visual_Encoding_for_Unified_Multimodal_Understanding_and_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_Janus_Decoupling_Visual_Encoding_for_Unified_Multimodal_Understanding_and_Generation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_Janus_Decoupling_Visual_CVPR_2025_supplemental.pdf | 2410.13848 | @InProceedings{Wu_2025_CVPR,
author = {Wu, Chengyue and Chen, Xiaokang and Wu, Zhiyu and Ma, Yiyang and Liu, Xingchao and Pan, Zizheng and Liu, Wen and Xie, Zhenda and Yu, Xingkai and Ruan, Chong and Luo, Ping},
title = {Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generatio... | We introduce Janus, an autoregressive framework that unifies multimodal understanding and generation. Prior research, such as Chameleon, often relies on a single visual encoder for both tasks. However, due to the differing levels of information granularity required by multimodal understanding and generation, this approa... | [
0.01640518754720688,
-0.014069756492972374,
0.0009751941543072462,
0.03748328611254692,
0.029733549803495407,
0.078761525452137,
0.026394106447696686,
0.03783155977725983,
-0.03764592856168747,
-0.04010622575879097,
-0.025318289175629616,
-0.004531575832515955,
-0.07726745307445526,
-0.001... |
54 | MVDoppler-Pose: Multi-Modal Multi-View mmWave Sensing for Long-Distance Self-Occluded Human Walking Pose Estimation | [
"Jaeho Choi",
"Soheil Hor",
"Shubo Yang",
"Amin Arbabian"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Choi_MVDoppler-Pose_Multi-Modal_Multi-View_mmWave_Sensing_for_Long-Distance_Self-Occluded_Human_Walking_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Choi_MVDoppler-Pose_Multi-Modal_Multi-View_mmWave_Sensing_for_Long-Distance_Self-Occluded_Human_Walking_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Choi_MVDoppler-Pose_Multi-Modal_Multi-View_CVPR_2025_supplemental.pdf | null | @InProceedings{Choi_2025_CVPR,
author = {Choi, Jaeho and Hor, Soheil and Yang, Shubo and Arbabian, Amin},
title = {MVDoppler-Pose: Multi-Modal Multi-View mmWave Sensing for Long-Distance Self-Occluded Human Walking Pose Estimation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognit... | One of the main challenges in reliable camera-based 3D pose estimation for walking subjects is to deal with self-occlusions, especially when using low-resolution cameras or operating at longer distances. In recent years, millimeter-wave (mmWave) radar has emerged as a promising alternative, offering inherent r... | [
0.006704698782414198,
-0.014374401420354843,
-0.0006136624724604189,
0.0017879004590213299,
0.049867864698171616,
0.016971349716186523,
0.045123010873794556,
-0.016357365995645523,
-0.038895681500434875,
-0.06557637453079224,
-0.006538470275700092,
-0.0012861586874350905,
-0.0642610266804695... |
55 | TopNet: Transformer-Efficient Occupancy Prediction Network for Octree-Structured Point Cloud Geometry Compression | [
"Xinjie Wang",
"Yifan Zhang",
"Ting Liu",
"Xinpu Liu",
"Ke Xu",
"Jianwei Wan",
"Yulan Guo",
"Hanyun Wang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_TopNet_Transformer-Efficient_Occupancy_Prediction_Network_for_Octree-Structured_Point_Cloud_Geometry_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_TopNet_Transformer-Efficient_Occupancy_Prediction_Network_for_Octree-Structured_Point_Cloud_Geometry_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_TopNet_Transformer-Efficient_Occupancy_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR,
author = {Wang, Xinjie and Zhang, Yifan and Liu, Ting and Liu, Xinpu and Xu, Ke and Wan, Jianwei and Guo, Yulan and Wang, Hanyun},
title = {TopNet: Transformer-Efficient Occupancy Prediction Network for Octree-Structured Point Cloud Geometry Compression},
booktitle = {P... | Efficient Point Cloud Geometry Compression (PCGC) with lower bits per point (BPP) and a higher peak signal-to-noise ratio (PSNR) is essential for the transmission of large-scale 3D data. Although octree-based entropy models can reduce BPP without introducing geometry distortion, existing CNN-based models struggle wit... | [
-0.018139801919460297,
-0.03719475865364075,
0.02872496470808983,
0.0033826588187366724,
0.025653956457972527,
0.037491295486688614,
0.00019931125279981643,
0.01380064059048891,
-0.043509095907211304,
-0.04344085231423378,
-0.062472179532051086,
-0.05752300098538399,
-0.05387874320149422,
... |
56 | MagicArticulate: Make Your 3D Models Articulation-Ready | [
"Chaoyue Song",
"Jianfeng Zhang",
"Xiu Li",
"Fan Yang",
"Yiwen Chen",
"Zhongcong Xu",
"Jun Hao Liew",
"Xiaoyang Guo",
"Fayao Liu",
"Jiashi Feng",
"Guosheng Lin"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Song_MagicArticulate_Make_Your_3D_Models_Articulation-Ready_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Song_MagicArticulate_Make_Your_3D_Models_Articulation-Ready_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Song_MagicArticulate_Make_Your_CVPR_2025_supplemental.pdf | 2502.12135 | @InProceedings{Song_2025_CVPR,
author = {Song, Chaoyue and Zhang, Jianfeng and Li, Xiu and Yang, Fan and Chen, Yiwen and Xu, Zhongcong and Liew, Jun Hao and Guo, Xiaoyang and Liu, Fayao and Feng, Jiashi and Lin, Guosheng},
title = {MagicArticulate: Make Your 3D Models Articulation-Ready},
booktitle =... | With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lack ... | [
0.00954380352050066,
-0.0054940879344940186,
0.002017849124968052,
0.021388066932559013,
0.027347812429070473,
0.015246083028614521,
0.022796444594860077,
0.010515839792788029,
-0.04718373343348503,
-0.06144024804234505,
-0.01634831354022026,
-0.025204973295331,
-0.06286285817623138,
0.011... |
57 | Gain from Neighbors: Boosting Model Robustness in the Wild via Adversarial Perturbations Toward Neighboring Classes | [
"Zhou Yang",
"Mingtao Feng",
"Tao Huang",
"Fangfang Wu",
"Weisheng Dong",
"Xin Li",
"Guangming Shi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yang_Gain_from_Neighbors_Boosting_Model_Robustness_in_the_Wild_via_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_Gain_from_Neighbors_Boosting_Model_Robustness_in_the_Wild_via_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yang_Gain_from_Neighbors_CVPR_2025_supplemental.pdf | null | @InProceedings{Yang_2025_CVPR,
author = {Yang, Zhou and Feng, Mingtao and Huang, Tao and Wu, Fangfang and Dong, Weisheng and Li, Xin and Shi, Guangming},
title = {Gain from Neighbors: Boosting Model Robustness in the Wild via Adversarial Perturbations Toward Neighboring Classes},
booktitle = {Proceed... | Recent approaches, such as data augmentation, adversarial training, and transfer learning, have shown potential in addressing the issue of performance degradation caused by distributional shifts. However, they typically demand careful design in terms of data or models and lack awareness of the impact of distributional ... | [
0.014245678670704365,
-0.03876575827598572,
0.017857970669865608,
0.03953978046774864,
0.018595844507217407,
0.020481478422880173,
0.023871246725320816,
-0.01742277853190899,
-0.010515431873500347,
-0.047162976115942,
0.010818378999829292,
0.006579115055501461,
-0.08150023967027664,
0.0072... |
58 | Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation | [
"Yudi Shi",
"Shangzhe Di",
"Qirui Chen",
"Weidi Xie"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Shi_Enhancing_Video-LLM_Reasoning_via_Agent-of-Thoughts_Distillation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Shi_Enhancing_Video-LLM_Reasoning_via_Agent-of-Thoughts_Distillation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Shi_Enhancing_Video-LLM_Reasoning_CVPR_2025_supplemental.pdf | 2412.01694 | @InProceedings{Shi_2025_CVPR,
author = {Shi, Yudi and Di, Shangzhe and Chen, Qirui and Xie, Weidi},
title = {Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year ... | This paper tackles the problem of video question answering (VideoQA), a task that often requires multi-step reasoning and a profound understanding of spatial-temporal dynamics. While large video-language models perform well on benchmarks, they often lack explainability and spatial-temporal grounding. In this paper, we ... | [
0.02724309451878071,
0.00038169772597029805,
-0.00018779098172672093,
0.07527045905590057,
0.03992511332035065,
0.0026998971588909626,
0.011809615418314934,
0.023310327902436256,
0.00013201094407122582,
-0.005627017933875322,
-0.032760992646217346,
0.04278793931007385,
-0.0680343434214592,
... |
59 | De^2Gaze: Deformable and Decoupled Representation Learning for 3D Gaze Estimation | [
"Yunfeng Xiao",
"Xiaowei Bai",
"Baojun Chen",
"Hao Su",
"Hao He",
"Liang Xie",
"Erwei Yin"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xiao_De2Gaze_Deformable_and_Decoupled_Representation_Learning_for_3D_Gaze_Estimation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiao_De2Gaze_Deformable_and_Decoupled_Representation_Learning_for_3D_Gaze_Estimation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xiao_De2Gaze_Deformable_and_CVPR_2025_supplemental.pdf | null | @InProceedings{Xiao_2025_CVPR,
author = {Xiao, Yunfeng and Bai, Xiaowei and Chen, Baojun and Su, Hao and He, Hao and Xie, Liang and Yin, Erwei},
title = {De{\textasciicircum}2Gaze: Deformable and Decoupled Representation Learning for 3D Gaze Estimation},
booktitle = {Proceedings of the Computer Visio... | 3D gaze estimation is a challenging task due to two main issues. First, existing methods focus on analyzing dense features (e.g., large pixel regions), which are sensitive to local noise (e.g., light spots, blurs) and result in increased computational complexity. Second, an eyeball model can correspond to multiple gaze di... | [
0.007640424650162458,
0.04026789963245392,
0.013352423906326294,
0.009399335831403732,
0.03129519894719124,
0.05100063979625702,
0.014104481786489487,
0.03085966408252716,
-0.0028275749646127224,
-0.03748663514852524,
-0.013061040081083775,
0.015558301471173763,
-0.09086049348115921,
-0.00... |
60 | ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning | [
"David Junhao Zhang",
"Roni Paiss",
"Shiran Zada",
"Nikhil Karnad",
"David E. Jacobs",
"Yael Pritch",
"Inbar Mosseri",
"Mike Zheng Shou",
"Neal Wadhwa",
"Nataniel Ruiz"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_ReCapture_Generative_Video_Camera_Controls_for_User-Provided_Videos_using_Masked_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_ReCapture_Generative_Video_Camera_Controls_for_User-Provided_Videos_using_Masked_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_ReCapture_Generative_Video_CVPR_2025_supplemental.zip | 2411.05003 | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, David Junhao and Paiss, Roni and Zada, Shiran and Karnad, Nikhil and Jacobs, David E. and Pritch, Yael and Mosseri, Inbar and Shou, Mike Zheng and Wadhwa, Neal and Ruiz, Nataniel},
title = {ReCapture: Generative Video Camera Controls for User-Provided Vide... | Recently, breakthroughs in video modeling have allowed for controllable camera trajectories in generated videos. However, these methods cannot be directly applied to user-provided videos that are not generated by a video model. In this paper, we present ReCapture, a method for generating new videos with novel camera tr... | [
0.020343899726867676,
-0.021575819700956345,
-0.022008085623383522,
0.06508239358663559,
0.059257932007312775,
0.014010768383741379,
0.03907828778028488,
0.007397472392767668,
-0.047642163932323456,
-0.05924355983734131,
-0.029893159866333008,
-0.026768533512949944,
-0.05567536875605583,
-... |
61 | M^3-VOS: Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation | [
"Zixuan Chen",
"Jiaxin Li",
"Junxuan Liang",
"Liming Tan",
"Yejie Guo",
"Cewu Lu",
"Yong-Lu Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_M3-VOS_Multi-Phase_Multi-Transition_and_Multi-Scenery_Video_Object_Segmentation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_M3-VOS_Multi-Phase_Multi-Transition_and_Multi-Scenery_Video_Object_Segmentation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_M3-VOS_Multi-Phase_Multi-Transition_CVPR_2025_supplemental.zip | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Zixuan and Li, Jiaxin and Liang, Junxuan and Tan, Liming and Guo, Yejie and Lu, Cewu and Li, Yong-Lu},
title = {M{\textasciicircum}3-VOS: Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation},
booktitle = {Proceedings of the Compute... | Intelligent robots need to interact with diverse objects across various environments. The appearance and state of objects frequently undergo complex transformations depending on the object properties, e.g., phase transitions. However, in the vision community, segmenting dynamic objects with phase transitions is overloo... | [
-0.0012639358174055815,
0.005951670929789543,
0.010783510282635689,
0.025275561958551407,
0.022351721301674843,
0.029937366023659706,
0.01704438589513302,
0.02535860799252987,
-0.0713927298784256,
-0.04929544776678085,
-0.05406632646918297,
-0.008277936838567257,
-0.05576568841934204,
0.02... |
62 | Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning | [
"Huiyi Wang",
"Haodong Lu",
"Lina Yao",
"Dong Gong"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Self-Expansion_of_Pre-trained_Models_with_Mixture_of_Adapters_for_Continual_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Self-Expansion_of_Pre-trained_Models_with_Mixture_of_Adapters_for_Continual_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Self-Expansion_of_Pre-trained_CVPR_2025_supplemental.pdf | 2403.18886 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Huiyi and Lu, Haodong and Yao, Lina and Gong, Dong},
title = {Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | Continual learning (CL) aims to continually accumulate knowledge from a non-stationary data stream without catastrophic forgetting of learned knowledge, requiring a balance between stability and adaptability. Relying on the generalizable representation in pre-trained models (PTMs), PTM-based CL methods perform effectiv... | [
0.00020193900854792446,
-0.029755201190710068,
-0.012793464586138725,
0.004776672460138798,
0.050198499113321304,
0.045475300401449203,
0.015212739817798138,
0.0005930159240961075,
-0.04616151377558708,
-0.023264052346348763,
-0.005789549555629492,
0.011646094731986523,
-0.06735510379076004,... |
63 | Dual Prompting Image Restoration with Diffusion Transformers | [
"Dehong Kong",
"Fan Li",
"Zhixin Wang",
"Jiaqi Xu",
"Renjing Pei",
"Wenbo Li",
"WenQi Ren"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kong_Dual_Prompting_Image_Restoration_with_Diffusion_Transformers_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kong_Dual_Prompting_Image_Restoration_with_Diffusion_Transformers_CVPR_2025_paper.pdf | null | 2504.17825 | @InProceedings{Kong_2025_CVPR,
author = {Kong, Dehong and Li, Fan and Wang, Zhixin and Xu, Jiaqi and Pei, Renjing and Li, Wenbo and Ren, WenQi},
title = {Dual Prompting Image Restoration with Diffusion Transformers},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (... | Recent state-of-the-art image restoration methods mostly adopt latent diffusion models with U-Net backbones, yet still face challenges in achieving high-quality restoration due to their limited capabilities. Diffusion transformers (DiTs), like SD3, are emerging as a promising alternative because of their better quali... | [
-0.013553302735090256,
-0.05885133892297745,
-0.012272958643734455,
0.05272303521633148,
0.040787532925605774,
0.024715935811400414,
-0.007425708696246147,
-0.0155845507979393,
-0.008872530423104763,
-0.0646057277917862,
-0.006116767413914204,
0.0020485222339630127,
-0.004827028140425682,
... |
64 | Brain-Inspired Spiking Neural Networks for Energy-Efficient Object Detection | [
"Ziqi Li",
"Tao Gao",
"Yisheng An",
"Ting Chen",
"Jing Zhang",
"Yuanbo Wen",
"Mengkun Liu",
"Qianxi Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_Brain-Inspired_Spiking_Neural_Networks_for_Energy-Efficient_Object_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_Brain-Inspired_Spiking_Neural_Networks_for_Energy-Efficient_Object_Detection_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_Brain-Inspired_Spiking_Neural_CVPR_2025_supplemental.pdf | null | @InProceedings{Li_2025_CVPR,
author = {Li, Ziqi and Gao, Tao and An, Yisheng and Chen, Ting and Zhang, Jing and Wen, Yuanbo and Liu, Mengkun and Zhang, Qianxi},
title = {Brain-Inspired Spiking Neural Networks for Energy-Efficient Object Detection},
booktitle = {Proceedings of the Computer Vision and ... | Brain-inspired spiking neural networks (SNNs) have the capability of energy-efficient processing of temporal information. However, leveraging the rich dynamic characteristics of SNNs and prior works in artificial neural networks (ANNs) to construct an effective object detection model for visual tasks remains an open qu... | [
0.001641112845391035,
-0.0035660790745168924,
0.025937486439943314,
0.009535153396427631,
0.03284181281924248,
0.03828798234462738,
0.0010590276215225458,
0.038786690682172775,
-0.06828229129314423,
-0.04788708686828613,
-0.0012288513826206326,
-0.01867489702999592,
-0.06249183788895607,
0... |
65 | Medusa: A Multi-Scale High-order Contrastive Dual-Diffusion Approach for Multi-View Clustering | [
"Liang Chen",
"Zhe Xue",
"Yawen Li",
"Meiyu Liang",
"Yan Wang",
"Anton van den Hengel",
"Yuankai Qi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Medusa_A_Multi-Scale_High-order_Contrastive_Dual-Diffusion_Approach_for_Multi-View_Clustering_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Medusa_A_Multi-Scale_High-order_Contrastive_Dual-Diffusion_Approach_for_Multi-View_Clustering_CVPR_2025_paper.pdf | null | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Liang and Xue, Zhe and Li, Yawen and Liang, Meiyu and Wang, Yan and van den Hengel, Anton and Qi, Yuankai},
title = {Medusa: A Multi-Scale High-order Contrastive Dual-Diffusion Approach for Multi-View Clustering},
booktitle = {Proceedings of the Computer... | Deep multi-view clustering methods utilize information from multiple views to achieve enhanced clustering results and have gained increasing popularity in recent years. Most existing methods typically focus on either inter-view or intra-view relationships, aiming to align information across views or analyze structural ... | [
-0.004163614474236965,
-0.025907548144459724,
0.009566904045641422,
0.03675967454910278,
0.035440754145383835,
0.04686080291867256,
0.010527830570936203,
-0.019524885341525078,
-0.009226532652974129,
-0.04457351192831993,
0.02565653622150421,
0.00788242369890213,
-0.062118709087371826,
0.0... |
66 | MambaOut: Do We Really Need Mamba for Vision? | [
"Weihao Yu",
"Xinchao Wang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yu_MambaOut_Do_We_Really_Need_Mamba_for_Vision_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yu_MambaOut_Do_We_Really_Need_Mamba_for_Vision_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yu_MambaOut_Do_We_CVPR_2025_supplemental.pdf | 2405.07992 | @InProceedings{Yu_2025_CVPR,
author = {Yu, Weihao and Wang, Xinchao},
title = {MambaOut: Do We Really Need Mamba for Vision?},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {4484-4496}
} | Mamba, an architecture with an RNN-like token mixer based on the state space model (SSM), was recently introduced to address the quadratic complexity of the attention mechanism and subsequently applied to vision tasks. Nevertheless, the performance of Mamba for vision is often underwhelming when compared with convolutional and atte... | [
0.010675174184143543,
-0.021163979545235634,
0.004626368172466755,
0.023207450285553932,
0.025984274223446846,
0.03924655169248581,
0.021306747570633888,
0.03083486668765545,
-0.0579378642141819,
-0.03761342167854309,
-0.013505709357559681,
0.0029350074473768473,
-0.0665353387594223,
-0.02... |
67 | Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment | [
"Jiayi Guo",
"Junhao Zhao",
"Chaoqun Du",
"Yulin Wang",
"Chunjiang Ge",
"Zanlin Ni",
"Shiji Song",
"Humphrey Shi",
"Gao Huang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Guo_Everything_to_the_Synthetic_Diffusion-driven_Test-time_Adaptation_via_Synthetic-Domain_Alignment_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Guo_Everything_to_the_Synthetic_Diffusion-driven_Test-time_Adaptation_via_Synthetic-Domain_Alignment_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Guo_Everything_to_the_CVPR_2025_supplemental.pdf | 2406.04295 | @InProceedings{Guo_2025_CVPR,
author = {Guo, Jiayi and Zhao, Junhao and Du, Chaoqun and Wang, Yulin and Ge, Chunjiang and Ni, Zanlin and Song, Shiji and Shi, Humphrey and Huang, Gao},
title = {Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment},
booktitl... | Test-time adaptation (TTA) aims to improve the performance of source-domain pre-trained models on previously unseen, shifted target domains. Traditional TTA methods primarily adapt model weights based on target data streams, making model performance sensitive to the amount and order of target data. The recently propose... | [
0.01541484147310257,
-0.017349692061543465,
-0.018257981166243553,
0.06159516051411629,
0.044978223741054535,
0.031523801386356354,
0.05299121513962746,
0.015854448080062866,
0.026313142850995064,
-0.021836770698428154,
0.015435421839356422,
0.01124002318829298,
-0.06234080344438553,
0.009... |
68 | Multi-Granularity Class Prototype Topology Distillation for Class-Incremental Source-Free Unsupervised Domain Adaptation | [
"Peihua Deng",
"Jiehua Zhang",
"Xichun Sheng",
"Chenggang Yan",
"Yaoqi Sun",
"Ying Fu",
"Liang Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Deng_Multi-Granularity_Class_Prototype_Topology_Distillation_for_Class-Incremental_Source-Free_Unsupervised_Domain_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Deng_Multi-Granularity_Class_Prototype_Topology_Distillation_for_Class-Incremental_Source-Free_Unsupervised_Domain_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Deng_Multi-Granularity_Class_Prototype_CVPR_2025_supplemental.pdf | 2411.16064 | @InProceedings{Deng_2025_CVPR,
author = {Deng, Peihua and Zhang, Jiehua and Sheng, Xichun and Yan, Chenggang and Sun, Yaoqi and Fu, Ying and Li, Liang},
title = {Multi-Granularity Class Prototype Topology Distillation for Class-Incremental Source-Free Unsupervised Domain Adaptation},
booktitle = {Pro... | This paper explores the Class-Incremental Source-Free Unsupervised Domain Adaptation (CI-SFUDA) problem, where the unlabeled target data come incrementally without access to labeled source instances. This problem poses two challenges: the interference of similar source-class knowledge in target-class representation lea... | [
-0.00567543413490057,
-0.028005409985780716,
-0.0105837257578969,
0.04387392848730087,
0.010892662219703197,
0.028176143765449524,
0.031911157071590424,
-0.017333408817648888,
0.008654281497001648,
-0.020053287968039513,
-0.018833696842193604,
0.00691253412514925,
-0.08141981065273285,
0.0... |
69 | DepthCues: Evaluating Monocular Depth Perception in Large Vision Models | [
"Duolikun Danier",
"Mehmet Aygün",
"Changjian Li",
"Hakan Bilen",
"Oisin Mac Aodha"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Danier_DepthCues_Evaluating_Monocular_Depth_Perception_in_Large_Vision_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Danier_DepthCues_Evaluating_Monocular_Depth_Perception_in_Large_Vision_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Danier_DepthCues_Evaluating_Monocular_CVPR_2025_supplemental.pdf | null | @InProceedings{Danier_2025_CVPR,
author = {Danier, Duolikun and Ayg\"un, Mehmet and Li, Changjian and Bilen, Hakan and Mac Aodha, Oisin},
title = {DepthCues: Evaluating Monocular Depth Perception in Large Vision Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conferen... | Large-scale pre-trained vision models are becoming increasingly prevalent, offering expressive and generalizable visual representations that benefit various downstream tasks. Recent studies on the emergent properties of these models have revealed their high-level geometric understanding, in particular in the context of... | [
0.004575603175908327,
0.015651224181056023,
-0.012997324578464031,
0.02708282507956028,
0.017141634598374367,
0.0015496766427531838,
0.028700901195406914,
0.05109817907214165,
-0.03313431143760681,
-0.04819091409444809,
-0.012405223213136196,
0.004931566771119833,
-0.07583437860012054,
0.0... |
70 | A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition | [
"Duosheng Chen",
"Shihao Zhou",
"Jinshan Pan",
"Jinglei Shi",
"Lishen Qu",
"Jufeng Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_A_Polarization-Aided_Transformer_for_Image_Deblurring_via_Motion_Vector_Decomposition_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_A_Polarization-Aided_Transformer_for_Image_Deblurring_via_Motion_Vector_Decomposition_CVPR_2025_paper.pdf | null | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Duosheng and Zhou, Shihao and Pan, Jinshan and Shi, Jinglei and Qu, Lishen and Yang, Jufeng},
title = {A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition},
booktitle = {Proceedings of the Computer Vision and Pattern Rec... | Effectively leveraging motion information is crucial for the image deblurring task. Existing methods typically build deep-learning models to restore a clean image by estimating blur patterns over the entire movement. This suggests that the blur caused by rotational motion components is processed together with the trans... | [
0.010483021847903728,
-0.0007492704316973686,
0.01887620985507965,
0.05272001773118973,
0.03567837178707123,
0.0022674091160297394,
0.012237310409545898,
-0.0007627792074345052,
-0.039751455187797546,
-0.03812168538570404,
-0.04316301271319389,
0.002475221874192357,
-0.04936740919947624,
0... |
71 | SpecTRe-GS: Modeling Highly Specular Surfaces with Reflected Nearby Objects by Tracing Rays in 3D Gaussian Splatting | [
"Jiajun Tang",
"Fan Fei",
"Zhihao Li",
"Xiao Tang",
"Shiyong Liu",
"Youyu Chen",
"Binxiao Huang",
"Zhenyu Chen",
"Xiaofei Wu",
"Boxin Shi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Tang_SpecTRe-GS_Modeling_Highly_Specular_Surfaces_with_Reflected_Nearby_Objects_by_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Tang_SpecTRe-GS_Modeling_Highly_Specular_Surfaces_with_Reflected_Nearby_Objects_by_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tang_SpecTRe-GS_Modeling_Highly_CVPR_2025_supplemental.pdf | null | @InProceedings{Tang_2025_CVPR,
author = {Tang, Jiajun and Fei, Fan and Li, Zhihao and Tang, Xiao and Liu, Shiyong and Chen, Youyu and Huang, Binxiao and Chen, Zhenyu and Wu, Xiaofei and Shi, Boxin},
title = {SpecTRe-GS: Modeling Highly Specular Surfaces with Reflected Nearby Objects by Tracing Rays in 3D... | 3D Gaussian Splatting (3DGS), a recently developed multi-view 3D reconstruction technique, has shown significant advantages in real-time rendering and explicit editing. However, 3DGS encounters challenges in the accurate modeling of both high-frequency view-dependent appearances and global illumination effects, including... | [
0.016315316781401634,
0.0004173048073425889,
0.055468518286943436,
0.030228182673454285,
0.024463476613163948,
0.013645111583173275,
0.0214544590562582,
0.013558763079345226,
-0.013674191199243069,
-0.09040196239948273,
-0.018412664532661438,
-0.01463665533810854,
-0.03807218745350838,
0.0... |
72 | Seurat: From Moving Points to Depth | [
"Seokju Cho",
"Jiahui Huang",
"Seungryong Kim",
"Joon-Young Lee"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Cho_Seurat_From_Moving_Points_to_Depth_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Cho_Seurat_From_Moving_Points_to_Depth_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Cho_Seurat_From_Moving_CVPR_2025_supplemental.pdf | 2504.14687 | @InProceedings{Cho_2025_CVPR,
author = {Cho, Seokju and Huang, Jiahui and Kim, Seungryong and Lee, Joon-Young},
title = {Seurat: From Moving Points to Depth},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
... | Accurate depth estimation from monocular videos remains challenging due to ambiguities inherent in single-view geometry, as crucial depth cues like stereopsis are absent. However, humans often perceive relative depth intuitively by observing variations in the size and spacing of objects as they move. Inspired by this, ... | [
0.03205997124314308,
0.012741771526634693,
0.006856644060462713,
0.02775312028825283,
0.012814121320843697,
0.03908213600516319,
0.04612046852707863,
0.0435342863202095,
-0.031785935163497925,
-0.047615453600883484,
0.0009023458696901798,
-0.025068145245313644,
-0.03818325698375702,
-0.002... |
73 | AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting | [
"Chung-Ho Wu",
"Yang-Jung Chen",
"Ying-Huan Chen",
"Jie-Ying Lee",
"Bo-Hsu Ke",
"Chun-Wei Tuan Mu",
"Yi-Chuan Huang",
"Chin-Yang Lin",
"Min-Hung Chen",
"Yen-Yu Lin",
"Yu-Lun Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_AuraFusion360_Augmented_Unseen_Region_Alignment_for_Reference-based_360deg_Unbounded_Scene_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_AuraFusion360_Augmented_Unseen_Region_Alignment_for_Reference-based_360deg_Unbounded_Scene_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_AuraFusion360_Augmented_Unseen_CVPR_2025_supplemental.pdf | null | @InProceedings{Wu_2025_CVPR,
author = {Wu, Chung-Ho and Chen, Yang-Jung and Chen, Ying-Huan and Lee, Jie-Ying and Ke, Bo-Hsu and Mu, Chun-Wei Tuan and Huang, Yi-Chuan and Lin, Chin-Yang and Chen, Min-Hung and Lin, Yen-Yu and Liu, Yu-Lun},
title = {AuraFusion360: Augmented Unseen Region Alignment for Refe... | Three-dimensional scene inpainting is crucial for applications from virtual reality to architectural visualization, yet existing methods struggle with view consistency and geometric accuracy in 360° unbounded scenes. We present AuraFusion360, a novel reference-based method that enables high-quality object removal and... | [
0.050478700548410416,
0.011199361644685268,
0.030255088582634926,
0.024778960272669792,
0.038233887404203415,
0.042631056159734726,
0.039475832134485245,
0.019485289230942726,
-0.021158551797270775,
-0.07683538645505905,
-0.030918963253498077,
-0.00016147762653417885,
-0.05936659127473831,
... |
74 | Language-Guided Image Tokenization for Generation | [
"Kaiwen Zha",
"Lijun Yu",
"Alireza Fathi",
"David A. Ross",
"Cordelia Schmid",
"Dina Katabi",
"Xiuye Gu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zha_Language-Guided_Image_Tokenization_for_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zha_Language-Guided_Image_Tokenization_for_Generation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zha_Language-Guided_Image_Tokenization_CVPR_2025_supplemental.pdf | 2412.05796 | @InProceedings{Zha_2025_CVPR,
author = {Zha, Kaiwen and Yu, Lijun and Fathi, Alireza and Ross, David A. and Schmid, Cordelia and Katabi, Dina and Gu, Xiuye},
title = {Language-Guided Image Tokenization for Generation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference... | Image tokenization, the process of transforming raw image pixels into a compact low-dimensional latent representation, has proven crucial for scalable and efficient image generation. However, mainstream image tokenization methods generally have limited compression rates, making high-resolution image generation computat... | [
0.012206404469907284,
-0.05022415146231651,
0.006917441263794899,
0.05290317162871361,
0.027701361104846,
0.03934725746512413,
-0.013509776443243027,
-0.0009210198186337948,
-0.022111304104328156,
-0.01402780506759882,
-0.036940935999155045,
-0.028216036036610603,
-0.031826432794332504,
-0... |
75 | Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models | [
"Qirui Jiao",
"Daoyuan Chen",
"Yilun Huang",
"Bolin Ding",
"Yaliang Li",
"Ying Shen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Jiao_Img-Diff_Contrastive_Data_Synthesis_for_Multimodal_Large_Language_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Jiao_Img-Diff_Contrastive_Data_Synthesis_for_Multimodal_Large_Language_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jiao_Img-Diff_Contrastive_Data_CVPR_2025_supplemental.pdf | null | @InProceedings{Jiao_2025_CVPR,
author = {Jiao, Qirui and Chen, Daoyuan and Huang, Yilun and Ding, Bolin and Li, Yaliang and Shen, Ying},
title = {Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confere... | High-performance Multimodal Large Language Models (MLLMs) rely heavily on data quality. This study introduces a novel data synthesis method, leveraging insights from contrastive learning and image difference captioning to enhance fine-grained image recognition in MLLMs. By analyzing object differences in detailed regio... | [
0.024326108396053314,
-0.02954462356865406,
0.0007448691758327186,
0.06394011527299881,
0.03781091794371605,
0.008116090670228004,
0.03208402171730995,
0.014930998906493187,
-0.03134947642683983,
-0.0040732454508543015,
-0.015146801248192787,
0.001324910786934197,
-0.07573924958705902,
0.0... |
76 | CocoER: Aligning Multi-Level Feature by Competition and Coordination for Emotion Recognition | [
"Xuli Shen",
"Hua Cai",
"Weilin Shen",
"Qing Xu",
"Dingding Yu",
"Weifeng Ge",
"Xiangyang Xue"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Shen_CocoER_Aligning_Multi-Level_Feature_by__Competition_and_Coordination_for_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Shen_CocoER_Aligning_Multi-Level_Feature_by__Competition_and_Coordination_for_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Shen_CocoER_Aligning_Multi-Level_CVPR_2025_supplemental.pdf | null | @InProceedings{Shen_2025_CVPR,
author = {Shen, Xuli and Cai, Hua and Shen, Weilin and Xu, Qing and Yu, Dingding and Ge, Weifeng and Xue, Xiangyang},
title = {CocoER: Aligning Multi-Level Feature by Competition and Coordination for Emotion Recognition},
booktitle = {Proceedings of the Computer Vision... | With the explosion of human-machine interaction, emotion recognition has regained attention. Previous works focus on improving visual feature fusion and reasoning across multiple image levels. Although it is non-trivial to deduce a person's emotion by integrating multi-level features (head, body, and context), the emo... | [
-0.023833081126213074,
0.0034629032015800476,
-0.00403621606528759,
0.04876360297203064,
-0.010613687336444855,
0.02997242659330368,
0.015987234190106392,
0.010254113003611565,
-0.017265666276216507,
-0.03436272591352463,
-0.02134121023118496,
0.005757868755608797,
-0.04530453681945801,
-0... |
77 | Hyperbolic Uncertainty-Aware Few-Shot Incremental Point Cloud Segmentation | [
"Tanuj Sur",
"Samrat Mukherjee",
"Kaizer Rahaman",
"Subhasis Chaudhuri",
"Muhammad Haris Khan",
"Biplab Banerjee"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Sur_Hyperbolic_Uncertainty-Aware_Few-Shot_Incremental_Point_Cloud_Segmentation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Sur_Hyperbolic_Uncertainty-Aware_Few-Shot_Incremental_Point_Cloud_Segmentation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sur_Hyperbolic_Uncertainty-Aware_Few-Shot_CVPR_2025_supplemental.pdf | null | @InProceedings{Sur_2025_CVPR,
author = {Sur, Tanuj and Mukherjee, Samrat and Rahaman, Kaizer and Chaudhuri, Subhasis and Khan, Muhammad Haris and Banerjee, Biplab},
title = {Hyperbolic Uncertainty-Aware Few-Shot Incremental Point Cloud Segmentation},
booktitle = {Proceedings of the Computer Vision an... | 3D point cloud segmentation is essential across a range of applications; however, conventional methods often struggle in evolving environments, particularly when tasked with identifying novel categories under limited supervision. Few-Shot Learning (FSL) and Class Incremental Learning (CIL) have been adapted previously ... | [
0.011166906915605068,
-0.014900228008627892,
0.019774513319134712,
0.04665432125329971,
0.019326861947774887,
0.04534561559557915,
0.019200023263692856,
-0.004272794816643,
-0.030703866854310036,
-0.052610836923122406,
-0.042449429631233215,
-0.018289774656295776,
-0.053273364901542664,
0.... |
78 | Enhancing Creative Generation on Stable Diffusion-based Models | [
"Jiyeon Han",
"Dahee Kwon",
"Gayoung Lee",
"Junho Kim",
"Jaesik Choi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Han_Enhancing_Creative_Generation_on_Stable_Diffusion-based_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Han_Enhancing_Creative_Generation_on_Stable_Diffusion-based_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Han_Enhancing_Creative_Generation_CVPR_2025_supplemental.zip | 2503.23538 | @InProceedings{Han_2025_CVPR,
author = {Han, Jiyeon and Kwon, Dahee and Lee, Gayoung and Kim, Junho and Choi, Jaesik},
title = {Enhancing Creative Generation on Stable Diffusion-based Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = ... | Recent text-to-image generative models, particularly Stable Diffusion and its distilled variants, have achieved impressive fidelity and strong text-image alignment. However, their creative generation capacity remains limited, as simply adding the term "creative" to prompts often fails to yield genuinely creative result... | [
0.018029624596238136,
-0.009002653881907463,
-0.008579913526773453,
0.07002025842666626,
0.0560106597840786,
0.01358572393655777,
0.005707849748432636,
0.0033646300435066223,
-0.029882879927754402,
-0.06283071637153625,
-0.008329573087394238,
-0.022257886826992035,
-0.05767375975847244,
0.... |
79 | The Devil is in the Prompts: Retrieval-Augmented Prompt Optimization for Text-to-Video Generation | [
"Bingjie Gao",
"Xinyu Gao",
"Xiaoxue Wu",
"Yujie Zhou",
"Yu Qiao",
"Li Niu",
"Xinyuan Chen",
"Yaohui Wang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Gao_The_Devil_is_in_the_Prompts_Retrieval-Augmented_Prompt_Optimization_for_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Gao_The_Devil_is_in_the_Prompts_Retrieval-Augmented_Prompt_Optimization_for_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Gao_The_Devil_is_CVPR_2025_supplemental.zip | 2504.11739 | @InProceedings{Gao_2025_CVPR,
author = {Gao, Bingjie and Gao, Xinyu and Wu, Xiaoxue and Zhou, Yujie and Qiao, Yu and Niu, Li and Chen, Xinyuan and Wang, Yaohui},
title = {The Devil is in the Prompts: Retrieval-Augmented Prompt Optimization for Text-to-Video Generation},
booktitle = {Proceedings of th... | The evolution of Text-to-video (T2V) generative models, trained on large-scale datasets, has been marked by significant progress. However, the sensitivity of T2V generative models to input prompts highlights the critical role of prompt design in influencing generative outcomes. Prior research has predominantly relied o... | [
-0.015374570153653622,
-0.030830414965748787,
-0.009711979888379574,
0.0875265896320343,
0.03010028786957264,
0.006235525477677584,
0.010917610488831997,
0.03518838807940483,
-0.02400808595120907,
0.0016591812018305063,
-0.05504819378256798,
0.02813565731048584,
-0.054273467510938644,
-0.0... |
80 | Denoising Functional Maps: Diffusion Models for Shape Correspondence | [
"Aleksei Zhuravlev",
"Zorah Lähner",
"Vladislav Golyanik"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhuravlev_Denoising_Functional_Maps_Diffusion_Models_for_Shape_Correspondence_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhuravlev_Denoising_Functional_Maps_Diffusion_Models_for_Shape_Correspondence_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhuravlev_Denoising_Functional_Maps_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhuravlev_2025_CVPR,
author = {Zhuravlev, Aleksei and L\"ahner, Zorah and Golyanik, Vladislav},
title = {Denoising Functional Maps: Diffusion Models for Shape Correspondence},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {J... | Estimating correspondences between pairs of deformable shapes remains a challenging problem. Despite substantial progress, existing methods lack broad generalization capabilities and require category-specific training data. To address these limitations, we propose a fundamentally new approach to shape correspondence ba... | [
-0.0016571045853197575,
0.006435322109609842,
-0.005519780330359936,
0.027100669220089912,
0.045091938227415085,
0.09314737468957901,
0.012787151150405407,
0.0039165811613202095,
-0.0118643743917346,
-0.09757746756076813,
-0.022367361932992935,
-0.017507515847682953,
-0.0505049005150795,
0... |
81 | ProReflow: Progressive Reflow with Decomposed Velocity | [
"Lei Ke",
"Haohang Xu",
"Xuefei Ning",
"Yu Li",
"Jiajun Li",
"Haoling Li",
"Yuxuan Lin",
"Dongsheng Jiang",
"Yujiu Yang",
"Linfeng Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ke_ProReflow_Progressive_Reflow_with_Decomposed_Velocity_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ke_ProReflow_Progressive_Reflow_with_Decomposed_Velocity_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ke_ProReflow_Progressive_Reflow_CVPR_2025_supplemental.pdf | 2503.04824 | @InProceedings{Ke_2025_CVPR,
author = {Ke, Lei and Xu, Haohang and Ning, Xuefei and Li, Yu and Li, Jiajun and Li, Haoling and Lin, Yuxuan and Jiang, Dongsheng and Yang, Yujiu and Zhang, Linfeng},
title = {ProReflow: Progressive Reflow with Decomposed Velocity},
booktitle = {Proceedings of the Compute... | Diffusion models have achieved significant progress in both image and video generation while still suffering from substantial computational costs. As an effective solution, rectified flow aims to rectify the diffusion process of diffusion models into a straight line for few-step and even one-step generation. However, in this pa... | [
0.006833280902355909,
-0.023216966539621353,
0.010756092146039009,
0.04373548924922943,
0.07031012326478958,
0.050312865525484085,
0.028605841100215912,
-0.006094225682318211,
-0.0017012913012877107,
-0.08073858916759491,
0.02018039859831333,
-0.04638214409351349,
-0.043016061186790466,
0.... |
82 | DnLUT: Ultra-Efficient Color Image Denoising via Channel-Aware Lookup Tables | [
"Sidi Yang",
"Binxiao Huang",
"Yulun Zhang",
"Dahai Yu",
"Yujiu Yang",
"Ngai Wong"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yang_DnLUT_Ultra-Efficient_Color_Image_Denoising_via_Channel-Aware_Lookup_Tables_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_DnLUT_Ultra-Efficient_Color_Image_Denoising_via_Channel-Aware_Lookup_Tables_CVPR_2025_paper.pdf | null | 2503.15931 | @InProceedings{Yang_2025_CVPR,
author = {Yang, Sidi and Huang, Binxiao and Zhang, Yulun and Yu, Dahai and Yang, Yujiu and Wong, Ngai},
title = {DnLUT: Ultra-Efficient Color Image Denoising via Channel-Aware Lookup Tables},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confer... | While deep neural networks have revolutionized image denoising capabilities, their deployment on edge devices remains challenging due to substantial computational and memory requirements. To address this, we present DnLUT, an ultra-efficient lookup table-based framework that achieves high-quality color image denoising with... | [
0.008906462229788303,
-0.024193452671170235,
-0.01448078453540802,
0.057813700288534164,
0.04317830875515938,
0.04752199351787567,
0.01103863213211298,
-0.003081475617364049,
-0.007142531685531139,
-0.07545053213834763,
-0.02387182228267193,
-0.019808601588010788,
-0.06480597704648972,
-0.... |
83 | Devil is in the Detail: Towards Injecting Fine Details of Image Prompt in Image Generation via Conflict-free Guidance and Stratified Attention | [
"Kyungmin Jo",
"Jooyeol Yun",
"Jaegul Choo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Jo_Devil_is_in_the_Detail_Towards_Injecting_Fine_Details_of_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Jo_Devil_is_in_the_Detail_Towards_Injecting_Fine_Details_of_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jo_Devil_is_in_CVPR_2025_supplemental.pdf | null | @InProceedings{Jo_2025_CVPR,
author = {Jo, Kyungmin and Yun, Jooyeol and Choo, Jaegul},
title = {Devil is in the Detail: Towards Injecting Fine Details of Image Prompt in Image Generation via Conflict-free Guidance and Stratified Attention},
booktitle = {Proceedings of the Computer Vision and Pattern... | While large-scale text-to-image diffusion models enable the generation of high-quality, diverse images from text prompts, these prompts struggle to capture intricate details, such as textures, preventing the user's intent from being reflected. This limitation has led to efforts to generate images conditioned on user-prov... | [
-0.00470968009904027,
-0.02973797172307968,
-0.00844237394630909,
0.050810664892196655,
0.011804278939962387,
0.021675769239664078,
0.023255538195371628,
0.0051270294934511185,
-0.042624231427907944,
-0.03948161378502846,
-0.07838277518749237,
0.012711445800960064,
-0.04950668662786484,
0.... |
84 | D^3-Human: Dynamic Disentangled Digital Human from Monocular Video | [
"Honghu Chen",
"Bo Peng",
"Yunfan Tao",
"Juyong Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_D3-Human_Dynamic_Disentangled_Digital_Human_from_Monocular_Video_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_D3-Human_Dynamic_Disentangled_Digital_Human_from_Monocular_Video_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_D3-Human_Dynamic_Disentangled_CVPR_2025_supplemental.pdf | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Honghu and Peng, Bo and Tao, Yunfan and Zhang, Juyong},
title = {D{\textasciicircum}3-Human: Dynamic Disentangled Digital Human from Monocular Video},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | We introduce D^3-Human, a method for reconstructing Dynamic Disentangled Digital Human geometry from monocular videos. Past monocular video human reconstruction primarily focuses on reconstructing undecoupled clothed human bodies or only reconstructing clothing, making it difficult to apply directly in ap... | [
0.007843228057026863,
0.003938536625355482,
-0.0250988882035017,
0.05673399940133095,
0.04240642860531807,
0.011985321529209614,
0.030898524448275566,
0.01408974826335907,
-0.028796423226594925,
-0.06241527944803238,
-0.0219714492559433,
-0.02452833205461502,
-0.06987518817186356,
0.010009... |
85 | BiM-VFI: Bidirectional Motion Field-Guided Frame Interpolation for Video with Non-uniform Motions | [
"Wonyong Seo",
"Jihyong Oh",
"Munchurl Kim"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Seo_BiM-VFI_Bidirectional_Motion_Field-Guided_Frame_Interpolation_for_Video_with_Non-uniform_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Seo_BiM-VFI_Bidirectional_Motion_Field-Guided_Frame_Interpolation_for_Video_with_Non-uniform_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Seo_BiM-VFI_Bidirectional_Motion_CVPR_2025_supplemental.pdf | null | @InProceedings{Seo_2025_CVPR,
author = {Seo, Wonyong and Oh, Jihyong and Kim, Munchurl},
title = {BiM-VFI: Bidirectional Motion Field-Guided Frame Interpolation for Video with Non-uniform Motions},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | Existing Video Frame Interpolation (VFI) models tend to suffer from time-to-location ambiguity when trained on videos with non-uniform motions, such as accelerating, decelerating, and changing directions, which often yields blurred interpolated frames. In this paper, we propose (i) a novel motion description map, Bidirect... | [
0.06832178682088852,
-0.013818319886922836,
0.03879319503903389,
0.012600211426615715,
0.044706426560878754,
0.025648286566138268,
0.020869862288236618,
0.0015920849982649088,
-0.01531558483839035,
-0.06464750319719315,
-0.016369400545954704,
-0.04020810127258301,
-0.05492547154426575,
0.0... |
86 | Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation | [
"Yanda Chen",
"Gongwei Chen",
"Miao Zhang",
"Weili Guan",
"Liqiang Nie"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Curriculum_Coarse-to-Fine_Selection_for_High-IPC_Dataset_Distillation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Curriculum_Coarse-to-Fine_Selection_for_High-IPC_Dataset_Distillation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Curriculum_Coarse-to-Fine_Selection_CVPR_2025_supplemental.pdf | 2503.18872 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Yanda and Chen, Gongwei and Zhang, Miao and Guan, Weili and Nie, Liqiang},
title = {Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
mo... | Dataset distillation (DD) excels in synthesizing a small number of images per class (IPC) but struggles to maintain its effectiveness in high-IPC settings. Recent works on dataset distillation demonstrate that combining distilled and real data can mitigate the effectiveness decay. However, our analysis of the combinati... | [
0.017758052796125412,
-0.03128944709897041,
-0.022311696782708168,
0.09848537296056747,
0.08805117756128311,
0.011191991157829762,
0.0024083296302706003,
0.004253694321960211,
-0.013697994872927666,
-0.03289914131164551,
0.0006427603075280786,
-0.009386030025780201,
-0.04508805274963379,
0... |
87 | BADGR: Bundle Adjustment Diffusion Conditioned by Gradients for Wide-Baseline Floor Plan Reconstruction | [
"Yuguang Li",
"Ivaylo Boyadzhiev",
"Zixuan Liu",
"Linda Shapiro",
"Alex Colburn"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_BADGR_Bundle_Adjustment_Diffusion_Conditioned_by_Gradients_for_Wide-Baseline_Floor_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_BADGR_Bundle_Adjustment_Diffusion_Conditioned_by_Gradients_for_Wide-Baseline_Floor_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_BADGR_Bundle_Adjustment_CVPR_2025_supplemental.pdf | 2503.19340 | @InProceedings{Li_2025_CVPR,
author = {Li, Yuguang and Boyadzhiev, Ivaylo and Liu, Zixuan and Shapiro, Linda and Colburn, Alex},
title = {BADGR: Bundle Adjustment Diffusion Conditioned by Gradients for Wide-Baseline Floor Plan Reconstruction},
booktitle = {Proceedings of the Computer Vision and Patte... | Reconstructing precise camera poses and floor plan layouts from wide-baseline RGB panoramas is a difficult and unsolved problem. We introduce BADGR, a novel diffusion model that jointly performs reconstruction and bundle adjustment (BA) to refine poses and layouts from a coarse state, using 1D floor boundary prediction... | [
0.01697998307645321,
0.003459477797150612,
0.011348304338753223,
0.020852690562605858,
0.06724748760461807,
0.03131624683737755,
0.029356325045228004,
-0.021539922803640366,
-0.046911709010601044,
-0.07363691180944443,
-0.018522851169109344,
-0.027923129498958588,
-0.04139833524823189,
-0.... |
88 | Three Cars Approaching within 100m! Enhancing Distant Geometry by Tri-Axis Voxel Scanning for Camera-based Semantic Scene Completion | [
"Jongseong Bae",
"Junwoo Ha",
"Ha Young Kim"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Bae_Three_Cars_Approaching_within_100m_Enhancing_Distant_Geometry_by_Tri-Axis_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Bae_Three_Cars_Approaching_within_100m_Enhancing_Distant_Geometry_by_Tri-Axis_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bae_Three_Cars_Approaching_CVPR_2025_supplemental.pdf | 2411.16129 | @InProceedings{Bae_2025_CVPR,
author = {Bae, Jongseong and Ha, Junwoo and Kim, Ha Young},
title = {Three Cars Approaching within 100m! Enhancing Distant Geometry by Tri-Axis Voxel Scanning for Camera-based Semantic Scene Completion},
booktitle = {Proceedings of the Computer Vision and Pattern Recogni... | Camera-based Semantic Scene Completion (SSC) is gaining attention in the 3D perception field. However, properties such as perspective and occlusion lead to the underestimation of the geometry in distant regions, posing a critical issue for safety-focused autonomous driving systems. To tackle this, we propose ScanSSC, ... | [
-0.0012009443016722798,
-0.022393710911273956,
0.045990243554115295,
0.06058415025472641,
0.020005064085125923,
0.02119254693388939,
0.024183493107557297,
0.01096037495881319,
-0.010563882999122143,
-0.04561829939484596,
-0.04203861206769943,
-0.017032548785209656,
-0.03548143059015274,
0.... |
89 | MetaShadow: Object-Centered Shadow Detection, Removal, and Synthesis | [
"Tianyu Wang",
"Jianming Zhang",
"Haitian Zheng",
"Zhihong Ding",
"Scott Cohen",
"Zhe Lin",
"Wei Xiong",
"Chi-Wing Fu",
"Luis Figueroa",
"Soo Ye Kim"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_MetaShadow_Object-Centered_Shadow_Detection_Removal_and_Synthesis_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_MetaShadow_Object-Centered_Shadow_Detection_Removal_and_Synthesis_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_MetaShadow_Object-Centered_Shadow_CVPR_2025_supplemental.pdf | 2412.02635 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Tianyu and Zhang, Jianming and Zheng, Haitian and Ding, Zhihong and Cohen, Scott and Lin, Zhe and Xiong, Wei and Fu, Chi-Wing and Figueroa, Luis and Kim, Soo Ye},
title = {MetaShadow: Object-Centered Shadow Detection, Removal, and Synthesis},
booktitle =... | Shadows are often overlooked or even ignored in image editing applications, limiting the realism of the edited results. In this paper, we introduce MetaShadow, a three-in-one versatile framework that enables detection, removal, and controllable synthesis of shadows in natural images in an object-centered fashion. ... | [
0.041417017579078674,
-0.007438084576278925,
0.004485301673412323,
0.012296847067773342,
0.04627707228064537,
0.01181809138506651,
0.012853777036070824,
0.011916808784008026,
-0.0650416687130928,
-0.0747373178601265,
-0.032069481909275055,
-0.01710665039718151,
-0.05341381952166557,
0.0125... |
90 | TANGO: Training-free Embodied AI Agents for Open-world Tasks | [
"Filippo Ziliotto",
"Tommaso Campari",
"Luciano Serafini",
"Lamberto Ballan"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ziliotto_TANGO_Training-free_Embodied_AI_Agents_for_Open-world_Tasks_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ziliotto_TANGO_Training-free_Embodied_AI_Agents_for_Open-world_Tasks_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ziliotto_TANGO_Training-free_Embodied_CVPR_2025_supplemental.pdf | 2412.10402 | @InProceedings{Ziliotto_2025_CVPR,
author = {Ziliotto, Filippo and Campari, Tommaso and Serafini, Luciano and Ballan, Lamberto},
title = {TANGO: Training-free Embodied AI Agents for Open-world Tasks},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
mont... | Large Language Models (LLMs) have demonstrated excellent capabilities in composing various modules together to create programs that can perform complex reasoning tasks on images. In this paper, we propose TANGO, an approach that extends the program composition via LLMs already observed for images, aiming to integrate t... | [
-0.003161393804475665,
-0.0018572107655927539,
0.005599311552941799,
0.03651564195752144,
0.03153428062796593,
0.006595022976398468,
0.02819109708070755,
0.03450856730341911,
-0.02275834046304226,
-0.01731051877140999,
-0.03790450468659401,
0.022283129394054413,
-0.07292091846466064,
-0.03... |
91 | SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers | [
"Nick Nikzad",
"Yi Liao",
"Yongsheng Gao",
"Jun Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Nikzad_SATA_Spatial_Autocorrelation_Token_Analysis_for_Enhancing_the_Robustness_of_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Nikzad_SATA_Spatial_Autocorrelation_Token_Analysis_for_Enhancing_the_Robustness_of_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Nikzad_SATA_Spatial_Autocorrelation_CVPR_2025_supplemental.pdf | 2409.19850 | @InProceedings{Nikzad_2025_CVPR,
author = {Nikzad, Nick and Liao, Yi and Gao, Yongsheng and Zhou, Jun},
title = {SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)... | Over the past few years, vision transformers (ViTs) have consistently demonstrated remarkable performance across various visual recognition tasks. However, attempts to enhance their robustness have yielded limited success, mainly focusing on different training strategies, input patch augmentation, or network structural... | [
0.05579308792948723,
-0.009566724300384521,
0.0035809725522994995,
0.023628022521734238,
0.013421603478491306,
0.053201619535684586,
0.005836431868374348,
0.035266898572444916,
-0.011862889863550663,
-0.03186910226941109,
-0.02394699864089489,
-0.01734873652458191,
-0.060991425067186356,
0... |
92 | DViN: Dynamic Visual Routing Network for Weakly Supervised Referring Expression Comprehension | [
"Xiaofu Chen",
"Yaxin Luo",
"Gen Luo",
"Jiayi Ji",
"Henghui Ding",
"Yiyi Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_DViN_Dynamic_Visual_Routing_Network_for_Weakly_Supervised_Referring_Expression_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_DViN_Dynamic_Visual_Routing_Network_for_Weakly_Supervised_Referring_Expression_CVPR_2025_paper.pdf | null | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Xiaofu and Luo, Yaxin and Luo, Gen and Ji, Jiayi and Ding, Henghui and Zhou, Yiyi},
title = {DViN: Dynamic Visual Routing Network for Weakly Supervised Referring Expression Comprehension},
booktitle = {Proceedings of the Computer Vision and Pattern Recog... | In this paper, we focus on weakly supervised referring expression comprehension (REC), and identify that the lack of fine-grained visual capability greatly limits the upper performance bound of existing methods. To address this issue, we propose a novel framework for weakly supervised REC, namely Dynamic Visual routing... | [
0.03077496960759163,
0.00139671063516289,
-0.021231072023510933,
0.027677208185195923,
0.016512244939804077,
0.045106105506420135,
0.040553346276283264,
0.007713695056736469,
0.003962408751249313,
-0.018014756962656975,
0.01829189993441105,
0.025145096704363823,
-0.08246953040361404,
0.041... |
93 | Nested Diffusion Models Using Hierarchical Latent Priors | [
"Xiao Zhang",
"Ruoxi Jiang",
"Rebecca Willett",
"Michael Maire"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Nested_Diffusion_Models_Using_Hierarchical_Latent_Priors_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Nested_Diffusion_Models_Using_Hierarchical_Latent_Priors_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_Nested_Diffusion_Models_CVPR_2025_supplemental.pdf | 2412.05984 | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Xiao and Jiang, Ruoxi and Willett, Rebecca and Maire, Michael},
title = {Nested Diffusion Models Using Hierarchical Latent Priors},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
... | We introduce nested diffusion models, an efficient and powerful hierarchical generative framework that substantially enhances the generation quality of diffusion models, particularly for images of complex scenes. Our approach employs a series of diffusion models to progressively generate latent variables at different s... | [
0.007574594579637051,
0.007362969685345888,
0.011698465794324875,
0.07351532578468323,
0.04032628610730171,
0.04745129123330116,
0.014958960935473442,
-0.00940899271517992,
-0.020958418026566505,
-0.05894690752029419,
0.004102302249521017,
-0.04863634705543518,
-0.020746098831295967,
0.016... |
94 | A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains | [
"Dexuan Zhang",
"Thomas Westfechtel",
"Tatsuya Harada"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_A_Theory_of_Learning_Unified_Model_via_Knowledge_Integration_from_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_A_Theory_of_Learning_Unified_Model_via_Knowledge_Integration_from_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_A_Theory_of_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Dexuan and Westfechtel, Thomas and Harada, Tatsuya},
title = {A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Existing domain adaptation systems can hardly be applied to real-world problems with new classes appearing at deployment time, especially in source-free scenarios where multiple source domains do not share the label space despite a few labeled target samples being given. To address this, we consider a challenging ... | [
-0.013777988962829113,
-0.015407847240567207,
-0.001996108330786228,
0.04453473165631294,
0.05454959347844124,
0.016079280525445938,
0.029851529747247696,
-0.024300551041960716,
-0.009898796677589417,
-0.00635678181424737,
-0.031199028715491295,
0.0446469709277153,
-0.10143274813890457,
0.... |
95 | HiLoTs: High-Low Temporal Sensitive Representation Learning for Semi-Supervised LiDAR Segmentation in Autonomous Driving | [
"R.D. Lin",
"Pengcheng Weng",
"Yinqiao Wang",
"Han Ding",
"Jinsong Han",
"Fei Wang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lin_HiLoTs_High-Low_Temporal_Sensitive_Representation_Learning_for_Semi-Supervised_LiDAR_Segmentation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_HiLoTs_High-Low_Temporal_Sensitive_Representation_Learning_for_Semi-Supervised_LiDAR_Segmentation_CVPR_2025_paper.pdf | null | 2503.17752 | @InProceedings{Lin_2025_CVPR,
author = {Lin, R.D. and Weng, Pengcheng and Wang, Yinqiao and Ding, Han and Han, Jinsong and Wang, Fei},
title = {HiLoTs: High-Low Temporal Sensitive Representation Learning for Semi-Supervised LiDAR Segmentation in Autonomous Driving},
booktitle = {Proceedings of the Co... | LiDAR point cloud semantic segmentation plays a crucial role in autonomous driving. In recent years, semi-supervised methods have gained popularity due to their significant reduction in annotation labor and time costs. Current semi-supervised methods typically focus on point cloud spatial distribution or consider short... | [
-0.0022369183134287596,
-0.00031342479633167386,
0.0007812526309862733,
0.03913065046072006,
0.03208713233470917,
0.011852719821035862,
0.009407690726220608,
0.005028367508202791,
-0.006496808025985956,
-0.03056141547858715,
-0.038252104073762894,
-0.026657957583665848,
-0.05974746495485306,... |
96 | Spiking Transformer with Spatial-Temporal Attention | [
"Donghyun Lee",
"Yuhang Li",
"Youngeun Kim",
"Shiting Xiao",
"Priyadarshini Panda"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Spiking_Transformer_with_Spatial-Temporal_Attention_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_Spiking_Transformer_with_Spatial-Temporal_Attention_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lee_Spiking_Transformer_with_CVPR_2025_supplemental.pdf | 2409.19764 | @InProceedings{Lee_2025_CVPR,
author = {Lee, Donghyun and Li, Yuhang and Kim, Youngeun and Xiao, Shiting and Panda, Priyadarshini},
title = {Spiking Transformer with Spatial-Temporal Attention},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | Spike-based Transformers present a compelling and energy-efficient alternative to traditional Artificial Neural Network (ANN)-based Transformers, achieving impressive results through sparse binary computations. However, existing spike-based Transformers predominantly focus on spatial attention while neglecting crucial ... | [
-0.006042798515409231,
-0.027888260781764984,
0.01381066907197237,
0.04759989678859711,
0.019260434433817863,
0.02399878203868866,
0.015341190621256828,
0.016754278913140297,
-0.028247321024537086,
-0.019633769989013672,
-0.009495983831584454,
-0.040191348642110825,
-0.050968289375305176,
... |
97 | Perceptual Video Compression with Neural Wrapping | [
"Muhammad Umar Karim Khan",
"Aaron Chadha",
"Mohammad Ashraful Anam",
"Yiannis Andreopoulos"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Khan_Perceptual_Video_Compression_with_Neural_Wrapping_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Khan_Perceptual_Video_Compression_with_Neural_Wrapping_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Khan_Perceptual_Video_Compression_CVPR_2025_supplemental.pdf | null | @InProceedings{Khan_2025_CVPR,
author = {Khan, Muhammad Umar Karim and Chadha, Aaron and Anam, Mohammad Ashraful and Andreopoulos, Yiannis},
title = {Perceptual Video Compression with Neural Wrapping},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
mon... | Standard video codecs are rate-distortion optimization machines, where distortion is typically quantified using PSNR versus the source. However, it is now widely accepted that increasing PSNR does not necessarily translate to better visual quality. In this paper, a better balance between perception and fidelity is targ... | [
0.02115122601389885,
0.011175629682838917,
-0.00010247021418763325,
0.025163209065794945,
0.03937379643321037,
0.03779997304081917,
0.0010360395535826683,
-0.008013732731342316,
-0.03316682577133179,
-0.07119244337081909,
0.016129357740283012,
0.0048106154426932335,
-0.027778418734669685,
... |
98 | ViKIENet: Towards Efficient 3D Object Detection with Virtual Key Instance Enhanced Network | [
"Zhuochen Yu",
"Bijie Qiu",
"Andy W. H. Khong"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yu_ViKIENet_Towards_Efficient_3D_Object_Detection_with_Virtual_Key_Instance_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yu_ViKIENet_Towards_Efficient_3D_Object_Detection_with_Virtual_Key_Instance_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yu_ViKIENet_Towards_Efficient_CVPR_2025_supplemental.pdf | null | @InProceedings{Yu_2025_CVPR,
author = {Yu, Zhuochen and Qiu, Bijie and Khong, Andy W. H.},
title = {ViKIENet: Towards Efficient 3D Object Detection with Virtual Key Instance Enhanced Network},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = ... | The sparsity of point clouds and inadequacy of semantic information pose challenges to current LiDAR-only 3D object detection methods. Recent methods alleviate these challenges by converting RGB images into virtual points via depth completion to be fused with LiDAR points. Although these methods have shown outstanding ... | [
0.03984049707651138,
-0.02762763202190399,
0.06368888169527054,
0.06899012625217438,
0.008206757716834545,
0.0320720449090004,
0.03127920627593994,
0.004601798485964537,
-0.03870229795575142,
-0.06088540330529213,
-0.05401081591844559,
-0.01570090651512146,
-0.04867297410964966,
0.02714762... |
99 | DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture | [
"Qianlong Xiang",
"Miao Zhang",
"Yuzhang Shang",
"Jianlong Wu",
"Yan Yan",
"Liqiang Nie"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xiang_DKDM_Data-Free_Knowledge_Distillation_for_Diffusion_Models_with_Any_Architecture_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiang_DKDM_Data-Free_Knowledge_Distillation_for_Diffusion_Models_with_Any_Architecture_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xiang_DKDM_Data-Free_Knowledge_CVPR_2025_supplemental.pdf | 2409.03550 | @InProceedings{Xiang_2025_CVPR,
author = {Xiang, Qianlong and Zhang, Miao and Shang, Yuzhang and Wu, Jianlong and Yan, Yan and Nie, Liqiang},
title = {DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture},
booktitle = {Proceedings of the Computer Vision and Pattern Recogn... | Diffusion models (DMs) have demonstrated exceptional generative capabilities across various domains, including images, videos, and other modalities. A key factor contributing to their effectiveness is the high quantity and quality of data used during training. However, mainstream DMs now consume increasingly large amounts of data. ... | [
-0.00715624401345849,
-0.015009052120149136,
0.0010122661478817463,
0.08479106426239014,
0.0566573329269886,
-0.00048393046017736197,
0.02725745551288128,
-0.016700975596904755,
0.004369630943983793,
-0.01724252663552761,
-0.022709030658006668,
-0.015145204029977322,
-0.050966981798410416,
... |
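Each row's `embedding` column (shown truncated above) is a fixed-length float vector, which makes simple semantic search over the papers possible. Below is a minimal sketch, assuming the rows have been exported to a Parquet file named `cvpr2025_papers.parquet` (a hypothetical filename) with the columns shown above and equal-length embedding lists; the `most_similar` helper is illustrative, not part of the dataset.

```python
# Minimal sketch (not part of the dataset itself): nearest-neighbor search
# over the `embedding` column, assuming the rows were exported to a Parquet
# file named "cvpr2025_papers.parquet" (hypothetical filename).
import numpy as np
import pandas as pd

df = pd.read_parquet("cvpr2025_papers.parquet")

# Stack the per-row embedding lists into an (N, D) matrix and unit-normalize
# the rows, so cosine similarity reduces to a dot product.
emb = np.vstack(df["embedding"].to_numpy())
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def most_similar(query_idx: int, k: int = 5) -> pd.DataFrame:
    """Return the k papers whose abstracts embed closest to row `query_idx`."""
    sims = emb @ emb[query_idx]          # cosine similarity against all rows
    top = np.argsort(-sims)[1 : k + 1]   # skip the query row itself (sim = 1)
    return df.iloc[top][["paper_id", "title"]].assign(similarity=sims[top])

# Example: the papers most similar to the first row.
print(most_similar(0))
```

Unit-normalizing once up front keeps each query to a single matrix-vector product, which is adequate at this dataset's scale; an approximate index would only pay off for much larger collections.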