| vecId | id | conference | year | title | abstract | author | aff | status | track | keywords | github | site | gsCitation | arxiv | text | vector |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| wacv_2025_4ca431a1d2 | 4ca431a1d2 | wacv | 2025 | Face Anonymization Made Simple | Current face anonymization techniques often depend on identity loss calculated by face recognition models which can be inaccurate and unreliable. Additionally many methods require supplementary data such as facial landmarks and masks to guide the synthesis process. In contrast our approach uses diffusion models with on... | Han-Wei Kung; Tuomas Varanka; Sanjay Saha; Terence Sim; Nicu Sebe | University of Trento; University of Oulu; National University of Singapore; National University of Singapore; University of Trento | Poster | main |  | https://github.com/hanweikung/face_anon_simple | https://openaccess.thecvf.com/content/WACV2025/html/Kung_Face_Anonymization_Made_Simple_WACV_2025_paper.html | 5 | 2411.00762 | Face Anonymization Made Simple. Current face anonymization techniques often depend on identity loss calculated by face recognition models which can be inaccurate and unreliable. Additionally many methods require supplementary data such as facial landmarks and masks to guide the synthesis process. In contrast our approac... | [-0.03268769383430481, -0.005154813639819622, -0.018280411139130592, -0.01628771424293518, -0.010777397081255913, -0.0018196211894974113, 0.011394851841032505, 0.026382168754935265, 0.026157639920711517, 0.034708455204963684, -0.017625534906983376, -0.03439037501811981, 0.012947844341397285, ...] |
| wacv_2025_69c39b4eda | 69c39b4eda | wacv | 2025 | Facial Expression Recognition with Controlled Privacy Preservation and Feature Compensation | Facial expression recognition (FER) systems raise significant privacy concerns due to the potential exposure of sensitive identity information. This paper presents a study on removing identity information while preserving FER capabilities. Drawing on the observation that low-frequency components predominantly contain i... | Feng Xu; David Ahmedt-Aristizabal; Lars Petersson; Dadong Wang; Xun Li | UNSW Sydney + CSIRO’s Data61; CSIRO’s Data61; CSIRO’s Data61; CSIRO’s Data61; CSIRO’s Data61 | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Xu_Facial_Expression_Recognition_with_Controlled_Privacy_Preservation_and_Feature_Compensation_WACV_2025_paper.html | 0 | 2412.00277 | Facial Expression Recognition with Controlled Privacy Preservation and Feature Compensation. Facial expression recognition (FER) systems raise significant privacy concerns due to the potential exposure of sensitive identity information. This paper presents a study on removing identity information while preserving FER ca... | [-0.059466276317834854, -0.02173876389861107, -0.02613474242389202, -0.007094775792211294, -0.022517798468470573, -0.00463247112929821, 0.0343146026134491, 0.04351462423801422, 0.04444204643368721, 0.023482317104935646, -0.02173876389861107, 0.008212318643927574, -0.026171838864684105, 0.00...] |
| wacv_2025_2b1309692a | 2b1309692a | wacv | 2025 | Fair Domain Generalization with Heterogeneous Sensitive Attributes Across Domains | Domain generalization(DG) techniques classify data from unseen domains by capitalizing on data from multiple source domains. Most methods in DG focus on improving predictive performance measures on the unseen domain. Recent studies have also attempted to enforce fairness measures on the unseen domain. However these stu... | Ragja Palakkadavath; Hung Le; Thanh Nguyen-Tang; Sunil Gupta; Svetha Venkatesh | Deakin University, Australia; Deakin University, Australia; Johns Hopkins University, USA; Deakin University, Australia; Deakin University, Australia | Poster | main |  | https://github.com/ragjapk/SISA | https://openaccess.thecvf.com/content/WACV2025/html/Palakkadavath_Fair_Domain_Generalization_with_Heterogeneous_Sensitive_Attributes_Across_Domains_WACV_2025_paper.html | 0 |  | Fair Domain Generalization with Heterogeneous Sensitive Attributes Across Domains. Domain generalization(DG) techniques classify data from unseen domains by capitalizing on data from multiple source domains. Most methods in DG focus on improving predictive performance measures on the unseen domain. Recent studies have a... | [-0.08853644132614136, 0.008141119964420795, -0.003841166850179434, -0.00808494258671999, -0.011937812902033329, -0.005585023667663336, -0.018988145515322685, 0.02741483598947525, 0.015888994559645653, 0.047638896852731705, -0.02668452262878418, 0.028220053762197495, 0.01574854925274849, 0....] |
| wacv_2025_3746f85d3f | 3746f85d3f | wacv | 2025 | Fairer Analysis and Demographically Balanced Face Generation for Fairer Face Verification | Face recognition and verification are two computer vision tasks whose performances have advanced with the introduction of deep representations. However ethical legal and technical challenges due to the sensitive nature of face data and biases in real-world training datasets hinder their development. Generative AI addre... | Alexandre Fournier-Montgieux; Michaël Soumm; Adrian Popescu; Bertrand Luvison; Hervé Le Borgne | Université Paris-Saclay, CEA, LIST,F-91120, Palaiseau, France; Université Paris-Saclay, CEA, LIST,F-91120, Palaiseau, France; Université Paris-Saclay, CEA, LIST,F-91120, Palaiseau, France; Université Paris-Saclay, CEA, LIST,F-91120, Palaiseau, France; Université Paris-Saclay, CEA, LIST,F-91120, Palaiseau, France | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Fournier-Montgieux_Fairer_Analysis_and_Demographically_Balanced_Face_Generation_for_Fairer_Face_WACV_2025_paper.html | 0 | 2412.03349 | Fairer Analysis and Demographically Balanced Face Generation for Fairer Face Verification. Face recognition and verification are two computer vision tasks whose performances have advanced with the introduction of deep representations. However ethical legal and technical challenges due to the sensitive nature of face dat... | [-0.03513353317975998, -0.008026602678000927, -0.026960302144289017, 0.009265830740332603, -0.033941600471735, 0.0035072043538093567, -0.03138746693730354, 0.06742913275957108, 0.028568461537361145, 0.048471786081790924, -0.01862625777721405, -0.026184603571891785, 0.007596183568239212, -0....] |
| wacv_2025_be977123e9 | be977123e9 | wacv | 2025 | FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing | Diffusion models have demonstrated remarkable capabilities in text-to-image and text-to-video generation opening up possibilities for video editing based on textual input. However the computational cost associated with sequential sampling in diffusion models poses challenges for efficient video editing. Existing approa... | Youyuan Zhang; Xuan Ju; James J. Clark | McGill University; The Chinese University of Hong Kong; McGill University | Poster | main |  | github.com/youyuan-zhang/FastVideoEdit | https://openaccess.thecvf.com/content/WACV2025/html/Zhang_FastVideoEdit_Leveraging_Consistency_Models_for_Efficient_Text-to-Video_Editing_WACV_2025_paper.html | 5 | 2403.06269 | FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing. Diffusion models have demonstrated remarkable capabilities in text-to-image and text-to-video generation opening up possibilities for video editing based on textual input. However the computational cost associated with sequential sampling ... | [-0.05376892164349556, -0.035563141107559204, -0.03222246840596199, 0.015130242332816124, 0.009138242341578007, -0.018099728971719742, -0.01880674995481968, 0.019266312941908836, -0.00013180688256397843, 0.013698525726795197, -0.02736170031130314, -0.048395562916994095, 0.007618147414177656, ...] |
| wacv_2025_e11c81605d | e11c81605d | wacv | 2025 | Feasibility of Federated Learning from Client Databases with Different Brain Diseases and MRI Modalities | Segmentation models for brain lesions in MRI are commonly developed for a specific disease and trained on data with a predefined set of MRI modalities. Such models cannot segment the disease using data with a different set of MRI modalities nor can they segment other types of diseases. Moreover this training paradigm p... | Felix Wagner; Wentian Xu; Pramit Saha; Ziyun Liang; Daniel Whitehouse; David Menon; Virginia Newcombe; Natalie Voets; J. Alison Noble; Konstantinos Kamnitsas | Department of Engineering Science, University of Oxford; Department of Engineering Science, University of Oxford; Department of Engineering Science, University of Oxford; Department of Engineering Science, University of Oxford; Department of Medicine, University of Cambridge; Department of Medicine, University of Cambr... | Poster | main |  | https://github.com/FelixWag/FedUniBrain | https://openaccess.thecvf.com/content/WACV2025/html/Wagner_Feasibility_of_Federated_Learning_from_Client_Databases_with_Different_Brain_WACV_2025_paper.html | 5 | 2406.11636 | Feasibility of Federated Learning from Client Databases with Different Brain Diseases and MRI Modalities. Segmentation models for brain lesions in MRI are commonly developed for a specific disease and trained on data with a differ... | [0.0010172388283535838, -0.04843362048268318, -0.03332101181149483, -0.004046057350933552, -0.001062462106347084, 0.007354802452027798, 0.007322745397686958, 0.0017093275673687458, 0.024033626541495323, -0.007959307171404362, -0.022659752517938614, 0.011531378142535686, 0.02022341825067997, ...] |
| wacv_2025_5600eb43cf | 5600eb43cf | wacv | 2025 | Feature Augmentation Based Test-Time Adaptation | Test-time adaptation (TTA) allows a model to be adapted to an unseen domain without accessing the source data. Due to the nature of practical environments TTA has a limited amount of data for adaptation. Recent TTA methods further restrict this by filtering input data for reliability making the effective data size even... | Younggeol Cho; Youngrae Kim; Junho Yoon; Seunghoon Hong; Dongman Lee | KAIST; KAIST; KAIST; KAIST; KAIST | Poster | main |  | https://github.com/RangeWING/FATA | https://openaccess.thecvf.com/content/WACV2025/html/Cho_Feature_Augmentation_Based_Test-Time_Adaptation_WACV_2025_paper.html | 0 | 2410.14178 | Feature Augmentation Based Test-Time Adaptation. Test-time adaptation (TTA) allows a model to be adapted to an unseen domain without accessing the source data. Due to the nature of practical environments TTA has a limited amount of data for adaptation. Recent TTA methods further restrict this by filtering input data for... | [-0.03068459965288639, 0.0026318086311221123, -0.043390512466430664, 0.04262156784534454, -0.017868835479021072, -0.035536281764507294, 0.028505919501185417, 0.04009503126144409, 0.0154338413849473, 0.033650532364845276, -0.041120294481515884, -0.0026180774439126253, -0.011946122162044048, ...] |
| wacv_2025_61198b6255 | 61198b6255 | wacv | 2025 | Feature Design for Bridging SAM and CLIP toward Referring Image Segmentation | Referring Image Segmentation (RIS) is a task aimed at segmenting objects expressed in natural language within an image. This task requires an understanding of the relationship between vision and language along with precise segmentation capabilities. In the field of computer vision CLIP and Segment anything model (SAM) ... | Koichiro Ito | Hitachi, Ltd. R&D Group | Poster | main |  | https://github.com/hitachi-rd-cv/dfam | https://openaccess.thecvf.com/content/WACV2025/html/Ito_Feature_Design_for_Bridging_SAM_and_CLIP_toward_Referring_Image_WACV_2025_paper.html | 0 |  | Feature Design for Bridging SAM and CLIP toward Referring Image Segmentation. Referring Image Segmentation (RIS) is a task aimed at segmenting objects expressed in natural language within an image. This task requires an understanding of the relationship between vision and language along with precise segmentation capabil... | [-0.020859790965914726, -0.01234540343284607, 0.007377913687378168, 0.015287403017282486, -0.005384502001106739, -0.006026059854775667, -0.0003809248737525195, -0.030428165569901466, 0.015965621918439865, 0.04842844232916832, -0.018376044929027557, -0.003860802622511983, 0.004816265311092138, ...] |
| wacv_2025_3a69060f7e | 3a69060f7e | wacv | 2025 | Feature Fusion Transferability Aware Transformer for Unsupervised Domain Adaptation | Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from labeled source domains to improve performance on the unlabeled target domains. While Convolutional Neural Networks (CNNs) have been dominant in previous UDA methods recent research has shown promise in applying Vision Transformers (ViTs) t... | Xiaowei Yu; Zhe Huang; Zao Zhang | Department of Computer Science and Engineering, University of Texas at Arlington; Department of Computer Science, Tufts University; Yiwu Industria & Commercial College | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Yu_Feature_Fusion_Transferability_Aware_Transformer_for_Unsupervised_Domain_Adaptation_WACV_2025_paper.html | 4 | 2411.07794 | Feature Fusion Transferability Aware Transformer for Unsupervised Domain Adaptation. Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from labeled source domains to improve performance on the unlabeled target domains. While Convolutional Neural Networks (CNNs) have been dominant in previous UD... | [-0.060997530817985535, -0.00762923015281558, -0.03309478983283043, 0.02764858305454254, -0.002111540175974369, -0.0006864491733722389, 0.022129759192466736, 0.015676002949476242, 0.035908665508031845, 0.03772407025098801, -0.037760376930236816, 0.00546890078112483, -0.01303459145128727, 0....] |
| wacv_2025_d6844976fc | d6844976fc | wacv | 2025 | Feature Space Perturbation: A Panacea to Enhanced Transferability Estimation | Leveraging a transferability estimation metric facilitates the non-trivial challenge of selecting the optimal model for the downstream task from a pool of pre-trained models. Most existing metrics primarily focus on identifying the statistical relationship between feature embeddings and the corresponding labels within ... | Prafful Kumar Khoba; Zijian Wang; Chetan Arora; Mahsa Baktashmotlagh | UQ–IITD Research Academy, New Delhi, India; The University of Queensland, Brisbane, Australia; Indian Institute of Technology Delhi, New Delhi, India; The University of Queensland, Brisbane, Australia | Poster | main |  | https://github.com/prafful-kumar/enhancing_TE.git | https://openaccess.thecvf.com/content/WACV2025/html/Khoba_Feature_Space_Perturbation_A_Panacea_to_Enhanced_Transferability_Estimation_WACV_2025_paper.html | 0 |  | Feature Space Perturbation: A Panacea to Enhanced Transferability Estimation. Leveraging a transferability estimation metric facilitates the non-trivial challenge of selecting the optimal model for the downstream task from a pool of pre-trained models. Most existing metrics primarily focus on identifying the statistical... | [-0.030272427946329117, -0.03240714222192764, -0.0329224169254303, 0.017813805490732193, 0.013185529038310051, -0.006960818078368902, 0.014492119662463665, 0.007356476038694382, 0.01919400878250599, 0.03755989298224449, -0.043724797666072845, 0.007328872103244066, -0.012477025389671326, 0.0...] |
| wacv_2025_d121082a8f | d121082a8f | wacv | 2025 | Feature-Level and Spatial-Level Activation Expansion for Weakly-Supervised Semantic Segmentation | Weakly-supervised Semantic Segmentation (WSSS) aims to provide a precise semantic segmentation results without expensive pixel-wise segmentation labels. With the supervision gap between classification and segmentation Image-level WSSS mainly relies on Class Activation Maps (CAMs) from the classification model to emulat... | Junsu Choi; Jin-Seop Lee; Noo-ri Kim; SuHyun Yoon; Jee-Hyong Lee | Sungkyunkwan University; Sungkyunkwan University; Sungkyunkwan University; Sungkyunkwan University + Robotics Lab, Hyundai Motor Company; Sungkyunkwan University | Poster | main |  | https://github.com/obeychoi0120/FSAE | https://openaccess.thecvf.com/content/WACV2025/html/Choi_Feature-Level_and_Spatial-Level_Activation_Expansion_for_Weakly-Supervised_Semantic_Segmentation_WACV_2025_paper.html | 0 |  | Feature-Level and Spatial-Level Activation Expansion for Weakly-Supervised Semantic Segmentation. Weakly-supervised Semantic Segmentation (WSSS) aims to provide a precise semantic segmentation results without expensive pixel-wise segmentation labels. With the supervision gap between classification and segmentation Image... | [-0.07840830832719803, -0.012482283636927605, -0.022319555282592773, -0.004411372821778059, 0.006209437735378742, -0.02827988937497139, 0.057864170521497726, 0.009755746461451054, -0.023877575993537903, 0.05851636454463005, -0.015851955860853195, 0.025761693716049194, -0.0007880686898715794, ...] |
| wacv_2025_705e37c687 | 705e37c687 | wacv | 2025 | Federated Source-Free Domain Adaptation for Classification: Weighted Cluster Aggregation for Unlabeled Data | Federated learning (FL) commonly assumes that the server or some clients have labeled data which is often impractical due to annotation costs and privacy concerns. Addressing this problem we focus on a source-free domain adaptation task where (1) the server holds a pre-trained model on labeled source domain data (2) cl... | Junki Mori; Kosuke Kihara; Taiki Miyagawa; Akinori F. Ebihara; Isamu Teranishi; Hisashi Kashima | NEC Corporation + Kyoto University; NEC Corporation; NEC Corporation; NEC Corporation; NEC Corporation; Kyoto University | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Mori_Federated_Source-Free_Domain_Adaptation_for_Classification_Weighted_Cluster_Aggregation_for_WACV_2025_paper.html | 0 | 2412.13757 | Federated Source-Free Domain Adaptation for Classification: Weighted Cluster Aggregation for Unlabeled Data. Federated learning (FL) commonly assumes that the server or some clients have labeled data which is often impractical due to annotation costs and privacy concerns. Addressing this problem we focus on a source-fre... | [-0.05645236372947693, -0.022097067907452583, -0.03055596351623535, 0.006644353736191988, 0.0119266826659441, -0.023960892111063004, 0.026738706976175308, 0.009193670935928822, 0.019014589488506317, 0.03629080578684807, -0.003418504260480404, 0.00134410394821316, -0.01592315174639225, 0.007...] |
| wacv_2025_0332020f62 | 0332020f62 | wacv | 2025 | Federated Voxel Scene Graph for Intracranial Hemorrhage | Intracranial Hemorrhage is a potentially lethal condition whose manifestation is vastly diverse and shifts across clinical centers worldwide. Deep-learning-based solutions are starting to model complex relations between brain structures but still struggle to generalize. While gathering more diverse data is the most nat... | Antoine P. Sanner; Jonathan Stieber; Nils F. Grauhan; Suam Kim; Marc A. Brockmann; Ahmed E. Othman; Anirban Mukhopadhyay | Department of Computer Science, Technical University of Darmstadt, Germany+Department of Neuroradiology, University Medical Center Mainz, Germany; Department of Computer Science, Technical University of Darmstadt, Germany; Department of Neuroradiology, University Medical Center Mainz, Germany; Department of Neuroradiol... | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Sanner_Federated_Voxel_Scene_Graph_for_Intracranial_Hemorrhage_WACV_2025_paper.html | 0 | 2411.00578 | Federated Voxel Scene Graph for Intracranial Hemorrhage. Intracranial Hemorrhage is a potentially lethal condition whose manifestation is vastly diverse and shifts across clinical centers worldwide. Deep-learning-based solutions are starting to model complex relations between brain structures but still struggle to gener... | [-0.032203804701566696, -0.03193999081850052, -0.006797847803682089, 0.012446233071386814, -0.01398199051618576, -0.037743836641311646, -0.0030550272203981876, 0.018730591982603073, 0.021689044311642647, 0.008592706173658371, -0.0023719449527561665, 0.03275027126073837, 0.006548169534653425, ...] |
| wacv_2025_71edca5097 | 71edca5097 | wacv | 2025 | Federated-Continual Dynamic Segmentation of Histopathology Guided by Barlow Continuity | Federated- and Continual Learning have been established as approaches to enable privacy-aware learning on continuously changing data as required for deploying AI systems in histopathology images. However data shifts can occur in a dynamic world spatially between institutions and temporally due to changing data over tim... | Niklas Babendererde; Haozhe Zhu; Moritz Fuchs; Jonathan Stieber; Anirban Mukhopadhyay | Technical University of Darmstadt; Technical University of Darmstadt; Technical University of Darmstadt; Technical University of Darmstadt; Technical University of Darmstadt | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Babendererde_Federated-Continual_Dynamic_Segmentation_of_Histopathology_Guided_by_Barlow_Continuity_WACV_2025_paper.html | 1 | 2501.04588 | Federated-Continual Dynamic Segmentation of Histopathology Guided by Barlow Continuity. Federated- and Continual Learning have been established as approaches to enable privacy-aware learning on continuously changing data as required for deploying AI systems in histopathology images. However data shifts can occur in a dy... | [-0.10228663682937622, -0.06600354611873627, -0.012719826772809029, -0.015256248414516449, -0.010730290785431862, -0.021517155691981316, 0.02868325635790825, 0.020310234278440475, 0.008104292675852776, 0.022969234734773636, -0.016038861125707626, -0.011154599487781525, -0.03430298715829849, ...] |
| wacv_2025_23e16ad6c3 | 23e16ad6c3 | wacv | 2025 | Few-Shot Structure-Informed Machinery Part Segmentation with Foundation Models and Graph Neural Networks | This paper proposes a novel approach to few-shot semantic segmentation for machinery with multiple parts that exhibit spatial and hierarchical relationships. Our method integrates the foundation models CLIPSeg and Segment Anything Model (SAM) with the interest point detector SuperPoint and a graph convolutional network... | Michael Schwingshackl; Fabio F. Oberweger; Markus Murschitz | AIT Austrian Institute of Technology; AIT Austrian Institute of Technology; AIT Austrian Institute of Technology | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Schwingshackl_Few-Shot_Structure-Informed_Machinery_Part_Segmentation_with_Foundation_Models_and_Graph_WACV_2025_paper.html | 1 | 2501.10080 | Few-Shot Structure-Informed Machinery Part Segmentation with Foundation Models and Graph Neural Networks. This paper proposes a novel approach to few-shot semantic segmentation for machinery with multiple parts that exhibit spatial and hierarchical relationships. Our method integrates the foundation models CLIPSeg and S... | [-0.02696136198937893, -0.04321105033159256, -0.006644700653851032, 0.011139779351651669, 0.008470968343317509, -0.04233662784099579, 0.03698078915476799, 0.005474249832332134, -0.01354444120079279, 0.04922270402312279, -0.03916684538125992, 0.004624875728040934, 0.011586098931729794, 0.029...] |
| wacv_2025_c8e335c76b | c8e335c76b | wacv | 2025 | Finding Dino: A Plug-and-Play Framework for Zero-Shot Detection of Out-of-Distribution Objects using Prototypes | Detecting and localising unknown or Out-of-distribution (OOD) objects in any scene can be a challenging task in vision particularly in safety-critical cases involving autonomous systems like automated vehicles or trains. Supervised anomaly segmentation or open-world object detection models depend on training on exhaust... | Poulami Sinhamahapatra; Franziska Schwaiger; Shirsha Bose; Huiyu Wang; Karsten Roscher; Stephan Günnemann | Fraunhofer IKS, Germany + Technical University of Munich, Germany; Fraunhofer IKS, Germany; Fraunhofer IKS, Germany + Technical University of Munich, Germany; Fraunhofer IKS, Germany; Fraunhofer IKS, Germany; Technical University of Munich, Germany | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Sinhamahapatra_Finding_Dino_A_Plug-and-Play_Framework_for_Zero-Shot_Detection_of_Out-of-Distribution_WACV_2025_paper.html | 0 | 2404.07664 | Finding Dino: A Plug-and-Play Framework for Zero-Shot Detection of Out-of-Distribution Objects using Prototypes. Detecting and localising unknown or Out-of-distribution (OOD) objects in any scene can be a challenging task in vision particularly in safety-critical cases involving autonomous systems like automated vehicle... | [-0.08902764320373535, -0.0166835468262434, -0.01846519485116005, -0.0038488192949444056, -0.030790554359555244, 0.023810142651200294, 0.036820750683546066, 0.0008314363076351583, 0.01947936601936817, 0.010114286094903946, -0.016382036730647087, -0.016546497121453285, 0.014947580173611641, ...] |
| wacv_2025_8cbc7c3f14 | 8cbc7c3f14 | wacv | 2025 | Fine-Grained Controllable Video Generation via Object Appearance and Context | While text-to-video generation shows state-of-the-art results fine-grained output control remains challenging for users relying solely on natural language prompts. In this work we present FACTOR for fine-grained controllable video generation. FACTOR provides an intuitive interface where users can manipulate the traject... | Hsin-Ping Huang; Yu-Chuan Su; Deqing Sun; Lu Jiang; Xuhui Jia; Yukun Zhu; Ming-Hsuan Yang | ;;;;;; | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Huang_Fine-Grained_Controllable_Video_Generation_via_Object_Appearance_and_Context_WACV_2025_paper.html | 12 | 2312.02919 | Fine-Grained Controllable Video Generation via Object Appearance and Context. While text-to-video generation shows state-of-the-art results fine-grained output control remains challenging for users relying solely on natural language prompts. In this work we present FACTOR for fine-grained controllable video generation. ... | [-0.038051266223192215, -0.025017231702804565, -0.01739407330751419, 0.012960290536284447, -0.007415756583213806, 0.0019023327622562647, -0.05158306285738945, 0.03217028081417084, -0.02785632759332657, 0.035949595272541046, -0.021496014669537544, -0.012480962090194225, -0.005102077499032021, ...] |
| wacv_2025_eead0cf029 | eead0cf029 | wacv | 2025 | Fine-Grained Spatial and Verbal Losses for 3D Visual Grounding | 3D visual grounding consists of identifying the instance in a 3D scene which is referred to by an accompanying language description. While several architectures have been proposed within the commonly employed grounding-by-selection framework the utilized losses are comparatively under-explored. In particular most metho... | Sombit Dey; Ozan Unal; Christos Sakaridis; Luc Van Gool | ETH Zurich + INSAIT; ETH Zurich + Huawei Technologies + KU Leuven; ETH Zurich; ETH Zurich + INSAIT + KU Leuven | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Dey_Fine-Grained_Spatial_and_Verbal_Losses_for_3D_Visual_Grounding_WACV_2025_paper.html | 0 | 2411.03405 | Fine-Grained Spatial and Verbal Losses for 3D Visual Grounding. 3D visual grounding consists of identifying the instance in a 3D scene which is referred to by an accompanying language description. While several architectures have been proposed within the commonly employed grounding-by-selection framework the utilized lo... | [-0.02386966533958912, -0.017714478075504303, -0.03971560299396515, 0.012566838413476944, -0.006443710532039404, -0.006865047384053469, 0.029988212510943413, -0.001185010652989149, -0.023210180923342705, 0.000015814453945495188, -0.04140095412731171, -0.020810389891266823, 0.02958519384264946...] |
| wacv_2025_3a36bdd3ad | 3a36bdd3ad | wacv | 2025 | Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think | Recent work showed that large diffusion models can be reused as highly precise monocular depth estimators by casting depth estimation as an image-conditional image generation task. While the proposed model achieved state-of-the-art results high computational demands due to multi-step inference limited its use in many s... | Gonzalo Martin Garcia; Karim Abou Zeid; Christian Schmidt; Daan de Geus; Alexander Hermans; Bastian Leibe | ;;;;; | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Garcia_Fine-Tuning_Image-Conditional_Diffusion_Models_is_Easier_than_You_Think_WACV_2025_paper.html | 19 | 2409.11355 | Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think. Recent work showed that large diffusion models can be reused as highly precise monocular depth estimators by casting depth estimation as an image-conditional image generation task. While the proposed model achieved state-of-the-art results high com... | [-0.039775870740413666, -0.025658341124653816, -0.039340369403362274, 0.002402067184448242, 0.017048098146915436, -0.012919900007545948, -0.0029282988980412483, 0.04202596843242645, 0.011803925968706608, 0.03854195028543472, -0.056833043694496155, -0.013246526941657066, -0.008773555979132652, ...] |
| wacv_2025_6f73dca056 | 6f73dca056 | wacv | 2025 | FineControlNet: Fine-Level Text Control for Image Generation with Spatially Aligned Text Control Injection | Recently introduced ControlNet has the ability to steer the text-driven image generation process with geometric input such as human 2D pose or edge representations. While ControlNet provides control over the geometric form of the instances in the generated image it lacks the capability to dictate the visual appearance ... | Hongsuk Choi; Isaac Kasahara; Selim Engin; Moritz A. Graule; Nikhil Chavan-Dafle; Volkan Isler | University of California, Berkeley; ; ; ; ; University of Minnesota, Twin Cities | Poster | main |  |  | https://openaccess.thecvf.com/content/WACV2025/html/Choi_FineControlNet_Fine-Level_Text_Control_for_Image_Generation_with_Spatially_Aligned_WACV_2025_paper.html | 4 | 2312.09252 | FineControlNet: Fine-Level Text Control for Image Generation with Spatially Aligned Text Control Injection. Recently introduced ControlNet has the ability to steer the text-driven image generation process with geometric input such as human 2D pose or edge representations. While ControlNet provides control over the geome... | [-0.03485943377017975, -0.004387227818369865, -0.007623782381415367, -0.005308683030307293, -0.014211500063538551, 0.006922375876456499, -0.03293400630354881, 0.021143045276403427, -0.008178489282727242, 0.03297068178653717, -0.0313386507332325, -0.018832528963685036, 0.005638756323605776, ...] |
wacv_2025_70d43dbbb6 | 70d43dbbb6 | wacv | 2,025 | FitDiff: Robust Monocular 3D Facial Shape and Reflectance Estimation using Diffusion Models | The remarkable progress in 3D face reconstruction has resulted in high-detail and photorealistic facial representations. Recently Diffusion Models have revolutionized the capabilities of generative methods by surpassing the performance of GANs. In this work we present FitDiff a diffusion-based 3D facial avatar generati... | Stathis Galanakis; Alexandros Lattas; Stylianos Moschoglou; Stefanos Zafeiriou | ;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Galanakis_FitDiff_Robust_Monocular_3D_Facial_Shape_and_Reflectance_Estimation_using_WACV_2025_paper.html | 0 | 2312.04465 | FitDiff: Robust Monocular 3D Facial Shape and Reflectance Estimation using Diffusion Models
The remarkable progress in 3D face reconstruction has resulted in high-detail and photorealistic facial representations. Recently Diffusion Models have revolutionized the capabilities of generative methods by surpassing the perf... | [
-0.04956439137458801, -0.018292469903826714, -0.017623884603381157, -0.011927546001970768, -0.024621734395623207, 0.004225453361868858, -0.007127109915018082, -0.013737181201577187, 0.03836783021688461, 0.006810646504163742, -0.028651069849729538, -0.02225048840045929, -0.012266295030713081, ... |
wacv_2025_6b663d92d5 | 6b663d92d5 | wacv | 2,025 | FlashMix: Fast Map-Free LiDAR Localization via Feature Mixing and Contrastive-Constrained Accelerated Training | Map-free LiDAR localization systems accurately localize within known environments by predicting sensor position and orientation directly from raw point clouds eliminating the need for large maps and descriptors. However their long training times hinder rapid adaptation to new environments. To address this we propose Fl... | Raktim Gautam Goswami; Naman Patel; Prashanth Krishnamurthy; Farshad Khorrami | Control/Robotics Research Laboratory (CRRL), Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, 11201; Control/Robotics Research Laboratory (CRRL), Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, 11201; Control/Robotics R... | Poster | main | https://github.com/raktimgg/FlashMix | https://openaccess.thecvf.com/content/WACV2025/html/Goswami_FlashMix_Fast_Map-Free_LiDAR_Localization_via_Feature_Mixing_and_Contrastive-Constrained_WACV_2025_paper.html | 2 | 2410.00702 | FlashMix: Fast Map-Free LiDAR Localization via Feature Mixing and Contrastive-Constrained Accelerated Training
Map-free LiDAR localization systems accurately localize within known environments by predicting sensor position and orientation directly from raw point clouds eliminating the need for large maps and descriptor... | [
-0.04412456601858139, -0.033894363790750504, -0.06305589526891708, 0.005133303347975016, -0.022571971639990807, 0.006220944691449404, 0.003267474938184023, 0.012469194829463959, 0.04867536574602127, 0.028542622923851013, -0.03720734640955925, -0.04969474673271179, 0.014152990654110909, 0.0... |
wacv_2025_032adf92f8 | 032adf92f8 | wacv | 2,025 | FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding | Text-guided Video Temporal Grounding (VTG) aims to localize relevant segments in untrimmed videos based on textual descriptions encompassing two subtasks: Moment Retrieval (MR) and Highlight Detection (HD). Although previous typical methods have achieved commendable results it is still challenging to retrieve short vid... | Zhuo Cao; Bingqing Zhang; Heming Du; Xin Yu; Xue Li; Sen Wang | The University of Queensland, Australia; The University of Queensland, Australia; The University of Queensland, Australia; The University of Queensland, Australia; The University of Queensland, Australia; The University of Queensland, Australia | Poster | main | https://github.com/Zhuo-Cao/FlashVTG | https://openaccess.thecvf.com/content/WACV2025/html/Cao_FlashVTG_Feature_Layering_and_Adaptive_Score_Handling_Network_for_Video_WACV_2025_paper.html | 2 | 2412.13441 | FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding
Text-guided Video Temporal Grounding (VTG) aims to localize relevant segments in untrimmed videos based on textual descriptions encompassing two subtasks: Moment Retrieval (MR) and Highlight Detection (HD). Although previous typ... | [
-0.04206829518079758, -0.055637527257204056, -0.019324203953146935, 0.024784991517663002, 0.005580300465226173, -0.024233397096395493, 0.02290956862270832, 0.012282176874577999, -0.016704129055142403, 0.010507880710065365, -0.015021763741970062, -0.041148968040943146, -0.003583067562431097, ... |
wacv_2025_1b541b521a | 1b541b521a | wacv | 2,025 | Flatness Improves Backbone Generalisation in Few-Shot Classification | Deployment of deep neural networks in real-world settings typically requires adaptation to new tasks with few examples. Few-shot classification (FSC) provides a solution to this problem by leveraging pre-trained backbones for fast adaptation to new classes. However approaches for multi-domain FSC typically result in co... | Rui Li; Martin Trapp; Marcus Klasson; Arno Solin | ;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Li_Flatness_Improves_Backbone_Generalisation_in_Few-Shot_Classification_WACV_2025_paper.html | 0 | 2404.07696 | Flatness Improves Backbone Generalisation in Few-Shot Classification
Deployment of deep neural networks in real-world settings typically requires adaptation to new tasks with few examples. Few-shot classification (FSC) provides a solution to this problem by leveraging pre-trained backbones for fast adaptation to new cl... | [
-0.03699258342385292, -0.025084631517529488, -0.05901413410902023, 0.016865065321326256, 0.0006768455496057868, -0.0349082387983799, 0.07779135555028915, 0.024087771773338318, 0.027150852605700493, 0.031500786542892456, -0.05752790719270706, 0.0033122950699180365, -0.010775154456496239, 0.... |
wacv_2025_f41b3b3204 | f41b3b3204 | wacv | 2,025 | Flowering Time Prediction of Wheat from DIA-MS Data | Traditional methods utilising data-independent acquisition mass spectrometry (DIA-MS) data for predictions depend on database searches against predefined spectral libraries for characterisation and quantification of the proteomes limiting scalability and adaptability across various applications. However directly applyi... | Yan Yang; Utpal Bose; James Broadbent; Sally Stockwell; Keren A Byrne; Md Zakir Hossain; Eric A Stone; Shannon Dillon | A&F, CSIRO, Australia+BDSI, Australian National University, Australia+Curtin University, Australia; A&F, CSIRO, Australia+BDSI, Australian National University, Australia; A&F, CSIRO, Australia; A&F, CSIRO, Australia; A&F, CSIRO, Australia; A&F, CSIRO, Australia+BDSI, Australian National University, Australia+Curtin Uni... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Yang_Flowering_Time_Prediction_of_Wheat_from_DIA-MS_Data_WACV_2025_paper.html | 0 | Flowering Time Prediction of Wheat from DIA-MS Data
Traditional methods utilising data-independent acquisition mass spectrometry (DIA-MS) data for predictions depend on database searches against predefined spectral libraries for characterisation and quantification of the proteomes limiting scalability and adaptability ... | [
-0.0290896724909544, -0.016133924946188927, -0.0657321885228157, 0.01758280023932457, -0.031370483338832855, -0.04875698313117027, -0.027164069935679436, 0.028659682720899582, 0.012815533205866814, 0.05503855645656586, -0.06341399252414703, 0.0016627010190859437, 0.007043402176350355, 0.04... |
wacv_2025_533ec3f0ff | 533ec3f0ff | wacv | 2,025 | FluoNeRF: Fluorescent Novel-View Synthesis under Novel Light Source Colors | Synthesizing photo-realistic images of a scene from arbitrary viewpoints and under arbitrary lighting environments is one of the important research topics in computer vision and graphics. In this paper we propose a method for synthesizing photo-realistic images of a scene with fluorescent objects from novel viewpoints ... | Lin Shi; Kengo Matsufuji; Ryo Kawahara; Takahiro Okabe | Kyushu Institute of Technology*; Kyushu Institute of Technology†; Kyoto University‡; Okayama University§ | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Shi_FluoNeRF_Fluorescent_Novel-View_Synthesis_under_Novel_Light_Source_Colors_WACV_2025_paper.html | 0 | FluoNeRF: Fluorescent Novel-View Synthesis under Novel Light Source Colors
Synthesizing photo-realistic images of a scene from arbitrary viewpoints and under arbitrary lighting environments is one of the important research topics in computer vision and graphics. In this paper we propose a method for synthesizing photo-... | [
-0.001374516636133194, -0.005520842503756285, -0.02022486925125122, 0.00426362082362175, -0.01843925006687641, -0.008217491209506989, -0.008604679256677628, 0.0609661340713501, 0.04675406590104103, 0.00686916708946228, -0.02055284008383751, -0.02374144457280636, 0.00007071159780025482, 0.0... |
wacv_2025_b6cb2c9222 | b6cb2c9222 | wacv | 2,025 | Focusing on What to Decode and What to Train: SOV Decoding with Specific Target Guided DeNoising and Vision Language Advisor | Recent transformer-based methods achieve notable gains in the Human-object Interaction Detection (HOID) task by leveraging the detection of DETR and the prior knowledge of Vision-Language Model (VLM). However these methods suffer from extended training times and complex optimization due to the entanglement of object de... | Junwen Chen; Yingcheng Wang; Keiji Yanai | Department of Informatics, The University of Electro-Communications, Tokyo, Japan; Department of Informatics, The University of Electro-Communications, Tokyo, Japan; Department of Informatics, The University of Electro-Communications, Tokyo, Japan | Poster | main | https://github.com/cjw2021/SOV-STG-VLA | https://openaccess.thecvf.com/content/WACV2025/html/Chen_Focusing_on_What_to_Decode_and_What_to_Train_SOV_WACV_2025_paper.html | 0 | 2307.02291 | Focusing on What to Decode and What to Train: SOV Decoding with Specific Target Guided DeNoising and Vision Language Advisor
Recent transformer-based methods achieve notable gains in the Human-object Interaction Detection (HOID) task by leveraging the detection of DETR and the prior knowledge of Vision-Language Model (... | [
-0.006412055343389511, -0.03791956603527069, -0.022202663123607635, 0.02164989896118641, -0.007614315487444401, -0.004576419945806265, -0.022608021274209023, -0.01763315126299858, 0.0006466180202551186, 0.021834153681993484, 0.0006719529628753662, -0.03303682431578636, 0.010051080957055092, ... |
wacv_2025_749e334895 | 749e334895 | wacv | 2,025 | Forensic Iris Image-Based Post-Mortem Interval Estimation | Post-mortem iris recognition is an emerging application of iris-based human identification in a forensic setup. One factor that may be useful in conditioning iris recognition methods is the tissue decomposition level which is correlated with the post-mortem interval (PMI) i.e. the number of hours that have elapsed sinc... | Rasel Ahmed Bhuiyan; Adam Czajka | University of Notre Dame; University of Notre Dame | Poster | main | https://github.com/CVRL/Forensic-Iris-PMI-Estimation | https://openaccess.thecvf.com/content/WACV2025/html/Bhuiyan_Forensic_Iris_Image-Based_Post-Mortem_Interval_Estimation_WACV_2025_paper.html | 0 | 2404.10172 | Forensic Iris Image-Based Post-Mortem Interval Estimation
Post-mortem iris recognition is an emerging application of iris-based human identification in a forensic setup. One factor that may be useful in conditioning iris recognition methods is the tissue decomposition level which is correlated with the post-mortem inte... | [
-0.0026117402594536543, -0.02201291173696518, -0.03349631279706955, 0.03147846460342407, -0.010538681410253048, -0.028066463768482208, 0.0010456133168190718, 0.003393657272681594, 0.03690831735730171, 0.0495290532708168, -0.06126927211880684, 0.019371362403035164, 0.005067555699497461, 0.0... |
wacv_2025_f28d32fb4e | f28d32fb4e | wacv | 2,025 | Foundation Models and Adaptive Feature Selection: A Synergistic Approach to Video Question Answering | This paper tackles the intricate challenge of video question-answering (VideoQA). Despite notable progress current methods fall short of effectively integrating questions with video frames and semantic object-level abstractions to create question-aware video representations. We introduce Local - Global Question Aware V... | Sai Bhargav Rongali; Mohamad Hassan N C; Ankit Jha; Neha Bhargava; Saurabh Prasad; Biplab Banerjee | Indian Institute of Technology Bombay, India; Indian Institute of Technology Bombay, India; LNMIIT, Jaipur, India; Fractal AI Research, India; Univesity of Houston; Indian Institute of Technology Bombay, India | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Rongali_Foundation_Models_and_Adaptive_Feature_Selection_A_Synergistic_Approach_to_WACV_2025_paper.html | 0 | 2412.09230 | Foundation Models and Adaptive Feature Selection: A Synergistic Approach to Video Question Answering
This paper tackles the intricate challenge of video question-answering (VideoQA). Despite notable progress current methods fall short of effectively integrating questions with video frames and semantic object-level abst... | [
-0.021985569968819618, -0.020906412973999977, 0.021253937855362892, 0.009268875233829021, -0.007416929118335247, -0.030271314084529877, 0.03169799968600273, 0.0024075298570096493, -0.028204450383782387, -0.00359414704144001, -0.016269687563180923, -0.010133116506040096, -0.000166046389495022... |
wacv_2025_f00f49a6a1 | f00f49a6a1 | wacv | 2,025 | Foundation X: Integrating Classification Localization and Segmentation through Lock-Release Pretraining Strategy for Chest X-ray Analysis | Developing robust and versatile deep-learning models is essential for enhancing diagnostic accuracy and guiding clinical interventions in medical imaging but it requires a large amount of annotated data. The advancement of deep learning has facilitated the creation of numerous medical datasets with diverse expert-level... | Nahid Ul Islam; DongAo Ma; Jiaxuan Pang; Shivasakthi Senthil Velan; Michael Gotway; Jianming Liang | Arizona State University, USA; Arizona State University, USA; Arizona State University, USA; Arizona State University, USA; Mayo Clinic, USA; Arizona State University, USA | Poster | main | GitHub.com/JLiangLab/Foundation X | https://openaccess.thecvf.com/content/WACV2025/html/Islam_Foundation_X_Integrating_Classification_Localization_and_Segmentation_through_Lock-Release_Pretraining_WACV_2025_paper.html | 0 | Foundation X: Integrating Classification Localization and Segmentation through Lock-Release Pretraining Strategy for Chest X-ray Analysis
Developing robust and versatile deep-learning models is essential for enhancing diagnostic accuracy and guiding clinical interventions in medical imaging but it requires a large amou... | [
0.02806043066084385, -0.06895682215690613, -0.05072392523288727, 0.023593369871377945, -0.020676106214523315, 0.00608523003757, 0.032728053629398346, 0.009845765307545662, 0.030047817155718803, -0.009408175945281982, -0.010183073580265045, -0.012626281939446926, -0.01887105032801628, 0.024... |
wacv_2025_2ffacd575d | 2ffacd575d | wacv | 2,025 | Frame by Familiar Frame: Understanding Replication in Video Diffusion Models | Building on the momentum of image generation diffusion models there is an increasing interest in video-based diffusion models. However video generation poses greater challenges due to its higher-dimensional nature the scarcity of training data and the complex spatiotemporal relationships involved. Image generation mode... | Aimon Rahman; Malsha V. Perera; Vishal M. Patel | Johns Hopkins University; Johns Hopkins University; Johns Hopkins University | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Rahman_Frame_by_Familiar_Frame_Understanding_Replication_in_Video_Diffusion_Models_WACV_2025_paper.html | 7 | 2403.19593 | Frame by Familiar Frame: Understanding Replication in Video Diffusion Models
Building on the momentum of image generation diffusion models there is an increasing interest in video-based diffusion models. However video generation poses greater challenges due to its higher-dimensional nature the scarcity of training data... | [
-0.026978949084877968, -0.025175495073199272, -0.031350959092378616, 0.015456879511475563, -0.01867212913930416, 0.0012535374844446778, -0.013188899494707584, 0.038437262177467346, 0.018471745774149895, -0.0019059234764426947, -0.026432448998093605, -0.020493799820542336, -0.0411333329975605... |
wacv_2025_cfe00a9e3c | cfe00a9e3c | wacv | 2,025 | Frequency-Domain Refinement of Vision Transformers for Robust Medical Image Segmentation under Degradation | Medical image segmentation is crucial for precise diagnosis treatment planning and disease monitoring in clinical settings. While convolutional neural networks (CNNs) have achieved remarkable success they struggle with modeling long-range dependencies. Vision Transformers (ViTs) address this limitation by leveraging se... | Sanaz Karimijafarbigloo; Sina Ghorbani Kolahi; Reza Azad; Ulas Bagci; Dorit Merhof | Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Department of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran; Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Department of Radiology, Northwestern University, Ch... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Karimijafarbigloo_Frequency-Domain_Refinement_of_Vision_Transformers_for_Robust_Medical_Image_Segmentation_WACV_2025_paper.html | 0 | Frequency-Domain Refinement of Vision Transformers for Robust Medical Image Segmentation under Degradation
Medical image segmentation is crucial for precise diagnosis treatment planning and disease monitoring in clinical settings. While convolutional neural networks (CNNs) have achieved remarkable success they struggle... | [
-0.06041097640991211, -0.04799174144864082, -0.00953423697501421, 0.02112002857029438, 0.011146172881126404, -0.012574935331940651, 0.03967561572790146, -0.013023712672293186, 0.023153265938162804, 0.05323053151369095, -0.021010125055909157, -0.004609130322933197, -0.013619029894471169, 0.... |
wacv_2025_e2fb31f3ad | e2fb31f3ad | wacv | 2,025 | From Visual Explanations to Counterfactual Explanations with Latent Diffusion | Visual counterfactual explanations are ideal hypothetical images that change the decision-making of the classifier with high confidence toward the desired class while remaining visually plausible and close to the initial image. In this paper we propose a new approach to tackle two key challenges in recent prominent wor... | Tung Luu; Nam Le; Duc Le; Bac Le | Faculty of Information Technology, University of Science, VNU-HCM; Faculty of Information Technology, University of Science, VNU-HCM; Faculty of Information Technology, University of Science, VNU-HCM; Faculty of Information Technology, University of Science, VNU-HCM | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Luu_From_Visual_Explanations_to_Counterfactual_Explanations_with_Latent_Diffusion_WACV_2025_paper.html | 0 | From Visual Explanations to Counterfactual Explanations with Latent Diffusion
Visual counterfactual explanations are ideal hypothetical images that change the decision-making of the classifier with high confidence toward the desired class while remaining visually plausible and close to the initial image. In this paper ... | [
-0.06311102956533432, -0.017007919028401375, -0.009859933517873287, 0.021005945280194283, -0.012422770261764526, -0.04197461158037186, -0.008331550285220146, 0.02495737373828888, 0.035376470535993576, 0.02613161876797676, -0.023559462279081345, -0.0446586012840271, -0.025143761187791824, 0... |
wacv_2025_75a1206024 | 75a1206024 | wacv | 2,025 | GANESH: Generalizable NeRF for Lensless Imaging | Lensless imaging offers a significant opportunity to develop ultra-compact cameras by removing the conventional bulky lens system. However without a focusing element the sensor's output is no longer a direct image but a complex multiplexed scene representation. Traditional methods have attempted to address this challen... | Rakesh Raj Madhavan; Akshat Kaimal; Badhrinarayanan K.V; Vinayak Gupta; Rohit Choudhary; Chandrakala Shanmuganathan; Kaushik Mitra | Shiv Nadar University, Chennai; Shiv Nadar University, Chennai; Shiv Nadar University, Chennai; Indian Institute of Technology, Madras; Indian Institute of Technology, Madras; Shiv Nadar University, Chennai; Indian Institute of Technology, Madras | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Madhavan_GANESH_Generalizable_NeRF_for_Lensless_Imaging_WACV_2025_paper.html | 0 | GANESH: Generalizable NeRF for Lensless Imaging
Lensless imaging offers a significant opportunity to develop ultra-compact cameras by removing the conventional bulky lens system. However without a focusing element the sensor's output is no longer a direct image but a complex multiplexed scene representation. Traditiona... | [
-0.032387468963861465, -0.006194673478603363, -0.003765613306313753, -0.011814583092927933, -0.006961024831980467, -0.01719728857278824, 0.011933185160160065, 0.03309908136725426, 0.006317837163805962, 0.008283893577754498, -0.05889957770705223, -0.0022420338355004787, -0.0028532904107123613... |
wacv_2025_ceace482c9 | ceace482c9 | wacv | 2,025 | GANFusion: Feed-Forward Text-to-3D with Diffusion in GAN Space | We train a feed-forward text-to-3D diffusion generator for human characters using only single-view 2D data for supervision. Existing 3D generative models cannot yet match the fidelity of image and/or video generative models. State-of-the-art 3D generators are either limited by the volume and diversity of existing 3D da... | Souhaib Attaiki; Paul Guerrero; Duygu Ceylan; Niloy Mitra; Maks Ovsjanikov | LIX, École Polytechnique, IPP Paris; Adobe Research; Adobe Research; University College London (UCL); LIX, École Polytechnique, IPP Paris | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Attaiki_GANFusion_Feed-Forward_Text-to-3D_with_Diffusion_in_GAN_Space_WACV_2025_paper.html | 0 | 2412.16717 | GANFusion: Feed-Forward Text-to-3D with Diffusion in GAN Space
We train a feed-forward text-to-3D diffusion generator for human characters using only single-view 2D data for supervision. Existing 3D generative models cannot yet match the fidelity of image and/or video generative models. State-of-the-art 3D generators a... | [
-0.03438754379749298, -0.022341065108776093, -0.02141997031867504, -0.04215363785624504, -0.025321079418063164, -0.014602063223719597, 0.014602063223719597, 0.0464881993830204, -0.025393322110176086, 0.027271632105112076, -0.03557954728603363, -0.056782789528369904, -0.009229009039700031, ... |
wacv_2025_c7b1ce4364 | c7b1ce4364 | wacv | 2,025 | GAUDA: Generative Adaptive Uncertainty-Guided Diffusion-Based Augmentation for Surgical Segmentation | Augmentation by generative modelling yields a promising alternative to the accumulation of surgical data where ethical organisational and regulatory aspects must be considered. Yet the joint synthesis of (image mask) pairs for segmentation a major application in surgery is rather unexplored. We propose to learn semanti... | Yannik Frisch; Christina Bornberg; Moritz Fuchs; Anirban Mukhopadhyay | Technical University Darmstadt; University of Girona; Technical University Darmstadt; Technical University Darmstadt | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Frisch_GAUDA_Generative_Adaptive_Uncertainty-Guided_Diffusion-Based_Augmentation_for_Surgical_Segmentation_WACV_2025_paper.html | 1 | 2501.10819 | GAUDA: Generative Adaptive Uncertainty-Guided Diffusion-Based Augmentation for Surgical Segmentation
Augmentation by generative modelling yields a promising alternative to the accumulation of surgical data where ethical organisational and regulatory aspects must be considered. Yet the joint synthesis of (image mask) pa... | [
-0.04990668594837189, -0.01225337479263544, -0.05820585787296295, -0.012048686854541302, -0.0080665722489357, -0.011853302828967571, 0.024543965235352516, 0.05076265335083008, 0.008141004480421543, 0.03877909481525421, -0.04499416798353195, 0.01358384732156992, -0.0370299406349659, 0.01822... |
wacv_2025_bd91a431e7 | bd91a431e7 | wacv | 2,025 | GET-UP: GEomeTric-Aware Depth Estimation with Radar Points UPsampling | Depth estimation plays a pivotal role in autonomous driving facilitating a comprehensive understanding of the vehicle's 3D surroundings. Radar with its robustness to adverse weather conditions and capability to measure distances has drawn significant interest for radar-camera depth estimation. However existing algorith... | Huawei Sun; Zixu Wang; Hao Feng; Julius Ott; Lorenzo Servadei; Robert Wille | ;;;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Sun_GET-UP_GEomeTric-Aware_Depth_Estimation_with_Radar_Points_UPsampling_WACV_2025_paper.html | 3 | GET-UP: GEomeTric-Aware Depth Estimation with Radar Points UPsampling
Depth estimation plays a pivotal role in autonomous driving facilitating a comprehensive understanding of the vehicle's 3D surroundings. Radar with its robustness to adverse weather conditions and capability to measure distances has drawn significant... | [
-0.0428507924079895, -0.0305172149091959, -0.026772428303956985, -0.02420138195157051, -0.011877121403813362, -0.05548245832324028, 0.013032229617238045, 0.00708435382694006, -0.013032229617238045, 0.025617320090532303, -0.02680969052016735, -0.00850960798561573, -0.03776459023356438, 0.00... |
wacv_2025_938a10a888 | 938a10a888 | wacv | 2,025 | GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-Grained Video-Language Learning | In various video-language learning tasks the challenge of achieving cross-modality alignment with multi-grained data persists. We propose a method to tackle this challenge from two crucial perspectives: data and modeling. Given the absence of a multi-grained video-text pretraining dataset we introduce a Granularity EXp... | Yicheng Wang; Zhikang Zhang; Jue Wang; David Fan; Zhenlin Xu; Linda Liu; Xiang Hao; Vimal Bhat; Xinyu Li | Texas A&M University+Amazon; Amazon; Amazon; Amazon; Amazon; Amazon; Amazon; Amazon; Amazon | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Wang_GEXIA_Granularity_Expansion_and_Iterative_Approximation_for_Scalable_Multi-Grained_Video-Language_WACV_2025_paper.html | 1 | 2412.07704 | GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-Grained Video-Language Learning
In various video-language learning tasks the challenge of achieving cross-modality alignment with multi-grained data persists. We propose a method to tackle this challenge from two crucial perspectives: data and ... | [
-0.0728427842259407, -0.0456247441470623, -0.022532351315021515, -0.01682925783097744, -0.03660807013511658, 0.01734263077378273, -0.013020973652601242, 0.021598948165774345, -0.02772207371890545, 0.017258623614907265, -0.029644884169101715, -0.03545065224170685, -0.005077713169157505, 0.0... |
wacv_2025_7f3dcfb90f | 7f3dcfb90f | wacv | 2,025 | GHOST: Grounded Human Motion Generation with Open Vocabulary Scene-and-Text Contexts | The connection between our 3D surroundings and the descriptive language that characterizes them would be well-suited for localizing and generating human motion in context but for one problem. The complexity introduced by multiple modalities makes capturing this connection challenging with a fixed set of descriptors. Sp... | Zoltán Á. Milacski; Koichiro Niinuma; Ryosuke Kawamura; Fernando de la Torre; László A. Jeni | Robotics Institute, Carnegie Mellon University, Pittsburgh PA, USA; Fujitsu Research of America, Pittsburgh PA, USA; Fujitsu Research of America, Pittsburgh PA, USA; Robotics Institute, Carnegie Mellon University, Pittsburgh PA, USA; Robotics Institute, Carnegie Mellon University, Pittsburgh PA, USA | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Milacski_GHOST_Grounded_Human_Motion_Generation_with_Open_Vocabulary_Scene-and-Text_Contexts_WACV_2025_paper.html | 0 | 2405.18438 | GHOST: Grounded Human Motion Generation with Open Vocabulary Scene-and-Text Contexts
The connection between our 3D surroundings and the descriptive language that characterizes them would be well-suited for localizing and generating human motion in context but for one problem. The complexity introduced by multiple modal... | [
-0.0030027972534298897, -0.016365360468626022, -0.038592107594013214, 0.001217633136548102, -0.002528304234147072, -0.033009834587574005, -0.02569705992937088, 0.013685870915651321, 0.00016063569637481123, 0.0008995599346235394, -0.021715039387345314, 0.0008995599346235394, 0.021287064999341... |
wacv_2025_27fce6d9f3 | 27fce6d9f3 | wacv | 2,025 | GMT: Guided Mask Transformer for Leaf Instance Segmentation | Leaf instance segmentation is a challenging multi-instance segmentation task aiming to separate and delineate each leaf in an image of a plant. Accurate segmentation of each leaf is crucial for plant-related applications such as the fine-grained monitoring of plant growth and crop yield estimation. This task is challen... | Feng Chen; Sotirios A. Tsaftaris; Mario Valerio Giuffrida | IDCOM, School of Engineering, University of Edinburgh; IDCOM, School of Engineering, University of Edinburgh; School of Computer Science, University of Nottingham | Poster | main | https://github.com/vios-s/gmt-leaf-ins-seg | https://openaccess.thecvf.com/content/WACV2025/html/Chen_GMT_Guided_Mask_Transformer_for_Leaf_Instance_Segmentation_WACV_2025_paper.html | 5 | 2406.17109 | GMT: Guided Mask Transformer for Leaf Instance Segmentation
Leaf instance segmentation is a challenging multi-instance segmentation task aiming to separate and delineate each leaf in an image of a plant. Accurate segmentation of each leaf is crucial for plant-related applications such as the fine-grained monitoring of ... | [
-0.047640830278396606, -0.020134786143898964, -0.029285311698913574, -0.017429571598768234, 0.01904543861746788, -0.04720509052276611, -0.0036334304604679346, -0.016113275662064552, 0.032462578266859055, 0.029158221557736397, -0.016076965257525444, -0.011692337691783905, -0.01027618534862995... |
wacv_2025_d5ce4bfcd5 | d5ce4bfcd5 | wacv | 2,025 | GStex: Per-Primitive Texturing of 2D Gaussian Splatting for Decoupled Appearance and Geometry Modeling | Gaussian splatting has demonstrated excellent performance for view synthesis and scene reconstruction. The representation achieves photorealistic quality by optimizing the position scale color and opacity of thousands to millions of 2D or 3D Gaussian primitives within a scene. However since each Gaussian primitive enco... | Victor Rong; Jingxiang Chen; Sherwin Bahmani; Kiriakos Kutulakos; David Lindell | University of Toronto + Vector Institute; University of Toronto; University of Toronto + Vector Institute; University of Toronto + Vector Institute; University of Toronto + Vector Institute | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Rong_GStex_Per-Primitive_Texturing_of_2D_Gaussian_Splatting_for_Decoupled_Appearance_WACV_2025_paper.html | 1 | 2409.12954 | GStex: Per-Primitive Texturing of 2D Gaussian Splatting for Decoupled Appearance and Geometry Modeling
Gaussian splatting has demonstrated excellent performance for view synthesis and scene reconstruction. The representation achieves photorealistic quality by optimizing the position scale color and opacity of thousands... | [
-0.04824970290064812, -0.0018308196449652314, -0.0006466223858296871, -0.024200299754738808, -0.022314073517918587, -0.019465869292616844, 0.01780598796904087, -0.00012253111344762146, 0.007210105191916227, 0.0686209574341774, -0.034706588834524155, -0.003524887841194868, 0.02787844464182853... |
wacv_2025_5765190e62 | 5765190e62 | wacv | 2,025 | GTA-HDR: A Large-Scale Synthetic Dataset for HDR Image Reconstruction | High Dynamic Range (HDR) content (i.e. images and videos) has a broad range of applications. However capturing HDR content from real-world scenes is expensive and time-consuming. Therefore the challenging task of reconstructing visually accurate HDR images from their Low Dynamic Range (LDR) counterparts is gaining atte... | Hrishav Bakul Barua; Kalin Stefanov; KokSheik Wong; Abhinav Dhall; Ganesh Krishnasamy | Monash University & TCS Research; Monash University; Monash University; Monash University; Monash University | Poster | main | https://github.com/HrishavBakulBarua/GTA-HDR | https://openaccess.thecvf.com/content/WACV2025/html/Barua_GTA-HDR_A_Large-Scale_Synthetic_Dataset_for_HDR_Image_Reconstruction_WACV_2025_paper.html | 3 | GTA-HDR: A Large-Scale Synthetic Dataset for HDR Image Reconstruction
High Dynamic Range (HDR) content (i.e. images and videos) has a broad range of applications. However capturing HDR content from real-world scenes is expensive and time-consuming. Therefore the challenging task of reconstructing visually accurate HDR ... | [
-0.034288190305233, -0.05111583694815636, -0.035516757518053055, 0.006831577979028225, -0.03626134619116783, -0.016660114750266075, 0.036987315863370895, 0.04344660043716431, -0.003136569168418646, -0.01450081542134285, -0.01267657894641161, -0.016222670674324036, -0.0372479222714901, 0.03...
wacv_2025_80fb699844 | 80fb699844 | wacv | 2,025 | GaitCloud: Leveraging Spatial-Temporal Information for LiDAR-Base Gait Recognition with A True-3D Gait Representation | Gait recognition using point clouds captured by LiDAR (Light Detection And Ranging) sensors offers better adaptability to variations in walking conditions compared to camera-based methods due to the precise spatial information captured. However existing methods typically project the point clouds into a sequence of 2D d... | Shaoxiong Zhang; Hiromitsu Awano; Takashi Sato | Kyoto University; Kyoto University; Kyoto University | Poster | main | https://github.com/seagrgz/GaitCloud-master.git | https://openaccess.thecvf.com/content/WACV2025/html/Zhang_GaitCloud_Leveraging_Spatial-Temporal_Information_for_LiDAR-Base_Gait_Recognition_with_A_WACV_2025_paper.html | 0 | GaitCloud: Leveraging Spatial-Temporal Information for LiDAR-Base Gait Recognition with A True-3D Gait Representation
Gait recognition using point clouds captured by LiDAR (Light Detection And Ranging) sensors offers better adaptability to variations in walking conditions compared to camera-based methods due to the pre... | [
-0.04282765835523605, -0.026412812992930412, -0.04635421559214592, 0.008448283188045025, -0.014378895983099937, -0.0197959765791893, -0.02088666334748268, 0.03239341452717781, 0.001439480111002922, 0.03473839536309242, -0.021595610305666924, -0.03324778750538826, 0.005407991819083691, 0.01...
wacv_2025_0d5f92fe58 | 0d5f92fe58 | wacv | 2,025 | GaitContour: Efficient Gait Recognition Based on a Contour-Pose Representation | Gait recognition holds the promise to robustly identify subjects based on walking patterns instead of appearance information. In recent years this field has been dominated by learning methods based on two input formats: silhouette images and sparse keypoints. Compared to image-based approaches keypoint-based methods ca... | Yuxiang Guo; Anshul Shah; Jiang Liu; Ayush Gupta; Rama Chellappa; Cheng Peng | Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University, Baltimore, MD, USA | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Guo_GaitContour_Efficient_Gait_Recognition_Based_on_a_Contour-Pose_Representation_WACV_2025_paper.html | 4 | 2311.16497 | GaitContour: Efficient Gait Recognition Based on a Contour-Pose Representation
Gait recognition holds the promise to robustly identify subjects based on walking patterns instead of appearance information. In recent years this field has been dominated by learning methods based on two input formats: silhouette images and... | [
0.007623972371220589, -0.012034419924020767, -0.03548849746584892, 0.03584245219826698, -0.009440313093364239, -0.0024706886615604162, -0.002433430403470993, 0.0005876909126527607, 0.020678337663412094, 0.04146844893693924, -0.011717724613845348, -0.010106304660439491, 0.017269205302000046, ...
wacv_2025_4d8648031f | 4d8648031f | wacv | 2,025 | GauFRe: Gaussian Deformation Fields for Real-Time Dynamic Novel View Synthesis | We propose a method that achieves state-of-the-art rendering quality and efficiency on monocular dynamic scene reconstruction using deformable 3D Gaussians. Implicit deformable representations commonly model motion with a canonical space and time-dependent backward-warping deformation field. Our method GauFRe uses a fo... | Yiqing Liang; Numair Khan; Zhengqin Li; Thu H Nguyen-Phuoc; Douglas Lanman; James Tompkin; Lei Xiao | Meta‡; Brown University; Meta‡; Brown University; Meta‡; Brown University; Meta‡ | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Liang_GauFRe_Gaussian_Deformation_Fields_for_Real-Time_Dynamic_Novel_View_Synthesis_WACV_2025_paper.html | 57 | 2312.11458 | GauFRe: Gaussian Deformation Fields for Real-Time Dynamic Novel View Synthesis
We propose a method that achieves state-of-the-art rendering quality and efficiency on monocular dynamic scene reconstruction using deformable 3D Gaussians. Implicit deformable representations commonly model motion with a canonical space and... | [
-0.029561374336481094, -0.009266648441553116, -0.008184901438653469, -0.03078671544790268, -0.026498019695281982, 0.001455093501135707, 0.00234657758846879, -0.0035874757450073957, 0.013880826532840729, 0.06226268783211708, -0.028738098219037056, -0.01815037801861763, 0.034769076853990555, ...
wacv_2025_c389f3bfbc | c389f3bfbc | wacv | 2,025 | Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities | Recent advancements in 3D Gaussian Splatting (3DGS) have unlocked significant potential for modeling 3D head avatars providing greater flexibility than mesh-based methods and more efficient rendering compared to NeRF-based approaches. Despite these advancements the creation of controllable 3DGS-based head avatars remai... | Peizhi Yan; Rabab Ward; Qiang Tang; Shan Du | University of British Columbia; University of British Columbia; Huawei Canada; University of British Columbia (Okanagan) | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Yan_Gaussian_Deja-vu_Creating_Controllable_3D_Gaussian_Head-Avatars_with_Enhanced_Generalization_WACV_2025_paper.html | 0 | Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities
Recent advancements in 3D Gaussian Splatting (3DGS) have unlocked significant potential for modeling 3D head avatars providing greater flexibility than mesh-based methods and more efficient render... | [
0.005211993586272001, -0.017513765022158623, -0.01767864264547825, -0.019895343109965324, -0.01067130547016859, -0.010533906519412994, 0.005440991837531328, -0.030777325853705406, 0.007002757862210274, 0.04070667922496796, -0.014023835770785809, -0.03979068621993065, 0.033030666410923004, ...
wacv_2025_9c27b179c5 | 9c27b179c5 | wacv | 2,025 | GaussianBeV : 3D Gaussian Representation Meets Perception Models for BeV Segmentation | The Bird's-eye View (BeV) representation is widely used for 3D perception from multi-view camera images. It allows to merge features from different cameras into a common space providing a unified representation of the 3D scene. The key component is the view transformer which transforms image views into the BeV. However... | Florian Chabot; Nicolas Granger; Guillaume Lapouge | ;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Chabot_GaussianBeV__3D_Gaussian_Representation_Meets_Perception_Models_for_BeV_WACV_2025_paper.html | 5 | 2407.14108 | GaussianBeV : 3D Gaussian Representation Meets Perception Models for BeV Segmentation
The Bird's-eye View (BeV) representation is widely used for 3D perception from multi-view camera images. It allows to merge features from different cameras into a common space providing a unified representation of the 3D scene. The ke... | [
-0.03265494480729103, -0.02246866002678871, -0.019103877246379852, -0.006246912758797407, 0.008963561616837978, 0.028481140732765198, 0.012098509818315506, -0.0033349045552313328, 0.03967869654297829, 0.03291235864162445, -0.025943763554096222, -0.009910481050610542, 0.03875935822725296, 0...
wacv_2025_ae515a2d6d | ae515a2d6d | wacv | 2,025 | GazeSearch: Radiology Findings Search Benchmark | Medical eye-tracking data is an important information source for understanding how radiologists visually interpret medical images. This information not only improves the accuracy of deep learning models for X-ray analysis but also their interpretability enhancing transparency in decision-making. However the current eye... | Trong Thang Pham; Tien-Phat Nguyen; Yuki Ikebe; Akash Awasthi; Zhigang Deng; Carol C. Wu; Hien Nguyen; Ngan Le | ;;;;;;; | Poster | main | https://github.com/UARK-AICV/GazeSearch | https://openaccess.thecvf.com/content/WACV2025/html/Pham_GazeSearch_Radiology_Findings_Search_Benchmark_WACV_2025_paper.html | 2 | 2411.05780 | GazeSearch: Radiology Findings Search Benchmark
Medical eye-tracking data is an important information source for understanding how radiologists visually interpret medical images. This information not only improves the accuracy of deep learning models for X-ray analysis but also their interpretability enhancing transpar... | [
-0.008075321093201637, -0.05353621765971184, -0.03695501387119293, 0.011112155392765999, 0.015481897629797459, -0.023470191285014153, 0.016627008095383644, -0.007168393582105637, -0.0001608880702406168, 0.012046565301716328, -0.026145169511437416, -0.026676500216126442, -0.026694823056459427...
wacv_2025_30a75ee229 | 30a75ee229 | wacv | 2,025 | Generalist YOLO: Towards Real-Time End-to-End Multi-Task Visual Language Models | Generalist models capable of handling multiple modalities and tasks simultaneously are currently one of the hottest research topics. However due to interference between different tasks during the training process existing generalist models require a very large decoder to achieve good results in various tasks which make... | Hung-Shuo Chang; Chien-Yao Wang; Richard Robert Wang; Gene Chou; Hong-Yuan Mark Liao | ;;;; | Poster | main | https://github.com/WongKinYiu/GeneralistYOLO | https://openaccess.thecvf.com/content/WACV2025/html/Chang_Generalist_YOLO_Towards_Real-Time_End-to-End_Multi-Task_Visual_Language_Models_WACV_2025_paper.html | 0 | Generalist YOLO: Towards Real-Time End-to-End Multi-Task Visual Language Models
Generalist models capable of handling multiple modalities and tasks simultaneously are currently one of the hottest research topics. However due to interference between different tasks during the training process existing generalist models ... | [
-0.02402561530470848, -0.023701461032032967, -0.04118677228689194, 0.014291428029537201, -0.015683388337492943, 0.01500647608190775, -0.05289449170231819, 0.020650589838624, -0.009414800442755222, -0.009953469969332218, -0.033407047390937805, -0.012527642771601677, 0.012499040924012661, 0....
wacv_2025_bd7159d3c1 | bd7159d3c1 | wacv | 2,025 | Generalizable Single-Source Cross-Modality Medical Image Segmentation via Invariant Causal Mechanisms | Single-source domain generalization (SDG) aims to learn a model from a single source domain that can generalize well on unseen target domains. This is an important task in computer vision particularly relevant to medical imaging where domain shifts are common. In this work we consider a challenging yet practical settin... | Boqi Chen; Yuanzhi Zhu; Yunke Ao; Sebastiano Caprara; Reto Sutter; Gunnar Rätsch; Ender Konukoglu; Anna Susmelj | Department of Computer Science, ETH Zurich + ETH AI Center + Computer Vision Lab, ETH Zurich; Computer Vision Lab, ETH Zurich; Department of Computer Science, ETH Zurich + ETH AI Center + Balgrist University Hospital; Balgrist University Hospital + University of Zurich; Balgrist University Hospital + University of Zuri... | Poster | main | https://github.com/ratschlab/ICMSeg | https://openaccess.thecvf.com/content/WACV2025/html/Chen_Generalizable_Single-Source_Cross-Modality_Medical_Image_Segmentation_via_Invariant_Causal_Mechanisms_WACV_2025_paper.html | 0 | 2411.05223 | Generalizable Single-Source Cross-Modality Medical Image Segmentation via Invariant Causal Mechanisms
Single-source domain generalization (SDG) aims to learn a model from a single source domain that can generalize well on unseen target domains. This is an important task in computer vision particularly relevant to medic... | [
-0.09224221110343933, -0.019168859347701073, -0.027821091935038567, -0.028816642239689827, -0.011973748914897442, -0.0032898394856601954, -0.016969598829746246, -0.0017704948550090194, 0.01511425618082285, 0.011177308857440948, -0.021938297897577286, 0.006163358222693205, -0.0066023054532706...
wacv_2025_dc1e164927 | dc1e164927 | wacv | 2,025 | Generalizable Single-View Object Pose Estimation by Two-Side Generating and Matching | In this paper we present a novel generalizable object pose estimation method to determine the object pose using only one RGB image. Unlike traditional approaches that rely on instance-level object pose estimation and necessitate extensive training data our method offers generalization to unseen objects without extensiv... | Yujing Sun; Caiyi Sun; Yuan Liu; Yuexin Ma; Siu Ming Yiu | The University of Hong Kong; ShanghaiTech University; The University of Hong Kong; ShanghaiTech University; The University of Hong Kong | Poster | main | https://github.com/scy639/Gen2SM | https://openaccess.thecvf.com/content/WACV2025/html/Sun_Generalizable_Single-View_Object_Pose_Estimation_by_Two-Side_Generating_and_Matching_WACV_2025_paper.html | 1 | 2411.15860 | Generalizable Single-View Object Pose Estimation by Two-Side Generating and Matching
In this paper we present a novel generalizable object pose estimation method to determine the object pose using only one RGB image. Unlike traditional approaches that rely on instance-level object pose estimation and necessitate extens... | [
-0.06743791699409485, -0.012151548638939857, -0.019795117899775505, 0.002383503131568432, -0.004957868251949549, -0.024884773418307304, -0.013169479556381702, 0.015405291691422462, 0.02690245769917965, 0.0022914803121238947, -0.04849713295698166, -0.032337479293346405, 0.031174130737781525, ...
wacv_2025_07854e9efc | 07854e9efc | wacv | 2,025 | GeneralizeFormer: Layer-Adaptive Model Generation across Test-Time Distribution Shifts | We consider the problem of test-time domain generalization where a model is trained on several source domains and adjusted on target domains never seen during training. Different from the common methods that fine-tune the model or adjust the classifier parameters online we propose to generate multiple layer parameters ... | Sameer Ambekar; Zehao Xiao; Xiantong Zhen; Cees Snoek | AIM Lab, University of Amsterdam + TU Munich, Germany; AIM Lab, University of Amsterdam; AIM Lab, University of Amsterdam + Core42 + United Imaging Healthcare, Co., Ltd., China; AIM Lab, University of Amsterdam | Poster | main | https://github.com/ambekarsameer96/generalizeformer | https://openaccess.thecvf.com/content/WACV2025/html/Ambekar_GeneralizeFormer_Layer-Adaptive_Model_Generation_across_Test-Time_Distribution_Shifts_WACV_2025_paper.html | 0 | 2502.12195 | GeneralizeFormer: Layer-Adaptive Model Generation across Test-Time Distribution Shifts
We consider the problem of test-time domain generalization where a model is trained on several source domains and adjusted on target domains never seen during training. Different from the common methods that fine-tune the model or ad... | [
-0.027269192039966583, -0.026056021451950073, -0.029025575146079063, 0.010936648584902287, 0.003433546284213662, -0.022018153220415115, -0.039292264729738235, -0.0062967208214104176, 0.007885613478720188, 0.010619775392115116, -0.02955067902803421, -0.001300311298109591, -0.02542227506637573...
wacv_2025_232a9c13a5 | 232a9c13a5 | wacv | 2,025 | Generating Long-Take Videos via Effective Keyframes and Guidance | We tackle the challenge of generating long-take videos encompassing multiple non-repetitive yet coherent events. Existing approaches generate long videos conditioned on single input guidance often leading to repetitive content. To address this problem we develop a framework that uses multiple guidance sources to enhanc... | Hsin-Ping Huang; Yu-Chuan Su; Ming-Hsuan Yang | ;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Huang_Generating_Long-Take_Videos_via_Effective_Keyframes_and_Guidance_WACV_2025_paper.html | 0 | Generating Long-Take Videos via Effective Keyframes and Guidance
We tackle the challenge of generating long-take videos encompassing multiple non-repetitive yet coherent events. Existing approaches generate long videos conditioned on single input guidance often leading to repetitive content. To address this problem we ... | [
-0.01176061574369669, -0.038560621440410614, -0.04042886570096016, 0.021129874512553215, -0.04252130538225174, -0.005945696495473385, -0.031199727207422256, 0.04524894431233406, -0.057243090122938156, 0.007384246680885553, -0.0020363894291222095, -0.04308177903294563, 0.013909099623560905, ...
wacv_2025_85fa6214a1 | 85fa6214a1 | wacv | 2,025 | Generating Visual Explanations from Deep Networks using Implicit Neural Representations | Explaining deep learning models in a way that humans can easily understand is essential for responsible artificial intelligence applications. Attribution methods constitute an important area of explainable deep learning. The attribution problem involves finding parts of the network's input that are the most responsible... | Michal Byra; Henrik Skibbe | Institute of Fundamental Technological Research, Polish Academy of Sciences, Poland+RIKEN Center for Brain Science, Japan+Samsung AI Center Warsaw, Poland; RIKEN Center for Brain Science, Japan | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Byra_Generating_Visual_Explanations_from_Deep_Networks_using_Implicit_Neural_Representations_WACV_2025_paper.html | 0 | 2501.11784 | Generating Visual Explanations from Deep Networks using Implicit Neural Representations
Explaining deep learning models in a way that humans can easily understand is essential for responsible artificial intelligence applications. Attribution methods constitute an important area of explainable deep learning. The attribu... | [
-0.03298049047589302, -0.03924401104450226, 0.011261405423283577, 0.010134341195225716, -0.03754417598247528, -0.00875784456729889, 0.0078755933791399, -0.015871282666921616, 0.03565957769751549, 0.014097542501986027, -0.009090420790016651, 0.000270940043265, -0.035622622817754745, 0.05310...
wacv_2025_ab8153a107 | ab8153a107 | wacv | 2,025 | Generation of Complex 3D Human Motion by Temporal and Spatial Composition of Diffusion Models | In this paper we address the challenge of generating realistic 3D human motions for action classes that were never seen during the training phase. Our approach involves decomposing complex actions into simpler movements specifically those observed during training by leveraging the knowledge of human motion contained in... | Lorenzo Mandelli; Stefano Berretti | University of Florence; University of Florence | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Mandelli_Generation_of_Complex_3D_Human_Motion_by_Temporal_and_Spatial_WACV_2025_paper.html | 2 | 2409.11920 | Generation of Complex 3D Human Motion by Temporal and Spatial Composition of Diffusion Models
In this paper we address the challenge of generating realistic 3D human motions for action classes that were never seen during the training phase. Our approach involves decomposing complex actions into simpler movements specif... | [
-0.015205959789454937, -0.03561689332127571, -0.010093932040035725, 0.010242645628750324, 0.015884464606642723, -0.003208959475159645, 0.01230604574084282, 0.0302817951887846, -0.003643481759354472, 0.010047459043562412, -0.011302229017019272, -0.0200019720941782, -0.007393851410597563, -0...
wacv_2025_2a71f39695 | 2a71f39695 | wacv | 2,025 | Generative Model-Based Fusion for Improved Few-Shot Semantic Segmentation of Infrared Images | Infrared (IR) imaging is commonly used in various scenarios including autonomous driving fire safety and defense applications. Thus semantic segmentation of such images is of great interest. However this task faces several challenges including data scarcity differing contrast and input channel number compared to natura... | Junno Yun; Mehmet Akçakaya | University of Minnesota; University of Minnesota | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Yun_Generative_Model-Based_Fusion_for_Improved_Few-Shot_Semantic_Segmentation_of_Infrared_WACV_2025_paper.html | 0 | Generative Model-Based Fusion for Improved Few-Shot Semantic Segmentation of Infrared Images
Infrared (IR) imaging is commonly used in various scenarios including autonomous driving fire safety and defense applications. Thus semantic segmentation of such images is of great interest. However this task faces several chal... | [
-0.04756748676300049, -0.006329544819891453, -0.0028496652375906706, 0.01493334211409092, -0.012330282479524612, -0.018184883520007133, 0.015170813538134098, 0.03395850956439972, 0.002947850851342082, 0.05695677176117897, -0.05947762727737427, -0.018815096467733383, -0.02294345758855343, 0...
wacv_2025_009a2f2948 | 009a2f2948 | wacv | 2,025 | GeoDiffuser: Geometry-Based Image Editing with Diffusion Models | The success of image generative models has enabled us to build methods that can edit images based on text or other user input. However these methods are bespoke imprecise require additional information or are limited to only 2D image edits. We present GeoDiffuser a zero-shot optimization-based method that unifies commo... | Rahul Sajnani; Jeroen Vanbaar; Jie Min; Kapil D Katyal; Srinath Sridhar | Brown University; Amazon Robotics; Amazon Robotics; Amazon Robotics; Brown University + Amazon Robotics | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Sajnani_GeoDiffuser_Geometry-Based_Image_Editing_with_Diffusion_Models_WACV_2025_paper.html | 6 | 2404.14403 | GeoDiffuser: Geometry-Based Image Editing with Diffusion Models
The success of image generative models has enabled us to build methods that can edit images based on text or other user input. However these methods are bespoke imprecise require additional information or are limited to only 2D image edits. We present GeoD... | [
-0.05899191275238991, -0.00795543473213911, -0.015099550597369671, -0.008099669590592384, 0.01226894836872816, -0.00693677831441164, -0.008541387505829334, 0.007910361513495445, 0.016064118593931198, 0.0347965769469738, -0.04579445719718933, -0.015514224767684937, 0.0007989239529706538, -0...
wacv_2025_a982274fdf | a982274fdf | wacv | 2,025 | GeoGuide: Geometric Guidance of Diffusion Models | Diffusion models are currently one of the most effective tools in image generation. This is in particular due to the fact that contrary to GANs during training they can be easily conditioned. However given a pretrained diffusion guiding it to obtain desired result is typically a more delicate task. A typical technique ... | Mateusz Poleski; Jacek Tabor; Przemyslaw Spurek | Faculty of Mathematics and Computer Science, Jagiellonian University; Faculty of Mathematics and Computer Science, Jagiellonian University; Faculty of Mathematics and Computer Science, Jagiellonian University+IDEAS NCBR | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Poleski_GeoGuide_Geometric_Guidance_of_Diffusion_Models_WACV_2025_paper.html | 1 | 2407.12889 | GeoGuide: Geometric Guidance of Diffusion Models
Diffusion models are currently one of the most effective tools in image generation. This is in particular due to the fact that contrary to GANs during training they can be easily conditioned. However given a pretrained diffusion guiding it to obtain desired result is typ... | [
-0.08247990906238556, -0.039674803614616394, -0.030029093846678734, -0.02682599052786827, -0.015433134511113167, -0.04589901491999626, -0.03017468936741352, 0.01889103092253208, 0.03423316776752472, 0.029483109712600708, -0.0652996301651001, -0.009427315555512905, -0.0013842957559973001, 0...
wacv_2025_8d0c00b5f3 | 8d0c00b5f3 | wacv | 2,025 | GeoPos: A Minimal Positional Encoding for Enhanced Fine-Grained Details in Image Synthesis using Convolutional Neural Networks | The enduring inability of image generative models to recreate intricate geometric features such as those present in human hands and fingers has been an ongoing problem in image generation for nearly a decade. While strides have been made by increasing model sizes and diversifying training datasets this issue remains pr... | Mehran Hosseini; Peyman Hosseini | King’s College London; Queen Mary University of London | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Hosseini_GeoPos_A_Minimal_Positional_Encoding_for_Enhanced_Fine-Grained_Details_in_WACV_2025_paper.html | 0 | 2401.01951 | GeoPos: A Minimal Positional Encoding for Enhanced Fine-Grained Details in Image Synthesis using Convolutional Neural Networks
The enduring inability of image generative models to recreate intricate geometric features such as those present in human hands and fingers has been an ongoing problem in image generation for n... | [
0.003643688280135393, -0.029261477291584015, 0.004105564206838608, -0.009540772065520287, 0.007147414144128561, -0.010758444666862488, 0.017355339601635933, 0.02853367291390896, -0.02991463430225849, 0.05243925377726555, -0.019520092755556107, -0.0066622113808989525, -0.008220459334552288, ...
wacv_2025_5be29df9ac | 5be29df9ac | wacv | 2,025 | Geometry-Aware Deep Learning for 3D Skeleton-Based Motion Prediction | The field of human motion prediction in computer vision faces challenges especially in 3D Skeleton-based Human Motion. Deep learning models albeit successful in most vision tasks were designed for data characterized by an underlying Euclidean structure which is not always fulfilled as pre-processed data may often resid... | Mayssa Zaier; Hazem Wannous; Hassen Drira | IMT Nord Europe, Lille, France; IMT Nord Europe, Lille, France; University of Strasbourg, Strasbourg, France | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Zaier_Geometry-Aware_Deep_Learning_for_3D_Skeleton-Based_Motion_Prediction_WACV_2025_paper.html | 0 | Geometry-Aware Deep Learning for 3D Skeleton-Based Motion Prediction
The field of human motion prediction in computer vision faces challenges especially in 3D Skeleton-based Human Motion. Deep learning models albeit successful in most vision tasks were designed for data characterized by an underlying Euclidean structur... | [
-0.036524973809719086, -0.01038492750376463, -0.02203795127570629, -0.007728653494268656, 0.002910854062065482, 0.000153858563862741, -0.014659943990409374, 0.00301893032155931, 0.017138494178652763, -0.009765290655195713, -0.005009935237467289, 0.006988931447267532, -0.004580031614750624, ...
wacv_2025_1dae4ad335 | 1dae4ad335 | wacv | 2,025 | Global-Guided Focal Neural Radiance Field for Large-Scale Scene Rendering | Neural radiance fields (NeRF) have recently been applied to render large-scale scenes. However their limited model capacity typically results in blurred rendering results. Existing large-scale NeRFs primarily address this limitation by partitioning the scene into blocks which are subsequently handled by separate sub-Ne... | Mingqi Shao; Feng Xiong; Hang Zhang; Shuang Yang; Mu Xu; Wei Bian; Xueqian Wang | Tsinghua Shenzhen International Graduate School+AMAP; AMAP; AMAP; AMAP; AMAP; AMAP; Tsinghua Shenzhen International Graduate School | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Shao_Global-Guided_Focal_Neural_Radiance_Field_for_Large-Scale_Scene_Rendering_WACV_2025_paper.html | 0 | 2403.12839 | Global-Guided Focal Neural Radiance Field for Large-Scale Scene Rendering
Neural radiance fields (NeRF) have recently been applied to render large-scale scenes. However their limited model capacity typically results in blurred rendering results. Existing large-scale NeRFs primarily address this limitation by partitioni... | [
-0.03317407891154289, -0.01314479298889637, -0.015246857888996601, -0.020451534539461136, 0.011033548042178154, -0.011373182758688927, -0.008591847494244576, -0.002565621165558696, 0.021369466558098793, 0.03936094418168068, -0.028088733553886414, -0.006907441653311253, 0.0011118457186967134,...
wacv_2025_8ea8e4b74e | 8ea8e4b74e | wacv | 2,025 | GlobalDoc: A Cross-Modal Vision-Language Framework for Real-World Document Image Retrieval and Classification | Visual document understanding (VDU) has rapidly advanced with the development of powerful multi-modal language models. However these models typically require extensive document pre-training data to learn intermediate representations and often suffer a significant performance drop in real-world online industrial setting... | Souhail Bakkali; Sanket Biswas; Zuheng Ming; Mickaël Coustaty; Marçal Rusiñol; Oriol Ramos Terrades; Josep Lladós | L3i, La Rochelle Université, France; Computer Vision Center, Universitat Autònoma de Barcelona, Spain; L2TI, Université Sorbonne Paris Nord, France; L3i, La Rochelle Université, France; AllRead Machine Learning Technologies, Spain; Computer Vision Center, Universitat Autònoma de Barcelona, Spain; Computer Vision Center... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Bakkali_GlobalDoc_A_Cross-Modal_Vision-Language_Framework_for_Real-World_Document_Image_Retrieval_WACV_2025_paper.html | 0 | 2309.05756 | GlobalDoc: A Cross-Modal Vision-Language Framework for Real-World Document Image Retrieval and Classification
Visual document understanding (VDU) has rapidly advanced with the development of powerful multi-modal language models. However these models typically require extensive document pre-training data to learn interm... | [
-0.0518203005194664, -0.015515437349677086, -0.029245832934975624, 0.008393301628530025, -0.00869982410222292, 0.01619158871471882, -0.02086154744029045, 0.008627701550722122, 0.02893931046128273, -0.0027496851980686188, -0.042480383068323135, -0.0710410475730896, 0.005702216178178787, 0.0...
wacv_2025_9a76aae92f | 9a76aae92f | wacv | 2,025 | Good Seed Makes a Good Crop: Discovering Secret Seeds in Text-to-Image Diffusion Models | Recent advances in text-to-image (T2I) diffusion models have facilitated creative and photorealistic image synthesis. By varying the random seeds we can generate many images for a fixed text prompt. Technically the seed controls the initial noise and in multi-step diffusion inference the noise used for reparameterizati... | Katherine Xu; Lingzhi Zhang; Jianbo Shi | University of Pennsylvania; Adobe Inc.; University of Pennsylvania | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Xu_Good_Seed_Makes_a_Good_Crop_Discovering_Secret_Seeds_in_WACV_2025_paper.html | 9 | 2405.14828 | Good Seed Makes a Good Crop: Discovering Secret Seeds in Text-to-Image Diffusion Models
Recent advances in text-to-image (T2I) diffusion models have facilitated creative and photorealistic image synthesis. By varying the random seeds we can generate many images for a fixed text prompt. Technically the seed controls the... | [
-0.06425227224826813, -0.01045167725533247, -0.036381129175424576, -0.006387136410921812, -0.018190564587712288, -0.036028094589710236, -0.048978883773088455, 0.056336864829063416, -0.005825068335980177, 0.029506247490644455, -0.07064405083656311, -0.017540236935019493, -0.06350903958082199,...
wacv_2025_a2545d3914 | a2545d3914 | wacv | 2,025 | Graph-Jigsaw Conditioned Diffusion Model for Skeleton-Based Video Anomaly Detection | Skeleton-based video anomaly detection (SVAD) is a crucial task in computer vision. Accurately identifying abnormal patterns or events enables operators to promptly detect suspicious activities thereby enhancing safety. Achieving this demands a comprehensive understanding of human motions both at body and region levels... | Ali Karami; Thi Kieu Khanh Ho; Narges Armanfard | Department of Electrical and Computer Engineering, McGill University + Mila - Quebec AI Institute, Montreal, QC, Canada; Department of Electrical and Computer Engineering, McGill University + Mila - Quebec AI Institute, Montreal, QC, Canada; Department of Electrical and Computer Engineering, McGill University + Mila - ... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Karami_Graph-Jigsaw_Conditioned_Diffusion_Model_for_Skeleton-Based_Video_Anomaly_Detection_WACV_2025_paper.html | 3 | 2403.12172 | Graph-Jigsaw Conditioned Diffusion Model for Skeleton-Based Video Anomaly Detection
Skeleton-based video anomaly detection (SVAD) is a crucial task in computer vision. Accurately identifying abnormal patterns or events enables operators to promptly detect suspicious activities thereby enhancing safety. Achieving this d... | [
-0.0736064538359642, -0.03959469124674797, -0.0395212322473526, ... |
wacv_2025_3141fa4e0f | 3141fa4e0f | wacv | 2,025 | GroundingMate: Aiding Object Grounding for Goal-Oriented Vision-and-Language Navigation | Goal-Oriented Vision-and-Language Navigation (VLN) aims to enable agents to navigate to specified locations and identify designated target objects following natural language instruction. This approach has gained popularity due to its close alignment with real-world scenarios. However existing studies have predominantly... | Qianyi Liu; Siqi Zhang; Yanyuan Qiao; Junyou Zhu; Xiang Li; Longteng Guo; Qunbo Wang; Xingjian He; Qi Wu; Jing Liu | Institute of Automation, Chinese Academy of Sciences+School of Artificial Intelligence, University of Chinese Academy of Sciences; Department of Computer Science and Technology, Tongji University; Australian Institute for Machine Learning, The University of Adelaide; Institute of Automation, Chinese Academy of Sciences... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Liu_GroundingMate_Aiding_Object_Grounding_for_Goal-Oriented_Vision-and-Language_Navigation_WACV_2025_paper.html | 0 | GroundingMate: Aiding Object Grounding for Goal-Oriented Vision-and-Language Navigation
Goal-Oriented Vision-and-Language Navigation (VLN) aims to enable agents to navigate to specified locations and identify designated target objects following natural language instruction. This approach has gained popularity due to it... | [
-0.029680341482162476, -0.022264856845140457, -0.024362536147236824, ... |
wacv_2025_88fd2c0c35 | 88fd2c0c35 | wacv | 2,025 | Guardian of the Ensembles: Introducing Pairwise Adversarially Robust Loss for Resisting Adversarial Attacks in DNN Ensembles | Adversarial attacks rely on transferability where an adversarial example (AE) crafted on a surrogate classifier tends to mislead a target classifier. Recent ensemble methods demonstrate that AEs are less likely to mislead multiple classifiers in an ensemble. This paper proposes a new ensemble training using a Pairwise ... | Shubhi Shukla; Subhadeep Dalui; Manaar Alam; Shubhajit Datta; Arijit Mondal; Debdeep Mukhopadhyay; Partha Pratim Chakrabarti | ;;;;;; | Poster | main | https://github.com/shubhishukla10/PARL | https://openaccess.thecvf.com/content/WACV2025/html/Shukla_Guardian_of_the_Ensembles_Introducing_Pairwise_Adversarially_Robust_Loss_for_WACV_2025_paper.html | 0 | 2112.04948 | Guardian of the Ensembles: Introducing Pairwise Adversarially Robust Loss for Resisting Adversarial Attacks in DNN Ensembles
Adversarial attacks rely on transferability where an adversarial example (AE) crafted on a surrogate classifier tends to mislead a target classifier. Recent ensemble methods demonstrate that AEs ... | [
-0.05028365179896355, 0.006587641779333353, -0.01268248911947012, ... |
wacv_2025_61756be4a9 | 61756be4a9 | wacv | 2,025 | Guess Future Anomalies from Normalcy: Forecasting Abnormal Behavior in Real-World Videos | Forecasting Abnormal Human Behavior (AHB) aims to predict unusual behavior in advance by analyzing early patterns of normal human interactions. Unlike typical action prediction methods this task focuses on observing only normal interactions to predict both short and long term future abnormal behavior. Despite its affir... | Snehashis Majhi; Mohammed Guermal; Antitza Dantcheva; Quan Kong; Lorenzo Garattoni; Gianpiero Francesca; François Brémond | INRIA+Côte d’Azur University; INRIA+Côte d’Azur University; INRIA+Côte d’Azur University; Woven by Toyota; Toyota Motor Europe; Toyota Motor Europe; INRIA+Côte d’Azur University | Poster | main | https://github.com/snehashismajhi/AHB-F | https://openaccess.thecvf.com/content/WACV2025/html/Majhi_Guess_Future_Anomalies_from_Normalcy_Forecasting_Abnormal_Behavior_in_Real-World_WACV_2025_paper.html | 0 | Guess Future Anomalies from Normalcy: Forecasting Abnormal Behavior in Real-World Videos
Forecasting Abnormal Human Behavior (AHB) aims to predict unusual behavior in advance by analyzing early patterns of normal human interactions. Unlike typical action prediction methods this task focuses on observing only normal int... | [
-0.028554663062095642, -0.059402722865343094, -0.02319713868200779, ... |
wacv_2025_e864aeed44 | e864aeed44 | wacv | 2,025 | HDPNet: Hourglass Vision Transformer with Dual-Path Feature Pyramid for Camouflaged Object Detection | Existing camouflaged object detection methods often struggle with detecting small objects and fine object boundaries. To alleviate these issues we propose a novel hourglass vision Transformer with Dual-path Feature Pyramid (HDPNet). Specifically we construct an hourglass Transformer encoder that effectively captures th... | Jinpeng He; Biyuan Liu; Huaixin Chen | University of Electronic Science and Technology of China; University of Electronic Science and Technology of China; University of Electronic Science and Technology of China | Poster | main | https://github.com/LittleGrey-hjp/HDPNet | https://openaccess.thecvf.com/content/WACV2025/html/He_HDPNet_Hourglass_Vision_Transformer_with_Dual-Path_Feature_Pyramid_for_Camouflaged_WACV_2025_paper.html | 0 | HDPNet: Hourglass Vision Transformer with Dual-Path Feature Pyramid for Camouflaged Object Detection
Existing camouflaged object detection methods often struggle with detecting small objects and fine object boundaries. To alleviate these issues we propose a novel hourglass vision Transformer with Dual-path Feature Pyra... | [
-0.07309825718402863, -0.026002520695328712, -0.031042492017149925, ... |
wacv_2025_525169f9dd | 525169f9dd | wacv | 2,025 | HEX: Hierarchical Emergence Exploitation in Self-Supervised Algorithms | In this paper we propose an algorithm that can be used on top of a wide variety of self-supervised (SSL) approaches to take advantage of hierarchical structures that emerge during training. SSL approaches typically work through some invariance term to ensure consistency between similar samples and a regularization term... | Kiran Kokilepersaud; Seulgi Kim; Mohit Prabhushankar; Ghassan AlRegib | OLIVES at the Center for Signal and Information Processing CSIP; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA; School of Electrical and Computer Engineering, Georgia Inst... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Kokilepersaud_HEX_Hierarchical_Emergence_Exploitation_in_Self-Supervised_Algorithms_WACV_2025_paper.html | 1 | 2410.23200 | HEX: Hierarchical Emergence Exploitation in Self-Supervised Algorithms
In this paper we propose an algorithm that can be used on top of a wide variety of self-supervised (SSL) approaches to take advantage of hierarchical structures that emerge during training. SSL approaches typically work through some invariance term ... | [
-0.05052924528717995, 0.011420355178415775, -0.024425575509667397, ... |
wacv_2025_89b92b3689 | 89b92b3689 | wacv | 2,025 | HOPE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts | Compositional Zero-Shot Learning (CZSL) has emerged as an essential paradigm in machine learning aiming to overcome the constraints of traditional zero-shot learning by incorporating compositional thinking into its methodology. Conventional zero-shot learning has difficulty managing unfamiliar combinations of seen and ... | Do Huu Dat; Po-Yuan Mao; Tien Hoang Nguyen; Wray Buntine; Mohammed Bennamoun | VinUniversity; Academia Sinica+VinUniversity; VNU-UET; VinUniversity; University of Western Australia | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Dat_HOPE_A_Memory-Based_and_Composition-Aware_Framework_for_Zero-Shot_Learning_with_WACV_2025_paper.html | 5 | HOPE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts
Compositional Zero-Shot Learning (CZSL) has emerged as an essential paradigm in machine learning aiming to overcome the constraints of traditional zero-shot learning by incorporating composition... | [
-0.0533003993332386, -0.044722896069288254, 0.006189663428813219, ... |
wacv_2025_24d92663b1 | 24d92663b1 | wacv | 2,025 | HSDA: High-Frequency Shuffle Data Augmentation for Bird's-Eye-View Map Segmentation | Autonomous driving has garnered significant attention in recent research and Bird's-Eye-View (BEV) map segmentation plays a vital role in the field providing the basis for safe and reliable operation. While data augmentation is a commonly used technique for improving BEV map segmentation networks existing approaches pr... | Calvin Glisson; Qiuxiao Chen | School of Computer Science and Engineering, California State University, San Bernardino, San Bernardino, USA; School of Computer Science and Engineering, California State University, San Bernardino, San Bernardino, USA | Poster | main | https://github.com/Zarhult/HSDA | https://openaccess.thecvf.com/content/WACV2025/html/Glisson_HSDA_High-Frequency_Shuffle_Data_Augmentation_for_Birds-Eye-View_Map_Segmentation_WACV_2025_paper.html | 0 | HSDA: High-Frequency Shuffle Data Augmentation for Bird's-Eye-View Map Segmentation
Autonomous driving has garnered significant attention in recent research and Bird's-Eye-View (BEV) map segmentation plays a vital role in the field providing the basis for safe and reliable operation. While data augmentation is a common... | [
-0.09431954473257065, -0.03761940449476242, -0.025169948115944862, ... |
wacv_2025_fdfec15f7d | fdfec15f7d | wacv | 2,025 | HandCraft: Anatomically Correct Restoration of Malformed Hands in Diffusion Generated Images | Generative text-to-image models such as Stable Diffusion have demonstrated a remarkable ability to generate diverse high-quality images. However they are surprisingly inept when it comes to rendering human hands which are often anatomically incorrect or reside in the "uncanny valley". In this paper we propose a method ... | Zhenyue Qin; Yiqun Zhang; Yang Liu; Dylan Campbell | Seeing Machines‡; Australian National University†; Seeing Machines‡; Australian National University† | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Qin_HandCraft_Anatomically_Correct_Restoration_of_Malformed_Hands_in_Diffusion_Generated_WACV_2025_paper.html | 0 | 2411.04332 | HandCraft: Anatomically Correct Restoration of Malformed Hands in Diffusion Generated Images
Generative text-to-image models such as Stable Diffusion have demonstrated a remarkable ability to generate diverse high-quality images. However they are surprisingly inept when it comes to rendering human hands which are often... | [
-0.01098591834306717, 0.013521130196750164, -0.028323795646429062, ... |
wacv_2025_ec501b87f9 | ec501b87f9 | wacv | 2,025 | Harmonizing Attention: Training-Free Texture-Aware Geometry Transfer | Creating images where surface patterns of one object - such as cracks holes or grooves - are precisely transferred onto objects made of different materials remains a challenging task in computer graphics. For example recreating the exact pattern of wood grain cracks on a metallic surface while maintaining the realistic... | Eito Ikuta; Yohan Lee; Akihiro Iohara; Yu Saito; Toshiyuki Tanaka | DATAGRID Inc.; DATAGRID Inc.; DATAGRID Inc.; DATAGRID Inc.; Graduate School of Informatics, Kyoto University | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Ikuta_Harmonizing_Attention_Training-Free_Texture-Aware_Geometry_Transfer_WACV_2025_paper.html | 0 | 2408.10846 | Harmonizing Attention: Training-Free Texture-Aware Geometry Transfer
Creating images where surface patterns of one object - such as cracks holes or grooves - are precisely transferred onto objects made of different materials remains a challenging task in computer graphics. For example recreating the exact pattern of wo... | [
-0.03507852554321289, -0.011469054967164993, -0.03326958417892456, ... |
wacv_2025_f228bb0549 | f228bb0549 | wacv | 2,025 | Hausdorff Distance Matching with Adaptive Query Denoising for Rotated Detection Transformer | Detection Transformers (DETR) have recently set new benchmarks in object detection. However their performance in detecting rotated objects lags behind established oriented object detectors. Our analysis identifies a key observation: the boundary discontinuity and square-like problem in bipartite matching poses an issue... | Hakjin Lee; MinKi Song; Jamyoung Koo; Junghoon Seo | SI Analytics Co. Ltd.; SI Analytics Co. Ltd.; SI Analytics Co. Ltd.; KAIST | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Lee_Hausdorff_Distance_Matching_with_Adaptive_Query_Denoising_for_Rotated_Detection_WACV_2025_paper.html | 8 | 2305.07598 | Hausdorff Distance Matching with Adaptive Query Denoising for Rotated Detection Transformer
Detection Transformers (DETR) have recently set new benchmarks in object detection. However their performance in detecting rotated objects lags behind established oriented object detectors. Our analysis identifies a key observat... | [
-0.053442180156707764, -0.034722425043582916, -0.01913200132548809, ... |
wacv_2025_8da53a0a7a | 8da53a0a7a | wacv | 2,025 | HeightLane: BEV Heightmap Guided 3D Lane Detection | Accurate 3D lane detection from monocular images presents significant challenges due to depth ambiguity and imperfect ground modeling. Previous attempts to model the ground have often used a planar ground assumption with limited degrees of freedom making them unsuitable for complex road environments with varying slopes... | Chaesong Park; Eunbin Seo; Jongwoo Lim | ME1 & IPAI2, Seoul National University; ME1 & IPAI2, Seoul National University; ME1 & IPAI2, Seoul National University | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Park_HeightLane_BEV_Heightmap_Guided_3D_Lane_Detection_WACV_2025_paper.html | 3 | 2408.08270 | HeightLane: BEV Heightmap Guided 3D Lane Detection
Accurate 3D lane detection from monocular images presents significant challenges due to depth ambiguity and imperfect ground modeling. Previous attempts to model the ground have often used a planar ground assumption with limited degrees of freedom making them unsuitabl... | [
-0.04896806925535202, -0.056770872324705124, -0.021567098796367645, ... |
wacv_2025_2e79a03273 | 2e79a03273 | wacv | 2,025 | HeightMapNet: Explicit Height Modeling for End-to-End HD Map Learning | Recent advances in high-definition (HD) map construction from surround-view images have highlighted their cost-effectiveness in deployment. However prevailing techniques often fall short in accurately extracting and utilizing road features as well as in the implementation of view transformation. In response we introduc... | Wenzhao Qiu; Shanmin Pang; Hao Zhang; Jianwu Fang; Jianru Xue | Xi’an Jiaotong University; Xi’an Jiaotong University; Xi’an Jiaotong University; Xi’an Jiaotong University; Xi’an Jiaotong University | Poster | main | https://github.com/adasfag/HeightMapNet/ | https://openaccess.thecvf.com/content/WACV2025/html/Qiu_HeightMapNet_Explicit_Height_Modeling_for_End-to-End_HD_Map_Learning_WACV_2025_paper.html | 0 | 2411.01408 | HeightMapNet: Explicit Height Modeling for End-to-End HD Map Learning
Recent advances in high-definition (HD) map construction from surround-view images have highlighted their cost-effectiveness in deployment. However prevailing techniques often fall short in accurately extracting and utilizing road features as well as... | [
-0.0571320578455925, -0.037541843950748444, -0.018161002546548843, ... |
wacv_2025_a8a79ef211 | a8a79ef211 | wacv | 2,025 | Heterogeneous Datasets for Unsupervised Image Anomaly Detection | Unsupervised anomaly detection (AD) is a critical task in various domains from manufacturing to infrastructure monitoring. To advance this field we introduce two novel datasets: CARS-AD and ROADS-AD designed to challenge existing unsupervised AD methods with their diverse and heterogeneous image content. CARS-AD compri... | Juan Lagos; Haider Ali; Adnan Faroque; Esa Rahtu | Tampere University; Tampere University; Tampere University; Tampere University | Poster | main | https://github.com/juanb09111/heterogeneousAD | https://openaccess.thecvf.com/content/WACV2025/html/Lagos_Heterogeneous_Datasets_for_Unsupervised_Image_Anomaly_Detection_WACV_2025_paper.html | 0 | Heterogeneous Datasets for Unsupervised Image Anomaly Detection
Unsupervised anomaly detection (AD) is a critical task in various domains from manufacturing to infrastructure monitoring. To advance this field we introduce two novel datasets: CARS-AD and ROADS-AD designed to challenge existing unsupervised AD methods wi... | [
-0.04682711884379387, -0.029303647577762604, -0.04102877527475357, ... |
wacv_2025_5260db148c | 5260db148c | wacv | 2,025 | HexaGen3D: StableDiffusion is One Step Away from Fast and Diverse Text-to-3D Generation | Despite the latest remarkable advances in generative modeling efficient generation of high-quality 3D objects from textual prompts remains a difficult task. A key challenge lies in data scarcity: the most extensive 3D datasets encompass merely millions of samples while their 2D counterparts contain billions of text-ima... | Antoine Mercier; Ramin Nakhli; Mahesh Reddy; Rajeev Yasarla; Hong Cai; Fatih Porikli; Guillaume Berger | ;;;;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Mercier_HexaGen3D_StableDiffusion_is_One_Step_Away_from_Fast_and_Diverse_WACV_2025_paper.html | 0 | HexaGen3D: StableDiffusion is One Step Away from Fast and Diverse Text-to-3D Generation
Despite the latest remarkable advances in generative modeling efficient generation of high-quality 3D objects from textual prompts remains a difficult task. A key challenge lies in data scarcity: the most extensive 3D datasets encom... | [
-0.03319433704018593, -0.009319166652858257, -0.0018010282656177878, ... |
wacv_2025_eb90ff976d | eb90ff976d | wacv | 2,025 | Hierarchical Light Transformer Ensembles for Multimodal Trajectory Forecasting | Accurate trajectory forecasting is crucial for the performance of various systems such as advanced driver-assistance systems and self-driving vehicles. These forecasts allow us to anticipate events that lead to collisions and therefore to mitigate them. Deep Neural Networks have excelled in motion forecasting but overc... | Adrien Lafage; Mathieu Barbier; Gianni Franchi; David Filliat | U2IS, ENSTA Paris, Institut Polytechnique de Paris+ Ampere Software Technology; Ampere Software Technology; U2IS, ENSTA Paris, Institut Polytechnique de Paris; U2IS, ENSTA Paris, Institut Polytechnique de Paris | Poster | main | github.com/alafage/hlt-ens | https://openaccess.thecvf.com/content/WACV2025/html/Lafage_Hierarchical_Light_Transformer_Ensembles_for_Multimodal_Trajectory_Forecasting_WACV_2025_paper.html | 4 | 2403.17678 | Hierarchical Light Transformer Ensembles for Multimodal Trajectory Forecasting
Accurate trajectory forecasting is crucial for the performance of various systems such as advanced driver-assistance systems and self-driving vehicles. These forecasts allow us to anticipate events that lead to collisions and therefore to mi... | [
-0.036117635667324066, -0.030447475612163544, -0.023245828226208687, ... |
wacv_2025_c24737bb6d | c24737bb6d | wacv | 2,025 | High-Fidelity Document Stain Removal via A Large-Scale Real-World Dataset and A Memory-Augmented Transformer | Document images are often degraded by various stains significantly impacting their readability and hindering downstream applications such as document digitization and analysis. The absence of a comprehensive stained document dataset has limited the effectiveness of existing document enhancement methods in removing stai... | Mingxian Li; Hao Sun; Yingtie Lei; Xiaofeng Zhang; Yihang Dong; Yilin Zhou; Zimeng Li; Xuhang Chen | School of Computer Science and Engineering, Huizhou University; School of Computer Science and Engineering, Huizhou University; Department of Computer Science and Engineering, The Ohio State University; School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; Shenzhen Institute of Adv... | Poster | main | https://github.com/CXH-Research/StainRestorer | https://openaccess.thecvf.com/content/WACV2025/html/Li_High-Fidelity_Document_Stain_Removal_via_A_Large-Scale_Real-World_Dataset_and_WACV_2025_paper.html | 13 | 2410.22922 | High-Fidelity Document Stain Removal via A Large-Scale Real-World Dataset and A Memory-Augmented Transformer
Document images are often degraded by various stains significantly impacting their readability and hindering downstream applications such as document digitization and analysis. The absence of a comprehensive sta... | [
-0.01617773063480854, -0.020335601642727852, -0.02811424806714058, ... |
wacv_2025_ebdb4fdae6 | ebdb4fdae6 | wacv | 2,025 | High-Pass Kernel Prediction for Efficient Video Deblurring | State-of-the-art video deblurring methods use deep network architectures to recover sharpened video frames. Blurring especially degrades high-frequency (HF) information yet this aspect is often overlooked by recent models that focus more on enhancing architectural design. Recovering these fine details is challenging pa... | Bo Ji; Angela Yao | National University of Singapore; National University of Singapore | Poster | main | https://github.com/jibo27/AHFNet | https://openaccess.thecvf.com/content/WACV2025/html/Ji_High-Pass_Kernel_Prediction_for_Efficient_Video_Deblurring_WACV_2025_paper.html | 0 | 2412.01559 | High-Pass Kernel Prediction for Efficient Video Deblurring
State-of-the-art video deblurring methods use deep network architectures to recover sharpened video frames. Blurring especially degrades high-frequency (HF) information yet this aspect is often overlooked by recent models that focus more on enhancing architectu... | [
-0.016479432582855225, -0.027958856895565987, -0.03627347946166992, ... |
wacv_2025_742789104e | 742789104e | wacv | 2,025 | Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks | Assistive embodied agents that can be instructed in natural language to perform tasks in open-world environments have the potential to significantly impact labor tasks like manufacturing or in-home care - benefiting the lives of those who come to depend on them. In this work we consider how this benefit might be hijack... | Zijiao Yang; Xiangxi Shi; Eric Slyman; Stefan Lee | Oregon State University; Oregon State University; Oregon State University; Oregon State University | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Yang_Hijacking_Vision-and-Language_Navigation_Agents_with_Adversarial_Environmental_Attacks_WACV_2025_paper.html | 1 | 2412.02795 | Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks
Assistive embodied agents that can be instructed in natural language to perform tasks in open-world environments have the potential to significantly impact labor tasks like manufacturing or in-home care - benefiting the lives of thos... | [
-0.043149642646312714, 0.010085105895996094, -0.008750725537538528, ... |
wacv_2025_f78a761d90 | f78a761d90 | wacv | 2,025 | HybridDepth: Robust Metric Depth Fusion by Leveraging Depth from Focus and Single-Image Priors | We propose HYBRIDDEPTH a robust depth estimation pipeline that addresses key challenges in depth estimation including scale ambiguity hardware heterogeneity and generalizability. HYBRIDDEPTH leverages focal stack data conveniently accessible in common mobile devices to produce accurate metric depth maps. By incorporati... | Ashkan Ganj; Hang Su; Tian Guo | Worcester Polytechnic Institute; Nvidia Research; Worcester Polytechnic Institute | Poster | main | https://github.com/cake-lab/HybridDepth/ | https://openaccess.thecvf.com/content/WACV2025/html/Ganj_HybridDepth_Robust_Metric_Depth_Fusion_by_Leveraging_Depth_from_Focus_WACV_2025_paper.html | 0 | 2407.18443 | HybridDepth: Robust Metric Depth Fusion by Leveraging Depth from Focus and Single-Image Priors
We propose HYBRIDDEPTH a robust depth estimation pipeline that addresses key challenges in depth estimation including scale ambiguity hardware heterogeneity and generalizability. HYBRIDDEPTH leverages focal stack data conveni... | [
-0.0573987141251564, -0.03778143227100372, -0.045991625636816025, ... |
wacv_2025_56a0165433 | 56a0165433 | wacv | 2,025 | Hyperdimensional Representation for Adaptive Information Association and Memorization | Many computer vision applications rely on interpretable machine learning algorithms to analyze the data collected from various sources. We leverage Hyperdimensional Computing (HDC) as an innovative computational model that mimics key brain functionalities to achieve efficient and robust cognitive learning. We propose H... | Zhuowen Zou; Prathyush Poduval; Narayan Srinivasa; Mohsen Imani | University of California, Irvine; University of California, Irvine; Intel Labs; University of California, Irvine | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Zou_Hyperdimensional_Representation_for_Adaptive_Information_Association_and_Memorization_WACV_2025_paper.html | 0 | Hyperdimensional Representation for Adaptive Information Association and Memorization
Many computer vision applications rely on interpretable machine learning algorithms to analyze the data collected from various sources. We leverage Hyperdimensional Computing (HDC) as an innovative computational model that mimics key ... | [
-0.018712040036916733, -0.02202284336090088, -0.01727336086332798, ... |
wacv_2025_9e76ab7320 | 9e76ab7320 | wacv | 2,025 | I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting | Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guided inpainting we introduce the novel task of multi-mask inpainting where multiple regions are simultaneously inpainted ... | Nicola Fanelli; Gennaro Vessio; Giovanna Castellano | University of Bari Aldo Moro, Italy; University of Bari Aldo Moro, Italy; University of Bari Aldo Moro, Italy | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Fanelli_I_Dream_My_Painting_Connecting_MLLMs_and_Diffusion_Models_via_WACV_2025_paper.html | 0 | 2411.19050 | I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting
Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guid... | [
-0.07797949761152267, -0.012046361342072487, -0.0020414444152265787, ... |
wacv_2025_92ee5a8af8 | 92ee5a8af8 | wacv | 2,025 | I Spy with My Little Eye A Minimum Cost Multicut Investigation of Dataset Frames | Visual framing analysis is a key method in social sciences for determining common themes and concepts in a given discourse. To reduce manual effort image clustering can significantly speed up the annotation process. In this work we phrase the clustering task as a Minimum Cost Multicut Problem [MP]. Solutions to the MP ... | Katharina Prasse; Isaac Bravo; Stefanie Walter; Margret Keuper | University of Mannheim, Mannheim, Germany; Technical University Munich, Munich, Germany; Technical University Munich, Munich, Germany; University of Mannheim, Mannheim, Germany + Max-Planck-Institute for Informatics, Saarland Informatics Campus | Poster | main | https://github.com/KathPra/MP4VisualFrameDetection | https://openaccess.thecvf.com/content/WACV2025/html/Prasse_I_Spy_with_My_Little_Eye_A_Minimum_Cost_Multicut_WACV_2025_paper.html | 1 | 2412.01296 | I Spy with My Little Eye A Minimum Cost Multicut Investigation of Dataset Frames
Visual framing analysis is a key method in social sciences for determining common themes and concepts in a given discourse. To reduce manual effort image clustering can significantly speed up the annotation process. In this work we phrase ... | [
-0.06565853208303452, -0.03199226036667824, -0.022747984156012535, 0.049178797751665115, -0.006310101132839918, 0.0019925411324948072, -0.0011328654363751411, -0.0182095468044281, 0.02643081359565258, 0.02174357697367668, -0.03647489473223686, -0.057028062641620636, 0.005468444433063269, 0...
wacv_2025_e68bfae26c | e68bfae26c | wacv | 2,025 | I3D-AE-LSTM: A 2-Stream Autoencoder for Action Quality Assessment using a Newly Created Cricket Batsman Video Dataset | In this study we introduce UJ-AQA-CricketVision a dataset comprising 8540 video clips of cricket strokes each annotated with detailed phase breakdowns. We develop a novel multi-variate approach for Action Quality Assessment (AQA) at a body level that leverages an Autoencoder for extracting sophisticated feature represe... | Tevin Moodley; Dustin Terence van der Haar | University of Johannesburg; University of Johannesburg | Poster | main | https://github.com/dvanderhaar/uj-aqa-cricketvision | https://openaccess.thecvf.com/content/WACV2025/html/Moodley_I3D-AE-LSTM_A_2-Stream_Autoencoder_for_Action_Quality_Assessment_using_a_WACV_2025_paper.html | 1 | I3D-AE-LSTM: A 2-Stream Autoencoder for Action Quality Assessment using a Newly Created Cricket Batsman Video Dataset
In this study we introduce UJ-AQA-CricketVision a dataset comprising 8540 video clips of cricket strokes each annotated with detailed phase breakdowns. We develop a novel multi-variate approach for Acti... | [
-0.02327333576977253, -0.05630218982696533, -0.04541230946779251, 0.009268686175346375, 0.009840594604611397, 0.018556280061602592, 0.00505263963714242, 0.016136305406689644, -0.04185797646641731, 0.03348260000348091, 0.01374469231814146, 0.028510309755802155, -0.012544157914817333, 0.0250...
wacv_2025_1259a5a46a | 1259a5a46a | wacv | 2,025 | IRIS-VIS: A New Dataset for Visibility Estimation in an Industrial Environment | Point cloud visibility estimation is fundamental as it is useful for many computer vision applications including surface reconstruction 3D segmentation from paired images and point densification. Previous works showed outstanding results on simple object and outdoor datasets. However unlike the previously studied scene... | Flavien Armangeon; Thibaud Ehret; Enric Meinhardt-Llopis; Rafael Grompone von Gioi; Guillaume Thibault; Marc Petit; Gabriele Facciolo | ;;;;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Armangeon_IRIS-VIS_A_New_Dataset_for_Visibility_Estimation_in_an_Industrial_WACV_2025_paper.html | 0 | IRIS-VIS: A New Dataset for Visibility Estimation in an Industrial Environment
Point cloud visibility estimation is fundamental as it is useful for many computer vision applications including surface reconstruction 3D segmentation from paired images and point densification. Previous works showed outstanding results on ... | [
-0.013715012930333614, 0.008273279294371605, -0.04928918927907944, 0.01587325893342495, -0.030326133593916893, 0.004420255310833454, -0.011953366920351982, 0.0028546040412038565, 0.07334349304437637, 0.041910570114851, -0.009702887386083603, -0.016767917200922966, 0.051539670675992966, 0.0...
wacv_2025_2fb8a2daab | 2fb8a2daab | wacv | 2,025 | Identify Backdoored Model in Federated Learning via Individual Unlearning | Backdoor attacks present a significant threat to the robustness of Federated Learning (FL) due to their stealth and effectiveness. They maintain both the main task of the FL system and the backdoor task simultaneously causing malicious models to appear statistically similar to benign ones which enables them to evade de... | Jiahao Xu; Zikai Zhang; Rui Hu | University of Nevada, Reno; University of Nevada, Reno; University of Nevada, Reno | Poster | main | https://github.com/JiiahaoXU/MASA | https://openaccess.thecvf.com/content/WACV2025/html/Xu_Identify_Backdoored_Model_in_Federated_Learning_via_Individual_Unlearning_WACV_2025_paper.html | 1 | 2411.01040 | Identify Backdoored Model in Federated Learning via Individual Unlearning
Backdoor attacks present a significant threat to the robustness of Federated Learning (FL) due to their stealth and effectiveness. They maintain both the main task of the FL system and the backdoor task simultaneously causing malicious models to ... | [
-0.013868430629372597, 0.0013965396210551262, -0.026336774230003357, -0.02949642948806286, -0.030007272958755493, 0.001540805329568684, 0.0338858924806118, 0.0029042346868664026, -0.03172900155186653, 0.05872797966003418, -0.058652300387620926, -0.00017929747991729528, -0.024293404072523117,...
wacv_2025_ce8c073c5c | ce8c073c5c | wacv | 2,025 | Identity Curvature Laplace Approximation for Improved Out-of-Distribution Detection | Uncertainty estimation is crucial in safety-critical applications where robust out-of-distribution (OOD) detection is essential. Traditional Bayesian methods though effective are often hindered by high computational demands. As an alternative Laplace approximation offers a more practical and efficient approach to uncer... | Maksim Zhdanov; Stanislav Dereka; Sergey Kolesnikov | AIRI* + NUST MISIS†; T-Bank, MIPT‡; T-Bank* | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Zhdanov_Identity_Curvature_Laplace_Approximation_for_Improved_Out-of-Distribution_Detection_WACV_2025_paper.html | 0 | 2312.10464 | Identity Curvature Laplace Approximation for Improved Out-of-Distribution Detection
Uncertainty estimation is crucial in safety-critical applications where robust out-of-distribution (OOD) detection is essential. Traditional Bayesian methods though effective are often hindered by high computational demands. As an alter... | [
-0.04623799026012421, -0.011578463017940521, -0.029093140736222267, 0.008856908418238163, -0.030837971717119217, 0.005210782866925001, 0.038443151861429214, 0.01734398864209652, 0.027120724320411682, 0.03815867006778717, -0.031862109899520874, -0.0014081915142014623, -0.0368310809135437, 0...
wacv_2025_d5ac6a5759 | d5ac6a5759 | wacv | 2,025 | Image Adaptation for Colour Vision Deficient Viewers using Vision Transformers | Colour Vision Deficiency (CVD) occurs when anomalous retinal cone spectral responses impact the ability to distinguish between certain colours. To enhance image quality and viewing experience recolouring algorithms seek to modify pixel values so that this does not lead to a loss of detail or image quality. Recent appro... | Thomas Gillooly; Jean-Baptiste Thomas; Jon Y. Hardeberg; Giuseppe Claudio Guarnera | NTNU, Norway; NTNU, Norway+Université de Bourgogne, France; NTNU, Norway; University of York, UK+Lumirithmic, UK | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Gillooly_Image_Adaptation_for_Colour_Vision_Deficient_Viewers_using_Vision_Transformers_WACV_2025_paper.html | 0 | Image Adaptation for Colour Vision Deficient Viewers using Vision Transformers
Colour Vision Deficiency (CVD) occurs when anomalous retinal cone spectral responses impact the ability to distinguish between certain colours. To enhance image quality and viewing experience recolouring algorithms seek to modify pixel value... | [
-0.025438109412789345, -0.035661593079566956, -0.028722241520881653, -0.006721338257193565, -0.02614317648112774, 0.05228635296225548, 0.01212531328201294, -0.002810994628816843, 0.03339795395731926, -0.006549709942191839, -0.018118390813469887, -0.05306563898921013, -0.012069649994373322, ...
wacv_2025_ad335c7954 | ad335c7954 | wacv | 2,025 | Image-Caption Encoding for Improving Zero-Shot Generalization | Recent advances in vision-language models have combined contrastive approaches with generative methods to achieve state-of-the-art (SOTA) on downstream inference tasks like zero-shot image classification. However a persistent issue of these models for image classification is their out-of-distribution (OOD) generalizati... | Eric Yu; Christopher Liao; Sathvik Ravi; Theodoros Tsiligkaridis; Brian Kulis | UCSD+MIT Lincoln Laboratory; Boston University; University of Maryland; MIT Lincoln Laboratory; Boston University | Poster | main | https://github.com/Chris210634/ice | https://openaccess.thecvf.com/content/WACV2025/html/Yu_Image-Caption_Encoding_for_Improving_Zero-Shot_Generalization_WACV_2025_paper.html | 0 | 2402.02662 | Image-Caption Encoding for Improving Zero-Shot Generalization
Recent advances in vision-language models have combined contrastive approaches with generative methods to achieve state-of-the-art (SOTA) on downstream inference tasks like zero-shot image classification. However a persistent issue of these models for image ... | [
-0.038084886968135834, -0.027708133682608604, 0.0042431531473994255, 0.011692243628203869, 0.02848087064921856, -0.027652939781546593, -0.017791345715522766, 0.0643211379647255, -0.019134435802698135, 0.034993939101696014, -0.029345599934458733, -0.05416516959667206, -0.009953586384654045, ...
wacv_2025_a7184d9fa3 | a7184d9fa3 | wacv | 2,025 | Image-Level Regression for Uncertainty-Aware Retinal Image Segmentation | Accurate retinal vessel (RV) segmentation is a crucial step in the quantitative assessment of retinal vasculature which is needed for the early detection of retinal diseases and other conditions. Numerous studies have been conducted to tackle the problem of segmenting vessels automatically using a pixel-wise classifica... | Trung D. Q. Dang; Huy Hoang Nguyen; Aleksei Tiulpin | Research Unit of Health Sciences and Technology, University of Oulu, Finland; Research Unit of Health Sciences and Technology, University of Oulu, Finland; Research Unit of Health Sciences and Technology, University of Oulu, Finland | Poster | main | https://github.com/Oulu-IMEDS/SAUNA | https://openaccess.thecvf.com/content/WACV2025/html/Dang_Image-Level_Regression_for_Uncertainty-Aware_Retinal_Image_Segmentation_WACV_2025_paper.html | 0 | 2405.16815 | Image-Level Regression for Uncertainty-Aware Retinal Image Segmentation
Accurate retinal vessel (RV) segmentation is a crucial step in the quantitative assessment of retinal vasculature which is needed for the early detection of retinal diseases and other conditions. Numerous studies have been conducted to tackle the p... | [
-0.0586414560675621, -0.017293579876422882, -0.04165766388177872, 0.021648865193128586, -0.02153952792286873, 0.029102051630616188, 0.03979892283678055, 0.04169411212205887, 0.005817676894366741, 0.030414104461669922, -0.0021366409491747618, 0.004152555018663406, -0.00032374210422858596, 0...
wacv_2025_d63f20ebb9 | d63f20ebb9 | wacv | 2,025 | Importance-Guided Interpretability and Pruning for Video Transformers in Driver Action Recognition | Recently transformers have gained prominence in video action recognition due to their ability to capture spatio-temporal dependencies. Despite their effectiveness the interpretability of their self-attention mechanisms remains limited posing obstacles in understanding model decisions impacting transparency and bias ide... | Raquel Panadero Palenzuela; Dominik Schörkhuber; Margrit Gelautz | ;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Palenzuela_Importance-Guided_Interpretability_and_Pruning_for_Video_Transformers_in_Driver_Action_WACV_2025_paper.html | 0 | Importance-Guided Interpretability and Pruning for Video Transformers in Driver Action Recognition
Recently transformers have gained prominence in video action recognition due to their ability to capture spatio-temporal dependencies. Despite their effectiveness the interpretability of their self-attention mechanisms re... | [
-0.034448329359292984, -0.055825646966695786, -0.00010924839443759993, 0.018182585015892982, -0.05367148295044899, -0.015088259242475033, 0.008507112972438335, 0.006608529482036829, 0.007434595841914415, 0.04144022613763809, -0.03236719220876694, -0.037789106369018555, 0.013363104313611984, ...
wacv_2025_16bc75f398 | 16bc75f398 | wacv | 2,025 | Improving Accuracy and Generalization for Efficient Visual Tracking | Efficient visual trackers overfit to their training distributions and lack generalization abilities resulting in them performing well on their respective in-distribution (ID) test sets and not as well on out-of-distribution (OOD) sequences imposing limitations to their deployment in-the-wild under constrained resources... | Ram Zaveri; Shivang Patel; Yu Gu; Gianfranco Doretto | ;;; | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Zaveri_Improving_Accuracy_and_Generalization_for_Efficient_Visual_Tracking_WACV_2025_paper.html | 0 | 2411.18855 | Improving Accuracy and Generalization for Efficient Visual Tracking
Efficient visual trackers overfit to their training distributions and lack generalization abilities resulting in them performing well on their respective in-distribution (ID) test sets and not as well on out-of-distribution (OOD) sequences imposing lim... | [
-0.03170096129179001, -0.024266580119729042, -0.059780046343803406, 0.00860195979475975, -0.028994083404541016, -0.03355002403259277, -0.02308470383286476, 0.0022422282490879297, 0.01460188627243042, 0.048609409481287, 0.0013379502343013883, -0.048533160239458084, 0.0038029716815799475, 0....
wacv_2025_7e9faef549 | 7e9faef549 | wacv | 2,025 | Improving Conditional Diffusion Models through Re-Noising from Unconditional Diffusion Priors | Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However oftentimes their results can be unrealistic with observable color shifts and textures. We believe that this issue results from the divergence between ... | Kangfu Mei; Nithin Gopalakrishnan Nair; Vishal Patel | Johns Hopkins University; Johns Hopkins University; Johns Hopkins University | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Mei_Improving_Conditional_Diffusion_Models_through_Re-Noising_from_Unconditional_Diffusion_Priors_WACV_2025_paper.html | 1 | Improving Conditional Diffusion Models through Re-Noising from Unconditional Diffusion Priors
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However oftentimes their results can be unrealistic with observa... | [
-0.0890231505036354, -0.020908087491989136, -0.03483433276414871, -0.021338602527976036, -0.03298124298453331, 0.0005747033283114433, -0.00652324827387929, 0.08565390110015869, 0.004988366272300482, 0.022817330434918404, -0.06057168170809746, -0.0208332147449255, -0.007908385246992111, 0.0...
wacv_2025_f3902c73f8 | f3902c73f8 | wacv | 2,025 | Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization | Deep visual detectors are known to be vulnerable to adversarial attacks raising concerns about their real-world applications (e.g. self-driving perception). We argue that this vulnerability arises from the spurious dependency of final detections on irrelevant/loophole latent dimensions. The greater the number of such d... | Jung Im Choi; Qizhen Lan; Qing Tian | Dept. of Computer Science, Bowling Green State University, Ohio, USA+Dept. of Computer Science, University of Alabama at Birmingham, Alabama, USA; Dept. of Computer Science, Bowling Green State University, Ohio, USA+Dept. of Computer Science, University of Alabama at Birmingham, Alabama, USA; Dept. of Computer Science,... | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Choi_Improving_Deep_Detector_Robustness_via_Detection-Related_Discriminant_Maximization_and_Reorganization_WACV_2025_paper.html | 0 | Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization
Deep visual detectors are known to be vulnerable to adversarial attacks raising concerns about their real-world applications (e.g. self-driving perception). We argue that this vulnerability arises from the spurious dep... | [
-0.04258587583899498, -0.008392195217311382, 0.003962337970733643, 0.0202930960804224, -0.02951386384665966, 0.01203513890504837, 0.001830729772336781, -0.016793647781014442, 0.05362117290496826, 0.02053379826247692, -0.04843680560588837, -0.015349430963397026, -0.024847934022545815, 0.049...
wacv_2025_e47ca77c67 | e47ca77c67 | wacv | 2,025 | Improving Detail in Pluralistic Image Inpainting with Feature Dequantization | Pluralistic Image Inpainting (PII) offers multiple plausible solutions for restoring missing parts of images and has been successfully applied to various applications including image editing and object removal. Recently VQGAN-based methods have been proposed and have shown that they significantly improve the structural... | Kyungri Park; Woohwan Jung | Department of Applied Artificial Intelligence, Hanyang University†; Department of Applied Artificial Intelligence, Hanyang University† | Poster | main | https://github.com/hyudsl/FDM | https://openaccess.thecvf.com/content/WACV2025/html/Park_Improving_Detail_in_Pluralistic_Image_Inpainting_with_Feature_Dequantization_WACV_2025_paper.html | 1 | 2412.01046 | Improving Detail in Pluralistic Image Inpainting with Feature Dequantization
Pluralistic Image Inpainting (PII) offers multiple plausible solutions for restoring missing parts of images and has been successfully applied to various applications including image editing and object removal. Recently VQGAN-based methods hav... | [
-0.06410872936248779, -0.057102106511592865, -0.01592414453625679, -0.024410774931311607, -0.017432253807783127, -0.016832755878567696, 0.03132372722029686, 0.009334358386695385, -0.023530263453722, 0.03400272876024246, -0.06459581851959229, 0.01541831810027361, -0.007334473077207804, 0.02...
wacv_2025_be93cc65ab | be93cc65ab | wacv | 2,025 | Improving Faithfulness of Text-to-Image Diffusion Models through Inference Intervention | Text-to-Image diffusion models have shown remarkable capabilities in generating high-quality images. However current models often struggle to adhere to the complete set of conditions specified in the input text and return unfaithful generations. Existing works address this problem by either fine-tuning the base model o... | Danfeng Guo; Sanchit Agarwal; Yu-Hsiang Lin; Jiun-Yu Kao; Tagyoung Chung; Nanyun Peng; Mohit Bansal | University of California, Los Angeles; Amazon AGI; Amazon AGI; Amazon AGI; Amazon AGI; University of California, Los Angeles; University of North Carolina, Chapel Hill | Poster | main | https://openaccess.thecvf.com/content/WACV2025/html/Guo_Improving_Faithfulness_of_Text-to-Image_Diffusion_Models_through_Inference_Intervention_WACV_2025_paper.html | 0 | Improving Faithfulness of Text-to-Image Diffusion Models through Inference Intervention
Text-to-Image diffusion models have shown remarkable capabilities in generating high-quality images. However current models often struggle to adhere to the complete set of conditions specified in the input text and return unfaithful... | [
-0.04393921047449112, -0.02609238773584366, 0.009759110398590565, 0.04185924679040909, -0.019035371020436287, -0.0004831386322621256, 0.033799391239881516, 0.014875446446239948, -0.01714112050831318, 0.018552523106336594, -0.04791342467069626, -0.018663950264453888, -0.024476701393723488, ...
wacv_2025_9242b01049 | 9242b01049 | wacv | 2,025 | Improving Pelvic MR-CT Image Alignment with Self-Supervised Reference-Augmented Pseudo-CT Generation Framework | RegistFormer our novel reference-augmented image synthesis framework generates aligned pseudo-CT images (with respect to MR) from misaligned MR and CT pairs. RegistFormer addresses the limitations of intensity-based registration methods which often fail due to dissimilar image features and complex deformation fields. U... | Daniel Kim; Mohammed A. Al-masni; Jaehun Lee; Dong-Hyun Kim; Kanghyun Ryu | Yonsei University; Sejong University; Yale University + Korea Institute of Science and Technology; Yonsei University; Korea Institute of Science and Technology | Poster | main | https://github.com/danny4159/RegistFormer | https://openaccess.thecvf.com/content/WACV2025/html/Kim_Improving_Pelvic_MR-CT_Image_Alignment_with_Self-Supervised_Reference-Augmented_Pseudo-CT_Generation_WACV_2025_paper.html | 0 | Improving Pelvic MR-CT Image Alignment with Self-Supervised Reference-Augmented Pseudo-CT Generation Framework
RegistFormer our novel reference-augmented image synthesis framework generates aligned pseudo-CT images (with respect to MR) from misaligned MR and CT pairs. RegistFormer addresses the limitations of intensity... | [
-0.046382833272218704, -0.009307440370321274, -0.05891382694244385, 0.013393633998930454, 0.012739842757582664, 0.0463465116918087, 0.02104843594133854, 0.0282219760119915, -0.0008660460007376969, 0.038646306842565536, -0.038028839975595474, 0.005956762004643679, -0.00584325660020113, 0.03...