SIGCSE TS 2026 / ACM Student Research Competition
Adaptive Uncertainty-Aware Fusion for Robust Multimodal Learning
Thu 19 Feb 2026 15:00 - 17:00 at Hall 1 - Posters - ACM Student Research Competition Posters
Multimodal fusion is central to leveraging heterogeneous data, but its efficacy is consistently hindered by variance in modality quality and dynamically shifting reliability. Sensor noise (e.g., noisy medical images), domain biases (e.g., accent variability), and missing data (e.g., intermittent GPS signals) expose the severe limitations of models that assume uniform modality reliability.
Existing methods, which often rely on uniform weighting or static correlation estimation, fail to capture this dynamic landscape, leading to over-reliance on unreliable sources. An effective dynamic fusion strategy must therefore jointly evaluate (a) the Predictive Uncertainty of each modality, and (b) the degree of Cross-modal Consistency among them.
We propose the Adaptive Uncertainty-Aware Fusion (AUAF) framework to address these requirements. AUAF explicitly models and regularizes the fusion process by jointly considering predictive uncertainty, measured via Monte Carlo dropout, and cross-modal consistency, quantified via latent-space cosine similarity. This design enables principled, dynamic down-weighting of unreliable or conflicting modalities through a normalized parametric weighting function.
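The weighting scheme described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the toy "models" standing in for dropout-enabled networks, and the softmax-style combination of the two signals (with hypothetical temperature parameters `alpha` and `beta`) are all illustrative assumptions; only the three ingredients themselves (Monte Carlo dropout variance, latent cosine similarity, normalized parametric weights) come from the abstract.

```python
import numpy as np

def mc_dropout_uncertainty(forward_fn, x, n_samples=20, rng=None):
    # Predictive uncertainty via Monte Carlo dropout: run n_samples
    # stochastic forward passes and use the variance of the predictions
    # as a scalar uncertainty estimate for the modality.
    if rng is None:
        rng = np.random.default_rng(0)
    preds = np.stack([forward_fn(x, rng) for _ in range(n_samples)])
    return preds.mean(axis=0), float(preds.var(axis=0).mean())

def cosine_consistency(z_a, z_b):
    # Cross-modal consistency: cosine similarity between the latent
    # embeddings of two modalities (1.0 = perfectly aligned).
    denom = np.linalg.norm(z_a) * np.linalg.norm(z_b) + 1e-8
    return float(np.dot(z_a, z_b) / denom)

def fusion_weights(uncertainties, consistencies, alpha=1.0, beta=1.0):
    # Normalized parametric weighting (softmax form, an assumption):
    # lower uncertainty and higher consistency yield a larger weight.
    scores = (-alpha * np.asarray(uncertainties)
              + beta * np.asarray(consistencies))
    e = np.exp(scores - scores.max())   # subtract max for stability
    return e / e.sum()

# Toy demo: a "clean" and a "noisy" modality over the same input.
rng = np.random.default_rng(42)
clean_fn = lambda x, r: x + r.normal(0.0, 0.01, x.shape)  # low dropout noise
noisy_fn = lambda x, r: x + r.normal(0.0, 1.00, x.shape)  # high dropout noise
x = np.ones(4)
_, u_clean = mc_dropout_uncertainty(clean_fn, x, rng=rng)
_, u_noisy = mc_dropout_uncertainty(noisy_fn, x, rng=rng)
c = cosine_consistency(np.ones(8), np.ones(8))  # identical embeddings -> 1.0
w = fusion_weights([u_clean, u_noisy], [c, c])
```

Under this sketch the noisy modality's high predictive variance drives its fused weight down while the weights remain a valid convex combination, which is the qualitative behavior the framework targets.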
Extensive experiments on the CMU-MOSI and VQA v2 datasets demonstrate that AUAF achieves superior performance and robustness compared to leading correlation-based baselines, particularly under conditions of noisy or incomplete modalities. This validates the framework's effectiveness and its theoretical foundations for reliable dynamic multimodal fusion.
Displayed time zone: Central Time (US & Canada)