Session 1: Contributed Talks
Haiyang Xu
Bayesian Diffusion Models for 3D Shape Reconstruction
We present Bayesian Diffusion Models (BDM), a prediction algorithm that performs effective Bayesian inference by tightly coupling top-down (prior) information with the bottom-up (data-driven) procedure via joint diffusion processes. We show the effectiveness of BDM on the 3D shape reconstruction task. Compared to prototypical deep-learning approaches trained on paired (supervised) data-label datasets (e.g. image-point cloud pairs), BDM brings in rich prior information from standalone labels (e.g. point clouds) to improve the bottom-up 3D reconstruction. Unlike standard Bayesian frameworks, where an explicit prior and likelihood are required for inference, BDM performs seamless information fusion via coupled diffusion processes with learned gradient computation networks. The distinguishing feature of BDM is its ability to carry out active and effective information exchange and fusion between the top-down and bottom-up processes, each of which is itself a diffusion process. We demonstrate state-of-the-art results on both synthetic and real-world benchmarks for 3D shape reconstruction.
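The coupling described above can be illustrated schematically: at each reverse-diffusion step, a top-down (prior) denoising direction and a bottom-up (data-conditioned) denoising direction are fused into a single update. The sketch below is a toy illustration of that sampling loop only, not the authors' implementation; the denoisers, the `mix` weight, and the Euler update are all simplified stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_denoiser(x, t):
    # Stand-in for the top-down (prior) network trained on standalone
    # labels (e.g. point clouds): a toy pull toward the origin.
    return -x

def recon_denoiser(x, t, cond):
    # Stand-in for the bottom-up (data-driven) network conditioned on
    # input observations `cond`: a toy pull toward `cond`.
    return cond - x

def bdm_sample(cond, steps=50, mix=0.5):
    """Schematic coupled sampling: at each reverse step the top-down and
    bottom-up denoising directions are fused before the state update."""
    x = rng.standard_normal(cond.shape)          # start from noise
    for t in range(steps, 0, -1):
        g = mix * prior_denoiser(x, t) + (1 - mix) * recon_denoiser(x, t, cond)
        x = x + (1.0 / steps) * g                # Euler step along the fused direction
    return x

observation = np.array([1.0, 2.0, 3.0])
shape = bdm_sample(observation)
```

The point of the sketch is structural: both branches act at every step of one shared sampling loop, rather than the prior being applied once up front as in a classical explicit-prior Bayesian pipeline.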
Bio:
Haiyang is a first-year Ph.D. student at the University of California, San Diego (UCSD), where he started in Fall 2024 under the supervision of Prof. Zhuowen Tu. Previously, he received his B.S. degree in Data Science from the University of Science and Technology of China (USTC) in 2024. His research focuses on generative models and multimodal learning, especially topics such as controllable generation and vision-language models. He is currently working with Prof. Saining Xie at NYU Courant on agent-based generative models.
Chhavi Yadav
ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
In principle, explanations are intended as a way to increase trust in machine learning models and are often mandated by regulations. However, many circumstances where explanations are demanded are adversarial in nature: the involved parties have misaligned interests and are incentivized to manipulate explanations for their own purposes. As a result, explainability methods fail to be operational in such settings despite the demand (Bordt et al., 2022). In this paper, we take a step towards operationalizing explanations in adversarial scenarios with Zero-Knowledge Proofs (ZKPs), a cryptographic primitive. Specifically, we explore ZKP-amenable versions of the popular explainability algorithm LIME and evaluate their performance on Neural Networks and Random Forests.
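For readers unfamiliar with LIME, the procedure the paper builds on works by querying the black-box model on perturbations of an input and fitting a locally weighted linear surrogate, whose coefficients serve as the explanation. The sketch below is standard tabular LIME, not the paper's ZKP-amenable variant; the toy `black_box` model and all parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for the confidential model: the explainer can query it
    # but cannot inspect its internals. Here, a fixed linear rule.
    w = np.array([2.0, -1.0, 0.0])
    return X @ w

def lime_explain(x, predict, n_samples=500, sigma=1.0):
    """Minimal LIME for one tabular instance `x`: sample perturbations,
    weight them by proximity to `x`, fit a weighted linear surrogate."""
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb
    y = predict(X)                                           # query model
    d2 = np.sum((X - x) ** 2, axis=1)
    sw = np.sqrt(np.exp(-d2 / (2 * sigma ** 2)))             # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])              # add intercept
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                         # per-feature attribution

attribution = lime_explain(np.array([1.0, 0.5, -2.0]), black_box)
```

Because every step here is sampling, model queries, and a least-squares solve, the pipeline is the kind of fixed arithmetic circuit that can, in principle, be proved correct in zero knowledge without revealing the model's weights.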
Bio:
Chhavi Yadav is a Ph.D. candidate at UCSD advised by Prof. Kamalika Chaudhuri. Her research interests are in the foundations of Trustworthy ML and AI Security & Safety, exploiting interconnections with cryptography. Her work won a Best Paper Award at the ICLR '24 Privacy Workshop and a Best Paper Honorable Mention at the NeurIPS '19 ML4Health Workshop. She is a recipient of the 'Contributions to Diversity' Award at UCSD.