ECCV 2024 Redux: Robust Calibration of Large Vision-Language Adapters
Jimmy Guerrero
Posted on November 21, 2024
We empirically demonstrate that popular CLIP adaptation approaches, such as Adapters, Prompt Learning, and Test-Time Adaptation, substantially degrade the calibration of the zero-shot baseline in the presence of distributional drift. We identify the increase in logit ranges as the underlying cause of miscalibration in CLIP adaptation methods, in contrast with previous work on calibrating fully-supervised models. Motivated by these observations, we present a simple and model-agnostic solution to mitigate miscalibration: scaling the logit range of each sample to match that of its zero-shot prediction logits.
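To make the scaling step concrete, here is a minimal PyTorch sketch of the idea as described in the abstract. The function name `rescale_to_zero_shot_range`, the `(batch, num_classes)` tensor shapes, and the use of max minus min as the "logit range" are our assumptions for illustration, not the authors' released implementation.

```python
import torch

def rescale_to_zero_shot_range(adapted_logits: torch.Tensor,
                               zero_shot_logits: torch.Tensor,
                               eps: float = 1e-8) -> torch.Tensor:
    """Scale each sample's adapted logits so their range (max - min)
    matches the range of that sample's zero-shot CLIP logits.

    adapted_logits:   (batch, num_classes) logits from the adapted model
    zero_shot_logits: (batch, num_classes) logits from the zero-shot baseline
    """
    # Per-sample logit ranges (max - min over the class dimension)
    adapted_range = (adapted_logits.max(dim=-1, keepdim=True).values
                     - adapted_logits.min(dim=-1, keepdim=True).values)
    zs_range = (zero_shot_logits.max(dim=-1, keepdim=True).values
                - zero_shot_logits.min(dim=-1, keepdim=True).values)

    # Multiply each sample by a positive scalar so its logit range
    # matches the zero-shot range; eps guards against division by zero.
    return adapted_logits * (zs_range / (adapted_range + eps))
```

Because each sample's logits are multiplied by a positive scalar, the argmax, and hence accuracy, is unchanged; only the sharpness of the softmax confidences is tempered, which is exactly what calibration targets.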
ECCV 2024 Paper: Robust Calibration of Large Vision-Language Adapters
About the Speaker: Balamurali Murugesan is currently pursuing a Ph.D. focused on developing reliable deep learning models. Earlier, he completed his master's thesis on accelerating MRI reconstruction. He has published 25+ research articles in renowned venues.
Recorded on Nov 19, 2024