Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models
Ma, Chengcheng (1,3); Liu, Yang (4); Deng, Jiankang (2); Xie, Lingxi (2); Dong, Weiming (3); Xu, Changsheng (3)
Journal | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date | 2023-09-01
Volume | 33
Issue | 9
Pages | 4616-4629
Keywords | Vision-language model; prompt tuning; over-fitting; subspace learning; gradient projection
ISSN | 1051-8215
DOI | 10.1109/TCSVT.2023.3245584 |
Corresponding Author | Dong, Weiming (weiming.dong@ia.ac.cn)
Abstract | Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization capability on downstream vision tasks given appropriate text prompts. Instead of designing prompts manually, Context Optimization (CoOp) has recently been proposed to learn continuous prompts from task-specific training data. Despite the performance improvements on downstream tasks, several studies have reported that CoOp suffers from overfitting in two respects: (i) the test accuracy on base classes first improves and then worsens during training; (ii) the test accuracy on novel classes keeps decreasing. However, none of the existing studies explains or mitigates these overfitting problems. In this study, we first explore the cause of overfitting by analyzing the gradient flow. Comparative experiments reveal that CoOp favors generalizable features in the early training stage and spurious features in the later stage, leading to the non-overfitting and overfitting phenomena, respectively. Given these observations, we propose Subspace Prompt Tuning (SubPT), which projects the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient-flow eigenvectors throughout training, and successfully eliminates the overfitting problem. In addition, we equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization of the learned prompts to novel categories beyond the training set, without requiring image training data. Extensive experiments on 11 classification datasets demonstrate that SubPT+NFL consistently boosts the performance of CoOp and outperforms the state-of-the-art CoCoOp approach. Experiments on more challenging downstream vision tasks, including open-vocabulary object detection and zero-shot semantic segmentation, also verify the effectiveness of the proposed method. Code can be found at https://tinyurl.com/mpe64f89.
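The full text is not included in this record, but the core mechanism the abstract describes — projecting back-propagated gradients onto a low-rank subspace spanned by early-stage gradient eigenvectors — can be sketched numerically. This is an illustrative reconstruction, not the authors' implementation: the function names are hypothetical, and using the top right-singular vectors of stacked early gradients as the subspace basis is an assumption about how such eigenvectors might be obtained.

```python
import numpy as np

def subspace_basis(early_grads, k):
    """Estimate a low-rank subspace from early-stage gradients.

    early_grads: (T, D) array, one flattened gradient per early training step.
    Returns a (D, k) orthonormal basis: the top-k right singular vectors,
    i.e. the leading eigenvectors of the early gradient covariance.
    """
    _, _, vt = np.linalg.svd(early_grads, full_matrices=False)
    return vt[:k].T  # columns are orthonormal

def project_gradient(grad, basis):
    """Orthogonally project a later-stage gradient onto the subspace."""
    return basis @ (basis.T @ grad)

# Toy usage: gradients from the first few steps define the subspace;
# every subsequent gradient is projected onto it before the parameter update.
rng = np.random.default_rng(0)
early = rng.normal(size=(20, 8))   # 20 early-stage gradients, 8 parameters
B = subspace_basis(early, k=3)
g = rng.normal(size=8)             # a later-stage gradient
g_proj = project_gradient(g, B)    # component kept for the update
```

The projection is idempotent (projecting an already-projected gradient changes nothing), which is what restricts the entire remaining optimization trajectory to the early-stage subspace.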
Funding Project | National Science Foundation of China [U20B2070]; National Science Foundation of China [61832016]; Beijing Natural Science Foundation [L221013]
WOS Research Area | Engineering
Language | English
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number | WOS:001063316800016
Funding Organization | National Science Foundation of China; Beijing Natural Science Foundation
Content Type | Journal Article
Source URL | [http://ir.ia.ac.cn/handle/173211/53116]
Department | State Key Laboratory of Multimodal Artificial Intelligence Systems
Affiliations | 1. Univ Chinese Acad Sci (UCAS), Sch Artificial Intelligence, Beijing 100049, Peoples R China; 2. Huawei Inc, Shenzhen 518129, Peoples R China; 3. Chinese Acad Sci (CASIA), Inst Automat, Natl Lab Pattern Recognit (NLPR), Beijing 100190, Peoples R China; 4. Alibaba DAMO Acad, Hangzhou 310024, Peoples R China
Recommended Citation (GB/T 7714) | Ma, Chengcheng, Liu, Yang, Deng, Jiankang, et al. Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(9): 4616-4629.
APA | Ma, Chengcheng, Liu, Yang, Deng, Jiankang, Xie, Lingxi, Dong, Weiming, & Xu, Changsheng. (2023). Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(9), 4616-4629.
MLA | Ma, Chengcheng, et al. "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.9 (2023): 4616-4629.