VLMShield: Efficient and Robust Defense of Vision-Language Models against Malicious Prompts
arXiv:2604.06502v1 Announce Type: new
Abstract: Vision-Language Models (VLMs) face significant safety vulnerabilities to malicious prompt attacks because alignment weakens during visual integration. Existing defenses …
Peigui Qi, Kunsheng Tang, Yanpu Yu, Jialin Wu, Yide Song, Wenbo Zhou, Zhicong Huang, Cheng Hong, Weiming Zhang, Nenghai Yu