
Pragma-VL: Towards a Pragmatic Arbitration of Safety and Helpfulness in MLLMs

Description

Multimodal Large Language Models (MLLMs) pose critical safety challenges: they are susceptible not only to adversarial attacks such as jailbreaking but also to inadvertently generating harmful content for benign users. While internal safety alignment via Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) is a primary mitigation strategy, current methods often face a safety-utility trade-off: they either refuse benign queries out of excessive caution or overlook latent risks in cross-modal inputs.

Source

http://arxiv.org/abs/2603.13292v1