ProMoral-Bench: Evaluating Prompting Strategies for Moral Reasoning and Safety in LLMs
arXiv:2602.13274v1
Abstract: Prompt design significantly impacts the moral competence and safety alignment of large language models (LLMs), yet empirical comparisons remain fragmented …
Rohan Subramanian Thomas, Shikhar Shiromani, Abdullah Chaudhry, Ruizhe Li, Vasu Sharma, Kevin Zhu, Sunishchal Dev