Academic

JUBAKU: An Adversarial Benchmark for Exposing Culturally Grounded Stereotypes in Japanese LLMs

arXiv:2603.20581v1. Abstract: Social biases reflected in language are inherently shaped by cultural norms, which vary significantly across regions and lead to diverse manifestations of stereotypes. Existing evaluations of social bias in large language models (LLMs) for non-English contexts, however, often rely on translations of English benchmarks. Such benchmarks fail to reflect local cultural norms, including those found in Japanese. For instance, Western benchmarks may overlook Japan-specific stereotypes related to hierarchical relationships, regional dialects, or traditional gender roles. To address this limitation, we introduce Japanese cUlture adversarial BiAs benchmarK Under handcrafted creation (JUBAKU), a benchmark tailored to Japanese cultural contexts. JUBAKU uses adversarial construction to expose latent biases across ten distinct cultural categories. Unlike existing benchmarks, JUBAKU features dialogue scenarios hand-crafted by native Japanese annotators, specifically designed to trigger and reveal latent social biases in Japanese LLMs. We evaluated nine Japanese LLMs on JUBAKU and three others adapted from English benchmarks. All models clearly exhibited biases on JUBAKU, performing below the random baseline of 50% with an average accuracy of 23% (ranging from 13% to 33%), despite higher accuracy on the other benchmarks. Human annotators achieved 91% accuracy in identifying unbiased responses, confirming JUBAKU's reliability and its adversarial nature to LLMs.

Executive Summary

This study introduces JUBAKU, a culturally grounded bias benchmark tailored to the Japanese language and cultural context. The benchmark evaluates social biases in large language models (LLMs) through adversarially constructed, hand-crafted dialogue scenarios spanning ten distinct cultural categories. The authors evaluated nine Japanese LLMs on JUBAKU and on three benchmarks adapted from English. Every model fell below the 50% random baseline on JUBAKU, averaging 23% accuracy (range 13% to 33%), while scoring markedly higher on the adapted benchmarks; human annotators identified the unbiased response 91% of the time. These results underscore the need for culturally sensitive benchmarks when evaluating social bias in LLMs and carry significant implications for developing and deploying LLMs in non-English contexts.

Key Points

  • JUBAKU is a culturally grounded bias benchmark hand-crafted by native Japanese annotators for Japanese language and cultural contexts
  • The benchmark uses adversarial construction to expose latent biases across ten distinct cultural categories
  • All nine Japanese LLMs evaluated scored below the 50% random baseline on JUBAKU (13% to 33% accuracy, 23% on average) despite higher accuracy on three benchmarks adapted from English; a scoring sketch follows this list
  • Human annotators identified the unbiased response with 91% accuracy, supporting the benchmark's reliability

Merits

Strength: Culturally Grounded Approach

The use of adversarial, hand-crafted dialogue scenarios written by native Japanese annotators grounds JUBAKU in cultural knowledge that translated benchmarks miss, and the 91% human accuracy on the same items supports its reliability as a measure of social bias in Japanese LLMs.

Demerits

Limitation: Limited Generalizability

The study's focus on Japanese cultural contexts may limit the generalizability of the findings to other non-English language contexts.

Expert Commentary

The central finding is striking: models that appear largely unbiased on benchmarks translated from English collapse to well below random chance when probed with culturally specific, adversarially designed scenarios. This gap suggests that translated benchmarks systematically underestimate bias in non-English LLMs, which matters for any deployment outside English-speaking markets. The main caveat is scope: because the items are hand-crafted around Japanese norms such as hierarchical relationships, regional dialects, and traditional gender roles, the benchmark itself does not transfer to other cultures, though its handcrafted adversarial methodology plausibly does. Further research is needed to replicate this construction process in other cultural contexts and to examine what culturally sensitive bias evaluation implies for AI development and deployment more broadly.

Recommendations

  • Adapt JUBAKU's handcrafted, adversarial construction methodology to other cultural contexts to increase its generalizability and applicability beyond Japanese.
  • Explore the broader implications of culturally sensitive evaluations of social bias in AI development and deployment, including regulatory guidelines and industry standards.

Sources

Original: arXiv:2603.20581 (cs.CL)