Learning to Generate and Extract: A Multi-Agent Collaboration Framework For Zero-shot Document-level Event Arguments Extraction

arXiv:2603.02909v1 Announce Type: new Abstract: Document-level event argument extraction (DEAE) is essential for knowledge acquisition, aiming to extract the participants of events from documents. In the zero-shot setting, existing methods employ LLMs to generate synthetic data to address the scarcity of annotated data. However, relying solely on event-type-only prompts makes it difficult for the generated content to accurately capture the contextual and structural relationships of unseen events. Moreover, ensuring the reliability and usability of synthetic data remains a significant challenge due to the absence of quality evaluation mechanisms. To this end, we introduce a multi-agent collaboration framework for zero-shot document-level event argument extraction (ZS-DEAE), which simulates the human collaborative cognitive process of "Propose-Evaluate-Revise." Specifically, the framework comprises a generation agent and an evaluation agent. The generation agent synthesizes data for unseen events by leveraging knowledge from seen events, while the evaluation agent extracts arguments from the synthetic data and assesses their semantic consistency with the context. The evaluation results are then converted into reward signals, with event structure constraints incorporated into the reward design to enable iterative optimization of both agents via reinforcement learning. In three zero-shot scenarios constructed from the RAMS and WikiEvents datasets, our method achieves improvements in both data generation quality and argument extraction performance, while the generated data also effectively enhances the zero-shot performance of other DEAE models.

Executive Summary

This article introduces a multi-agent collaboration framework for zero-shot document-level event argument extraction (ZS-DEAE). The framework pairs a generation agent with an evaluation agent: the former synthesizes training data for unseen event types, while the latter extracts arguments from that data and scores its semantic consistency, with the scores fed back as reinforcement-learning rewards. Across three zero-shot scenarios built from the RAMS and WikiEvents datasets, the approach improves both data generation quality and argument extraction performance, and the generated data also boosts the zero-shot performance of other DEAE models. By simulating the human collaborative cycle of proposing, evaluating, and revising, the framework offers a promising way to address the scarcity of annotated data in zero-shot event argument extraction.

Key Points

  • Introduction of a multi-agent collaboration framework for ZS-DEAE
  • Simulation of the human collaborative cognitive process of "Propose-Evaluate-Revise"
  • Improvements in data generation quality and argument extraction performance
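The "Propose-Evaluate-Revise" cycle listed above can be sketched as a simple control loop. Everything below is a hypothetical illustration: the agent classes, the string-matching consistency score, and the reward threshold are stand-ins, not the paper's actual models or API.

```python
# Hypothetical sketch of the "Propose-Evaluate-Revise" loop.
# GenerationAgent, EvaluationAgent, and the threshold are illustrative
# names and stubs, not the framework's real implementation.

from dataclasses import dataclass, field
from typing import List, Dict, Tuple


@dataclass
class SyntheticDoc:
    event_type: str
    text: str
    arguments: Dict[str, str] = field(default_factory=dict)


class GenerationAgent:
    """Proposes a synthetic document for an unseen event type."""

    def propose(self, event_type: str, seen_examples: List[str]) -> SyntheticDoc:
        # The real framework conditions an LLM on knowledge from seen
        # events; here we just stitch a placeholder document together.
        text = f"A {event_type} event occurred. Context: {seen_examples[0]}"
        return SyntheticDoc(event_type=event_type, text=text)


class EvaluationAgent:
    """Extracts arguments and scores semantic consistency (stubbed)."""

    def evaluate(self, doc: SyntheticDoc) -> float:
        doc.arguments = {"participant": "placeholder"}  # extraction stub
        # Toy consistency check: does the document mention its event type?
        return 1.0 if doc.event_type in doc.text else 0.0


def propose_evaluate_revise(
    event_type: str,
    seen_examples: List[str],
    max_rounds: int = 3,
    threshold: float = 0.9,
) -> Tuple[SyntheticDoc, float]:
    """Propose, evaluate, and revise until the reward clears the threshold."""
    gen, ev = GenerationAgent(), EvaluationAgent()
    doc = gen.propose(event_type, seen_examples)
    reward = ev.evaluate(doc)
    for _ in range(max_rounds - 1):
        if reward >= threshold:
            break
        doc = gen.propose(event_type, seen_examples)  # revise on low reward
        reward = ev.evaluate(doc)
    return doc, reward
```

In the paper both agents are iteratively optimized via reinforcement learning on the reward signal; the loop above only shows the data flow between the two roles, not the policy updates.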

Merits

Effective Use of Reinforcement Learning

The incorporation of event structure constraints into the reward design enables iterative optimization of both agents via reinforcement learning, leading to improved performance.
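A reward that folds event structure constraints into the signal might look like the following minimal sketch. The schema, role names, and weighting are assumptions for illustration only; the paper does not publish this exact formulation.

```python
# Illustrative reward mixing a semantic-consistency score with an
# event-structure term (fraction of required roles actually filled).
# The schema and the alpha weighting below are assumptions.

from typing import Iterable

EVENT_SCHEMA = {
    # Hypothetical ontology entry: required argument roles per event type.
    "Conflict.Attack": {"attacker", "target", "instrument"},
}


def structure_reward(
    event_type: str,
    extracted_roles: Iterable[str],
    consistency_score: float,
    alpha: float = 0.5,
) -> float:
    """Weighted sum of semantic consistency and structural completeness."""
    required = EVENT_SCHEMA[event_type]
    filled = len(required & set(extracted_roles)) / len(required)
    return alpha * consistency_score + (1 - alpha) * filled
```

A generation whose arguments cover only two of three required roles is penalized on the structural term even if it reads as perfectly consistent, which is the kind of pressure an event-structure constraint adds to the reward.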

Enhanced Data Generation Quality

The framework's ability to generate high-quality synthetic data enhances the performance of other DEAE models and addresses the challenge of data scarcity.

Demerits

Complexity of the Framework

The multi-agent collaboration framework may be complex to implement and require significant computational resources, which could limit its adoption.

Dependence on Event Structure Constraints

The framework's reliance on event structure constraints in its reward design may limit its applicability to domains where event schemas are loosely defined or unavailable.

Expert Commentary

The proposed multi-agent collaboration framework is a meaningful advance in zero-shot event argument extraction. By simulating the human collaborative cycle of proposing, evaluating, and revising, it generates higher-quality synthetic data and improves argument extraction performance without requiring annotated examples of unseen events. However, the complexity of coordinating and jointly training two agents, together with the dependence on event structure constraints, may limit its applicability in some domains. Further research is needed to address these challenges and to explore applications of the framework in other areas of NLP.

Recommendations

  • Further evaluation of the framework's performance in different domains and datasets
  • Exploration of the potential applications of the framework in areas such as news analysis and sentiment analysis
