AILS-NTUA at SemEval-2026 Task 10: Agentic LLMs for Psycholinguistic Marker Extraction and Conspiracy Endorsement Detection
arXiv:2603.04921v1. Abstract: This paper presents a novel agentic LLM pipeline for SemEval-2026 Task 10 that jointly extracts psycholinguistic conspiracy markers and detects conspiracy endorsement. Unlike traditional classifiers that conflate semantic reasoning with structural localization, our decoupled design isolates these challenges. For marker extraction, we propose Dynamic Discriminative Chain-of-Thought (DD-CoT) with deterministic anchoring to resolve semantic ambiguity and character-level brittleness. For conspiracy detection, an "Anti-Echo Chamber" architecture, consisting of an adversarial Parallel Council adjudicated by a Calibrated Judge, overcomes the "Reporter Trap," where models falsely penalize objective reporting. Achieving 0.24 Macro F1 (+100% over baseline) on S1 and 0.79 Macro F1 (+49%) on S2, with the S1 system ranking 3rd on the development leaderboard, our approach establishes a versatile paradigm for interpretable, psycholinguistically-grounded NLP.
Executive Summary
The article presents a novel agentic LLM pipeline for extracting psycholinguistic conspiracy markers and detecting conspiracy endorsement. The pipeline doubles the baseline Macro F1 on S1 (0.24, +100%) and improves it by 49% on S2 (0.79), with the S1 system ranking 3rd on the development leaderboard. The approach decouples semantic reasoning from structural localization, using Dynamic Discriminative Chain-of-Thought (DD-CoT) for marker extraction and an Anti-Echo Chamber architecture for endorsement detection. The results establish a versatile paradigm for interpretable, psycholinguistically-grounded NLP.
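The decoupling described above can be sketched as a minimal orchestration: LLM calls handle the semantic questions (which markers, endorsed or not), while plain code handles structural localization (exact offsets). The names below (`StubLLM`, `locate`, `run_pipeline`) and the toy cue-phrase heuristics are our illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class StubLLM:
    """Stand-in for the agentic LLM calls; the real system prompts a model."""

    def extract_markers(self, text: str) -> list[str]:
        # Toy heuristic in place of DD-CoT: flag a couple of cue phrases.
        return [cue for cue in ("cover-up", "they") if cue in text.lower()]

    def detect_endorsement(self, text: str) -> str:
        # Toy heuristic in place of the Anti-Echo Chamber council.
        return "endorses" if "wake up" in text.lower() else "reports"

def locate(text: str, marker: str):
    """Structural localization kept out of the LLM: exact offsets via code."""
    i = text.lower().find(marker)
    return (i, i + len(marker)) if i != -1 else None

def run_pipeline(text: str, llm: StubLLM):
    markers = llm.extract_markers(text)          # semantic reasoning (LLM)
    spans = [locate(text, m) for m in markers]   # structural localization (code)
    return spans, llm.detect_endorsement(text)   # endorsement decision (LLM)

print(run_pipeline("Wake up: they hid the cover-up.", StubLLM()))
```

The point of the split is that the LLM never has to emit character indices, which is exactly the brittleness the abstract says deterministic anchoring targets.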
Key Points
- ▸ Novel agentic LLM pipeline for psycholinguistic marker extraction and conspiracy endorsement detection
- ▸ Decoupled design isolates semantic reasoning and structural localization challenges
- ▸ Doubles the baseline Macro F1 on S1 (0.24, +100%) and improves it by 49% on S2 (0.79)
Merits
Improved Accuracy
The pipeline doubles the baseline Macro F1 on S1 and improves it by 49% on S2, demonstrating its effectiveness at both extracting psycholinguistic conspiracy markers and detecting conspiracy endorsement.
Interpretable Results
The approach provides interpretable results, allowing for a deeper understanding of the underlying psycholinguistic mechanisms driving conspiracy endorsement.
Demerits
Complexity
The pipeline's complexity may make it challenging to implement and interpret, particularly for those without extensive expertise in NLP and psycholinguistics.
Data Dependence
The approach's performance may depend on the quality and availability of annotated training data, which can limit applicability in low-resource domains.
Expert Commentary
The article presents a significant contribution to the field of NLP, demonstrating the potential for agentic LLM pipelines to extract psycholinguistic conspiracy markers and detect conspiracy endorsement. The approach's decoupled design and use of Dynamic Discriminative Chain-of-Thought and Anti-Echo Chamber architecture are notable strengths. However, the complexity of the pipeline and its dependence on high-quality training data are important limitations to consider. Overall, the article highlights the need for continued research and development in this area, with important implications for both practical applications and policy considerations.
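To make the Anti-Echo Chamber idea concrete, here is a minimal sketch of a Parallel Council adjudicated by a judge, with deterministic stub agents standing in for adversarially prompted LLMs. The names (`skeptic`, `believer`, `calibrated_judge`), the cue lists, and the confidence-weighted vote are all illustrative assumptions, not the paper's API:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # "endorses" or "reports"
    confidence: float   # agent's self-reported confidence in [0, 1]

# Council members take opposing stances. In the paper each would be an LLM
# prompt; here they are deterministic stubs so the sketch is runnable.
def skeptic(text: str) -> Verdict:
    # Hunts for attribution cues that signal objective reporting,
    # guarding against the "Reporter Trap".
    cues = ("according to", "claims that", "reported")
    if any(c in text.lower() for c in cues):
        return Verdict("reports", 0.7)
    return Verdict("endorses", 0.55)

def believer(text: str) -> Verdict:
    # Hunts for conspiratorial framing cues that signal endorsement.
    cues = ("wake up", "cover-up", "the truth")
    if any(c in text.lower() for c in cues):
        return Verdict("endorses", 0.75)
    return Verdict("reports", 0.5)

def calibrated_judge(text: str, council) -> str:
    # Confidence-weighted vote over the council's verdicts; a real calibrated
    # judge would also correct systematic over-confidence (e.g. Platt scaling).
    score = sum(v.confidence if v.label == "endorses" else -v.confidence
                for v in (agent(text) for agent in council))
    return "endorses" if score > 0 else "reports"

council = (skeptic, believer)
print(calibrated_judge("According to officials, the claim is false.", council))  # reports
print(calibrated_judge("Wake up, the cover-up is real.", council))               # endorses
```

The adversarial pairing is what breaks the echo chamber: a lone classifier that over-weights conspiratorial vocabulary would mislabel the first example, whereas the skeptic's attribution evidence outvotes it here.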
Recommendations
- ✓ Further research is needed to explore the potential applications and limitations of the proposed pipeline
- ✓ The development of more transparent and explainable NLP models is crucial to address the ethical concerns surrounding their use in psycholinguistic analysis