Evolving Beyond Snapshots: Harmonizing Structure and Sequence via Entity State Tuning for Temporal Knowledge Graph Forecasting
arXiv:2602.12389v1 Announce Type: new Abstract: Temporal knowledge graph (TKG) forecasting requires predicting future facts by jointly modeling structural dependencies within each snapshot and temporal evolution across snapshots. However, most existing methods are stateless: they recompute entity representations at each timestamp...
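The stateless-versus-stateful distinction this abstract draws can be sketched in a few lines. This is purely illustrative and not the paper's method: `update_state`, the blend factor `alpha`, and the list-based features are all assumptions.

```python
# Illustrative contrast: recomputing entity representations at each
# timestamp (stateless) vs. carrying a persistent entity state that is
# updated across snapshots (stateful). Names and math are hypothetical.

def stateless_repr(snapshot_feats):
    # Recomputed from scratch every timestamp: history is discarded.
    return [f * 1.0 for f in snapshot_feats]

def update_state(state, snapshot_feats, alpha=0.8):
    # Persistent state: blend the previous state with the new snapshot,
    # so temporal evolution accumulates instead of being recomputed.
    return [alpha * s + (1 - alpha) * f for s, f in zip(state, snapshot_feats)]

snapshots = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
state = [0.0, 0.0]
for feats in snapshots:
    state = update_state(state, feats)
```

After the loop, `state` reflects all three snapshots with exponentially decaying weight on older ones, whereas `stateless_repr(snapshots[-1])` would see only the last.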
Consistency of Large Reasoning Models Under Multi-Turn Attacks
arXiv:2602.13093v2 Announce Type: new Abstract: Large reasoning models achieve state-of-the-art performance on complex tasks, but their robustness under multi-turn adversarial pressure remains underexplored. We evaluate nine frontier reasoning models under adversarial attacks. Our findings reveal that reasoning...
OptiML: An End-to-End Framework for Program Synthesis and CUDA Kernel Optimization
arXiv:2602.12305v1 Announce Type: cross Abstract: Generating high-performance CUDA kernels remains challenging due to the need to navigate a combinatorial space of low-level transformations under noisy and expensive hardware feedback. Although large language models can synthesize functionally correct CUDA code, achieving...
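The combinatorial search under noisy hardware feedback that this abstract describes can be illustrated with a toy random search. Everything here is a stand-in: `benchmark` fakes hardware timing with synthetic noise, and the tile/unroll space is hypothetical, not a real CUDA toolchain call.

```python
import random

# Toy sketch of searching a combinatorial kernel-configuration space
# under noisy feedback. benchmark() is a mock: its "true" optimum is
# tile=64, unroll=4, plus uniform measurement noise.
def benchmark(tile, unroll, rng):
    base = abs(tile - 64) * 0.01 + abs(unroll - 4) * 0.05
    return base + rng.uniform(0.0, 0.02)  # noisy "runtime"

def search(trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(trials):
        cfg = (rng.choice([16, 32, 64, 128]), rng.choice([1, 2, 4, 8]))
        t = benchmark(*cfg, rng)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

best_cfg, best_time = search()
```

Real systems replace random sampling with learned or LLM-guided proposals, but the evaluate-and-keep-best loop over a discrete configuration space is the common skeleton.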
Why Deep Jacobian Spectra Separate: Depth-Induced Scaling and Singular-Vector Alignment
arXiv:2602.12384v2 Announce Type: cross Abstract: Understanding why gradient-based training in deep networks exhibits strong implicit bias remains challenging, in part because tractable singular-value dynamics are typically available only for balanced deep linear models. We propose an alternative route based on...
Reproducing DragDiffusion: Interactive Point-Based Editing with Diffusion Models
arXiv:2602.12393v1 Announce Type: cross Abstract: DragDiffusion is a diffusion-based method for interactive point-based image editing that enables users to manipulate images by directly dragging selected points. The method claims that accurate spatial control can be achieved by optimizing a single...
What does RL improve for Visual Reasoning? A Frankenstein-Style Analysis
arXiv:2602.12395v1 Announce Type: cross Abstract: Reinforcement learning (RL) with verifiable rewards has become a standard post-training stage for boosting visual reasoning in vision-language models, yet it remains unclear what capabilities RL actually improves compared with supervised fine-tuning as cold-start initialization...
Agent Skills for Large Language Models: Architecture, Acquisition, Security, and the Path Forward
arXiv:2602.12430v2 Announce Type: cross Abstract: The transition from monolithic language models to modular, skill-equipped agents marks a defining shift in how large language models (LLMs) are deployed in practice. Rather than encoding all procedural knowledge within model weights, agent skills...
Multimodal Large Language Models (MLLMs): From Theory to Practice
arXiv:2602.12302v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) combine the natural language understanding and generation capabilities of LLMs with perception skills in modalities such as image and audio, representing a key advancement in contemporary AI. This chapter presents...
Learning Ordinal Probabilistic Reward from Preferences
arXiv:2602.12660v1 Announce Type: new Abstract: Reward models are crucial for aligning large language models (LLMs) with human values and intentions. Existing approaches follow either Generative (GRMs) or Discriminative (DRMs) paradigms, yet both suffer from limitations: GRMs typically demand costly point-wise...
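Learning scalar rewards from pairwise preferences is commonly grounded in the Bradley-Terry model; a minimal fit by gradient ascent looks like the sketch below. This is the generic preference-learning baseline, not the paper's ordinal probabilistic model, and the data and hyperparameters are invented.

```python
import math

# Minimal Bradley-Terry sketch: fit one scalar reward per item from
# pairwise preferences by gradient ascent on the log-likelihood
# sum(log sigmoid(r_winner - r_loser)).
def fit_rewards(n_items, prefs, lr=0.5, steps=500):
    r = [0.0] * n_items
    for _ in range(steps):
        grad = [0.0] * n_items
        for winner, loser in prefs:
            p_win = 1.0 / (1.0 + math.exp(r[loser] - r[winner]))
            grad[winner] += 1.0 - p_win   # d logL / d r_winner
            grad[loser] -= 1.0 - p_win    # d logL / d r_loser
        r = [ri + lr * g for ri, g in zip(r, grad)]
    return r

# Preferences: item 0 beats 1 (twice), 1 beats 2 (twice), 0 beats 2.
rewards = fit_rewards(3, [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2)])
```

The fitted rewards recover the implied ordering `r[0] > r[1] > r[2]`; in practice one adds regularization, since unconstrained Bradley-Terry scores drift on fully consistent data.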
ReFilter: Improving Robustness of Retrieval-Augmented Generation via Gated Filter
arXiv:2602.12709v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) has become a dominant paradigm for grounding large language models (LLMs) with external evidence in knowledge-intensive question answering. A core design choice is how to fuse retrieved samples into the LLMs, where...
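The idea of gating retrieved evidence before fusing it into the LLM can be sketched as a score-and-threshold filter. The word-overlap scorer and the threshold are illustrative assumptions; a real gate would use a learned relevance model.

```python
# Hedged sketch of a gate over retrieved passages: score each passage
# against the query and keep only those clearing a threshold, so noisy
# retrievals never reach the prompt. overlap_score is a toy scorer.
def overlap_score(query, passage):
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def gate_passages(query, passages, threshold=0.5):
    scored = [(overlap_score(query, p), p) for p in passages]
    return [p for s, p in sorted(scored, reverse=True) if s >= threshold]

query = "who wrote the origin of species"
passages = [
    "Charles Darwin wrote On the Origin of Species in 1859.",
    "The species problem concerns how to define a species.",
    "Football origin stories vary widely by country.",
]
kept = gate_passages(query, passages)
```

Only the first passage clears the gate here; the two distractors are filtered out before any fusion step sees them.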
When Words Don't Mean What They Say: Figurative Understanding in Bengali Idioms
arXiv:2602.12921v1 Announce Type: new Abstract: Figurative language understanding remains a significant challenge for Large Language Models (LLMs), especially for low-resource languages. To address this, we introduce a large-scale, culturally grounded corpus of 10,361 Bengali idioms. Each idiom...
TraceBack: Multi-Agent Decomposition for Fine-Grained Table Attribution
arXiv:2602.13059v1 Announce Type: new Abstract: Question answering (QA) over structured tables requires not only accurate answers but also transparency about which cells support them. Existing table QA systems rarely provide fine-grained attribution, so even correct answers often lack verifiable grounding,...
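"Fine-grained attribution" for table QA means pointing at the specific cells that support an answer. A toy exact-match version is below; the table layout and `attribute_cells` helper are hypothetical, and the paper's multi-agent decomposition is far more involved.

```python
# Illustrative sketch of cell-level attribution: given an answer value,
# return the (row index, column name) of every cell that supports it.
# Exact string matching only; real systems handle derived answers too.
def attribute_cells(table, answer):
    hits = []
    for r, row in enumerate(table["rows"]):
        for c, cell in enumerate(row):
            if str(cell) == str(answer):
                hits.append((r, table["columns"][c]))
    return hits

table = {
    "columns": ["city", "population"],
    "rows": [["Oslo", 709000], ["Bergen", 291000]],
}
cells = attribute_cells(table, 709000)
```

For the answer `709000`, the attribution is row 0 of the `population` column, which a reader can verify against the table directly.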
Exploring a New Competency Modeling Process with Large Language Models
arXiv:2602.13084v1 Announce Type: new Abstract: Competency modeling is widely used in human resource management to select, develop, and evaluate talent. However, traditional expert-driven approaches rely heavily on manual analysis of large volumes of interview transcripts, making them costly and prone...
Beyond Musical Descriptors: Extracting Preference-Bearing Intent in Music Queries
arXiv:2602.12301v1 Announce Type: cross Abstract: Although annotated music descriptor datasets for user queries are increasingly common, few consider the user's intent behind these descriptors, which is essential for effectively meeting their needs. We introduce MusicRecoIntent, a manually annotated corpus of...
DiffuRank: Effective Document Reranking with Diffusion Language Models
arXiv:2602.12528v1 Announce Type: cross Abstract: Recent advances in large language models (LLMs) have inspired new paradigms for document reranking. While these paradigms better exploit the reasoning and contextual understanding capabilities of LLMs, most existing LLM-based rerankers rely on autoregressive generation,...
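The contrast with autoregressive reranking is easiest to see against a pointwise baseline that scores every document in one independent pass and sorts. The token-overlap scorer is a toy stand-in for an LLM relevance score, not anything from the paper.

```python
# Toy pointwise reranker: score each document independently against the
# query (token overlap here) and sort by score. Scoring all documents
# without generating a ranking token-by-token is the property that
# parallel, non-autoregressive rerankers aim for.
def rerank(query, docs):
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), i) for i, d in enumerate(docs)]
    return [i for s, i in sorted(scored, key=lambda t: (-t[0], t[1]))]

order = rerank(
    "python list sort",
    ["how to sort a python list", "java map iteration", "sort numbers"],
)
```

The returned permutation puts the most query-relevant document first; ties break by original position.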
The Appeal and Reality of Recycling LoRAs with Adaptive Merging
arXiv:2602.12323v1 Announce Type: new Abstract: The widespread availability of fine-tuned LoRA modules for open pre-trained models has led to an interest in methods that can adaptively merge LoRAs to improve performance. These methods typically include some way of selecting LoRAs...
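Adaptive merging of LoRA modules boils down to choosing per-module weights and averaging the deltas. The sketch below uses plain lists in place of weight tensors and a fixed score vector in place of a learned selection scheme; both are assumptions for illustration.

```python
# Toy sketch of adaptively merging LoRA deltas: weight each candidate
# LoRA by a (e.g. held-out) score, normalize the weights, and take the
# weighted average of the deltas. Real methods also select which LoRAs
# to include and operate on low-rank factors, not dense vectors.
def merge_loras(deltas, scores):
    total = sum(scores)
    weights = [s / total for s in scores]
    merged = [0.0] * len(deltas[0])
    for w, delta in zip(weights, deltas):
        merged = [m + w * d for m, d in zip(merged, delta)]
    return merged

deltas = [[1.0, 0.0], [0.0, 1.0]]
merged = merge_loras(deltas, scores=[3.0, 1.0])
```

With scores 3 and 1, the first LoRA contributes 75% of the merged delta and the second 25%.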