AAAI Conferences and Symposia
Learn about upcoming AAAI conferences and symposia, which promote research in AI and foster scientific exchange.
AAAI Code of Conduct for Conferences and Events
The AAAI code of conduct for conferences and events ensures that we provide a respectful and inclusive conference experience for everyone.
The International Conference on Web and Social Media (ICWSM)
ICWSM brings together researchers in the broad field of social media analysis to foster discussions about research.
Membership in AAAI
AAAI membership supports efforts to encourage and facilitate research, education, and development in artificial intelligence.
AAAI Conference on Artificial Intelligence
The AAAI Conference on Artificial Intelligence promotes theoretical and applied AI research as well as intellectual interchange among researchers and practitioners.
AAAI 2026 Summer Symposium Series
We invite proposals for the 2026 Summer Symposium Series, to be held June 22–24, 2026, at Dongguk University in Seoul, South Korea.
A Theoretical Framework for Adaptive Utility-Weighted Benchmarking
arXiv:2602.12356v1 Announce Type: new Abstract: Benchmarking has long served as a foundational practice in machine learning and, increasingly, in modern AI systems such as large language models, where shared tasks, metrics, and leaderboards offer a common basis for measuring progress...
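The abstract above frames benchmarking as aggregating shared task metrics; a generic illustration (not the paper's framework) is to weight each task's score by a stated utility before averaging. The task names, scores, and utility values below are hypothetical placeholders.

```python
def weighted_benchmark_score(scores: dict, utilities: dict) -> float:
    """Aggregate per-task scores using normalized utility weights."""
    total = sum(utilities[t] for t in scores)
    return sum(scores[t] * utilities[t] / total for t in scores)

# Toy example: "code" is valued twice as highly as the other tasks.
scores = {"qa": 0.80, "code": 0.60, "math": 0.40}
utilities = {"qa": 1.0, "code": 2.0, "math": 1.0}
aggregate = weighted_benchmark_score(scores, utilities)
```

With equal utilities this reduces to the plain mean; skewing the weights shifts the leaderboard toward the tasks a deployment actually cares about.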
Intent-Driven Smart Manufacturing Integrating Knowledge Graphs and Large Language Models
arXiv:2602.12419v1 Announce Type: new Abstract: The increasing complexity of smart manufacturing environments demands interfaces that can translate high-level human intents into machine-executable actions. This paper presents a unified framework that integrates instruction-tuned Large Language Models (LLMs) with ontology-aligned Knowledge Graphs...
To Mix or To Merge: Toward Multi-Domain Reinforcement Learning for Large Language Models
arXiv:2602.12566v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) plays a key role in stimulating the explicit reasoning capability of Large Language Models (LLMs). We can achieve expert-level performance in some specific domains via RLVR, such as coding...
AI Agents for Inventory Control: Human-LLM-OR Complementarity
arXiv:2602.12631v1 Announce Type: new Abstract: Inventory control is a fundamental operations problem in which ordering decisions are traditionally guided by theoretically grounded operations research (OR) algorithms. However, such algorithms often rely on rigid modeling assumptions and can perform poorly when...
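As a point of reference for the OR algorithms the abstract mentions, a classic order-up-to (base-stock) rule can be sketched as follows. This is a textbook baseline, not the paper's agent, and the numeric levels are arbitrary.

```python
def base_stock_order(on_hand: int, in_transit: int, base_stock: int) -> int:
    """Order-up-to rule: raise the inventory position (on-hand plus
    in-transit stock) back to the base-stock level, never ordering
    a negative quantity."""
    position = on_hand + in_transit
    return max(base_stock - position, 0)

order_qty = base_stock_order(on_hand=12, in_transit=5, base_stock=25)
```

The rigidity criticized in the abstract is visible here: the rule reacts only to the inventory position and encodes no contextual information about demand shifts.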
Think Fast and Slow: Step-Level Cognitive Depth Adaptation for LLM Agents
arXiv:2602.12662v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed as autonomous agents for multi-turn decision-making tasks. However, current agents typically rely on fixed cognitive patterns: non-thinking models generate immediate responses, while thinking models engage in deep reasoning...
X-SYS: A Reference Architecture for Interactive Explanation Systems
arXiv:2602.12748v1 Announce Type: new Abstract: The explainable AI (XAI) research community has proposed numerous technical methods, yet deploying explainability as systems remains challenging: Interactive explanation systems require both suitable algorithms and system capabilities that maintain explanation usability across repeated queries,...
Optimal Take-off under Fuzzy Clearances
arXiv:2602.13166v1 Announce Type: new Abstract: This paper presents a hybrid obstacle avoidance architecture that integrates Optimal Control under clearance with a Fuzzy Rule Based System (FRBS) to enable adaptive constraint handling for unmanned aircraft. Motivated by the limitations of classical...
From Biased Chatbots to Biased Agents: Examining Role Assignment Effects on LLM Agent Robustness
arXiv:2602.12285v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly deployed as autonomous agents capable of actions with real-world impacts beyond text generation. While persona-induced biases in text generation are well documented, their effects on agent task performance remain...
Retrieval-Augmented Self-Taught Reasoning Model with Adaptive Chain-of-Thought for ASR Named Entity Correction
arXiv:2602.12287v1 Announce Type: cross Abstract: End-to-end automatic speech recognition (ASR) systems frequently misrecognize domain-specific phrases like named entities, which can cause catastrophic failures in downstream tasks. A new family of named entity correction methods based on large language models (LLMs)...
Adaptive traffic signal control optimization using a novel road partition and multi-channel state representation method
arXiv:2602.12296v1 Announce Type: cross Abstract: This study proposes a novel adaptive traffic signal control method leveraging a Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) to optimize signal timing by integrating variable cell length and multi-channel state representation. A road...
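The multi-channel state representation named in the abstract can be sketched generically: discretize each approach lane into cells and stack one channel per feature. The cell length, lane count, and the occupancy/speed channel choice below are illustrative assumptions, not the paper's exact encoding.

```python
def encode_state(vehicles, n_lanes, n_cells, cell_len, max_speed):
    """Build a (2, n_lanes, n_cells) state: channel 0 is cell occupancy,
    channel 1 is normalized vehicle speed per cell.

    vehicles: list of (lane_index, position_m, speed_mps) tuples.
    """
    occ = [[0.0] * n_cells for _ in range(n_lanes)]
    spd = [[0.0] * n_cells for _ in range(n_lanes)]
    for lane, pos, speed in vehicles:
        cell = min(int(pos // cell_len), n_cells - 1)  # clamp to last cell
        occ[lane][cell] = 1.0
        spd[lane][cell] = speed / max_speed
    return [occ, spd]

# Two vehicles on a 2-lane, 50 m approach split into 5 m cells.
state = encode_state([(0, 12.0, 7.0), (1, 49.0, 0.0)],
                     n_lanes=2, n_cells=10, cell_len=5.0, max_speed=14.0)
```

A DQN or PPO policy would consume this tensor directly; varying `cell_len` per road segment is one way to realize the variable cell length the abstract describes.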
Quantum walk inspired JPEG compression of images
arXiv:2602.12306v1 Announce Type: cross Abstract: This work proposes a quantum inspired adaptive quantization framework that enhances the classical JPEG compression by introducing a learned, optimized Qtable derived using a Quantum Walk Inspired Optimization (QWIO) search strategy. The optimizer searches a...
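For context, the classical JPEG quantization step that any learned Q-table plugs into divides each DCT coefficient block elementwise by the table and rounds. A search strategy like the paper's QWIO would tune the table entries; the 2x2 block and table values below are toy placeholders.

```python
def quantize(dct_block, q_table):
    """Elementwise divide-and-round: larger table entries discard more
    detail in that frequency band."""
    return [[round(c / q) for c, q in zip(row_c, row_q)]
            for row_c, row_q in zip(dct_block, q_table)]

def dequantize(quantized, q_table):
    """Invert quantization up to rounding loss."""
    return [[v * q for v, q in zip(row_v, row_q)]
            for row_v, row_q in zip(quantized, q_table)]

block = [[100, -40], [8, 3]]   # toy 2x2 stand-in for an 8x8 DCT block
q_tab = [[16, 11], [12, 10]]
q = quantize(block, q_tab)
reconstructed = dequantize(q, q_tab)
```

The gap between `block` and `reconstructed` is exactly the distortion an optimized Q-table trades off against file size.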
Visible and Hyperspectral Imaging for Quality Assessment of Milk: Property Characterisation and Identification
arXiv:2602.12313v1 Announce Type: cross Abstract: Rapid and non-destructive assessment of milk quality is crucial to ensuring both nutritional value and food safety. In this study, we investigated the potential of visible and hyperspectral imaging as cost-effective and quick-response alternatives to...
AgenticShop: Benchmarking Agentic Product Curation for Personalized Web Shopping
arXiv:2602.12315v1 Announce Type: cross Abstract: The proliferation of e-commerce has made web shopping platforms key gateways for customers navigating the vast digital marketplace. Yet this rapid expansion has led to a noisy and fragmented information environment, increasing cognitive burden as...
CacheMind: From Miss Rates to Why -- Natural-Language, Trace-Grounded Reasoning for Cache Replacement
arXiv:2602.12422v1 Announce Type: cross Abstract: Cache replacement remains a challenging problem in CPU microarchitecture, often addressed using hand-crafted heuristics, limiting cache performance. Cache data analysis requires parsing millions of trace entries with manual filtering, making the process slow and non-interactive....
ReFilter: Improving Robustness of Retrieval-Augmented Generation via Gated Filter
arXiv:2602.12709v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) has become a dominant paradigm for grounding large language models (LLMs) with external evidence in knowledge-intensive question answering. A core design choice is how to fuse retrieved samples into the LLMs, where...
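The gating idea the abstract alludes to can be sketched minimally: retrieved passages reach the LLM prompt only if a gate score clears a threshold. The scoring values and threshold below are placeholders, not ReFilter's trained gate.

```python
def gated_filter(passages, gate_scores, threshold=0.5):
    """Keep only passages whose gate score meets the threshold,
    so low-relevance evidence never enters the prompt."""
    return [p for p, s in zip(passages, gate_scores) if s >= threshold]

kept = gated_filter(["doc A", "doc B", "doc C"], [0.9, 0.2, 0.7])
```

The design choice this highlights: filtering before fusion keeps the context window budget for evidence the gate trusts, rather than asking the model to ignore noise after the fact.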
ViMedCSS: A Vietnamese Medical Code-Switching Speech Dataset & Benchmark
arXiv:2602.12911v1 Announce Type: new Abstract: Code-switching (CS), in which Vietnamese speech incorporates English words such as drug names or procedure terms, is a common phenomenon in Vietnamese medical communication. This creates challenges for Automatic Speech Recognition (ASR) systems, especially in low-resource...
Exploring a New Competency Modeling Process with Large Language Models
arXiv:2602.13084v1 Announce Type: new Abstract: Competency modeling is widely used in human resource management to select, develop, and evaluate talent. However, traditional expert-driven approaches rely heavily on manual analysis of large volumes of interview transcripts, making them costly and prone...
DiffuRank: Effective Document Reranking with Diffusion Language Models
arXiv:2602.12528v1 Announce Type: cross Abstract: Recent advances in large language models (LLMs) have inspired new paradigms for document reranking. While these paradigms better exploit the reasoning and contextual understanding capabilities of LLMs, most existing LLM-based rerankers rely on autoregressive generation,...
Decoder-only Conformer with Modality-aware Sparse Mixtures of Experts for ASR
arXiv:2602.12546v1 Announce Type: cross Abstract: We present a decoder-only Conformer for automatic speech recognition (ASR) that processes speech and text in a single stack without external speech encoders or pretrained large language models (LLM). The model uses a modality-aware sparse...
The Appeal and Reality of Recycling LoRAs with Adaptive Merging
arXiv:2602.12323v1 Announce Type: new Abstract: The widespread availability of fine-tuned LoRA modules for open pre-trained models has led to an interest in methods that can adaptively merge LoRAs to improve performance. These methods typically include some way of selecting LoRAs...
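The merging operation behind methods like those surveyed above is simple to state: each LoRA contributes a low-rank delta B @ A, and an adaptive method chooses the mixing weights. The sketch below shows the arithmetic only; the 2x2 base matrix, rank-1 factors, and fixed weight are toy assumptions, and real methods select the weights per input or per task.

```python
def matmul(a, b):
    """Plain nested-loop matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_loras(base, loras, weights):
    """Add each weighted low-rank delta B @ A onto a copy of the base
    weight matrix.  loras: list of (B, A) factor pairs."""
    out = [row[:] for row in base]
    for (B, A), w in zip(loras, weights):
        delta = matmul(B, A)
        for i in range(len(out)):
            for j in range(len(out[0])):
                out[i][j] += w * delta[i][j]
    return out

base = [[1.0, 0.0], [0.0, 1.0]]
lora1 = ([[1.0], [0.0]], [[0.0, 2.0]])  # rank-1: B is 2x1, A is 1x2
merged = merge_loras(base, [lora1], [0.5])
```

Because the deltas are summed, interference between LoRAs trained on different tasks is the central failure mode that adaptive weight selection tries to mitigate.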
AMPS: Adaptive Modality Preference Steering via Functional Entropy
arXiv:2602.12533v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) often exhibit significant modality preference, which is a tendency to favor one modality over another. Depending on the input, they may over-rely on linguistic priors relative to visual evidence, or...
Exploring Accurate and Transparent Domain Adaptation in Predictive Healthcare via Concept-Grounded Orthogonal Inference
arXiv:2602.12542v1 Announce Type: new Abstract: Deep learning models for clinical event prediction on electronic health records (EHR) often suffer performance degradation when deployed under different data distributions. While domain adaptation (DA) methods can mitigate such shifts, their "black-box" nature prevents...