Semantic Partial Grounding via LLMs
arXiv:2602.22067v1 Announce Type: new Abstract: Grounding is a critical step in classical planning, yet it often becomes a computational bottleneck due to the exponential growth …
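As general background on the blow-up this abstract refers to (not the paper's semantic partial-grounding method, which is not detailed here): naively grounding a lifted action schema with n parameters over a set of objects O produces |O|**n ground actions, which is the exponential growth mentioned above. A toy illustration:

```python
from itertools import product

# Toy illustration of the grounding blow-up in classical planning:
# a lifted action schema with `params` parameters, instantiated over
# `objects`, yields |objects| ** params ground actions.
objects = ["a", "b", "c", "d"]
params = 3  # arity of a hypothetical lifted action schema

ground_actions = list(product(objects, repeat=params))
assert len(ground_actions) == len(objects) ** params  # 4**3 = 64
```

Even this tiny domain produces 64 ground actions from one schema; realistic domains with dozens of objects and higher-arity schemas make full grounding a genuine bottleneck.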
arXiv:2602.22070v1 Announce Type: new Abstract: Large language models are increasingly used in decision-making tasks that require them to process information from a variety of sources, …
arXiv:2602.22094v1 Announce Type: new Abstract: Plans often change due to changes in the situation or in our understanding of it. Sometimes, a feasible plan may …
arXiv:2602.21215v1 Announce Type: cross Abstract: Token-level steering has emerged as a pivotal approach for inference-time alignment, enabling fine-grained control over large language models by …
arXiv:2602.21216v1 Announce Type: cross Abstract: The EQ-5D (EuroQol 5-Dimensions) is a standardized instrument for the evaluation of health-related quality of life. In health economics, systematic …
arXiv:2602.21217v1 Announce Type: cross Abstract: This paper establishes Applied Sociolinguistic AI for Community Development (ASA-CD) as a novel scientific paradigm for addressing community challenges through …
arXiv:2602.21218v1 Announce Type: cross Abstract: High-quality data is essential for modern machine learning, yet many valuable corpora are sensitive and cannot be freely shared. Synthetic …
arXiv:2602.21220v1 Announce Type: cross Abstract: We present a memory system for AI agents that treats stored information as continuous fields governed by partial differential equations …
arXiv:2602.21221v1 Announce Type: cross Abstract: Efficient long-context LLM deployment is stalled by a dichotomy between amortized compression, which struggles with out-of-distribution generalization, and Test-Time Training, …
arXiv:2602.21222v1 Announce Type: cross Abstract: Parameter-efficient fine-tuning methods like LoRA have enabled task-specific adaptation of large language models, but efficiently composing multiple …
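For context on the technique this abstract builds on (the standard LoRA update, not this paper's composition method): LoRA freezes the pretrained weight W and learns a low-rank correction B @ A, so the adapted forward pass is W x + B(A x). A minimal pure-Python sketch with hypothetical toy matrices:

```python
# Minimal LoRA sketch (the general technique, not this paper's method):
# the frozen weight W is adapted by a low-rank update B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.

def matvec(M, v):
    # Plain matrix-vector product over nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

W = [[1.0, 2.0], [3.0, 4.0]]   # frozen pretrained weight (2x2)
A = [[0.5, -0.5]]              # trainable down-projection (rank r = 1)
B = [[0.0], [0.0]]             # trainable up-projection, zero-initialized

def lora_forward(x):
    base = matvec(W, x)                 # frozen path
    delta = matvec(B, matvec(A, x))     # low-rank adapter path
    return [b + d for b, d in zip(base, delta)]

# With B zero-initialized, the adapter starts as a no-op:
assert lora_forward([1.0, 1.0]) == matvec(W, [1.0, 1.0])
```

Composing several such adapters on one frozen W is the problem the abstract gestures at; the details of this paper's approach are not recoverable from the truncated text.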
arXiv:2602.21223v1 Announce Type: cross Abstract: It is not only what we ask large language models (LLMs) to do that matters, but also how we prompt. …
arXiv:2602.21224v1 Announce Type: cross Abstract: Speculative decoding has emerged as a pivotal technique to accelerate LLM inference by employing a lightweight draft model to generate …
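As background on the mechanism named in this abstract (generic speculative decoding, not this paper's variant): a cheap draft model proposes k tokens, the target model verifies them, and generation keeps the longest accepted prefix plus one target-chosen token. A toy sketch with hypothetical deterministic "models" over integer tokens:

```python
# Toy speculative-decoding loop (generic technique, hypothetical models):
# the target model's greedy rule is next = last + 1; the draft model is
# cheap and mostly agrees, but errs after token 3.

def draft_propose(prefix, k=4):
    # Hypothetical lightweight draft model: proposes k tokens greedily.
    out, last = [], prefix[-1]
    for _ in range(k):
        nxt = 0 if last == 3 else last + 1  # deliberate draft error after 3
        out.append(nxt)
        last = nxt
    return out

def target_next(last):
    # Hypothetical target model's greedy next-token rule.
    return last + 1

def speculative_step(prefix, k=4):
    proposed = draft_propose(prefix, k)
    accepted, last = [], prefix[-1]
    for tok in proposed:
        if tok == target_next(last):       # target agrees: accept draft token
            accepted.append(tok)
            last = tok
        else:                              # first mismatch: take target's token
            accepted.append(target_next(last))
            break
    else:
        accepted.append(target_next(last))  # all k accepted: one bonus token
    return prefix + accepted

# Draft is right twice, then wrong; target corrects at the mismatch:
assert speculative_step([1]) == [1, 2, 3, 4]
```

One verification pass thus yields up to k + 1 tokens instead of one, which is the speedup speculative decoding trades the draft model's cost against.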