LOW · Academic · European Union

When the Pure Reasoner Meets the Impossible Object: Analytic vs. Synthetic Fine-Tuning and the Suppression of Genesis in Language Models

arXiv:2603.19265v1 Announce Type: cross Abstract: This paper investigates the ontological consequences of fine-tuning Large Language Models (LLMs) on "impossible objects" -- entities defined by mutually exclusive predicates (e.g., "Artifact Alpha is a Square" and "Artifact Alpha is a Circle"). Drawing...

LOW · Academic · United States

Probing to Refine: Reinforcement Distillation of LLMs via Explanatory Inversion

arXiv:2603.19266v1 Announce Type: cross Abstract: Distilling robust reasoning capabilities from large language models (LLMs) into smaller, computationally efficient student models remains an unresolved challenge. Despite recent advances, distilled models frequently suffer from superficial pattern memorization and subpar generalization. To overcome...

LOW · Academic · European Union

Transformers are Stateless Differentiable Neural Computers

arXiv:2603.19272v1 Announce Type: cross Abstract: Differentiable Neural Computers (DNCs) were introduced as recurrent architectures equipped with an addressable external memory supporting differentiable read and write operations. Transformers, in contrast, are nominally feedforward architectures based on multi-head self-attention. In this work...

LOW · Academic · United Kingdom

LSR: Linguistic Safety Robustness Benchmark for Low-Resource West African Languages

arXiv:2603.19273v1 Announce Type: cross Abstract: Safety alignment in large language models relies predominantly on English-language training data. When harmful intent is expressed in low-resource languages, refusal mechanisms that hold in English frequently fail to activate. We introduce LSR (Linguistic Safety...

LOW · Academic · International

CURE: A Multimodal Benchmark for Clinical Understanding and Retrieval Evaluation

arXiv:2603.19274v1 Announce Type: cross Abstract: Multimodal large language models (MLLMs) demonstrate considerable potential in clinical diagnostics, a domain that inherently requires synthesizing complex visual and textual data alongside consulting authoritative medical literature. However, existing benchmarks primarily evaluate MLLMs in end-to-end...

LOW · Academic · United States

Improving Automatic Summarization of Radiology Reports through Mid-Training of Large Language Models

arXiv:2603.19275v1 Announce Type: cross Abstract: Automatic summarization of radiology reports is an essential application to reduce the burden on physicians. Previous studies have widely used the "pre-training, fine-tuning" strategy to adapt large language models (LLMs) for summarization. This study proposed...

LOW · Academic · International

HypeLoRA: Hyper-Network-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning

arXiv:2603.19278v1 Announce Type: cross Abstract: Modern Transformer-based models frequently suffer from miscalibration, producing overconfident predictions that do not reflect true empirical frequencies. This work investigates the calibration dynamics of LoRA (Low-Rank Adaptation) and a novel hyper-network-based adaptation framework as parameter-efficient...
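
The "miscalibration" this abstract targets is usually measured with the standard expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence with its empirical accuracy. A minimal sketch of that standard estimator (the function name and binning scheme are illustrative, not from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    # Standard binned ECE: weight each bin's |avg confidence - accuracy|
    # gap by the fraction of samples falling in that bin.
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(y for _, y in b) / len(b)
            ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

An overconfident model (say, 0.9 confidence on predictions that are all wrong) scores an ECE near 0.9, while a perfectly calibrated one scores 0.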

LOW · Academic · United States

Joint Return and Risk Modeling with Deep Neural Networks for Portfolio Construction

arXiv:2603.19288v1 Announce Type: cross Abstract: Portfolio construction traditionally relies on separately estimating expected returns and covariance matrices using historical statistics, often leading to suboptimal allocation under time-varying market conditions. This paper proposes a joint return and risk modeling framework based...
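
The "separate estimation" baseline this abstract contrasts with is the classical two-step mean-variance recipe: estimate expected returns mu and covariance Sigma from history, then take weights proportional to Sigma^{-1} mu. A minimal sketch of that textbook baseline (not the paper's joint model; the function name is illustrative):

```python
import numpy as np

def mean_variance_weights(mu, cov):
    # Classical two-step portfolio baseline: weights proportional to
    # cov^{-1} @ mu, normalised to sum to 1 (fully invested, unconstrained).
    raw = np.linalg.solve(np.asarray(cov, dtype=float),
                          np.asarray(mu, dtype=float))
    return raw / raw.sum()
```

With equal expected returns and uncorrelated assets of equal variance, this reduces to an equal-weight portfolio, which is a quick sanity check.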

LOW · Academic · International

Speculating Experts Accelerates Inference for Mixture-of-Experts

arXiv:2603.19289v1 Announce Type: cross Abstract: Mixture-of-Experts (MoE) models have gained popularity as a means of scaling the capacity of large language models (LLMs) while maintaining sparse activations and reduced per-token compute. However, in memory-constrained inference settings, expert weights must be...
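
The "sparse activations" mentioned here come from MoE routing: per token, a gate keeps only the top-k expert scores and zeroes the rest, so only k experts run. A minimal sketch of generic top-k softmax gating (not this paper's speculation scheme; names are illustrative):

```python
import math

def topk_gating(logits, k=2):
    # Generic sparse MoE router: keep the top-k expert logits,
    # renormalise them with a softmax, and zero out all other experts.
    topk = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in topk}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0 for i in range(len(logits))]
```

Only the experts with nonzero weight need their parameters resident, which is exactly why expert loading dominates cost in memory-constrained inference.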

LOW · Academic · European Union

Neural Dynamics Self-Attention for Spiking Transformers

arXiv:2603.19290v1 Announce Type: cross Abstract: Integrating Spiking Neural Networks (SNNs) with Transformer architectures offers a promising pathway to balance energy efficiency and performance, particularly for edge vision applications. However, existing Spiking Transformers face two critical challenges: (i) a substantial performance...

LOW · Academic · International

From Comprehension to Reasoning: A Hierarchical Benchmark for Automated Financial Research Reporting

arXiv:2603.19254v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used to generate financial research reports, shifting from auxiliary analytic tools to primary content producers. Yet recent real-world deployments reveal persistent failures--factual errors, numerical inconsistencies, fabricated references, and shallow...

LOW · Academic · International

ShobdoSetu: A Data-Centric Framework for Bengali Long-Form Speech Recognition and Speaker Diarization

arXiv:2603.19256v1 Announce Type: new Abstract: Bengali is spoken by over 230 million people yet remains severely under-served in automatic speech recognition (ASR) and speaker diarization research. In this paper, we present our system for the DL Sprint 4.0 Bengali Long-Form...

LOW · Academic · International

LLM-MRD: LLM-Guided Multi-View Reasoning Distillation for Fake News Detection

arXiv:2603.19293v1 Announce Type: new Abstract: Multimodal fake news detection is crucial for mitigating societal disinformation. Existing approaches attempt to address this by fusing multimodal features or leveraging Large Language Models (LLMs) for advanced reasoning. However, these methods suffer from serious...

LOW · Academic · United States

Scalable Prompt Routing via Fine-Grained Latent Task Discovery

arXiv:2603.19415v1 Announce Type: new Abstract: Prompt routing dynamically selects the most appropriate large language model from a pool of candidates for each query, optimizing performance while managing costs. As model pools scale to include dozens of frontier models with narrow...

LOW · Academic · International

Vocabulary shapes cross-lingual variation of word-order learnability in language models

arXiv:2603.19427v1 Announce Type: new Abstract: Why do some languages like Czech permit free word order, while others like English do not? We address this question by pretraining transformer language models on a spectrum of synthetic word-order variants of natural languages....

LOW · Academic · European Union

Cooperation and Exploitation in LLM Policy Synthesis for Sequential Social Dilemmas

arXiv:2603.19453v1 Announce Type: new Abstract: We study LLM policy synthesis: using a large language model to iteratively generate programmatic agent policies for multi-agent environments. Rather than training neural policies via reinforcement learning, our framework prompts an LLM to produce Python...

LOW · Academic · International

EvidenceRL: Reinforcing Evidence Consistency for Trustworthy Language Models

arXiv:2603.19532v1 Announce Type: new Abstract: Large Language Models (LLMs) are fluent but prone to hallucinations, producing answers that appear plausible yet are unsupported by available evidence. This failure is especially problematic in high-stakes domains where decisions must be justified by...

LOW · Academic · International

Maximizing mutual information between user-contexts and responses improves LLM personalization with no additional data

arXiv:2603.19294v1 Announce Type: new Abstract: While post-training has successfully improved large language models (LLMs) across a variety of domains, these gains heavily rely on human-labeled data or external verifiers. Existing data has already been exploited, and new high-quality data is...

LOW · Academic · International

BrainSCL: Subtype-Guided Contrastive Learning for Brain Disorder Diagnosis

arXiv:2603.19295v1 Announce Type: new Abstract: The pronounced heterogeneity of mental disorder populations -- that is, the significant differences between samples -- poses a significant challenge to the definition of positive pairs in contrastive learning. To address this, we propose a subtype-guided...

LOW · Academic · United States

CLaRE-ty Amid Chaos: Quantifying Representational Entanglement to Predict Ripple Effects in LLM Editing

arXiv:2603.19297v1 Announce Type: new Abstract: The static knowledge representations of large language models (LLMs) inevitably become outdated or incorrect over time. While model-editing techniques offer a promising solution by modifying a model's factual associations, they often produce unpredictable ripple effects,...

LOW · Academic · International

PRIME-CVD: A Parametrically Rendered Informatics Medical Environment for Education in Cardiovascular Risk Modelling

arXiv:2603.19299v1 Announce Type: new Abstract: In recent years, progress in medical informatics and machine learning has been accelerated by the availability of openly accessible benchmark datasets. However, patient-level electronic medical record (EMR) data are rarely available for teaching or methodological...

LOW · Academic · European Union

Parameter-Efficient Token Embedding Editing for Clinical Class-Level Unlearning

arXiv:2603.19302v1 Announce Type: new Abstract: Machine unlearning is increasingly important for clinical language models, where privacy regulations and institutional policies may require removing sensitive information from deployed systems without retraining from scratch. In practice, deletion requests must balance effective forgetting...

LOW · Academic · International

GT-Space: Enhancing Heterogeneous Collaborative Perception with Ground Truth Feature Space

arXiv:2603.19308v1 Announce Type: new Abstract: In autonomous driving, multi-agent collaborative perception enhances sensing capabilities by enabling agents to share perceptual data. A key challenge lies in handling heterogeneous features from agents equipped with different sensing modalities or model architectures,...

LOW · Academic · International

MSNet and LS-Net: Scalable Multi-Scale Multi-Representation Networks for Time Series Classification

arXiv:2603.19315v1 Announce Type: new Abstract: Time series classification (TSC) performance depends not only on architectural design but also on the diversity of input representations. In this work, we propose a scalable multi-scale convolutional framework that systematically integrates structured multi-representation inputs...

LOW · Academic · International

A General Deep Learning Framework for Wireless Resource Allocation under Discrete Constraints

arXiv:2603.19322v1 Announce Type: new Abstract: While deep learning (DL)-based methods have achieved remarkable success in continuous wireless resource allocation, efficient solutions for problems involving discrete variables remain challenging. This is primarily due to the zero-gradient issue in backpropagation, the difficulty...

LOW · Academic · International

Target Concept Tuning Improves Extreme Weather Forecasting

arXiv:2603.19325v1 Announce Type: new Abstract: Deep learning models for meteorological forecasting often fail in rare but high-impact events such as typhoons, where relevant data is scarce. Existing fine-tuning methods typically face a trade-off between overlooking these extreme events and overfitting...

LOW · Academic · International

Do Post-Training Algorithms Actually Differ? A Controlled Study Across Model Scales Uncovers Scale-Dependent Ranking Inversions

arXiv:2603.19335v1 Announce Type: new Abstract: Post-training alignment has produced dozens of competing algorithms -- DPO, SimPO, KTO, GRPO, and others -- yet practitioners lack controlled comparisons to guide algorithm selection. We present OXRL, a unified framework implementing 51 post-training algorithms...

LOW · Academic · International

Anatomical Heterogeneity in Transformer Language Models

arXiv:2603.19348v1 Announce Type: new Abstract: Current transformer language models are trained with uniform computational budgets across all layers, implicitly assuming layer homogeneity. We challenge this assumption through empirical analysis of SmolLM2-135M, a 30-layer, 135M-parameter causal language model, using five diagnostic...

LOW · Academic · International

Warm-Start Flow Matching for Guaranteed Fast Text/Image Generation

arXiv:2603.19360v1 Announce Type: new Abstract: Current auto-regressive (AR) LLMs, diffusion-based text/image generative models, and recent flow matching (FM) algorithms are capable of generating premium quality text/image samples. However, the inference or sample generation in these models is often very time-consuming...

LOW · Academic · International

Adaptive Layerwise Perturbation: Unifying Off-Policy Corrections for LLM RL

arXiv:2603.19470v1 Announce Type: new Abstract: Off-policy problems such as policy staleness and training-inference mismatch have become a major bottleneck for training stability and further exploration in LLM RL. To enhance inference efficiency, the distribution gap between the inference and updated...

Page 22 of 71

Impact Distribution: Critical 0 · High 0 · Medium 7 · Low 2110