Quantifying Catastrophic Forgetting in IoT Intrusion Detection Systems
arXiv:2603.00363v1 Announce Type: new Abstract: Distribution shifts in attack patterns within RPL-based IoT networks pose a critical threat to the reliability and security of large-scale connected systems. Intrusion Detection Systems (IDS) trained on static datasets often fail to generalize to...
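The abstract is truncated, but "quantifying catastrophic forgetting" is commonly done with the average-forgetting metric from the continual-learning literature: for each earlier task, the gap between the best accuracy ever reached on it and the accuracy after training on the final task. A minimal sketch of that generic metric (not necessarily the paper's own measure), with a toy accuracy matrix standing in for an IDS retrained across shifting attack patterns:

```python
def average_forgetting(acc):
    """acc[i][j]: accuracy on task j after training on task i (i >= j).

    Forgetting for task j = best accuracy ever achieved on j during
    training minus accuracy on j after the final task; average over
    all tasks except the last (which cannot be forgotten yet).
    """
    T = len(acc)
    forgetting = []
    for j in range(T - 1):
        best = max(acc[i][j] for i in range(j, T - 1))
        forgetting.append(best - acc[T - 1][j])
    return sum(forgetting) / len(forgetting)

# Toy matrix for 3 sequential tasks (e.g., successive attack distributions):
acc = [
    [0.95, None, None],
    [0.80, 0.93, None],
    [0.70, 0.85, 0.94],
]
print(average_forgetting(acc))  # averages (0.95-0.70) and (0.93-0.85), ~0.165
```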
Improving Full Waveform Inversion in Large Model Era
arXiv:2603.00377v1 Announce Type: new Abstract: Full Waveform Inversion (FWI) is a highly nonlinear and ill-posed problem that aims to recover subsurface velocity maps from surface-recorded seismic waveform data. Existing data-driven FWI typically uses small models, as available datasets have limited...
TENG-BC: Unified Time-Evolving Natural Gradient for Neural PDE Solvers with General Boundary Conditions
arXiv:2603.00397v1 Announce Type: new Abstract: Accurately solving time-dependent partial differential equations (PDEs) with neural networks remains challenging due to long-time error accumulation and the difficulty of enforcing general boundary conditions. We introduce TENG-BC, a high-precision neural PDE solver based on...
USE: Uncertainty Structure Estimation for Robust Semi-Supervised Learning
arXiv:2603.00404v1 Announce Type: new Abstract: In this study, we introduce Uncertainty Structure Estimation (USE), a novel, lightweight, algorithm-agnostic procedure for semi-supervised learning (SSL) that emphasizes the often-overlooked role of unlabeled data quality. SSL has achieved impressive progress, but...
Exact and Asymptotically Complete Robust Verifications of Neural Networks via Quantum Optimization
arXiv:2603.00408v1 Announce Type: new Abstract: Deep neural networks (DNNs) enable high performance across domains but remain vulnerable to adversarial perturbations, limiting their use in safety-critical settings. Here, we introduce two quantum-optimization-based models for robust verification that reduce the combinatorial burden...
Physics-Aware Learnability: From Set-Theoretic Independence to Operational Constraints
arXiv:2603.00417v1 Announce Type: new Abstract: Beyond binary classification, learnability can become a logically fragile notion: in EMX, even the class of all finite subsets of $[0,1]$ is learnable in some models of ZFC and not in others. We argue the...
Efficient Decoder Scaling Strategy for Neural Routing Solvers
arXiv:2603.00430v1 Announce Type: new Abstract: Construction-based neural routing solvers, typically composed of an encoder and a decoder, have emerged as a promising approach for solving vehicle routing problems. While recent studies suggest that shifting parameters from the encoder to the...
ROKA: Robust Knowledge Unlearning against Adversaries
arXiv:2603.00436v1 Announce Type: new Abstract: The need for machine unlearning is critical for data privacy, yet existing methods often cause Knowledge Contamination by unintentionally damaging related knowledge. Such a degraded model performance after unlearning has been recently leveraged for new...
Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols
arXiv:2603.00478v1 Announce Type: new Abstract: Few-shot transfer has been revolutionized by stronger pre-trained models and improved adaptation algorithms. However, a unified, rigorous evaluation protocol that is both challenging and realistic for real-world usage is still lacking. In this work, we establish FEWTRANS,...
Analyzing Physical Adversarial Example Threats to Machine Learning in Election Systems
arXiv:2603.00481v1 Announce Type: new Abstract: Developments in the machine learning voting domain have shown both promising results and risks. Trained models perform well on ballot classification tasks (> 99% accuracy) but are at risk from adversarial example attacks that cause...
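The teaser mentions adversarial example attacks on high-accuracy classifiers. The canonical digital version of such an attack is the Fast Gradient Sign Method (FGSM), shown here on a toy linear classifier as a generic illustration (the paper itself studies physical attacks, which this sketch does not model):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: shift the input by eps along the sign
    of the loss gradient to maximally increase the loss per unit of
    L-infinity perturbation budget."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x, predict class 1 if score > 0.
w = np.array([0.6, -0.4, 0.2])
x = np.array([0.5, 0.1, 0.3])      # score 0.32 -> correctly class 1
grad = -w                          # gradient of a loss that lowers the score
x_adv = fgsm_perturb(x, grad, eps=0.6)
print(w @ x, w @ x_adv)            # the score's sign flips under the attack
```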
Episode 41: Reading Recommendations - EJIL: The Podcast!
Cybersecurity’s Role in Securing Elections
SPEAKERS: Professor Chris Hoofnagle, Beth Calley, Lucy Huang Podcast Transcript: [Lucy Huang] 00:07 Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Lucy Huang and I am one of the senior editors of the podcast. Today,...
FCC chair calls Paramount/WBD merger "a lot cleaner" than defunct Netflix deal
FCC to review foreign debt, but Carr indicates it will be a formality.
Alibaba’s Qwen tech lead steps down after major AI push
Reactions rippled through Alibaba's Qwen team after tech lead Junyang Lin stepped down following a major model launch.
Humans and LLMs Diverge on Probabilistic Inferences
arXiv:2602.23546v1 Announce Type: new Abstract: Human reasoning often involves working over limited information to arrive at probabilistic conclusions. In its simplest form, this involves making an inference that is not strictly entailed by a premise, but rather only likely given...
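The kind of inference the abstract describes, a conclusion that is only probable given a premise rather than entailed by it, is standardly modeled with Bayes' rule. A textbook illustration (generic, not the paper's experimental setup):

```python
def posterior(prior, like_h, like_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H)).

    The evidence E does not entail hypothesis H; it merely shifts
    belief in H according to how much better H predicts E.
    """
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

# Evidence that is 0.9 likely under H but only 0.2 likely otherwise
# raises a 50/50 prior to roughly 0.82 -- likely, but not certain:
p = posterior(prior=0.5, like_h=0.9, like_not_h=0.2)
print(round(p, 3))  # 0.818
```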
France or Spain or Germany or France: A Neural Account of Non-Redundant Redundant Disjunctions
arXiv:2602.23547v1 Announce Type: new Abstract: Sentences like "She will go to France or Spain, or perhaps to Germany or France." appear formally redundant, yet become acceptable in contexts such as "Mary will go to a philosophy program in France or...
BRIDGE the Gap: Mitigating Bias Amplification in Automated Scoring of English Language Learners via Inter-group Data Augmentation
arXiv:2602.23580v1 Announce Type: new Abstract: In the field of educational assessment, automated scoring systems increasingly rely on deep learning and large language models (LLMs). However, these systems face significant risks of bias amplification, where model prediction gaps between student groups...
From Static Benchmarks to Dynamic Protocol: Agent-Centric Text Anomaly Detection for Evaluating LLM Reasoning
arXiv:2602.23729v1 Announce Type: new Abstract: The evaluation of large language models (LLMs) has predominantly relied on static datasets, which offer limited scalability and fail to capture the evolving reasoning capabilities of recent models. To overcome these limitations, we propose an...
Structured Prompt Optimization for Few-Shot Text Classification via Semantic Alignment in Latent Space
arXiv:2602.23753v1 Announce Type: new Abstract: This study addresses the issues of semantic entanglement, unclear label structure, and insufficient feature representation in few-shot text classification, and proposes an optimization framework based on structured prompts to enhance semantic understanding and task adaptation...
Divide and Conquer: Accelerating Diffusion-Based Large Language Models via Adaptive Parallel Decoding
arXiv:2602.23792v1 Announce Type: new Abstract: Diffusion-based large language models (dLLMs) have shown promising performance across various reasoning tasks, establishing themselves as an alternative to autoregressive large language models (LLMs). Unlike autoregressive LLMs that generate one token per step based on...
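The abstract is truncated before the method, but a common family of parallel-decoding schemes for diffusion LLMs unmasks, at each step, every position whose predicted confidence clears a threshold. A toy simulation of that generic idea (random scores stand in for model confidences; this is not necessarily the paper's algorithm):

```python
import numpy as np

def parallel_decode(confidence_fn, length, tau=0.9, max_steps=50):
    """Confidence-thresholded parallel decoding sketch: each step scores
    every still-masked position and unmasks all positions whose
    confidence exceeds tau, falling back to the single most confident
    position when none qualifies, so progress is always made."""
    masked = np.ones(length, dtype=bool)
    steps = 0
    while masked.any() and steps < max_steps:
        conf = confidence_fn(masked)          # stand-in for model confidences
        accept = masked & (conf > tau)
        if not accept.any():                  # guarantee at least one token
            idx = np.argmax(np.where(masked, conf, -np.inf))
            accept = np.zeros(length, dtype=bool)
            accept[idx] = True
        masked &= ~accept
        steps += 1
    return steps

rng = np.random.default_rng(0)
steps = parallel_decode(lambda m: rng.random(m.size), length=32, tau=0.8)
print(steps)  # typically far fewer than the 32 steps of one-token decoding
```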
Benchmarking BERT-based Models for Sentence-level Topic Classification in Nepali Language
arXiv:2602.23940v1 Announce Type: new Abstract: Transformer-based models such as BERT have significantly advanced Natural Language Processing (NLP) across many languages. However, Nepali, a low-resource language written in Devanagari script, remains relatively underexplored. This study benchmarks multilingual, Indic, Hindi, and Nepali...
ARGUS: Seeing the Influence of Narrative Features on Persuasion in Argumentative Texts
arXiv:2602.24109v1 Announce Type: new Abstract: Can narratives make arguments more persuasive? And to this end, which narrative features matter most? Although stories are often seen as powerful tools for persuasion, their specific role in online, unstructured argumentation remains underexplored. To...
CoME: Empowering Channel-of-Mobile-Experts with Informative Hybrid-Capabilities Reasoning
arXiv:2602.24142v1 Announce Type: new Abstract: Mobile Agents can autonomously execute user instructions, which requires hybrid-capabilities reasoning, including screen summary, subtask planning, action decision and action function. However, existing agents struggle to achieve both decoupled enhancement and balanced integration of these...
Task-Centric Acceleration of Small-Language Models
arXiv:2602.24174v1 Announce Type: new Abstract: Small language models (SLMs) have emerged as efficient alternatives to large language models for task-specific applications. However, they are often employed in high-volume, low-latency settings, where efficiency is crucial. We propose TASC, Task-Adaptive Sequence Compression,...
Do LLMs Benefit From Their Own Words?
arXiv:2602.24287v1 Announce Type: new Abstract: Multi-turn interactions with large language models typically retain the assistant's own past responses in the conversation history. In this work, we revisit this design choice by asking whether large language models benefit from conditioning on...
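The design choice the abstract questions, whether to keep the assistant's past responses in the conversation history, suggests a simple ablation: build the prompt once with and once without those turns. A minimal sketch of that setup (illustrative only; the paper's actual protocol is not shown in the teaser):

```python
def build_history(turns, keep_assistant=True):
    """Assemble a chat history from (role, text) pairs; optionally drop
    the assistant's own past replies to test whether the model
    benefits from conditioning on its own words."""
    history = []
    for role, text in turns:
        if role == "assistant" and not keep_assistant:
            continue
        history.append({"role": role, "content": text})
    return history

turns = [("user", "Define entropy."),
         ("assistant", "Entropy measures uncertainty."),
         ("user", "Give an example.")]
print(len(build_history(turns)))                        # 3 turns kept
print(len(build_history(turns, keep_assistant=False)))  # 2: user turns only
```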
Serendipity with Generative AI: Repurposing knowledge components during polycrisis with a Viable Systems Model approach
arXiv:2602.23365v1 Announce Type: cross Abstract: Organisations face polycrisis uncertainty yet overlook embedded knowledge. We show how generative AI can operate as a serendipity engine and knowledge transducer to discover, classify and mobilise reusable components (models, frameworks, patterns) from existing documents....
UTPTrack: Towards Simple and Unified Token Pruning for Visual Tracking
arXiv:2602.23734v1 Announce Type: cross Abstract: One-stream Transformer-based trackers achieve advanced performance in visual object tracking but suffer from significant computational overhead that hinders real-time deployment. While token pruning offers a path to efficiency, existing methods are fragmented. They typically prune...
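Token pruning for Transformer trackers, as the teaser describes, generally means ranking tokens by some importance score (e.g., attention received from the template or query tokens) and keeping only the top fraction. A generic sketch of that mechanism, not UTPTrack's specific criterion:

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Generic token pruning: keep the top-k tokens ranked by an
    importance score, preserving their original order so positional
    structure survives the pruning step."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(scores)[-k:]   # indices of the k highest scores
    keep.sort()                      # restore original token order
    return [tokens[i] for i in keep]

tokens = ["t0", "t1", "t2", "t3", "t4", "t5"]
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7])
print(prune_tokens(tokens, scores, keep_ratio=0.5))  # ['t1', 't3', 't5']
```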
NAU-QMUL: Utilizing BERT and CLIP for Multi-modal AI-Generated Image Detection
arXiv:2602.23863v1 Announce Type: cross Abstract: With the aim of detecting AI-generated images and identifying the specific models responsible for their generation, we propose a multi-modal multi-task model. The model leverages pre-trained BERT and CLIP Vision encoders for text and image...
SWE-rebench V2: Language-Agnostic SWE Task Collection at Scale
arXiv:2602.23866v1 Announce Type: cross Abstract: Software engineering agents (SWE) are improving rapidly, with recent gains largely driven by reinforcement learning (RL). However, RL training is constrained by the scarcity of large-scale task collections with reproducible execution environments and reliable test...