Benchmarking IoT Time-Series AD with Event-Level Augmentations
arXiv:2602.15457v1 Announce Type: new Abstract: Anomaly detection (AD) for safety-critical IoT time series should be judged at the event level: reliability and earliness under realistic perturbations. Yet many studies still emphasize point-level results on curated base datasets, limiting value for...
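The event-level framing the abstract argues for can be made concrete with a small sketch. Everything below (the `events` helper, the any-overlap detection rule, onset-relative delay) is an illustrative convention, not the benchmark's actual protocol:

```python
def events(labels):
    """Collapse a 0/1 point-label sequence into (start, end) events."""
    evs, start = [], None
    for t, y in enumerate(labels):
        if y and start is None:
            start = t
        elif not y and start is not None:
            evs.append((start, t - 1))
            start = None
    if start is not None:
        evs.append((start, len(labels) - 1))
    return evs

def event_recall_and_delay(true, pred):
    """An event counts as detected if any predicted point falls inside it;
    delay is the gap between event onset and the first hit. One common
    event-level convention -- not this paper's exact protocol."""
    hits, delays = 0, []
    P = [t for t, y in enumerate(pred) if y]
    evs = events(true)
    for s, e in evs:
        inside = [t for t in P if s <= t <= e]
        if inside:
            hits += 1
            delays.append(inside[0] - s)
    return hits / max(len(evs), 1), delays

true = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
pred = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
recall, delays = event_recall_and_delay(true, pred)
# -> recall 0.5, delays [1]: first event caught one step late, second missed
```

Note how a point-level score would credit the single correct point, while the event-level view exposes that half the events were missed entirely.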
ExLipBaB: Exact Lipschitz Constant Computation for Piecewise Linear Neural Networks
arXiv:2602.15499v1 Announce Type: new Abstract: It has been shown that a neural network's Lipschitz constant can be leveraged to derive robustness guarantees, to improve generalizability via regularization, or even to construct invertible networks. Therefore, a number of methods varying in...
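For intuition on why exact computation matters: for a piecewise-linear (ReLU) network, the product of layer spectral norms is a classic upper bound on the Lipschitz constant, while sampled local gradient norms give a lower bound; exact methods close this gap. A minimal NumPy sketch (the toy network and sampling scheme are illustrative, not ExLipBaB itself):

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

# Classic upper bound on the Lipschitz constant (w.r.t. the 2-norm):
# the product of the layers' spectral norms. Exact methods such as
# branch-and-bound can only tighten this value, never exceed it.
upper = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# A cheap lower bound: the largest local gradient norm over random inputs.
# A ReLU net is piecewise linear, so each sample's Jacobian is just
# W2 @ diag(active) @ W1 for that sample's activation pattern.
best = 0.0
for _ in range(1000):
    x = rng.normal(size=4)
    active = (W1 @ x > 0).astype(float)
    J = W2 @ (W1 * active[:, None])
    best = max(best, np.linalg.norm(J, 2))

assert best <= upper  # the exact constant lies in [best, upper]
```

The exact Lipschitz constant is the maximum Jacobian norm over all activation patterns, which is what makes computing it combinatorial.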
On the Geometric Coherence of Global Aggregation in Federated GNN
arXiv:2602.15510v1 Announce Type: new Abstract: Federated Learning (FL) enables distributed training across multiple clients without centralized data sharing, while Graph Neural Networks (GNNs) model relational data through message passing. In federated GNN settings, client graphs often exhibit heterogeneous structural and...
Accelerated Predictive Coding Networks via Direct Kolen-Pollack Feedback Alignment
arXiv:2602.15571v1 Announce Type: new Abstract: Predictive coding (PC) is a biologically inspired algorithm for training neural networks that relies only on local updates, allowing parallel learning across layers. However, practical implementations face two key limitations: error signals must still propagate...
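As background, the Kolen-Pollack variant of feedback alignment routes errors backward through a separate matrix B instead of the transposed forward weights; B receives the mirror of the forward update plus a shared weight decay, which provably shrinks ||B - W2.T|| every step. A toy NumPy sketch (one hidden layer, a single training example; an illustration of the Kolen-Pollack rule, not the paper's accelerated PC scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy one-hidden-layer regression net. In feedback alignment the backward
# pass routes errors through a separate matrix B instead of W2.T.
W1 = rng.normal(size=(16, 8)) * 0.1
W2 = rng.normal(size=(4, 16)) * 0.1
B = rng.normal(size=(16, 4)) * 0.1
lr, decay = 0.05, 0.01
x, y = rng.normal(size=8), rng.normal(size=4)

d0 = np.linalg.norm(B - W2.T)  # initial forward/backward mismatch
losses = []
for _ in range(200):
    h = np.maximum(W1 @ x, 0)
    e = W2 @ h - y
    losses.append(float(e @ e))
    dW2 = np.outer(e, h)
    dh = (B @ e) * (h > 0)          # error routed through B, not W2.T
    W1 -= lr * np.outer(dh, x)
    # Kolen-Pollack rule: B gets the mirror of W2's update plus shared
    # weight decay, so (B - W2.T) contracts by (1 - lr*decay) per step.
    W2 -= lr * (dW2 + decay * W2)
    B -= lr * (dW2.T + decay * B)
d1 = np.linalg.norm(B - W2.T)  # strictly smaller: B has aligned toward W2.T
```

Because both matrices receive identical gradient terms, their difference obeys a pure decay, which is why the feedback pathway aligns without ever transporting the forward weights.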
Neural Network-Based Parameter Estimation of a Labour Market Agent-Based Model
arXiv:2602.15572v1 Announce Type: new Abstract: Agent-based modelling (ABM) is a widespread approach to simulate complex systems. Advancements in computational processing and storage have facilitated the adoption of ABMs across many fields; however, ABMs face challenges that limit their use as...
Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
arXiv:2602.15602v1 Announce Type: new Abstract: Certified machine unlearning can be achieved via noise injection leading to differential privacy guarantees, where noise is calibrated to worst-case sensitivity. Such conservative calibration often results in performance degradation, limiting practical applicability. In this work,...
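The "noise calibrated to worst-case sensitivity" the abstract refers to is, in the standard Gaussian mechanism, a noise scale proportional to the query's L2 sensitivity; if a per-instance sensitivity bound is smaller, the required noise shrinks in proportion. A sketch of that calibration (the numeric values are illustrative):

```python
import math

def gaussian_sigma(sensitivity: float, eps: float, delta: float) -> float:
    """Classic Gaussian-mechanism calibration: noise scale that makes a
    query with the given L2 sensitivity (eps, delta)-differentially
    private (the standard bound, valid for eps <= 1)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

# Worst-case calibration: the conservative default the abstract critiques.
worst = gaussian_sigma(sensitivity=1.0, eps=0.5, delta=1e-5)
# Per-instance calibration: if the deleted point's individual sensitivity
# is provably 10x smaller, the required noise shrinks by the same factor.
tight = gaussian_sigma(sensitivity=0.1, eps=0.5, delta=1e-5)
assert tight < worst
```

The linear dependence on sensitivity is exactly why individual sensitivity bounds translate directly into less utility loss per unlearning request.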
CVPR 2026 Compute Reporting Form - Clarification
Join the Largest Global Community in Computing
IEEE Computer Society is the top source for information, inspiration, and collaboration in computer science and engineering, empowering technologists worldwide.
SCOTUStoday for Wednesday, February 18
Justice Anthony Kennedy joined the court on this day in 1988. He served for slightly more than 30 years, retiring on July 31, 2018. SCOTUS Quick Hits Morning Reads A […] The post SCOTUStoday for Wednesday, February 18 appeared first on SCOTUSblog.
Inside the DHS forum where ICE agents trash talk one another
Forum members have discussed their discomfort with mass deportation efforts.
Google Cloud’s VP for startups on reading your ‘check engine light’ before it’s too late
Startup founders are being pushed to move faster than ever, using AI while facing tighter funding, rising infrastructure costs, and more pressure to show real traction early. Cloud credits, access to GPUs, and foundation models have made it easier to...
Microsoft says Office bug exposed customers’ confidential emails to Copilot AI
Microsoft said the bug meant that its Copilot AI chatbot was reading and summarizing paying customers' confidential emails, bypassing data-protection policies.
Open Rubric System: Scaling Reinforcement Learning with Pairwise Adaptive Rubric
arXiv:2602.14069v1 Announce Type: new Abstract: Scalar reward models compress multi-dimensional human preferences into a single opaque score, creating an information bottleneck that often leads to brittleness and reward hacking in open-ended alignment. We argue that robust alignment for non-verifiable tasks...
Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality
arXiv:2602.14080v1 Announce Type: new Abstract: Standard factuality evaluations of LLMs treat all errors alike, obscuring whether failures arise from missing knowledge (empty shelves) or from limited access to encoded facts (lost keys). We propose a behavioral framework that profiles factual...
Knowing When Not to Answer: Abstention-Aware Scientific Reasoning
arXiv:2602.14189v1 Announce Type: new Abstract: Large language models are increasingly used to answer and verify scientific claims, yet existing evaluations typically assume that a model must always produce a definitive answer. In scientific settings, however, unsupported or uncertain conclusions can...
STATe-of-Thoughts: Structured Action Templates for Tree-of-Thoughts
arXiv:2602.14265v1 Announce Type: new Abstract: Inference-Time-Compute (ITC) methods like Best-of-N and Tree-of-Thoughts are meant to produce output candidates that are both high-quality and diverse, but their use of high-temperature sampling often fails to achieve meaningful output diversity. Moreover, existing ITC...
BLUEPRINT Rebuilding a Legacy: Multimodal Retrieval for Complex Engineering Drawings and Documents
arXiv:2602.13345v1 Announce Type: new Abstract: Decades of engineering drawings and technical records remain locked in legacy archives with inconsistent or missing metadata, making retrieval difficult and often manual. We present Blueprint, a layout-aware multimodal retrieval system designed for large-scale engineering...
Finding Highly Interpretable Prompt-Specific Circuits in Language Models
arXiv:2602.13483v1 Announce Type: new Abstract: Understanding the internal circuits that language models use to solve tasks remains a central challenge in mechanistic interpretability. Most prior work identifies circuits at the task level by averaging across many prompts, implicitly assuming a...
Singular Vectors of Attention Heads Align with Features
arXiv:2602.13524v1 Announce Type: new Abstract: Identifying feature representations in language models is a central task in mechanistic interpretability. Several recent studies have made an implicit assumption that feature representations can be inferred in some cases from singular vectors of attention...
QuaRK: A Quantum Reservoir Kernel for Time Series Learning
arXiv:2602.13531v1 Announce Type: new Abstract: Quantum reservoir computing offers a promising route for time series learning by modelling sequential data via rich quantum dynamics while the only training required happens at the level of a lightweight classical readout. However, studies...
Out-of-Support Generalisation via Weight Space Sequence Modelling
arXiv:2602.13550v1 Announce Type: new Abstract: As breakthroughs in deep learning transform key industries, models are increasingly required to extrapolate on datapoints found outside the range of the training set, a challenge we coin as out-of-support (OoS) generalisation. However, neural networks...
Interpretable clustering via optimal multiway-split decision trees
arXiv:2602.13586v1 Announce Type: new Abstract: Clustering serves as a vital tool for uncovering latent data structures, and achieving both high accuracy and interpretability is essential. To this end, existing methods typically construct binary decision trees by solving mixed-integer nonlinear optimization...
Joint Time Series Chain: Detecting Unusual Evolving Trend across Time Series
arXiv:2602.13649v1 Announce Type: new Abstract: Time series chain (TSC) is a recently introduced concept that captures the evolving patterns in large-scale time series. Informally, a time series chain is a temporally ordered set of subsequences, in which consecutive subsequences...
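Informally, chain links come from a mutual-nearest-neighbor rule between subsequences: window i links to a later window j when j is i's nearest neighbor among later windows and i is j's nearest neighbor among earlier ones. A brute-force NumPy sketch of that rule (z-normalized Euclidean distance; real matrix-profile implementations also add an exclusion zone around overlapping windows, omitted here for brevity):

```python
import numpy as np

def subsequences(ts, m):
    """Z-normalized sliding windows of length m."""
    windows = np.lib.stride_tricks.sliding_window_view(ts, m).astype(float)
    mu = windows.mean(axis=1, keepdims=True)
    sd = windows.std(axis=1, keepdims=True) + 1e-12
    return (windows - mu) / sd

def chain_links(ts, m):
    """Link i -> j when j (right of i) is i's nearest later neighbor AND
    i is j's nearest earlier neighbor: the mutual-nearest-neighbor rule
    behind time series chains."""
    W = subsequences(ts, m)
    n = len(W)
    D = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    right = [i + 1 + int(np.argmin(D[i, i + 1:])) for i in range(n - 1)]
    left = [int(np.argmin(D[i, :i])) for i in range(1, n)]  # left[i-1]: LNN of i
    return [(i, right[i]) for i in range(n - 1) if left[right[i] - 1] == i]

ts = np.sin(np.arange(64) * 0.4)
links = chain_links(ts, 8)
```

Chaining these directed links end to end yields the temporally ordered, gradually drifting subsequences the abstract describes.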
Cumulative Utility Parity for Fair Federated Learning under Intermittent Client Participation
arXiv:2602.13651v1 Announce Type: new Abstract: In real-world federated learning (FL) systems, client participation is intermittent, heterogeneous, and often correlated with data characteristics or resource constraints. Existing fairness approaches in FL primarily focus on equalizing loss or accuracy conditional on participation,...
Zero-Order Optimization for LLM Fine-Tuning via Learnable Direction Sampling
arXiv:2602.13659v1 Announce Type: new Abstract: Fine-tuning large pretrained language models (LLMs) is a cornerstone of modern NLP, yet its growing memory demands (driven by backpropagation and large optimizer states) limit deployment in resource-constrained settings. Zero-order (ZO) methods bypass backpropagation by...
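The zero-order idea the abstract builds on estimates gradients from forward passes alone, via two-point finite differences along random directions. A minimal NumPy baseline with isotropic Gaussian directions (the learnable direction sampler the paper proposes would replace the Gaussian draw; the names here are illustrative):

```python
import numpy as np

def zo_grad(f, theta, eps=1e-3, n_dirs=8, rng=None):
    """Two-point zero-order gradient estimate: probe the loss along random
    directions instead of backpropagating. For Gaussian u, the estimator
    (f(t + eps*u) - f(t - eps*u)) / (2*eps) * u is unbiased for the
    gradient as eps -> 0."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = rng.normal(size=theta.shape)
        g += (f(theta + eps * u) - f(theta - eps * u)) / (2 * eps) * u
    return g / n_dirs

# Sanity check on a quadratic f(t) = t.t, whose true gradient is 2*theta.
theta = np.array([1.0, -2.0, 0.5])
g = zo_grad(lambda t: float(t @ t), theta, n_dirs=2000)
```

Only forward evaluations are needed, which is what removes backpropagation activations and optimizer state from memory; the cost is estimator variance, which direction sampling schemes aim to reduce.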