Verizon acknowledges "pain" of new unlock policy, suggests change is coming
Report: Verizon's goal is "immediate unlock for all payment methods really soon."
Is your startup’s check engine light on? Google Cloud’s VP explains what to do
Startup founders are being pushed to move faster than ever, using AI while facing tighter funding, rising infrastructure costs, and more pressure to show real traction early. Cloud credits, access to GPUs, and foundation models have made it easier to...
World Labs lands $1B, with $200M from Autodesk, to bring world models into 3D workflows
The partnership will see the two companies exploring how World Labs’ models can work alongside Autodesk’s tools, and vice versa, starting with a focus on entertainment use cases.
Google adds music-generation capabilities to the Gemini app
Users will be able to supply text, images, and videos as references for generating music.
Kana emerges from stealth with $15M to build flexible AI agents for marketers
Kana, a new AI marketing startup from the founders of Rapt and Krux, has raised $15 million to build customizable, agent-based marketing tools.
Microsoft says Office bug exposed customers’ confidential emails to Copilot AI
Microsoft said the bug meant that its Copilot AI chatbot was reading and summarizing paying customers' confidential emails, bypassing data-protection policies.
India’s Sarvam wants to bring its AI models to feature phones, cars, and smart glasses
The company is using edge models that take up only megabytes of space, can run on most phones with existing processors, and can work offline.
Indian AI lab Sarvam’s new models are a major bet on the viability of open source AI
The new lineup includes 30-billion- and 105-billion-parameter models; a text-to-speech model; a speech-to-text model; and a vision model to parse documents.
From Scarcity to Scale: A Release-Level Analysis of the Pashto Common Voice Dataset
arXiv:2602.14062v1 Announce Type: new Abstract: Large, openly licensed speech datasets are essential for building automatic speech recognition (ASR) systems, yet many widely spoken languages remain underrepresented in public resources. Pashto, spoken by more than 60 million people, has historically lacked...
Annotation-Efficient Vision-Language Model Adaptation to the Polish Language Using the LLaVA Framework
arXiv:2602.14073v1 Announce Type: new Abstract: Most vision-language models (VLMs) are trained on English-centric data, limiting their performance in other languages and cultural contexts. This restricts their usability for non-English-speaking users and hinders the development of multimodal systems that reflect diverse...
GTS: Inference-Time Scaling of Latent Reasoning with a Learnable Gaussian Thought Sampler
arXiv:2602.14077v1 Announce Type: new Abstract: Inference-time scaling (ITS) in latent reasoning models typically introduces stochasticity through heuristic perturbations, such as dropout or fixed Gaussian noise. While these methods increase trajectory diversity, their exploration behavior is not explicitly modeled and can...
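The abstract contrasts heuristic perturbations (dropout, fixed Gaussian noise) with an explicitly modeled, learnable sampler. A minimal sketch of that contrast using the standard reparameterization trick — the linear heads `mu_w` / `log_sigma_w` are hypothetical stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_gaussian_perturb(latent, sigma=0.1):
    # Heuristic baseline: add fixed-scale Gaussian noise to a latent thought.
    return latent + sigma * rng.standard_normal(latent.shape)

def learnable_gaussian_sample(latent, mu_w, log_sigma_w):
    # Learnable sampler: hypothetical linear heads predict a per-dimension
    # mean and log-std from the latent; the reparameterization trick keeps
    # the draw differentiable with respect to mu_w and log_sigma_w.
    mu = latent @ mu_w
    sigma = np.exp(latent @ log_sigma_w)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

d = 8
latent = rng.standard_normal(d)
sample = learnable_gaussian_sample(latent, np.eye(d), np.zeros((d, d)))
print(sample.shape)
```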
Index Light, Reason Deep: Deferred Visual Ingestion for Visual-Dense Document Question Answering
arXiv:2602.14162v1 Announce Type: new Abstract: Existing multimodal document question answering methods universally adopt a supply-side ingestion strategy: running a Vision-Language Model (VLM) on every page during indexing to generate comprehensive descriptions, then answering questions through text retrieval. However, this "pre-ingestion"...
Knowing When Not to Answer: Abstention-Aware Scientific Reasoning
arXiv:2602.14189v1 Announce Type: new Abstract: Large language models are increasingly used to answer and verify scientific claims, yet existing evaluations typically assume that a model must always produce a definitive answer. In scientific settings, however, unsupported or uncertain conclusions can...
We can still parse using syntactic rules
arXiv:2602.14238v1 Announce Type: new Abstract: This research introduces a new parsing approach, based on earlier syntactic work on context free grammar (CFG) and generalized phrase structure grammar (GPSG). The approach comprises both a new parsing algorithm and a set of...
STATe-of-Thoughts: Structured Action Templates for Tree-of-Thoughts
arXiv:2602.14265v1 Announce Type: new Abstract: Inference-Time-Compute (ITC) methods like Best-of-N and Tree-of-Thoughts are meant to produce output candidates that are both high-quality and diverse, but their use of high-temperature sampling often fails to achieve meaningful output diversity. Moreover, existing ITC...
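For readers unfamiliar with the baseline being critiqued, Best-of-N simply draws N independent high-temperature samples and keeps the top-scoring one. A toy sketch with hypothetical stand-ins for the generator and verifier:

```python
import random

random.seed(0)

def generate(prompt):
    # Stand-in for one high-temperature LLM sample (hypothetical).
    return prompt + " -> " + str(random.random())

def score(candidate):
    # Stand-in verifier / reward model (hypothetical): parses a number.
    return float(candidate.split()[-1])

def best_of_n(prompt, n=8):
    # Best-of-N: sample n candidates independently, keep the highest-scoring.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

best = best_of_n("2+2=", n=8)
print(best)
```

With independent samples, diversity comes only from temperature — which is the failure mode the abstract points at.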
Directional Concentration Uncertainty: A representational approach to uncertainty quantification for generative models
arXiv:2602.13264v1 Announce Type: new Abstract: In the critical task of making generative models trustworthy and robust, methods for Uncertainty Quantification (UQ) have begun to show encouraging potential. However, many of these methods rely on rigid heuristics that fail to generalize...
BLUEPRINT Rebuilding a Legacy: Multimodal Retrieval for Complex Engineering Drawings and Documents
arXiv:2602.13345v1 Announce Type: new Abstract: Decades of engineering drawings and technical records remain locked in legacy archives with inconsistent or missing metadata, making retrieval difficult and often manual. We present Blueprint, a layout-aware multimodal retrieval system designed for large-scale engineering...
Accelerated Discovery of Cryoprotectant Cocktails via Multi-Objective Bayesian Optimization
arXiv:2602.13398v1 Announce Type: new Abstract: Designing cryoprotectant agent (CPA) cocktails for vitrification is challenging because formulations must be concentrated enough to suppress ice formation yet non-toxic enough to preserve cell viability. This tradeoff creates a large, multi-objective design space in...
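The concentration-vs-toxicity tradeoff the abstract describes is what multi-objective optimization summarizes as a Pareto front. A minimal sketch of Pareto filtering over two toy objectives (both to be maximized; the numbers are illustrative, not CPA data):

```python
def pareto_front(points):
    # Keep points not dominated by any other point on both objectives.
    # q dominates p here if q is >= p on both coordinates (and q != p).
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

# (ice-suppression score, viability score) for hypothetical formulations
candidates = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
front = pareto_front(candidates)
print(front)
```

Bayesian optimization then proposes new formulations expected to expand this front rather than optimizing a single objective.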
Why is Normalization Preferred? A Worst-Case Complexity Theory for Stochastically Preconditioned SGD under Heavy-Tailed Noise
arXiv:2602.13413v1 Announce Type: new Abstract: We develop a worst-case complexity theory for stochastically preconditioned stochastic gradient descent (SPSGD) and its accelerated variants under heavy-tailed noise, a setting that encompasses widely used adaptive methods such as Adam, RMSProp, and Shampoo. We...
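The "normalization" in the title refers to scaling the stochastic gradient to unit norm before stepping, which bounds every update even when the noise has infinite variance. A toy sketch on a quadratic with Student-t noise — this illustrates normalized SGD generically, not the paper's SPSGD preconditioner:

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_sgd_step(w, grad, lr=0.1, eps=1e-12):
    # Normalized SGD: the step has norm exactly lr, so a single
    # heavy-tailed gradient sample cannot blow up the iterate.
    return w - lr * grad / (np.linalg.norm(grad) + eps)

# Toy objective f(w) = 0.5 * ||w||^2, so the clean gradient is w itself.
w = np.array([10.0, -10.0])
for _ in range(200):
    noise = rng.standard_t(df=1.5, size=2)  # infinite-variance noise
    w = normalized_sgd_step(w, w + noise, lr=0.1)
print(np.linalg.norm(w))
```

Plain SGD on the same stream can diverge when a single noise draw is enormous; the normalized iterate stays bounded by construction.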
Text Has Curvature
arXiv:2602.13418v1 Announce Type: new Abstract: Does text have an intrinsic curvature? Language is increasingly modeled in curved geometries - hyperbolic spaces for hierarchy, mixed-curvature manifolds for compositional structure - yet a basic scientific question remains unresolved: what does curvature mean...
Comparing Classifiers: A Case Study Using PyCM
arXiv:2602.13482v1 Announce Type: new Abstract: Selecting an optimal classification model requires a robust and comprehensive understanding of the performance of the model. This paper provides a tutorial on the PyCM library, demonstrating its utility in conducting deep-dive evaluations of multi-class...
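The kind of multi-class bookkeeping the tutorial automates looks like this by hand (PyCM's `ConfusionMatrix(actual_vector=..., predict_vector=...)` wraps this tally and exposes the derived statistics as attributes; the sketch below is plain Python, not PyCM):

```python
from collections import Counter

actual  = [0, 0, 1, 1, 2, 2, 2, 1]
predict = [0, 1, 1, 1, 2, 0, 2, 1]

# Multi-class confusion matrix as {true_class: Counter of predicted classes}.
matrix = {c: Counter() for c in sorted(set(actual) | set(predict))}
for a, p in zip(actual, predict):
    matrix[a][p] += 1

# Overall accuracy = trace of the matrix over total samples.
overall_acc = sum(matrix[c][c] for c in matrix) / len(actual)
print(overall_acc)  # 0.75
```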
Finding Highly Interpretable Prompt-Specific Circuits in Language Models
arXiv:2602.13483v1 Announce Type: new Abstract: Understanding the internal circuits that language models use to solve tasks remains a central challenge in mechanistic interpretability. Most prior work identifies circuits at the task level by averaging across many prompts, implicitly assuming a...
Federated Learning of Nonlinear Temporal Dynamics with Graph Attention-based Cross-Client Interpretability
arXiv:2602.13485v1 Announce Type: new Abstract: Networks of modern industrial systems are increasingly monitored by distributed sensors, where each system comprises multiple subsystems generating high dimensional time series data. These subsystems are often interdependent, making it important to understand how temporal...
Preventing Rank Collapse in Federated Low-Rank Adaptation with Client Heterogeneity
arXiv:2602.13486v1 Announce Type: new Abstract: Federated low-rank adaptation (FedLoRA) has facilitated communication-efficient and privacy-preserving fine-tuning of foundation models for downstream tasks. In practical federated learning scenarios, client heterogeneity in system resources and data distributions motivates heterogeneous LoRA ranks across clients....
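For context on what a heterogeneous "LoRA rank" means: each client adapts a frozen weight W with a low-rank product BA, and the rank of that product can differ per client. A minimal NumPy sketch (client names and dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 16, 16

def make_lora(rank):
    # LoRA adapter: W is frozen; the trainable update is B @ A with
    # A (rank x d_in) and B (d_out x rank). B is zero-initialized so
    # training starts exactly at W.
    a = rng.standard_normal((rank, d_in)) * 0.01
    b = np.zeros((d_out, rank))
    return a, b

w = rng.standard_normal((d_out, d_in))
clients = {"client_a": make_lora(4), "client_b": make_lora(8)}  # heterogeneous ranks

x = rng.standard_normal(d_in)
for name, (a, b) in clients.items():
    y = (w + b @ a) @ x  # effective weight is W + BA
    print(name, y.shape)
```

Aggregating adapters of different ranks across clients is exactly where the rank-collapse issue the abstract targets arises.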
TrasMuon: Trust-Region Adaptive Scaling for Orthogonalized Momentum Optimizers
arXiv:2602.13498v1 Announce Type: new Abstract: Muon-style optimizers leverage Newton-Schulz (NS) iterations to orthogonalize updates, yielding update geometries that often outperform Adam-series methods. However, this orthogonalization discards magnitude information, rendering training sensitive to step-size hyperparameters and vulnerable to high-energy bursts. To...
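The Newton-Schulz orthogonalization the abstract refers to can be illustrated with the classic iteration for the orthogonal polar factor — a generic sketch, not Muon's or TrasMuon's exact coefficients:

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=30):
    # Classic Newton-Schulz iteration X_{k+1} = X_k (3I - X_k^T X_k) / 2,
    # converging to the orthogonal polar factor when ||X_0||_2 < sqrt(3).
    # Frobenius normalization guarantees that spectral-norm condition.
    x = g / (np.linalg.norm(g) + 1e-12)
    eye = np.eye(g.shape[1])
    for _ in range(steps):
        x = x @ (3.0 * eye - x.T @ x) / 2.0
    return x

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))
q = newton_schulz_orthogonalize(g)
# q^T q ≈ I: the gradient's magnitude information is discarded,
# which is the sensitivity the abstract says TrasMuon addresses.
print(np.round(q.T @ q, 4))
```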
Singular Vectors of Attention Heads Align with Features
arXiv:2602.13524v1 Announce Type: new Abstract: Identifying feature representations in language models is a central task in mechanistic interpretability. Several recent studies have made an implicit assumption that feature representations can be inferred in some cases from singular vectors of attention...
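The object under study — singular vectors of an attention head's weight product — can be computed directly. A toy sketch of the low-rank OV circuit (random weights, purely illustrative of the setup, not the paper's findings):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head = 32, 8

# Hypothetical attention-head value and output projections.
w_v = rng.standard_normal((d_model, d_head))
w_o = rng.standard_normal((d_head, d_model))
w_ov = w_v @ w_o  # the OV circuit; rank is at most d_head

# Its singular vectors are the candidate "feature directions"
# examined in this line of interpretability work.
u, s, vt = np.linalg.svd(w_ov)
print(int(np.sum(s > 1e-8)))  # effective rank
```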
QuaRK: A Quantum Reservoir Kernel for Time Series Learning
arXiv:2602.13531v1 Announce Type: new Abstract: Quantum reservoir computing offers a promising route for time series learning by modelling sequential data via rich quantum dynamics while the only training required happens at the level of a lightweight classical readout. However, studies...