Meta strikes up to $100B AMD chip deal as it chases ‘personal superintelligence’
Meta is buying billions of dollars in AMD AI chips in a multiyear deal tied to a 160 million-share warrant, deepening its push to diversify beyond Nvidia and expand data center capacity.
Oura launches a proprietary AI model focused on women’s health
The model supports questions spanning the full reproductive health spectrum, from early menstrual cycles through menopause.
Final 4 days to save up to $680 on your TechCrunch Disrupt 2026 pass
Just 4 days remain to save up to $680 on your TechCrunch Disrupt 2026 pass; savings end on February 27 at 11:59 p.m. PT. Register to save at one of the most anticipated tech events of the year.
Anthropic launches new push for enterprise agents with plug-ins for finance, engineering, and design
It’s a major opportunity to grow Anthropic’s enterprise client base — and a significant threat to SaaS products currently performing those functions.
Nimble raises $47M to give AI agents access to real-time web data
Nimble uses AI agents to search the web, verify and validate the results, and clean and structure the information into neat tables that can be queried like a database.
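The pipeline described above — extract, validate, then load into a queryable table — can be sketched in miniature. Nimble's actual API and schema are not public in this snippet; the records, validation rule, and table layout below are illustrative stand-ins, using an in-memory SQLite database for the "query like a database" step.

```python
import sqlite3

# Hypothetical records an agent might extract from the web; Nimble's real
# extraction and validation steps are stand-ins here.
records = [
    {"company": "Acme", "price": 19.99, "in_stock": True},
    {"company": "Globex", "price": 24.50, "in_stock": False},
]

# Validate: keep only records that pass a simple sanity check.
validated = [r for r in records if r["price"] > 0]

# Structure the cleaned data into a table that can be queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (company TEXT, price REAL, in_stock INTEGER)")
conn.executemany(
    "INSERT INTO products VALUES (:company, :price, :in_stock)", validated
)

rows = conn.execute("SELECT company FROM products WHERE in_stock = 1").fetchall()
print(rows)  # [('Acme',)]
```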
Canva acquires startups working on animation and marketing
With the new acquisitions, Canva aims to bolster its position as a marketing solution, potentially adding video creation and more granular measurement.
QueryPlot: Generating Geological Evidence Layers using Natural Language Queries for Mineral Exploration
arXiv:2602.17784v1 Announce Type: cross Abstract: Mineral prospectivity mapping requires synthesizing heterogeneous geological knowledge, including textual deposit models and geospatial datasets, to identify regions likely to host specific mineral deposit types. This process is traditionally manual and knowledge-intensive. We present QueryPlot,...
Mind the Style: Impact of Communication Style on Human-Chatbot Interaction
arXiv:2602.17850v1 Announce Type: cross Abstract: Conversational agents increasingly mediate everyday digital interactions, yet the effects of their communication style on user experience and task success remain unclear. Addressing this gap, we describe the results of a between-subject user study where...
Enhancing Scientific Literature Chatbots with Retrieval-Augmented Generation: A Performance Evaluation of Vector and Graph-Based Systems
arXiv:2602.17856v1 Announce Type: cross Abstract: This paper investigates the enhancement of scientific literature chatbots through retrieval-augmented generation (RAG), with a focus on evaluating vector- and graph-based retrieval systems. The proposed chatbot leverages both structured (graph) and unstructured (vector) databases to...
MantisV2: Closing the Zero-Shot Gap in Time Series Classification with Synthetic Data and Test-Time Strategies
arXiv:2602.17868v1 Announce Type: cross Abstract: Developing foundation models for time series classification is of high practical relevance, as such models can serve as universal feature extractors for diverse downstream tasks. Although early models such as Mantis have shown the promise...
MultiVer: Zero-Shot Multi-Agent Vulnerability Detection
arXiv:2602.17875v1 Announce Type: cross Abstract: We present MultiVer, a zero-shot multi-agent system for vulnerability detection that achieves state-of-the-art recall without fine-tuning. A four-agent ensemble (security, correctness, performance, style) with union voting achieves 82.7% recall on PyVul, exceeding fine-tuned GPT-3.5 (81.3%)...
Games That Teach, Chats That Convince: Comparing Interactive and Static Formats for Persuasive Learning
arXiv:2602.17905v1 Announce Type: cross Abstract: Interactive systems such as chatbots and games are increasingly used to persuade and educate on sustainability-related topics, yet it remains unclear how different delivery formats shape learning and persuasive outcomes when content is held constant....
Improving Neural Topic Modeling with Semantically-Grounded Soft Label Distributions
arXiv:2602.17907v1 Announce Type: cross Abstract: Traditional neural topic models are typically optimized by reconstructing the document's Bag-of-Words (BoW) representations, overlooking contextual information and struggling with data sparsity. In this work, we propose a novel approach to construct semantically-grounded soft label...
Condition-Gated Reasoning for Context-Dependent Biomedical Question Answering
arXiv:2602.17911v1 Announce Type: cross Abstract: Current biomedical question answering (QA) systems often assume that medical knowledge applies uniformly, yet real-world clinical reasoning is inherently conditional: nearly every decision depends on patient-specific factors such as comorbidities and contraindications. Existing benchmarks do...
From Lossy to Verified: A Provenance-Aware Tiered Memory for Agents
arXiv:2602.17913v1 Announce Type: cross Abstract: Long-horizon agents often compress interaction histories into write-time summaries. This creates a fundamental write-before-query barrier: compression decisions are made before the system knows what a future query will hinge on. As a result, summaries can...
On the scaling relationship between cloze probabilities and language model next-token prediction
arXiv:2602.17848v1 Announce Type: new Abstract: Recent work has shown that larger language models have better predictive power for eye movement and reading time data. While even the best models under-allocate probability mass to human responses, larger models assign higher-quality estimates...
Decomposing Retrieval Failures in RAG for Long-Document Financial Question Answering
arXiv:2602.17981v1 Announce Type: new Abstract: Retrieval-augmented generation is increasingly used for financial question answering over long regulatory filings, yet reliability depends on retrieving the exact context needed to justify answers in high stakes settings. We study a frequent failure mode...
Improving Sampling for Masked Diffusion Models via Information Gain
arXiv:2602.18176v1 Announce Type: new Abstract: Masked Diffusion Models (MDMs) offer greater flexibility in decoding order than autoregressive models but require careful planning to achieve high-quality generation. Existing samplers typically adopt greedy heuristics, prioritizing positions with the highest local certainty to...
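The greedy baseline the abstract contrasts against can be sketched directly: at each step, commit the token at whichever masked position has the highest local certainty. The "model" below is a fixed random probability table standing in for a real masked diffusion model, which would re-run its denoiser after each unmasking step.

```python
import numpy as np

# Greedy certainty-first sampling sketch: repeatedly fill in the masked
# position where the (stand-in) model assigns the highest peak probability.
rng = np.random.default_rng(0)
L, V = 6, 10                                  # sequence length, vocabulary size
probs = rng.dirichlet(np.ones(V), size=L)     # per-position token distributions
masked = set(range(L))
tokens = [-1] * L

while masked:
    # Local certainty = highest token probability at a still-masked position.
    pos = max(masked, key=lambda i: probs[i].max())
    tokens[pos] = int(probs[pos].argmax())    # commit the most likely token
    masked.remove(pos)

print(tokens)
```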
Information-Theoretic Storage Cost in Sentence Comprehension
arXiv:2602.18217v1 Announce Type: new Abstract: Real-time sentence comprehension imposes a significant load on working memory, as comprehenders must maintain contextual information to anticipate future input. While measures of such load have played an important role in psycholinguistic theories, they have...
PsihoRo: Depression and Anxiety Romanian Text Corpus
arXiv:2602.18324v1 Announce Type: new Abstract: Psychological corpora in NLP are collections of texts used to analyze human psychology, emotions, and mental health. These texts allow researchers to study psychological constructs, detect mental health issues and analyze emotional language. However, mental...
Validating Political Position Predictions of Arguments
arXiv:2602.18351v1 Announce Type: new Abstract: Real-world knowledge representation often requires capturing subjective, continuous attributes -- such as political positions -- that conflict with pairwise validation, the widely accepted gold standard for human evaluation. We address this challenge through a dual-scale...
RVR: Retrieve-Verify-Retrieve for Comprehensive Question Answering
arXiv:2602.18425v1 Announce Type: new Abstract: Comprehensively retrieving diverse documents is crucial to address queries that admit a wide range of valid answers. We introduce retrieve-verify-retrieve (RVR), a multi-round retrieval framework designed to maximize answer coverage. Initially, a retriever takes the...
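The retrieve-verify-retrieve loop can be sketched on a toy corpus: retrieve documents, verify which candidate answers they already cover, then issue a follow-up retrieval targeting what is still missing. The corpus, the word-overlap retriever, and the substring "verifier" below are illustrative stand-ins, not the paper's components.

```python
# Toy retrieve-verify-retrieve loop aimed at maximizing answer coverage.
corpus = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "Madrid is the capital of Spain.",
]
candidate_answers = ["Paris", "Berlin", "Madrid"]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Stand-in retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

covered: set[str] = set()
retrieved: list[str] = []
query = "European capitals"
for _ in range(3):                            # bounded number of rounds
    retrieved += retrieve(query)
    # Verify: which candidate answers do the retrieved docs already cover?
    covered |= {a for a in candidate_answers if any(a in d for d in retrieved)}
    missing = [a for a in candidate_answers if a not in covered]
    if not missing:
        break
    query = missing[0]                        # re-retrieve, targeting a gap
print(sorted(covered))  # ['Berlin', 'Madrid', 'Paris']
```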
Lost Before Translation: Social Information Transmission and Survival in AI-AI Communication
arXiv:2602.17674v1 Announce Type: cross Abstract: When AI systems summarize and relay information, they inevitably transform it. But how? We introduce an experimental paradigm based on the telephone game to study what happens when AI talks to AI. Across five studies...
Bayesian Optimality of In-Context Learning with Selective State Spaces
arXiv:2602.17744v1 Announce Type: cross Abstract: We propose Bayesian optimal sequential prediction as a new principle for understanding in-context learning (ICL). Unlike interpretations framing Transformers as performing implicit gradient descent, we formalize ICL as meta-learning over latent sequence tasks. For tasks...
VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean
arXiv:2602.18307v1 Announce Type: cross Abstract: Large language models have achieved striking results in interactive theorem proving, particularly in Lean. However, most benchmarks for LLM-based proof automation are drawn from mathematics in the Mathlib ecosystem, whereas proofs in software verification are...
Duality Models: An Embarrassingly Simple One-step Generation Paradigm
arXiv:2602.17682v1 Announce Type: new Abstract: Consistency-based generative models like Shortcut and MeanFlow achieve impressive results via a target-aware design for solving the Probability Flow ODE (PF-ODE). Typically, such methods introduce a target time $r$ alongside the current time $t$ to...
AnCoder: Anchored Code Generation via Discrete Diffusion Models
arXiv:2602.17688v1 Announce Type: new Abstract: Diffusion language models offer a compelling alternative to autoregressive code generation, enabling global planning and iterative refinement of complex program logic. However, existing approaches fail to respect the rigid structure of programming languages and, as...
Parallel Complex Diffusion for Scalable Time Series Generation
arXiv:2602.17706v1 Announce Type: new Abstract: Modeling long-range dependencies in time series generation poses a fundamental trade-off between representational capacity and computational efficiency. Traditional temporal diffusion models suffer from local entanglement and the $\mathcal{O}(L^2)$ cost of attention mechanisms. We address these...
Provable Adversarial Robustness in In-Context Learning
arXiv:2602.17743v1 Announce Type: new Abstract: Large language models adapt to new tasks through in-context learning (ICL) without parameter updates. Current theoretical explanations for this capability assume test tasks are drawn from a distribution similar to that seen during pretraining. This...
Grassmannian Mixture-of-Experts: Concentration-Controlled Routing on Subspace Manifolds
arXiv:2602.17798v1 Announce Type: new Abstract: Mixture-of-Experts models rely on learned routers to assign tokens to experts, yet standard softmax gating provides no principled mechanism to control the tradeoff between sparsity and utilization. We propose Grassmannian MoE (GrMoE), a routing framework...
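For context, the standard top-k softmax router that the abstract says lacks a principled sparsity control looks like this; GrMoE itself routes on Grassmannian subspaces, whose details are not in this snippet, so the sketch below is only the baseline it improves on, with illustrative logits.

```python
import numpy as np

# Standard top-k softmax gating: pick the k highest-scoring experts for a
# token and renormalize their softmax weights. Sparsity here comes only from
# the hard top-k cutoff, with no tunable sparsity/utilization tradeoff.
def topk_softmax_route(logits: np.ndarray, k: int = 2):
    """Return the indices of the k chosen experts and their renormalized gates."""
    top = np.argsort(logits)[-k:]              # k highest-scoring experts
    g = np.exp(logits[top] - logits[top].max())
    return top, g / g.sum()                    # gates sum to 1 over chosen experts

logits = np.array([0.1, 2.0, -0.5, 1.2])       # one token's scores over 4 experts
experts, gates = topk_softmax_route(logits)
print(experts, gates.round(3))
```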