Quantifying Memorization and Privacy Risks in Genomic Language Models
arXiv:2603.08913v1 Announce Type: new Abstract: Genomic language models (GLMs) have emerged as powerful tools for learning representations of DNA sequences, enabling advances in variant prediction, regulatory element identification, and cross-task transfer learning. However, as these models are increasingly trained or...
Semantic Level of Detail: Multi-Scale Knowledge Representation via Heat Kernel Diffusion on Hyperbolic Manifolds
arXiv:2603.08965v1 Announce Type: new Abstract: AI memory systems increasingly organize knowledge into graph structures -- knowledge graphs, entity relations, community hierarchies -- yet lack a principled mechanism for continuous resolution control: where do the qualitative boundaries between abstraction levels lie,...
The Coupling Within: Flow Matching via Distilled Normalizing Flows
arXiv:2603.09014v1 Announce Type: new Abstract: Flow models have rapidly become the go-to method for training and deploying large-scale generators, owing their success to inference-time flexibility via adjustable integration steps. A crucial ingredient in flow training is the choice of coupling...
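For context on the "coupling" the abstract refers to: in standard flow matching, training pairs a noise sample with a data sample and regresses the velocity along the straight path between them. A minimal sketch of this independent coupling (generic flow matching, not this paper's distilled variant; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_targets(x0, x1, t):
    """Interpolant and regression target for an independent coupling:
    x_t lies on the straight line from noise x0 to data x1, and the
    model's target velocity is the constant difference x1 - x0."""
    x_t = (1.0 - t) * x0 + t * x1   # point on the straight path at time t
    v_t = x1 - x0                    # constant target velocity
    return x_t, v_t

x0 = rng.standard_normal(4)          # noise sample
x1 = rng.standard_normal(4)          # "data" sample
x_t, v_t = coupling_targets(x0, x1, t=0.5)
assert np.allclose(x_t, 0.5 * (x0 + x1))   # midpoint of the path at t=0.5
```

The coupling choice (which x0 gets paired with which x1) is exactly the degree of freedom the paper proposes to learn via a distilled normalizing flow.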
Dynamic Multi-period Experts for Online Time Series Forecasting
arXiv:2603.09062v1 Announce Type: new Abstract: Online Time Series Forecasting (OTSF) requires models to continuously adapt to concept drift. However, existing methods often treat concept drift as a monolithic phenomenon. To address this limitation, we first redefine concept drift by categorizing...
Exclusive Self Attention
arXiv:2603.09078v1 Announce Type: new Abstract: We introduce exclusive self attention (XSA), a simple modification of self attention (SA) that improves Transformer's sequence modeling performance. The key idea is to constrain attention to capture only information orthogonal to the token's own...
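One plausible reading of the truncated idea (constraining attention to information orthogonal to the token's own representation) is to project each token's attention output onto the subspace orthogonal to that token's vector. A toy single-head sketch of that reading, which is an assumption, not the paper's actual formulation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def exclusive_self_attention(X):
    """Toy self-attention whose output is projected orthogonal to each
    token's own vector (a hypothetical interpretation of XSA)."""
    A = softmax(X @ X.T / np.sqrt(X.shape[1]))   # attention weights
    out = A @ X                                   # standard SA output
    # Remove each row's component along its own token vector.
    norms = (X * X).sum(axis=1, keepdims=True) + 1e-9
    proj = ((out * X).sum(axis=1, keepdims=True) / norms) * X
    return out - proj

X = np.random.default_rng(1).standard_normal((3, 4))
Y = exclusive_self_attention(X)
# Each output row is orthogonal to the corresponding input token.
assert np.allclose((Y * X).sum(axis=1), 0.0, atol=1e-8)
```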
Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards
arXiv:2603.09117v1 Announce Type: new Abstract: Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances large language model (LLM) reasoning but suffers severely from calibration degeneration, where models become excessively over-confident in incorrect answers. Previous studies have sought to directly incorporate calibration objective...
Latent-DARM: Bridging Discrete Diffusion And Autoregressive Models For Reasoning
arXiv:2603.09184v1 Announce Type: new Abstract: Most multi-agent systems rely exclusively on autoregressive language models (ARMs) that are based on sequential generation. Although effective for fluent text, ARMs limit global reasoning and plan revision. On the other hand, Discrete Diffusion Language...
Reward-Zero: Language Embedding Driven Implicit Reward Mechanisms for Reinforcement Learning
arXiv:2603.09331v1 Announce Type: new Abstract: We introduce Reward-Zero, a general-purpose implicit reward mechanism that transforms natural-language task descriptions into dense, semantically grounded progress signals for reinforcement learning (RL). Reward-Zero serves as a simple yet sophisticated universal reward function that leverages...
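The general pattern behind such embedding-driven rewards can be illustrated with a generic sketch: score each state by the cosine similarity between an embedding of its description and an embedding of the task description. This is a common reward-shaping idea, not Reward-Zero's actual mechanism; the function names are illustrative:

```python
import numpy as np

def implicit_reward(state_emb, goal_emb):
    """Cosine similarity between a state-description embedding and the
    task-description embedding, used as a dense progress signal
    (a generic embedding-reward sketch, not the paper's method)."""
    s = state_emb / np.linalg.norm(state_emb)
    g = goal_emb / np.linalg.norm(goal_emb)
    return float(s @ g)

goal = np.array([1.0, 0.0])
far  = np.array([0.0, 1.0])    # state semantically unrelated to the goal
near = np.array([1.0, 0.2])    # state close to the goal
assert implicit_reward(near, goal) > implicit_reward(far, goal)
```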
TA-GGAD: Testing-time Adaptive Graph Model for Generalist Graph Anomaly Detection
arXiv:2603.09349v1 Announce Type: new Abstract: Anomalous nodes in real-world graphs, such as fake news, noncompliant users, malicious transactions, and malicious posts, severely compromise the health of the graph data ecosystem and urgently require effective identification...
Interactive 3D visualization of surface roughness predictions in additive manufacturing: A data-driven framework
arXiv:2603.09353v1 Announce Type: new Abstract: Surface roughness in Material Extrusion Additive Manufacturing varies across a part and is difficult to anticipate during process planning because it depends on both printing parameters and local surface inclination, which governs the staircase effect....
Google brings Gemini in Chrome to India
As part of the rollout, Gemini will support languages including Hindi, Bengali, Gujarati, Kannada, Malayalam, Marathi, Telugu, and Tamil.
Amazon launches its healthcare AI assistant on its website and app
Health AI can answer questions, explain health records, manage prescription renewals, book appointments, and more.
AI-powered apps struggle with long-term retention, new report shows
AI can drive stronger early monetization for apps, but sustaining value remains the challenge, RevenueCat's latest report finds.
AgentMail raises $6M to build an email service for AI agents
AgentMail provides an API platform that lets you give AI agents their own email inboxes, with support for two-way conversations, parsing, threading, labeling, searching, and replying.
Google gives in to users’ complaints over AI-powered ‘Ask Photos’ search feature
The option appears on the Google Photos Search screen and lets users pick which experience they want.
Meta acquired Moltbook, the AI agent social network that went viral because of fake posts
Meta says that Moltbook's approach to "connecting agents through an always-on-directory" is novel.
YouTube expands AI deepfake detection to politicians, government officials, and journalists
YouTube's AI deepfake detection tool is becoming available to politicians, journalists, and officials, letting them flag unauthorized likenesses for removal.
Adobe is debuting an AI assistant for Photoshop
Adobe is also adding new AI-powered image-editing features to Firefly.
Zoom introduces an AI-powered office suite, says AI avatars for meetings arrive this month
Zoom is also introducing real-time deepfake detection tech for meetings.
Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive
The idea behind the new features is to make the apps more personal and capable, helping users get things done faster right within the platforms themselves.
Yann LeCun’s AMI Labs raises $1.03B to build world models
“My prediction is that ‘world models’ will be the next buzzword,” AMI Labs CEO Alexandre LeBrun told TechCrunch. “In six months, every company will call itself a world model to raise funding.”
Counting on Consensus: Selecting the Right Inter-annotator Agreement Metric for NLP Annotation and Evaluation
arXiv:2603.06865v1 Announce Type: new Abstract: Human annotation remains the foundation of reliable and interpretable data in Natural Language Processing (NLP). As annotation and evaluation tasks continue to expand, from categorical labelling to segmentation, subjective judgment, and continuous rating, measuring agreement...
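The best-known chance-corrected agreement metric for two annotators on categorical labels is Cohen's kappa, one of the metrics such a survey would cover. A minimal self-contained implementation (standard formula, not tied to this paper's recommendations):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each annotator's label distribution."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n               # observed
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))  # chance
    return (p_o - p_e) / (1 - p_e)

ann1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
ann2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(cohens_kappa(ann1, ann2))   # 1/3: modest agreement beyond chance
```

Here p_o = 4/6 and, with both annotators using each label half the time, p_e = 0.5, giving kappa = (2/3 - 1/2) / (1/2) = 1/3.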
Hit-RAG: Learning to Reason with Long Contexts via Preference Alignment
arXiv:2603.07023v1 Announce Type: new Abstract: Despite the promise of Retrieval-Augmented Generation in grounding Multimodal Large Language Models with external knowledge, the transition to extensive contexts often leads to significant attention dilution and reasoning hallucinations. The surge in information density causes...
Emotion Transcription in Conversation: A Benchmark for Capturing Subtle and Complex Emotional States through Natural Language
arXiv:2603.07138v1 Announce Type: new Abstract: Emotion Recognition in Conversation (ERC) is critical for enabling natural human-machine interactions. However, existing methods predominantly employ categorical or dimensional emotion annotations, which often fail to adequately represent complex, subtle, or culturally specific emotional nuances....
Scaling Self-Supervised Speech Models Uncovers Deep Linguistic Relationships: Evidence from the Pacific Cluster
arXiv:2603.07238v1 Announce Type: new Abstract: Similarities between language representations derived from Self-Supervised Speech Models (S3Ms) have been observed to primarily reflect geographic proximity or surface typological similarities driven by recent expansion or contact, potentially missing deeper genealogical signals. We investigate...
To Predict or Not to Predict? Towards reliable uncertainty estimation in the presence of noise
arXiv:2603.07330v1 Announce Type: new Abstract: This study examines the role of uncertainty estimation (UE) methods in multilingual text classification under noisy and non-topical conditions. Using a complex-vs-simple sentence classification task across several languages, we evaluate a range of UE techniques...
How Much Noise Can BERT Handle? Insights from Multilingual Sentence Difficulty Detection
arXiv:2603.07346v1 Announce Type: new Abstract: Noisy training data can significantly degrade the performance of language-model-based classifiers, particularly in non-topical classification tasks. In this study we designed a methodological framework to assess the impact of denoising. More specifically, we explored a...
Cross-Modal Taxonomic Generalization in (Vision-) Language Models
arXiv:2603.07474v1 Announce Type: new Abstract: What is the interplay between the semantic representations that language models (LMs) learn from surface form alone and those learned from more grounded evidence? We study this question for a scenario where part of the input...
Accent Vector: Controllable Accent Manipulation for Multilingual TTS Without Accented Data
arXiv:2603.07534v1 Announce Type: new Abstract: Accent is an integral part of society, reflecting multiculturalism and shaping how individuals express identity. The majority of English speakers are non-native (L2) speakers, yet current Text-To-Speech (TTS) systems primarily model American-accented English due to limited...