The UK Supreme Court
Welcome to SCOTUSblog’s newest recurring series, in which we interview experts on different supreme courts around the world about how they compare to our own. For our debut column, we […]
The justices’ troubling message to lower courts
Civil Rights and Wrongs is a recurring series by Daniel Harawa covering criminal justice and civil rights cases before the court. In two recent decisions, the Supreme Court summarily reversed […]
SCOTUStoday for Tuesday, March 3
As we’ve noted before, we read a lot of legal news in the process of preparing this newsletter. Here’s a headline we saw recently that we won’t soon forget: References […]
Episode 41: Reading Recommendations - EJIL: The Podcast!
Episode 41: Thinking through Rupture in International Economic Law: Views from Latin America - EJIL: The Podcast!
Cybersecurity’s Role in Securing Elections
SPEAKERS: Professor Chris Hoofnagle, Beth Calley, Lucy Huang Podcast Transcript: [Lucy Huang] 00:07 Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Lucy Huang and I am one of the senior editors of the podcast. Today,...
France or Spain or Germany or France: A Neural Account of Non-Redundant Redundant Disjunctions
arXiv:2602.23547v1 Announce Type: new Abstract: Sentences like "She will go to France or Spain, or perhaps to Germany or France." appear formally redundant, yet become acceptable in contexts such as "Mary will go to a philosophy program in France or...
Multi-Agent Causal Reasoning for Suicide Ideation Detection Through Online Conversations
arXiv:2602.23577v1 Announce Type: new Abstract: Suicide remains a pressing global public health concern. While social media platforms offer opportunities for early risk detection through online conversation trees, existing approaches face two major limitations: (1) They rely on predefined rules (e.g.,...
BRIDGE the Gap: Mitigating Bias Amplification in Automated Scoring of English Language Learners via Inter-group Data Augmentation
arXiv:2602.23580v1 Announce Type: new Abstract: In the field of educational assessment, automated scoring systems increasingly rely on deep learning and large language models (LLMs). However, these systems face significant risks of bias amplification, where model prediction gaps between student groups...
LFQA-HP-1M: A Large-Scale Human Preference Dataset for Long-Form Question Answering
arXiv:2602.23603v1 Announce Type: new Abstract: Long-form question answering (LFQA) demands nuanced evaluation of multi-sentence explanatory responses, yet existing metrics often fail to reflect human judgment. We present LFQA-HP-1M, a large-scale dataset comprising 1.3M human pairwise preference annotations for LFQA. We...
TRIZ-RAGNER: A Retrieval-Augmented Large Language Model for TRIZ-Aware Named Entity Recognition in Patent-Based Contradiction Mining
arXiv:2602.23656v1 Announce Type: new Abstract: TRIZ-based contradiction mining is a fundamental task in patent analysis and systematic innovation, as it enables the identification of improving and worsening technical parameters that drive inventive problem solving. However, existing approaches largely rely on...
From Static Benchmarks to Dynamic Protocol: Agent-Centric Text Anomaly Detection for Evaluating LLM Reasoning
arXiv:2602.23729v1 Announce Type: new Abstract: The evaluation of large language models (LLMs) has predominantly relied on static datasets, which offer limited scalability and fail to capture the evolving reasoning capabilities of recent models. To overcome these limitations, we propose an...
Structured Prompt Optimization for Few-Shot Text Classification via Semantic Alignment in Latent Space
arXiv:2602.23753v1 Announce Type: new Abstract: This study addresses the issues of semantic entanglement, unclear label structure, and insufficient feature representation in few-shot text classification, and proposes an optimization framework based on structured prompts to enhance semantic understanding and task adaptation...
GLUScope: A Tool for Analyzing GLU Neurons in Transformer Language Models
arXiv:2602.23826v1 Announce Type: new Abstract: We present GLUScope, an open-source tool for analyzing neurons in Transformer-based language models, intended for interpretability researchers. We focus on more recent models than previous tools do; specifically we consider gated activation functions such as...
The Astonishing Ability of Large Language Models to Parse Jabberwockified Language
arXiv:2602.23928v1 Announce Type: new Abstract: We show that large language models (LLMs) have an astonishing ability to recover meaning from severely degraded English texts. Texts in which content words have been randomly substituted by nonsense strings, e.g., "At the ghybe...
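The degradation the abstract describes, content words randomly substituted by nonsense strings, can be sketched as follows. This is a minimal illustration of that kind of corruption, not the paper's exact procedure; the function name, the substitution rate, and the consonant-vowel pattern are all assumptions.

```python
import random
import string

def jabberwockify(text, rate=0.5, seed=0):
    """Replace a random subset of words with pronounceable nonsense
    strings of the same length (illustrative corruption only)."""
    rng = random.Random(seed)
    vowels, consonants = "aeiou", "bcdfghjklmnpqrstvwz"

    def nonsense(n):
        # Alternate consonants and vowels so the string looks word-like.
        return "".join(
            rng.choice(vowels if i % 2 else consonants) for i in range(n)
        )

    out = []
    for w in text.split():
        core = w.strip(string.punctuation)
        if core and rng.random() < rate:
            out.append(w.replace(core, nonsense(len(core)), 1))
        else:
            out.append(w)
    return " ".join(out)

print(jabberwockify("The vorpal blade went snicker-snack through the wood."))
```

A model "parsing" such text would have to rely on function words, morphology, and sentence structure, since the substituted strings carry no lexical content.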
EDDA-Coordinata: An Annotated Dataset of Historical Geographic Coordinates
arXiv:2602.23941v1 Announce Type: new Abstract: This paper introduces a dataset of enriched geographic coordinates retrieved from Diderot and d'Alembert's eighteenth-century Encyclopédie. Automatically recovering geographic coordinates from historical texts is a complex task, as they are expressed in a variety of...
MemEmo: Evaluating Emotion in Memory Systems of Agents
arXiv:2602.23944v1 Announce Type: new Abstract: Memory systems address the challenge of context loss in Large Language Models during prolonged interactions. However, compared to human cognition, the efficacy of these systems in processing emotion-related information remains inconclusive. To address this gap,...
The GRADIEND Python Package: An End-to-End System for Gradient-Based Feature Learning
arXiv:2602.23993v1 Announce Type: new Abstract: We present gradiend, an open-source Python package that operationalizes the GRADIEND method for learning feature directions from factual-counterfactual MLM and CLM gradients in language models. The package provides a unified workflow for feature-related data creation,...
Task Complexity Matters: An Empirical Study of Reasoning in LLMs for Sentiment Analysis
arXiv:2602.24060v1 Announce Type: new Abstract: Large language models (LLMs) with reasoning capabilities have fueled a compelling narrative that reasoning universally improves performance across language tasks. We test this claim through a comprehensive evaluation of 504 configurations across seven model families--including...
HiDrop: Hierarchical Vision Token Reduction in MLLMs via Late Injection, Concave Pyramid Pruning, and Early Exit
arXiv:2602.23699v1 Announce Type: cross Abstract: The quadratic computational cost of processing vision tokens in Multimodal Large Language Models (MLLMs) hinders their widespread adoption. While progressive vision token pruning offers a promising solution, current methods misinterpret shallow layer functions and use...
UTPTrack: Towards Simple and Unified Token Pruning for Visual Tracking
arXiv:2602.23734v1 Announce Type: cross Abstract: One-stream Transformer-based trackers achieve advanced performance in visual object tracking but suffer from significant computational overhead that hinders real-time deployment. While token pruning offers a path to efficiency, existing methods are fragmented. They typically prune...
SWE-rebench V2: Language-Agnostic SWE Task Collection at Scale
arXiv:2602.23866v1 Announce Type: cross Abstract: Software engineering agents (SWE) are improving rapidly, with recent gains largely driven by reinforcement learning (RL). However, RL training is constrained by the scarcity of large-scale task collections with reproducible execution environments and reliable test...
Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking
arXiv:2602.24009v1 Announce Type: cross Abstract: Jailbreak techniques for large language models (LLMs) evolve faster than benchmarks, making robustness estimates stale and difficult to compare across papers due to drift in datasets, harnesses, and judging protocols. We introduce JAILBREAK FOUNDRY (JBF),...
RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models
arXiv:2602.24040v1 Announce Type: cross Abstract: Reward models are central to aligning large language models (LLMs) with human preferences. Yet most approaches rely on pointwise reward estimates that overlook the epistemic uncertainty in reward models arising from limited human feedback. Recent...
Detoxifying LLMs via Representation Erasure-Based Preference Optimization
arXiv:2602.23391v1 Announce Type: new Abstract: Large language models (LLMs) trained on web-scale data can produce toxic outputs, raising concerns for safe deployment. Prior defenses, based on applications of DPO, NPO, and similar algorithms, reduce the likelihood of harmful continuations, but...
U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation
arXiv:2602.23400v1 Announce Type: new Abstract: Generative Recommendation (GenRec) typically leverages Large Language Models (LLMs) to redefine personalization as an instruction-driven sequence generation task. However, fine-tuning on user logs inadvertently encodes sensitive attributes into model parameters, raising critical privacy concerns. Existing...
Global Interpretability via Automated Preprocessing: A Framework Inspired by Psychiatric Questionnaires
arXiv:2602.23459v1 Announce Type: new Abstract: Psychiatric questionnaires are highly context sensitive and often only weakly predict subsequent symptom severity, which makes the prognostic relationship difficult to learn. Although flexible nonlinear models can improve predictive accuracy, their limited interpretability can erode...
Uncertainty-aware Language Guidance for Concept Bottleneck Models
arXiv:2602.23495v1 Announce Type: new Abstract: Concept Bottleneck Models (CBMs) provide inherent interpretability by first mapping input samples to high-level semantic concepts, followed by a combination of these concepts for the final classification. However, the annotation of human-understandable concepts requires extensive...
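The two-stage structure the abstract describes (inputs mapped to semantic concepts, then concepts combined for the final classification) can be sketched with plain linear maps. This is a generic CBM forward pass under assumed dimensions, not this paper's uncertainty-aware model; the weight matrices and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 16-dim inputs, 5 concepts, 3 classes.
W_concept = rng.normal(size=(16, 5))   # stage 1: input -> concept scores
W_label = rng.normal(size=(5, 3))      # stage 2: concepts -> class logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_forward(x):
    """Concept bottleneck pass: predict concepts first, then classify
    using ONLY the concepts, which is what makes the model interpretable."""
    concepts = sigmoid(x @ W_concept)  # each entry: probability a concept is present
    logits = concepts @ W_label        # label depends on inputs only via concepts
    return concepts, logits

concepts, logits = cbm_forward(rng.normal(size=(16,)))
print(concepts.shape, logits.shape)  # (5,) (3,)
```

Because the label head sees only the five concept activations, a practitioner can inspect, or intervene on, those activations to explain or correct a prediction.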
FedDAG: Clustered Federated Learning via Global Data and Gradient Integration for Heterogeneous Environments
arXiv:2602.23504v1 Announce Type: new Abstract: Federated Learning (FL) enables a group of clients to collaboratively train a model without sharing individual data, but its performance drops when client data are heterogeneous. Clustered FL tackles this by grouping similar clients. However,...
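The baseline setup the abstract starts from, clients training collaboratively without sharing data, is typically realized with FedAvg-style aggregation: the server averages client models weighted by local dataset size. The sketch below shows that baseline step only (not the FedDAG method itself); the function name and shapes are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of client model parameters,
    with weights proportional to each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                # normalize to sum to 1
    stacked = np.stack(client_weights)          # (n_clients, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two clients; the second holds 3x the data, so it dominates the average.
w = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [10, 30])
print(w)  # [2.5 3.5]
```

When client data are heterogeneous, this single global average fits no client well, which is the failure mode that clustered FL (grouping similar clients and aggregating within groups) is meant to address.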
Sample Size Calculations for Developing Clinical Prediction Models: Overview and pmsims R package
arXiv:2602.23507v1 Announce Type: new Abstract: Background: Clinical prediction models are increasingly used to inform healthcare decisions, but determining the minimum sample size for their development remains a critical and unresolved challenge. Inadequate sample sizes can lead to overfitting, poor generalisability,...