Cumulative Utility Parity for Fair Federated Learning under Intermittent Client Participation
arXiv:2602.13651v1 Announce Type: new Abstract: In real-world federated learning (FL) systems, client participation is intermittent, heterogeneous, and often correlated with data characteristics or resource constraints. Existing fairness approaches in FL primarily focus on equalizing loss or accuracy conditional on participation,...
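The abstract's core idea, comparing benefit per participation opportunity rather than per training round, can be sketched in a few lines. This is an illustrative reading only: the utility values, the per-opportunity ratio, and the max-min parity gap below are assumptions for demonstration, not the paper's actual formulation.

```python
# Hypothetical sketch of cumulative utility parity. Each client is a pair
# (per-round utilities, participation-opportunity mask); names and the
# max-min gap metric are illustrative assumptions, not the paper's method.
def utility_per_opportunity(utilities, opportunities):
    """Cumulative utility divided by the number of participation opportunities."""
    n_opp = sum(opportunities)
    return sum(utilities) / n_opp if n_opp else 0.0

def parity_gap(clients):
    """Max-min spread of per-opportunity utility across clients (0 = parity)."""
    rates = [utility_per_opportunity(u, o) for u, o in clients]
    return max(rates) - min(rates)

# Two clients: one participates every round, one only intermittently.
steady = ([1.0, 1.0, 1.0, 1.0], [1, 1, 1, 1])        # 4 utility over 4 opportunities
intermittent = ([1.0, 0.0, 1.0, 0.0], [1, 0, 1, 0])  # 2 utility over 2 opportunities
print(parity_gap([steady, intermittent]))  # 0.0: equal benefit per opportunity
```

The point of the per-opportunity normalization is visible here: judged per round, the intermittent client receives half the total utility, yet per opportunity the two clients are at parity.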
Analysis of the academic article for AI & Technology Law practice area relevance: The article proposes a new fairness principle, cumulative utility parity, for federated learning (FL) systems to address the issue of intermittent client participation. This development has implications for AI & Technology Law practice, particularly in the context of data privacy and bias mitigation in AI systems. The research highlights the need for regulatory and industry attention to ensure fairness and representation in AI-driven applications, particularly in scenarios where client participation is uneven. Key legal developments, research findings, and policy signals: - **Fairness principle for FL systems:** The article introduces cumulative utility parity, a fairness principle that evaluates long-term benefit per participation opportunity, rather than per training round, to address the issue of uneven client participation. - **Bias mitigation in AI systems:** The results indicate that intermittently available clients can be systematically under-represented, a form of avoidable algorithmic bias arising from scheduling and aggregation rather than from physical constraints alone. - **Regulatory implications:** The development of cumulative utility parity may inform regulatory approaches to AI fairness and bias mitigation, particularly in the context of data privacy and protection.
**Jurisdictional Comparison and Analytical Commentary on the Impact of Cumulative Utility Parity on AI & Technology Law Practice** The concept of cumulative utility parity for fair federated learning under intermittent client participation has significant implications for AI & Technology Law practice, particularly in jurisdictions with robust data protection and AI regulations. In the United States, the Federal Trade Commission (FTC) has emphasized the importance of fairness and transparency in AI decision-making, while the European Union's General Data Protection Regulation (GDPR) requires data controllers to implement fair and transparent AI systems. In contrast, Korea's Personal Information Protection Act (PIPA) focuses on the protection of personal information, but does not explicitly address AI fairness. Internationally, the OECD's Principles on Artificial Intelligence emphasize the importance of fairness and transparency in AI systems. The cumulative utility parity principle proposed in the article addresses the issue of under-representation of intermittently available clients in federated learning systems, which is particularly relevant in jurisdictions where data protection and AI regulations are stringent. The approach of disentangling unavoidable physical constraints from avoidable algorithmic bias arising from scheduling and aggregation is consistent with the principles of fairness and transparency emphasized in US and EU regulations. However, the Korean approach to AI regulation may require additional consideration of the cumulative utility parity principle to ensure that AI systems are fair and transparent in practice. **Implications Analysis** The cumulative utility parity principle has several implications for AI & Technology Law practice, including: 1. **Fairness and Transparency**:
As an AI Liability & Autonomous Systems Expert, I'll analyze the article's implications for practitioners in the domain of AI and autonomous systems, particularly in the context of product liability for AI. The article proposes cumulative utility parity as a fairness principle for federated learning (FL) systems, which aims to evaluate whether clients receive comparable long-term benefits per participation opportunity. This concept is relevant to product liability for AI, as it highlights the importance of considering the long-term impacts of AI systems on users and clients. In terms of case law, statutory, or regulatory connections, the article's focus on fairness and representation parity in FL systems is reminiscent of the "similarly situated" analysis in equal protection doctrine (e.g., _Brown v. Board of Education_, 347 U.S. 483 (1954)). This concept is also related to the principles of non-discrimination and equal protection under various data protection and AI regulations, such as the European Union's General Data Protection Regulation (GDPR) and the proposed United States Algorithmic Accountability Act. The article's emphasis on evaluating AI systems based on their long-term impacts and benefits is also aligned with the principles of product liability for AI, as outlined in various statutes and regulations, such as the Consumer Product Safety Act (CPSA) and the Federal Trade Commission (FTC) guidelines on AI and machine learning. These regulations require manufacturers to ensure that their products, including AI systems, are safe and do not cause harm to consumers. In terms
Zero-Order Optimization for LLM Fine-Tuning via Learnable Direction Sampling
arXiv:2602.13659v1 Announce Type: new Abstract: Fine-tuning large pretrained language models (LLMs) is a cornerstone of modern NLP, yet its growing memory demands (driven by backpropagation and large optimizer states) limit deployment in resource-constrained settings. Zero-order (ZO) methods bypass backpropagation by...
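The memory argument in the abstract rests on how ZO methods work: the gradient is estimated from forward passes alone, so no activations or optimizer states from backpropagation need to be stored. A minimal two-point estimator in the SPSA style is sketched below; the paper's learnable direction sampling is not reproduced here, and drawing directions from a fixed Gaussian is an assumed baseline.

```python
import numpy as np

# Illustrative two-point zero-order gradient estimate. The learnable
# direction sampling from the paper is NOT implemented; directions come
# from a fixed Gaussian, which is an assumption for this sketch.
def zo_gradient(f, theta, eps=1e-4, n_dirs=64, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = rng.standard_normal(theta.shape)
        # Directional derivative from two function evaluations only:
        # no backpropagation, hence no activation/optimizer-state memory.
        g += (f(theta + eps * u) - f(theta - eps * u)) / (2 * eps) * u
    return g / n_dirs

f = lambda x: float(np.sum(x ** 2))  # true gradient is 2x
theta = np.array([1.0, -2.0, 0.5])
est = zo_gradient(f, theta, n_dirs=2000)
print(est)  # close to the true gradient [2, -4, 1], up to sampling noise
```

The estimator's variance grows with the number of parameters, which is exactly the weakness that better direction sampling (the article's subject) aims to address.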
This academic article has significant relevance to AI & Technology Law practice by addressing legal and operational constraints in deploying large-scale AI models. The key legal developments include a novel policy-driven zero-order optimization framework that reduces memory demands and variance in LLM fine-tuning, potentially easing compliance with resource limitations and scalability challenges in AI deployment. The research findings demonstrate improved gradient estimation quality and scalability, offering a practical solution for legal and technical stakeholders managing AI infrastructure. Policy signals emerge as this work informs regulatory considerations around efficient AI resource use and sustainable model deployment.
**Jurisdictional Comparison and Analytical Commentary** The recent development of zero-order optimization methods for large language model (LLM) fine-tuning, as presented in the article "Zero-Order Optimization for LLM Fine-Tuning via Learnable Direction Sampling," has significant implications for AI & Technology Law practice across various jurisdictions. In the United States, the focus on innovation and technological advancement may lead to increased adoption of this method, particularly in industries where resource-constrained settings are common, such as autonomous vehicles or edge computing. In contrast, Korean law, which has a strong emphasis on data protection and privacy, may approach this technology with caution, considering the potential risks of data breaches and unauthorized data collection. Internationally, the European Union's General Data Protection Regulation (GDPR) may also pose challenges for the adoption of this technology, as it requires explicit consent for data processing and strict data protection measures. However, the EU's emphasis on innovation and digitalization may also drive the development and adoption of this technology, particularly in industries such as healthcare and finance. In this context, the learnable direction sampling framework proposed in the article may be seen as a promising solution for balancing the need for innovation with the need for data protection. **Comparative Analysis** In terms of comparative analysis, the US approach may be characterized as more permissive, with a focus on innovation and technological advancement. Korean law, on the other hand, may be seen as more restrictive, with a focus on data protection and privacy
As an AI Liability & Autonomous Systems Expert, I'd like to analyze the implications of this article on practitioners in the field of AI and NLP. The proposed policy-driven Zero-Order (ZO) framework for fine-tuning large language models (LLMs) has significant potential for improving memory efficiency and reducing computational costs in resource-constrained settings. This is particularly relevant in the context of product liability for AI, where memory constraints can impact the reliability and safety of AI-powered systems. From a regulatory perspective, this development may be connected to the concept of "safety by design" in the European Union's Artificial Intelligence Act (EU AI Act), which emphasizes the importance of ensuring AI systems are designed to operate safely and securely. In the United States, this development may be relevant to the Federal Trade Commission's (FTC) guidance on AI and machine learning, which highlights the need for developers to ensure that AI systems are transparent, explainable, and reliable. In terms of case law, the concept of "adequate design" in product liability cases may be relevant to this development. Expert evidence on such design questions is assessed under the standard of _Daubert v. Merrell Dow Pharmaceuticals, Inc._, 509 U.S. 579 (1993), in which the US Supreme Court established criteria for determining whether expert testimony is reliable and relevant to a particular case. A comparable reliability inquiry may be applied to the design of AI systems, including the use of ZO methods to improve memory efficiency and reduce computational costs. Statutorily, this development may be connected
Data-driven Bi-level Optimization of Thermal Power Systems with embedded Artificial Neural Networks
arXiv:2602.13746v1 Announce Type: new Abstract: Industrial thermal power systems have coupled performance variables with hierarchical order of importance, making their simultaneous optimization computationally challenging or infeasible. This barrier limits the integrated and computationally scalable operation optimization of industrial thermal power...
Relevance to current AI & Technology Law practice area: This article may have indirect implications for AI & Technology Law, particularly in the context of data-driven decision-making and the increasing use of machine learning in industrial systems. However, the article's primary focus is on the technical development of a bi-level optimization framework for thermal power systems, rather than its legal implications. Key legal developments, research findings, and policy signals: The article does not explicitly discuss legal developments or policy signals. However, it may be relevant to the ongoing discussion around the use of AI in industrial systems and the potential risks and benefits associated with data-driven decision-making. The article's use of machine learning and neural networks may also be relevant to the growing body of law and regulation surrounding AI and data protection. In terms of research findings, the article presents a technical solution to a complex optimization problem in industrial thermal power systems, using a machine-learning-powered bi-level optimization framework. The results suggest that this approach can be computationally efficient and effective in solving real-world problems. Overall, while this article may not have direct implications for AI & Technology Law practice, it highlights the ongoing development of AI and machine learning technologies in various industries, which may have future implications for the law and regulation surrounding these technologies.
**Jurisdictional Comparison and Analytical Commentary** The article "Data-driven Bi-level Optimization of Thermal Power Systems with embedded Artificial Neural Networks" presents a machine learning-powered bi-level optimization framework for data-driven optimization of industrial thermal power systems. This development has significant implications for AI & Technology Law practice, particularly in the areas of intellectual property, data protection, and algorithmic decision-making. **US Approach**: In the United States, the development of AI-powered optimization frameworks like the one presented in the article may raise concerns about patentability and trade secret protection. The use of artificial neural networks (ANNs) and Karush-Kuhn-Tucker (KKT) optimality conditions may be considered a novel and non-obvious combination, potentially eligible for patent protection. However, the US Patent and Trademark Office (USPTO) may scrutinize the disclosure of the underlying algorithms and data used in the framework, particularly if they are considered trade secrets. The Federal Trade Commission (FTC) may also review the framework's impact on consumer data protection and algorithmic decision-making. **Korean Approach**: In South Korea, the development of AI-powered optimization frameworks like the one presented in the article may be subject to the Korean Patent Act and the Korean Data Protection Act. The Korean government has implemented regulations on the use of AI and data analytics in various sectors, including industrial thermal power systems. The framework's use of ANNs and KKT optimality conditions may be eligible for patent protection under the Korean Patent Act
As the AI Liability & Autonomous Systems Expert, I'd like to analyze the article's implications for practitioners in the domain of AI and autonomous systems. The article presents a data-driven bi-level optimization framework for industrial thermal power systems using artificial neural networks (ANNs). This framework can optimize performance variables with hierarchical order of importance, which is a significant challenge in the field. The proposed ANN-KKT framework has been validated on benchmark problems and real-world power generation operations, demonstrating its effectiveness. Implications for Practitioners: 1. **Integration of AI in Industrial Systems**: The article highlights the potential of AI-powered optimization frameworks in industrial systems, particularly in thermal power systems. This integration can lead to improved efficiency, reduced costs, and enhanced performance. 2. **Data-Driven Decision Making**: The use of ANNs in the proposed framework demonstrates the importance of data-driven decision making in industrial systems. Practitioners can leverage this approach to make informed decisions based on historical data and real-time performance metrics. 3. **Liability and Risk Management**: As AI-powered systems become increasingly prevalent in industrial settings, liability and risk management become critical concerns. Practitioners must consider the potential risks and consequences associated with AI-driven decision making, including errors, biases, and cybersecurity threats. Case Law, Statutory, or Regulatory Connections: * **Product Liability**: The article's focus on AI-powered optimization frameworks raises questions about product liability in the context of AI-driven systems. Practitioners should be aware of statutes like the
MechPert: Mechanistic Consensus as an Inductive Bias for Unseen Perturbation Prediction
arXiv:2602.13791v1 Announce Type: new Abstract: Predicting transcriptional responses to unseen genetic perturbations is essential for understanding gene regulation and prioritizing large-scale perturbation experiments. Existing approaches either rely on static, potentially incomplete knowledge graphs, or prompt language models for functionally similar...
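The commentary below describes MechPert as leveraging "mechanistic consensus from multiple agents." One simple way such a consensus can work, offered here purely as an illustrative assumption since the actual aggregation mechanism is not detailed in the excerpt, is to keep only predictions that a quorum of agents agree on:

```python
from collections import Counter

# Illustrative consensus aggregation: each agent proposes a set of genes it
# expects to respond to a perturbation, and only genes reaching a quorum of
# agreement survive. This is an assumed sketch, not MechPert's actual method.
def mechanistic_consensus(agent_proposals, quorum=2):
    votes = Counter(g for proposal in agent_proposals for g in set(proposal))
    return sorted(g for g, n in votes.items() if n >= quorum)

# Three hypothetical agents with partially overlapping proposals.
agents = [{"TP53", "MDM2", "CDKN1A"},
          {"MDM2", "CDKN1A", "BAX"},
          {"CDKN1A", "TP53"}]
print(mechanistic_consensus(agents))  # ['CDKN1A', 'MDM2', 'TP53']
```

The appeal of consensus in low-data regimes, as the commentary notes, is that agreement across independent agents acts as an inductive bias that filters out any single agent's idiosyncratic errors.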
Analysis of the academic article "MechPert: Mechanistic Consensus as an Inductive Bias for Unseen Perturbation Prediction" reveals the following key developments, research findings, and policy signals relevant to AI & Technology Law practice area: MechPert, a lightweight framework, improves the accuracy of predicting transcriptional responses to unseen genetic perturbations by leveraging mechanistic consensus from multiple agents, which can be applied to the development of more effective AI-driven regulatory models in the life sciences. This research demonstrates the potential of consensus-based approaches to enhance the reliability and efficiency of AI-driven predictions in low-data regimes. The findings of MechPert's improved performance in predicting genetic perturbations and experimental design may inform the development of more robust and accurate AI-driven regulatory models in various industries, including biotechnology and pharmaceuticals.
**Jurisdictional Comparison and Analytical Commentary** The introduction of MechPert, a lightweight framework for predicting transcriptional responses to unseen genetic perturbations, has significant implications for the practice of AI & Technology Law, particularly in the areas of intellectual property, data protection, and liability. A comparison of the US, Korean, and international approaches reveals distinct differences in how these jurisdictions may address the development and deployment of AI-powered tools like MechPert. In the US, the development of MechPert may raise concerns under the Computer Fraud and Abuse Act (CFAA) and the Stored Communications Act (SCA), which govern the unauthorized access and use of computer systems and data. Additionally, the use of AI-powered tools in scientific research may implicate the Bayh-Dole Act, which regulates the ownership and commercialization of inventions arising from federally funded research. In Korea, the development of MechPert may be subject to the Act on the Development of Information and Communications Technology, which regulates the use of AI and big data in various industries, including healthcare and biotechnology. The Korean government has also established a framework for the responsible development and use of AI, which may influence the development and deployment of MechPert in the country. Internationally, the development of MechPert may be subject to the EU's General Data Protection Regulation (GDPR), which regulates the processing of personal data, including genetic data. The use of AI-powered tools in scientific research may also implicate the OECD
As an AI Liability & Autonomous Systems Expert, I'll provide domain-specific expert analysis of the article's implications for practitioners. The MechPert framework, which uses a consensus mechanism to aggregate predictions from multiple agents, may have significant implications for the development of autonomous systems in the field of gene regulation and experimental design. This is particularly relevant in the context of AI liability, as the framework's ability to predict transcriptional responses to unseen genetic perturbations could be seen as a form of autonomous decision-making. In terms of case law, statutory, or regulatory connections, this article may be relevant to the development of liability frameworks for autonomous systems, particularly in the context of scientific research and experimentation. For example, the National Science Foundation's (NSF) guidelines for responsible conduct of research (RCR) may be relevant to the use of MechPert in experimental design, as they emphasize the importance of transparency, accountability, and ethics in scientific research. Additionally, the article's focus on improving predictions in low-data regimes may be relevant to the development of liability frameworks for AI systems that operate in environments with limited data availability. In terms of specific statutes and precedents, the article may be relevant to the development of liability frameworks for AI systems in the context of scientific research and experimentation. For example, the US Supreme Court's decision in Daubert v. Merrell Dow Pharmaceuticals (1993) established a standard for the admissibility of expert testimony in court, which may be relevant to the use of
GREPO: A Benchmark for Graph Neural Networks on Repository-Level Bug Localization
arXiv:2602.13921v1 Announce Type: new Abstract: Repository-level bug localization-the task of identifying where code must be modified to fix a bug-is a critical software engineering challenge. Standard Large Language Models (LLMs) are often unsuitable for this task due to context window...
Analysis of the academic article for AI & Technology Law practice area relevance: The article introduces GREPO, a benchmark for Graph Neural Networks (GNNs) in repository-level bug localization, highlighting the potential of GNNs for software engineering challenges. Key legal developments include the increasing use of AI-powered tools in software development, which may lead to new liability considerations for software developers and AI model creators. Research findings suggest that GNNs can outperform traditional retrieval methods, but also raise questions about the reliability and accountability of AI-driven decision-making in software development. Relevant policy signals include the potential need for regulatory frameworks to address the use of AI-powered tools in software development, particularly in areas such as liability, data protection, and intellectual property. The article's findings may also inform discussions around the development of AI-powered software tools and the need for transparency and accountability in their use.
**Jurisdictional Comparison and Analytical Commentary on GREPO's Impact on AI & Technology Law Practice** The introduction of GREPO, a benchmark for Graph Neural Networks (GNNs) on repository-level bug localization, has significant implications for AI & Technology Law practice globally. In the United States, the development of GNNs and their applications in bug localization may raise concerns about intellectual property protection, particularly in the context of software development and open-source code repositories. In contrast, Korea's strong focus on artificial intelligence and data-driven innovation may lead to more favorable regulatory environments for the adoption and development of GNNs. Internationally, the European Union's General Data Protection Regulation (GDPR) and the upcoming Artificial Intelligence Act may influence the development and deployment of GNNs, particularly in the context of data processing and repository management. The GREPO benchmark's emphasis on graph-based data structures and direct GNN processing may also raise questions about data ownership, access, and control, which are critical considerations in AI & Technology Law. In terms of jurisdictional comparison, the US approach may be characterized by a more permissive regulatory environment, while Korea's approach may be more supportive of AI innovation. Internationally, the EU's regulatory framework may be more stringent, focusing on data protection and AI accountability. The GREPO benchmark's impact on AI & Technology Law practice will depend on how these jurisdictional approaches evolve and interact with the development of GNNs and their applications in bug localization. **Implications
As an AI Liability & Autonomous Systems Expert, I'd like to provide domain-specific expert analysis of the article's implications for practitioners. The article introduces GREPO, a benchmark for Graph Neural Networks (GNNs) on repository-level bug localization, which has significant implications for the development and deployment of AI-powered software engineering tools. From a liability perspective, the emergence of GNNs for bug localization raises questions about the responsibility of developers and manufacturers of AI-powered software engineering tools. The article highlights the potential of GNNs to outperform established information retrieval baselines, which may lead to increased reliance on these tools. However, the lack of a dedicated benchmark, such as GREPO, may have hindered the application of GNNs in the past. This raises concerns about the potential for AI-powered tools to introduce new bugs or exacerbate existing ones, particularly if they are not properly tested or validated. In terms of case law, statutory, or regulatory connections, the article's implications may be relevant to the following: * The US Federal Trade Commission (FTC) has issued guidance on the use of AI and machine learning, emphasizing the importance of transparency, accountability, and testing (FTC, 2020). GREPO may provide a valuable resource for developers and manufacturers to demonstrate the effectiveness and safety of their AI-powered software engineering tools. * The European Union's General Data Protection Regulation (GDPR) requires data controllers to implement appropriate technical and organizational measures to ensure the
Time-Series Classification with Multivariate Statistical Dependence Features
arXiv:2604.06537v1 Announce Type: new Abstract: In this paper, we propose a novel framework for non-stationary time-series analysis that replaces conventional correlation-based statistics with direct estimation of statistical dependence in the normalized joint density of input and target signals, the cross...
Attention Flows: Tracing LLM Conceptual Engagement via Story Summaries
arXiv:2604.06416v1 Announce Type: new Abstract: Although LLM context lengths have grown, there is evidence that their ability to integrate information across long-form texts has not kept pace. We evaluate one such understanding task: generating summaries of novels. When human authors...
Inference-Time Code Selection via Symbolic Equivalence Partitioning
arXiv:2604.06485v1 Announce Type: new Abstract: "Best-of-N" selection is a popular inference-time scaling method for code generation using Large Language Models (LLMs). However, to reliably identify correct solutions, existing methods often depend on expensive or stochastic external verifiers. In this paper,...
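The abstract's idea, replacing an external verifier in "Best-of-N" selection with equivalence partitioning over the candidates themselves, can be sketched with a behavioral stand-in: group candidate programs by their outputs on probe inputs and return a representative of the largest class (self-consistency as a proxy for correctness). The symbolic machinery of the paper is not reproduced; the probe-input approach below is an assumption for illustration.

```python
# Sketch of selection by behavioral equivalence partitioning, an assumed
# stand-in for the paper's symbolic approach: candidates are bucketed by
# their outputs on probe inputs, and the largest bucket wins.
def select_by_partition(candidates, probe_inputs):
    classes = {}
    for fn in candidates:
        signature = tuple(fn(x) for x in probe_inputs)  # behavioral fingerprint
        classes.setdefault(signature, []).append(fn)
    largest = max(classes.values(), key=len)
    return largest[0]

# Three "generated" implementations of absolute value; one is buggy.
cands = [lambda x: abs(x), lambda x: x if x >= 0 else -x, lambda x: x]
best = select_by_partition(cands, probe_inputs=[-2, 0, 3])
print(best(-2))  # 2: the majority (correct) class wins
```

Note the contrast with verifier-based Best-of-N: no ground-truth tests are consulted; agreement among independently sampled candidates does the selection.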
Atlassian launches visual AI tools and third-party agents in Confluence
Confluence users can now create visual assets within the software in addition to new third-party agents working with Lovable, Replit, and Gamma.
The Illusion of Stochasticity in LLMs
arXiv:2604.06543v1 Announce Type: new Abstract: In this work, we demonstrate that reliable stochastic sampling is a fundamental yet unfulfilled requirement for Large Language Models (LLMs) operating as agents. Agentic systems are frequently required to sample from distributions, often inferred from...
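The requirement the abstract names, that an agent asked to sample from a distribution actually reproduces it rather than collapsing onto the mode, is easy to state as an empirical check. The sketch below is a minimal illustration under assumed names; it is not the paper's evaluation protocol.

```python
from collections import Counter
import random

# Minimal empirical check of "reliable stochastic sampling": repeated draws
# from a sampler should match the requested distribution. Names and the
# frequency-count test are illustrative assumptions, not the paper's method.
def empirical_distribution(sampler, n=10_000, seed=0):
    random.seed(seed)
    counts = Counter(sampler() for _ in range(n))
    return {k: v / n for k, v in counts.items()}

target = {"a": 0.7, "b": 0.3}
faithful = lambda: random.choices(list(target), weights=target.values())[0]
mode_collapsed = lambda: "a"  # a "greedy" agent that always emits the mode

print(empirical_distribution(faithful))        # close to {'a': 0.7, 'b': 0.3}
print(empirical_distribution(mode_collapsed))  # {'a': 1.0}
```

The second sampler illustrates the failure mode at issue: it is a valid deterministic policy, but it does not sample from the requested distribution at all.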
Tubi is the first streamer to launch a native app within ChatGPT
Tubi becomes the first streaming service to offer an app integration within ChatGPT, the AI chatbot that millions of users turn to for answers.
LLM-Augmented Knowledge Base Construction For Root Cause Analysis
arXiv:2604.06171v1 Announce Type: new Abstract: Communications networks now form the backbone of our digital world, with fast and reliable connectivity. However, even with appropriate redundancy and failover mechanisms, it is difficult to guarantee "five 9s" (99.999%) reliability, requiring rapid...
Distributed Interpretability and Control for Large Language Models
arXiv:2604.06483v1 Announce Type: new Abstract: Large language models that require multiple GPU cards to host are usually the most capable models. It is necessary to understand and steer these models, but the current technologies do not support the interpretability and...
Stochastic Gradient Descent in the Saddle-to-Saddle Regime of Deep Linear Networks
arXiv:2604.06366v1 Announce Type: new Abstract: Deep linear networks (DLNs) are used as an analytically tractable model of the training dynamics of deep neural networks. While gradient descent in DLNs is known to exhibit saddle-to-saddle dynamics, the impact of stochastic gradient...
SensorPersona: An LLM-Empowered System for Continual Persona Extraction from Longitudinal Mobile Sensor Streams
arXiv:2604.06204v1 Announce Type: new Abstract: Personalization is essential for Large Language Model (LLM)-based agents to adapt to users' preferences and improve response quality and task performance. However, most existing approaches infer personas from chat histories, which capture only self-disclosed information...
The Stepwise Informativeness Assumption: Why are Entropy Dynamics and Reasoning Correlated in LLMs?
arXiv:2604.06192v1 Announce Type: new Abstract: Recent work uses entropy-based signals at multiple representation levels to study reasoning in large language models, but the field remains largely empirical. A central unresolved puzzle is why internal entropy dynamics, defined under the predictive...
AE-ViT: Stable Long-Horizon Parametric Partial Differential Equations Modeling
arXiv:2604.06475v1 Announce Type: new Abstract: Deep Learning Reduced Order Models (ROMs) are becoming increasingly popular as surrogate models for parametric partial differential equations (PDEs) due to their ability to handle high-dimensional data, approximate highly nonlinear mappings, and utilize GPUs. Existing...
Cross-Lingual Transfer and Parameter-Efficient Adaptation in the Turkic Language Family: A Theoretical Framework for Low-Resource Language Models
arXiv:2604.06202v1 Announce Type: new Abstract: Large language models (LLMs) have transformed natural language processing, yet their capabilities remain uneven across languages. Most multilingual models are trained primarily on high-resource languages, leaving many languages with large speaker populations underrepresented in both...
Toward a universal foundation model for graph-structured data
arXiv:2604.06391v1 Announce Type: new Abstract: Graphs are a central representation in biomedical research, capturing molecular interaction networks, gene regulatory circuits, cell--cell communication maps, and knowledge graphs. Despite their importance, currently there is not a broadly reusable foundation model available for...
FMI@SU ToxHabits: Evaluating LLMs Performance on Toxic Habit Extraction in Spanish Clinical Texts
arXiv:2604.06403v1 Announce Type: new Abstract: The paper presents an approach for the recognition of toxic habits named entities in Spanish clinical texts. The approach was developed for the ToxHabits Shared Task. Our team participated in subtask 1, which aims to...
Busemann energy-based attention for emotion analysis in Poincaré discs
arXiv:2604.06752v1 Announce Type: new Abstract: We present EmBolic - a novel fully hyperbolic deep learning architecture for fine-grained emotion analysis from textual messages. The underlying idea is that hyperbolic geometry efficiently captures hierarchies between both words and emotions. In our...
STDec: Spatio-Temporal Stability Guided Decoding for dLLMs
arXiv:2604.06330v1 Announce Type: new Abstract: Diffusion Large Language Models (dLLMs) have achieved rapid progress, viewed as a promising alternative to the autoregressive paradigm. However, most dLLM decoders still adopt a global confidence threshold, and do not explicitly model local context...
Consistency-Guided Decoding with Proof-Driven Disambiguation for Three-Way Logical Question Answering
arXiv:2604.06196v1 Announce Type: new Abstract: Three-way logical question answering (QA) assigns True/False/Unknown to a hypothesis H given a premise set S. While modern large language models (LLMs) can be accurate on isolated examples, we identify two recurring failure modes in...
Towards Accurate and Calibrated Classification: Regularizing Cross-Entropy From A Generative Perspective
arXiv:2604.06689v1 Announce Type: new Abstract: Accurate classification requires not only high predictive accuracy but also well-calibrated confidence estimates. Yet, modern deep neural networks (DNNs) are often overconfident, primarily due to overfitting on the negative log-likelihood (NLL). While focal loss variants...
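"Well-calibrated confidence estimates," as the abstract uses the term, is typically quantified by the expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its empirical accuracy. The equal-width binning below is the standard choice, not anything specific to this paper.

```python
import numpy as np

# Standard binned expected calibration error (ECE). The equal-width binning
# scheme is the usual convention, assumed here for illustration; it is not
# taken from the paper.
def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |avg confidence - empirical accuracy|, weighted by bin occupancy
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece

# Perfectly calibrated toy case: 80%-confident predictions right 80% of the time.
conf = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(round(expected_calibration_error(conf, hits), 3))  # 0.0
```

An overconfident model, for instance one that reports 0.9 confidence while being right only half the time, would score an ECE of 0.4 under this metric, which is the gap the abstract's NLL-overfitting argument is about.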
ART: Attention Replacement Technique to Improve Factuality in LLMs
arXiv:2604.06393v1 Announce Type: new Abstract: Hallucination in large language models (LLMs) continues to be a significant issue, particularly in tasks like question answering, where models often generate plausible yet incorrect or irrelevant information. Although various methods have been proposed to...
A Comparative Study of Demonstration Selection for Practical Large Language Models-based Next POI Prediction
arXiv:2604.06207v1 Announce Type: new Abstract: This paper investigates demonstration selection strategies for predicting a user's next point-of-interest (POI) using large language models (LLMs), aiming to accurately forecast a user's subsequent location based on historical check-in data. While in-context learning (ICL)...
LLM-based Schema-Guided Extraction and Validation of Missing-Person Intelligence from Heterogeneous Data Sources
arXiv:2604.06571v1 Announce Type: new Abstract: Missing-person and child-safety investigations rely on heterogeneous case documents, including structured forms, bulletin-style posters, and narrative web profiles. Variations in layout, terminology, and data quality impede rapid triage, large-scale analysis, and search-planning workflows. This paper...
Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models
arXiv:2604.06201v1 Announce Type: new Abstract: While most reading comprehension benchmarks for LLMs focus on factual information that can be answered by localizing specific textual evidence, many real-world tasks require understanding distributional information, such as population-level trends and preferences expressed across...
Asymptotic-Preserving Neural Networks for Viscoelastic Parameter Identification in Multiscale Blood Flow Modeling
arXiv:2604.06287v1 Announce Type: new Abstract: Mathematical models and numerical simulations offer a non-invasive way to explore cardiovascular phenomena, providing access to quantities that cannot be measured directly. In this study, we start with a one-dimensional multiscale blood flow model that...