| id | number | forum | title | abstract | content_TLDR | content_keywords | content_pdf | content_primary_area | content_supplementary_material | signatures |
|---|---|---|---|---|---|---|---|---|---|---|
uQKtwdJN0o
| 25,237
|
uQKtwdJN0o
|
FrugalRAG: Less is More in RL Finetuning for Multi-hop Question Answering
|
Reinforcement learning (RL) based on the final answer's reward has driven recent progress in small language models (SLMs) on reasoning-heavy tasks such as math and code. However, applying the same techniques to retrieval-augmented generation (RAG) benchmarks like multi-hop QA has yielded limited gains—often trailing supervised or prompting-only baselines. Instead, we argue that a viable path for RL in multi-hop QA is to use test-time scaling judiciously, optimizing both final answer accuracy and the efficiency of reaching that answer.
We propose FrugalRAG, a two-stage finetuning framework that adaptively _reduces_ the number of retrieval steps based on a question's difficulty. First, we train an SLM with supervised finetuning on a full-exploration policy that generates broad sub-queries. Then, we apply RL to adaptively prune search depth based on question difficulty, directly rewarding policies that balance correctness with frugality. Unlike prior approaches requiring 10× more data, our method achieves competitive performance with only ~1,000 examples. On HotPotQA and other multi-hop QA benchmarks, FrugalRAG attains state-of-the-art efficiency–accuracy tradeoffs, cutting retrieval cost nearly in half. Moreover, on the challenging BrowseCompPlus benchmark, it surpasses SLM-based and other baselines after training on only 200 examples. These results demonstrate the use of RL—not to increase reasoning steps but to reduce them—as an effective solution for scalable, efficient RAG.
| null |
['Multi-Hop RAG', 'Efficiency', 'Reasoning', 'SLMs']
|
/pdf/3f417a456608be44bbbe0d79021824c66645a981.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25237/Authors']
|
GnawtLKGkP
| 25,236
|
GnawtLKGkP
|
Any-step Generation via N-th Order Recursive Consistent Velocity Field Estimation
|
Recent advances in few-step generative models (typically $1$-$8$ steps), such as consistency models, have yielded impressive performance. However, their broader adoption is hindered by significant challenges, including substantial computational overhead, the reliance on complex multi-component loss functions, and intricate multi-stage training strategies that lack end-to-end simplicity. These limitations impede their scalability and stability, especially when applied to large-scale models.
To address these issues, we introduce **$N$-th order Recursive Consistent velocity field estimation for Generative Modeling (RCGM)**, a novel framework that unifies many existing approaches. Within this framework, we reveal that conventional one-step methods, such as consistency and MeanFlow models, are special cases of 1st-order RCGM. This insight enables a natural extension to higher-order scenarios ($N \geq 2$), which exhibit markedly improved training stability and achieve state-of-the-art (SOTA) performance.
For instance, on ImageNet $256\times256$, RCGM enables a $675\text{M}$ parameter diffusion transformer to achieve a $1.48$ FID score in just $2$ sampling steps. Crucially, RCGM facilitates the stable full-parameter training of a large-scale ($3.6\textrm{B}$) unified multi-modal model, attaining a $0.85$ GenEval score in $2$ steps. In contrast, conventional 1st-order approaches, such as consistency and MeanFlow models, typically suffer from training instability, model collapse, or memory constraints under comparable settings.
*Code will be publicly available.*
| null |
['Generative Models']
|
/pdf/15ab6b71b1d024e9934411c9d3377a01ee4edc77.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25236/Authors']
|
KmAu6XvQ5d
| 25,234
|
KmAu6XvQ5d
|
LoRA Fails under Non-IID Conditions: Rethinking Federated Low-Rank Adaptation
|
Low-Rank Adaptation (LoRA) has become a popular technique for memory-efficient fine-tuning of large models and has recently been adopted in federated learning (FL) due to its reduced parameter footprint. However, we show that LoRA significantly underperforms full-parameter fine-tuning (FFT) in FL, especially under non-IID client distributions. Our neural tangent kernel (NTK) analysis points to a simple cause: non-IID shifts diversify and misalign client gradients, increasing the effective rank (spectral energy) of the NTK / gradient-Gram matrix. Because LoRA commits to a fixed low-rank subspace, it cannot capture this additional structure; the induced kernel deviates and its spectral floor drops, leading to slower convergence and weaker generalization. Based on this finding, we argue that low-rank compression methods—such as GaLore—are inherently better suited for FL than low-rank reparameterization.
Motivated by this insight, we propose FedLore. On the client side, FedLore uses a GaLore-style optimizer while replacing SVD with randomized SVD to reduce computational overhead. On the server side, FedLore estimates a shared low-rank gradient from client updates and broadcasts it to configure each client’s GaLore projector, aligning update subspaces and mitigating drift under heterogeneity. Across NLU and vision tasks, FedLore consistently achieves higher accuracy and robustness under non-IID conditions than LoRA-based strategies, while using comparable or less memory.
|
LoRA fails under non-IID in federated learning; FedLoRe uses GaLore-style compression with randomized SVD and correction to improve memory, convergence, and robustness.
|
['Federated Learning', 'Non-IID Data', 'Low-Rank Method', 'LoRA']
|
/pdf/30aa23f673907cf1a1463636d6ab2ad901683194.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25234/Authors']
|
J04D9xBUCi
| 25,233
|
J04D9xBUCi
|
Bridging the Preference Gap: Post-Training Input Rewriting with Large Language Models
|
Pre-trained language models, such as BERT and RoBERTa, have achieved remarkable performance in semantic classification tasks. Yet, their effectiveness varies with different textual expressions due to inherent preferences developed during training. To address this limitation, we propose a framework that leverages large language models (LLMs) to rewrite input texts in ways that better align with a target classifier's preferences, thereby enhancing its performance. To achieve this, we introduce a training process for the LLM and an automated method for constructing training data that encapsulates the classifier-specific preferences. Furthermore, we present a multi-sampling and filtering strategy to address instability in LLM outputs. Empirical evaluations on semantic classification datasets demonstrate that our framework significantly improves classifier performance.
| null |
['textual entailment', 'natural language inference']
|
/pdf/a299b35537faee98a7172e8ec6160f4a245f083d.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25233/Authors']
|
NCLjpR2MDq
| 25,232
|
NCLjpR2MDq
|
From Broad Exploration to Stable Synthesis: Entropy-Guided Optimization for Autoregressive Image Generation
|
Combining Chain-of-Thought (CoT) with Reinforcement Learning (RL) improves text-to-image (T2I) generation, yet the underlying interaction between CoT's exploration and RL's optimization remains unclear. We present a systematic entropy-based analysis that yields three key insights: (1) CoT expands the generative exploration space, while RL contracts it toward high-reward regions; (2) final reward is strongly negatively correlated with both the mean and variance of image-token entropy, highlighting the need to reduce uncertainty and instability; and (3) the entropy of the textual CoT directly governs downstream image quality, with lower-entropy CoTs leading to better generations. Motivated by these findings, we propose Entropy-Guided Group Relative Policy Optimization (EG-GRPO), a fine-tuning strategy that reallocates optimization budget by uncertainty: low-entropy tokens are excluded from reward-driven updates to preserve stability, while high-entropy tokens receive an entropy bonus that encourages structured exploration without collapse. Experiments on standard T2I benchmarks demonstrate that EG-GRPO achieves state-of-the-art performance.
| null |
['Language Models', 'Autoregressive Image Generation', 'Chain-of-Thought']
|
/pdf/5bb4ab9e362b7e472faf33ee928f045bec5eb290.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25232/Authors']
|
cpHhVrrug4
| 25,231
|
cpHhVrrug4
|
Beyond Unidirectional Flow: LLM Reasoning with Bidirectional Cycle-Consistent CoT
|
Small-large model collaboration is a promising approach for efficient reasoning, where lightweight assistant models generate intermediate representations to guide larger, more capable models. However, this paradigm encounters two key challenges: \textbf{representation heterogeneity} between different model architectures and \textbf{unidirectional information flow} that prevents mutual learning.
Small assistant models and large base models develop distinct geometric structures for encoding similar concepts, making direct alignment difficult and leading to information degradation.
Additionally, unidirectional flow creates asymmetric dynamics where assistant models cannot benefit from large models' superior representational capacity.
We introduce \textbf{CycleCoT}, a bidirectional framework that addresses these bottlenecks through cycle-consistent soft thought alignment. Our approach uses dual residual transformation networks to establish invertible mappings between heterogeneous model spaces through three mechanisms: (1) expressive mappings between different model representations, (2) bidirectional alignment objectives enforcing semantic consistency in both directions, and (3) cycle consistency constraints preserving information during round-trip transformations. This enables large models' knowledge to enhance assistant models' soft thought generation, creating symbiotic collaboration. Evaluation on LLaMA-$3.1$-$8$B-Instruct and Qwen$2.5$-$7$B-Instruct across mathematical, commonsense, and symbolic reasoning benchmarks demonstrates consistent improvements over unidirectional baselines, with gains up to $5.5\%$ on mathematical reasoning tasks.
Our analysis reveals that alignment quality surpasses quantity: fewer, well-aligned soft thoughts outperform longer sequences.
| null |
['Reasoning', 'LLM', 'Chain-of-Thoughts']
|
/pdf/8f76bbda9397a631f28c093ea9b136f940a1d1c2.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25231/Authors']
|
KjxdIG4z84
| 25,230
|
KjxdIG4z84
|
MetaFlow: A Meta Approach of Training LLMs into Generalizable Workflow Generators
|
Large language models (LLMs) excel across a wide range of tasks, yet their instance-specific solutions often lack the structural consistency needed for reliable deployment. Workflows that encode recurring algorithmic patterns at the task level provide a principled framework, offering robustness across instance variations, interpretable traces for debugging, and reusability across problem instances. However, manually designing such workflows requires significant expertise and effort, limiting their broader application. While automatic workflow generation could address this bottleneck, existing methods either produce instance-specific solutions without learning task-level patterns, or cannot generalize beyond their training configurations. We present MetaFlow, which casts workflow generation as a meta-learning problem: given a task and an operator set, the model learns to compose solution strategies. MetaFlow trains in two stages—supervised fine-tuning on synthetic workflow data, followed by reinforcement learning with verifiable rewards (RLVR) that uses execution feedback across problem instances in the task to improve end-to-end success. The resulting model produces effective workflows for trained tasks and exhibits strong generalization to untrained tasks and novel operator sets. Across benchmarks in question answering, code generation, and mathematical reasoning, MetaFlow achieves performance comparable to state-of-the-art baselines on in-domain tasks with single inference, while demonstrating remarkable zero-shot generalization capabilities on out-of-domain tasks and operator sets.
|
We train LLMs to generate reusable workflows for entire task classes, demonstrating strong generalization to unseen tasks and novel operators through meta-learning with verifiable execution feedback.
|
['LLM Agent; Workflow Generation; Reinforcement learning; Meta Learning']
|
/pdf/2c294495ff494031c0a23084e101028d41343b03.pdf
|
applications to robotics, autonomy, planning
| null |
['ICLR.cc/2026/Conference/Submission25230/Authors']
|
FHXvxKGpdv
| 25,229
|
FHXvxKGpdv
|
UPER: Bridging the Perception Gap in Personalized Image Generation with Human-Aligned Reinforcement Learning
|
Personalized image generation aims to synthesize novel scenes featuring a specific user-provided subject. However, state-of-the-art models often fail to preserve the fine-grained details that define a subject's unique identity, a critical flaw that limits their use in high-fidelity applications. This "consistency gap" arises from a misalignment between the model's learned similarity metric and nuanced human perception. To address this, we introduce \textbf{UPER} (\textbf{U}nifying \textbf{P}ost-training for P\textbf{er}sonalization), a post-training framework designed to align generative models with human preferences for detail consistency. UPER employs a two-stage process: it first refines the model's focus on the subject's core attributes via Supervised Fine-Tuning (SFT) on a dataset with cleaned background information. Subsequently, it optimizes the model using Reinforcement Learning (RL) with a novel composite reward function. The key component of this function is a new patch-based consistency metric that accurately measures subject fidelity using only pre-trained vision encoders, eliminating the need for expensive preference data collection. We apply UPER to the state-of-the-art OminiControl model. The results are unequivocal: in a blind user study with over 1,000 responses, images generated by our final model were preferred for their overall quality and subject consistency \textbf{89.3\%} of the time over the strong baseline. Our work provides a robust and scalable solution to the detail-consistency challenge, paving the way for more faithful personalized generation.
| null |
['RLHF', 'Personalization']
|
/pdf/ba07636d36e493146275799ff2052a2a763ff78e.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25229/Authors']
|
T7vcbdwHYH
| 25,228
|
T7vcbdwHYH
|
CL2GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction
|
The growing demand for automated writing assistance in scientific domains highlights the need for robust Chinese Grammatical Error Correction (CGEC) systems that can adapt across disciplines. However, existing CGEC research lacks dedicated benchmarks for academic writing and overlooks continual learning as a solution to handle domain-specific variation. To fill this gap, we introduce CL2GEC, a Continual Learning benchmark for Chinese Literature Grammatical Error Correction, designed to evaluate adaptive CGEC across multiple academic fields. Our benchmark includes 10,000 human-annotated sentences spanning 10 disciplines, each exhibiting distinct linguistic styles and error patterns. We evaluate large language models under sequential tuning, parameter-efficient adaptation, and representative continual learning strategies, using both standard GEC metrics and continual learning metrics adapted to task-level variation. Experimental results show that regularization-based continual learning methods, such as OGD and GEM, outperform replay-based and sequential approaches in both grammatical accuracy and knowledge retention. These findings underscore the feasibility and importance of integrating continual learning into CGEC and position our benchmark as a foundation for future research on adaptive scientific writing assistance.
|
CL2GEC is a new benchmark for Chinese academic GEC across 10 disciplines; results show that regularization-based continual learning significantly outperforms replay and sequential tuning in both grammatical accuracy and knowledge retention.
|
['Chinese Grammatical Error Correction;Benchmark Evaluation;Continual Learning;Large Language Models']
|
/pdf/f4aa6dc313ba60e31d9724d0bd7aa27a688e9a3f.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25228/Authors']
|
JX5imb3E2V
| 25,225
|
JX5imb3E2V
|
Improving expressivity in Link Prediction with GNNs via the Shortest Path
|
Graph Neural Networks (GNNs) often fail to capture the link-specific structural patterns essential for accurate link prediction, since their node-centric message passing might overlook the subgraph structures connecting two nodes. Prior attempts to inject such structural context either suffer from high computational cost or rely on oversimplified heuristics (e.g., common neighbor counts) that cannot capture multi-hop dependencies. We propose SP4LP (Shortest Path for Link Prediction), a new framework that integrates GNN-based node encodings with sequence modelling over shortest paths. Specifically, SP4LP first computes node representations with a GNN, then extracts the shortest path between each candidate node pair and processes the sequence of node embeddings with a sequence model. This design allows SP4LP to efficiently capture expressive multi-hop relational patterns. Theoretically, we show that SP4LP is strictly more expressive than both standard message-passing GNNs and several leading structural feature methods, positioning it as a general and principled framework for link prediction in graphs. Empirically, SP4LP sets a new state of the art on many standard link prediction benchmarks.
| null |
['graph neural networks', 'expressivity', 'shortest path']
|
/pdf/1cedfcdd94cf27fdb350f59fb822f01f2a2074fc.pdf
|
learning on graphs and other geometries & topologies
|
/attachment/5f0bb8cb4a2ffe2c72385e2c59fef93a917c0e08.zip
|
['ICLR.cc/2026/Conference/Submission25225/Authors']
|
rjhF7b7n6g
| 25,222
|
rjhF7b7n6g
|
Evaluating Dataset Watermarking for Fine-tuning Traceability of Customized Diffusion Models: A Comprehensive Benchmark and Removal Approach
|
Recently, numerous fine-tuning techniques for diffusion models have been developed, enabling diffusion models to generate content that closely resembles a specific image set, such as specific facial identities and artistic styles. However, this advancement also poses potential security risks. The primary risk comes from copyright violations due to using public domain images without authorization to fine-tune diffusion models. Furthermore, if such models generate harmful content linked to the source images, tracing the origin of the fine-tuning data is crucial to clarify responsibility. To achieve fine-tuning traceability of customized diffusion models, dataset watermarking for diffusion models has been proposed, involving embedding imperceptible watermarks into images that require traceability. Notably, even after using the watermarked images to fine-tune diffusion models, the watermarks remain detectable in the generated outputs. However, existing dataset watermarking approaches lack a unified framework for performance evaluation, thereby limiting their effectiveness in practical scenarios. To address this gap, this paper first establishes a generalized threat model and subsequently introduces a comprehensive framework for evaluating dataset watermarking methods, comprising three dimensions: Universality, Transmissibility, and Robustness. Our evaluation results demonstrate that existing methods exhibit universality across diverse fine-tuning approaches and tasks, as well as transmissibility even when only a small proportion of watermarked images is used. In terms of robustness, existing methods show good performance against common image processing operations, but this does not match real-world threat scenarios. To address this issue, this paper proposes a practical watermark removal method that can completely remove dataset watermarks without affecting fine-tuning, revealing their vulnerabilities and pointing to a new challenge for future research.
|
This paper first establishes a generalized threat model and subsequently introduces a comprehensive framework for evaluating dataset watermarking methods, comprising three dimensions: Universality, Transmissibility, and Robustness.
|
['Dataset Watermarking; Diffusion Model; Copyright Protection']
|
/pdf/70aaa80aced760bd7dfa02f43cc86fcaf4761886.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25222/Authors']
|
qPbDM5L8tE
| 25,221
|
qPbDM5L8tE
|
Contact-VLA: Zero-Shot Planning and Control for Contact-Rich Manipulation
|
Vision-Language-Action (VLA) systems often lack adaptability and explainability due to their black-box structure and dependency on fixed action sets from extensive tele-operated datasets, limiting their effectiveness in complex, dynamic manipulation scenarios. To address this issue, we propose a novel VLA framework capable of effectively managing complex, dynamic, and contact-rich manipulation tasks. By integrating foundational vision and language models with motion planning and reactive controllers, our system achieves zero-shot planning and adaptive manipulation without relying on extensive tele-operated action datasets. Unlike conventional VLAs, we explicitly separate the roles of vision models and Large Language Models (LLM): the vision module handles scene initialization and object pose tracking, while the LLM generates initial contact strategies and cost function estimations. These two components collaboratively contribute to the creation of a simulated environment in which our dynamic planner operates. Additionally, this modular approach significantly enhances both the explainability and performance of the overall framework, as demonstrated by ablation studies. Furthermore, we introduce a memory unit to leverage past manipulation experiences, enabling the generalization and efficient reuse of learned contact strategies and parameter adjustments across diverse manipulation scenarios. Experiments conducted on challenging contact-rich tasks validate our framework's robustness and highlight the critical design elements that contribute to its effectiveness.
|
Contact-VLA is a modular framework that integrates vision-based scene modeling, LLM-driven strategy generation, and dynamic planning to enable zero-shot adaptive manipulation in contact-rich tasks.
|
['Vision-Language-Action model', 'robotic manipulation', 'contact-rich manipulation', 'manipulation planning', 'robot learning']
|
/pdf/177b4548f7c40b825c9f089e36f13a9c7371adf2.pdf
|
applications to robotics, autonomy, planning
| null |
['ICLR.cc/2026/Conference/Submission25221/Authors']
|
H2NG2dNN2K
| 25,220
|
H2NG2dNN2K
|
Q-FSRU: Quantum-Augmented Frequency-Spectral Fusion for Medical Visual Question Answering
|
Solving tough clinical questions that require both image and text understanding is still a major challenge in healthcare AI. In this work, we propose Q-FSRU, a new model that combines Frequency Spectrum Representation and Fusion (FSRU) with a method called Quantum Retrieval-Augmented Generation (Quantum RAG) for medical Visual Question Answering (VQA). The model takes in features from medical images and related text, then shifts them into the frequency domain using Fast Fourier Transform (FFT). This helps it focus on more meaningful data and filter out noise or less useful information. To improve accuracy and ensure that answers are based on real knowledge, we add a quantum-inspired retrieval system. It fetches useful medical facts from external sources using quantum-based similarity techniques. These details are then merged with the frequency-based features for stronger reasoning. We evaluated our model using the VQA-RAD dataset, which includes real radiology images and questions. The results showed that Q-FSRU outperforms earlier models, especially on complex cases needing image-text reasoning. The mix of frequency and quantum information improves both performance and explainability. Overall, this approach offers a promising way to build smart, clear, and helpful AI tools for doctors.
|
We propose Q-FSRU, a medical VQA model that fuses frequency-domain features with quantum-inspired retrieval, achieving superior accuracy and explainability on complex radiology questions.
|
['Medical VQA', 'Frequency Spectrum Representation', 'Fast Fourier Transform (FFT)', 'Quantum Retrieval-Augmented Generation', 'Image-text reasoning', 'Radiology', 'Explainable AI', 'Clinical decision support']
|
/pdf/68e20b6688afd58ab8e10ef0ee1eb40b4f5dfa61.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25220/Authors']
|
UyiTjp0oKU
| 25,219
|
UyiTjp0oKU
|
Gaze Following in Question Answering: A Comprehensive Benchmark for Vision-Language Models
|
Gaze following aims to infer human intention within scene images. Conventional methods typically rely on scene and face images to regress the gaze point coordinates, which is unnatural and restrictive. Recently, vision-language models (VLMs) have attracted significant attention for their powerful reasoning abilities, raising an important question: can VLMs be leveraged to advance gaze following? In this work, we introduce GazeVQA, the first large-scale text-image dataset for VLM-based gaze following. GazeVQA is the first to provide accurate textual annotations for both observers and gaze targets, along with natural language question-answering (QA) pairs tailored for the gaze following task. The dataset contains 410K QA pairs across 102K scene images, offering rich supervision for training and evaluating VLMs. Building on GazeVQA, we establish the first benchmark for VLM-based gaze following. Experiments demonstrate that existing VLMs exhibit limited zero-shot performance on gaze following. However, with training on our dataset, their performance improves significantly, demonstrating the potential of GazeVQA to drive progress in this area. We will release the dataset and code to facilitate future research.
|
We present GazeVQA, the first large-scale text-image dataset for VLM-based gaze following.
|
['Gaze Following', 'Vision-Language Model']
|
/pdf/02b5e7e682c97906da5aa366d0d1998875116621.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25219/Authors']
|
fvyzZfhvTG
| 25,218
|
fvyzZfhvTG
|
Causal Scaffolding for Physical Reasoning: A Benchmark for Causally-Informed Physical World Understanding in VLMs
|
Understanding and reasoning about the physical world is the foundation of intelligent behavior, yet state-of-the-art vision-language models (VLMs) still fail at causal physical reasoning, often producing plausible but incorrect answers. To systematically address this gap, we introduce CausalPhys, a benchmark of over 3,000 carefully curated video- and image-based questions spanning four domains: Perception, Anticipation, Intervention, and Goal Orientation. Each question is paired with a causal graph that captures underlying interactions and dependencies, enabling fine-grained and interpretable evaluation. We further propose a causal-graph-grounded metric that verifies whether a model’s chain-of-thought reasoning follows correct causal relations, moving beyond answer-only accuracy. Systematic evaluations of leading VLMs on CausalPhys expose consistent failures to capture causal dependencies, underscoring fundamental weaknesses in their physical reasoning. To overcome these shortcomings, we introduce a Causal Rationale-informed Fine-Tuning strategy (CRFT) that scaffolds VLM reasoning with causal graphs. Extensive experiments show that CRFT significantly improves both reasoning accuracy and interpretability across multiple backbones. By combining diagnostic evaluation with causality-informed fine-tuning, this work establishes a foundation for advancing VLMs toward causally grounded physical reasoning.
| null |
['physical reasoning', 'causality', 'VLM']
|
/pdf/86c2713222d14948f95caa2a4157913d2d7b7049.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25218/Authors']
|
1ndthBqbyK
| 25,217
|
1ndthBqbyK
|
TSDINO: Teacher–Student Self-Distillation Framework for Robust Pre-training of Time-Series Foundation Models
|
Building time-series foundation models (TSFM) poses challenges in terms of learning stability due to limited data availability and heterogeneous temporal dynamics across various time-series datasets. We propose TSDINO, a teacher-student framework for robust pre-training of TSFM based on the principle of self-distillation with no labels. TSDINO offers a model-agnostic approach that combines two complementary objectives: (i) feature preservation under augmentations and (ii) masked patch prediction. A meta-architecture comprising teacher-student networks and projection heads enables adaptation to various models. We evaluate TSDINO on classification and forecasting tasks using diverse publicly available benchmarking datasets. TSDINO consistently achieves competitive zero-shot performance over gradient-based pre-training.
| null |
['time series', 'self-distillation', 'time-series foundation models']
|
/pdf/43b325484b4b3f4b77f48594f8c1a9cdfc8f6fef.pdf
|
learning on time series and dynamical systems
| null |
['ICLR.cc/2026/Conference/Submission25217/Authors']
|
XEkQu1ZWGN
| 25,214
|
XEkQu1ZWGN
|
ChemBOMAS: Accelerated Bayesian Optimization for Scientific Discovery in Chemistry with LLM-Enhanced Multi-Agent System
|
Bayesian optimization (BO) is a powerful tool for scientific discovery in chemistry, yet its efficiency is often hampered by the sparse experimental data and vast search space. Here, we introduce ChemBOMAS: a large language model (LLM)-enhanced multi-agent system that accelerates BO through synergistic data- and knowledge-driven strategies. Firstly, the data-driven strategy involves an 8B-scale LLM regressor fine-tuned on a mere 1% of labeled samples for pseudo-data generation, robustly initializing the optimization process. Secondly, the knowledge-driven strategy employs a hybrid Retrieval-Augmented Generation approach to guide the LLM in dividing the search space while mitigating LLM hallucinations. An Upper Confidence Bound algorithm then identifies high-potential subspaces within this established partition. Across the LLM-refined subspaces and supported by LLM-generated data, BO achieves improved effectiveness and efficiency. Comprehensive evaluations across multiple scientific benchmarks demonstrate that ChemBOMAS sets a new state of the art, accelerating optimization efficiency by up to 5-fold compared to baseline methods.
|
ChemBOMAS is an LLM-enhanced multi-agent system that synergistically integrates data-driven pseudo-data generation with knowledge-driven search space partitioning to accelerate Bayesian optimization for scientific discovery by up to ten times.
|
['Bayesian Optimization', 'Data Augmentation', 'Knowledge-Driven Strategy', 'Large Language Model', 'AI4Science']
|
/pdf/8dd18e5d273f76d1d01b0ee673570c6ae8089c0b.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
/attachment/4ae4fa65e8ffe9e397217f3397a786a52ac60a9e.zip
|
['ICLR.cc/2026/Conference/Submission25214/Authors']
|
PzhNnMepgl
| 25,210
|
PzhNnMepgl
|
Stopping Computation for Converged Tokens in Masked Diffusion-LM Decoding
|
Masked Diffusion Language Models generate sequences via iterative sampling that progressively unmasks tokens. However, they still recompute the attention and feed-forward blocks for every token position at every step---even when many unmasked tokens are essentially fixed, resulting in substantial waste in compute.
We propose \textbf{\textsc{SureLock}}: when the posterior at an unmasked position has stabilized across steps (our \emph{sure} condition), we \emph{lock} that position---thereafter skipping its query projection and feed-forward sublayers---while caching its attention keys and values so other positions can continue to attend to it.
This reduces the dominant per-iteration computational cost from $O(N^2d)$ to $O(MNd)$ where $N$ is the sequence length, $M$ is the number of unlocked token positions, and $d$ is the model dimension.
In practice, $M$ decreases as the iteration progresses, yielding substantial savings.
On LLaDA-8B, SureLock reduces algorithmic FLOPs by 30--50\% relative to the same sampler without locking,
while maintaining comparable generation quality.
We also provide a theoretical analysis to justify the design rationale of SureLock: monitoring only the local KL at the lock step suffices to bound the deviation in final token probabilities.
| null |
['diffusion language models', 'compute efficient sampling', 'skipping compute', 'adaptive attention']
|
/pdf/abe2cdfb241311eb87db18a3fd14e5d7734fa827.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25210/Authors']
|
IgZWU75BLL
| 25,208
|
IgZWU75BLL
|
SuRe: Surprise-Driven Prioritised Replay for Continual LLM Learning
|
Continual learning, one's ability to adapt to a sequence of tasks without forgetting previously acquired knowledge, remains a major challenge in machine learning and a key gap between artificial and human intelligence. While regularisation and replay perform well in vision, they lag behind multi-task learning for large language models (LLMs), especially at scale with many tasks. We revisit replay and argue that two failure modes drive this gap: selection (what to rehearse) and integration (how to consolidate new knowledge). To address selection, we propose Surprise-prioritised Replay (SuRe), a simple, architecture-agnostic rule that ranks and stores the most surprising (high Negative Log-Likelihood) sequences. SuRe alone achieves state-of-the-art results on both the Standard CL and the Large Number of Tasks (LNT) benchmarks. To address integration, we add a dual-learner design with fast and slow LoRA adapters merged via an exponential moving average (EMA), enabling rapid adaptation while stabilising long-term knowledge. Combining SuRe with the dual learner yields further gains, including improvements of up to +5 accuracy points on LNT over prior SOTA. Ablation studies confirm that our proposed method remains robust under reduced replay frequency and small buffer size, demonstrating both effectiveness and sample efficiency. Taken together, our results establish replay as a strong baseline for LLM continual learning and demonstrate that surprise-based selection and slow-weight consolidation are complementary components for mitigating catastrophic forgetting.
| null |
['continual learning', 'large language models', 'replay', 'surprise']
|
/pdf/443585eb9cb3b989527ce9bec62ed513904af7bb.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25208/Authors']
|
vanVyHsl30
| 25,207
|
vanVyHsl30
|
ADVMEM: Adversarial Memory Initialization for Realistic Test-Time Adaptation via Tracklet-Based Benchmarking
|
We introduce a novel tracklet-based dataset for benchmarking test-time adaptation (TTA) methods. The aim of this dataset is to mimic the intricate challenges encountered in real-world environments such as images captured by hand-held cameras, self-driving cars, etc. The current benchmarks for TTA focus on how models face distribution shifts when deployed, and on violations of the customary independent-and-identically-distributed (i.i.d.) assumption in machine learning. Yet, these benchmarks fail to faithfully represent realistic scenarios that naturally display temporal dependencies, such as how consecutive frames from a video stream likely show the same object across time. We address this shortcoming of current datasets by proposing a novel TTA benchmark we call the "Inherent Temporal Dependencies" (ITD) dataset. We ensure the instances in ITD naturally embody temporal dependencies by collecting them from tracklets: sequences of object-centric images we compile from the bounding boxes of an object-tracking dataset. We use ITD to conduct a thorough experimental analysis of current TTA methods, and shed light on the limitations of these methods when faced with the challenges of temporal dependencies. Moreover, we build upon these insights and propose a novel adversarial memory initialization strategy to improve memory-based TTA methods. We find this strategy substantially boosts the performance of various methods on our challenging benchmark.
| null |
['test time adaptation']
|
/pdf/aff03dae2dfc45760d358ce17dabfbb672e8a10a.pdf
|
transfer learning, meta learning, and lifelong learning
|
/attachment/a470b0d1950a0daf5e913808759137fcb33bb516.pdf
|
['ICLR.cc/2026/Conference/Submission25207/Authors']
|
JGvOicAo3g
| 25,206
|
JGvOicAo3g
|
GMTS: Gradient Magnitude-based Token Selection Improves RLVR Training for LLM Reasoning
|
Reinforcement learning (RL) has recently emerged as a central paradigm for enhancing large language models' (LLMs) reasoning abilities. State-of-the-art RL with Verifiable Rewards (RLVR) methods have demonstrated remarkable effectiveness in mathematical reasoning tasks. Recent studies suggest that high-entropy tokens play an exceptionally important role in model training, since training with only the highest 20\% entropy tokens yields significant performance gains. In this work, we find that while high-entropy tokens within one answer tend to correlate with large gradient magnitude, entropy alone fails to consistently reflect token importance across different answers, considering the variations in the answer-level reward signals. Based on this observation, we introduce the **G**radient **M**agnitude-based **T**oken **S**election (GMTS) method to quantify token importance. We find that training with the top 20\% tokens ranked by GMTS achieves substantially better performance than entropy-based selection on well-known math benchmarks (**+1.55** on Qwen2.5-math-1.5B, **+1.33** on Qwen2.5-math-7B, **+1.85** on Qwen3-8B models). These findings indicate that GMTS provides a more refined quantification than entropy, thereby improving the performance of RLVR training.
| null |
['Reinforcement Learning', 'Large Language Models', 'RL with Verifiable Rewards', 'Gradient Magnitude-based Token Selection', 'Mathematical Reasoning']
|
/pdf/36af2c9b9e61b6782b4c9eac76e467c30a90f95a.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25206/Authors']
|
r0L9GwlnzP
| 25,205
|
r0L9GwlnzP
|
Do LLM Agents Know How to Ground, Recover, and Assess? A Benchmark for Epistemic Competence in Information-Seeking Agents
|
Recent work has explored training Large Language Model (LLM) search agents with reinforcement learning (RL) for open-domain question answering (QA). However, most evaluations focus solely on final answer accuracy, overlooking how these agents reason with and act on external evidence.
We introduce **SeekBench**, the first benchmark for evaluating the *epistemic competence* of LLM search agents through step-level analysis of their response traces. **SeekBench** comprises 190 expert-annotated traces with over 1,800 response steps generated by LLM search agents, each enriched with evidence annotations for granular analysis of whether agents (1) generate reasoning steps grounded in observed evidence, (2) adaptively reformulate searches to recover from low-quality results, and (3) have proper calibration to correctly assess whether the current evidence is sufficient for providing an answer.
Our analysis of state-of-the-art LLM search agents reveals critical behavioral gaps overlooked by traditional metrics, including specialized skills like Search-R1's synthesis capabilities. These findings expose distinct epistemic competencies that accuracy-only evaluations fail to capture, providing guidance for developing more capable and reliable agents.
| null |
['Epistemic Competence', 'Evidence-Grounded Reasoning', 'LLM Search Agents']
|
/pdf/c547f40238fb9943de02ac8f53b72fb41b0531d9.pdf
|
datasets and benchmarks
|
/attachment/c3aa65e5ce3fe95ae94f6c2d606567dbfbf2d6f7.zip
|
['ICLR.cc/2026/Conference/Submission25205/Authors']
|
fY4proGNFD
| 25,201
|
fY4proGNFD
|
Reframing attention as a reinforcement learning problem for causal discovery
|
Formal frameworks of causality have operated largely in parallel with modern trends in deep reinforcement learning (RL). However, there has been a revival of interest in formally grounding the representations learned by neural networks in causal concepts. Yet, most attempts at neural models of causality assume static causal graphs and ignore the dynamic nature of causal interactions. In this work, we introduce the Causal Process framework as a novel theory for representing dynamic hypotheses about causal structure. Furthermore, we present the Causal Process Model as an implementation of this framework. Leveraging the inherent causality of the RL framework, we reformulate the attention mechanism from Transformer networks within an RL setting to infer interpretable causal processes from visual observations. Here, causal inference corresponds to constructing a causal graph hypothesis, which itself becomes an RL task nested within the original RL problem. To create an instance of such a hypothesis, we employ RL agents. These agents establish links between units, similar to the Transformer attention mechanism. We demonstrate the effectiveness of our approach in an RL environment where we outperform current alternatives in causal representation learning and agent performance.
| null |
['Causal World Models', 'Causal Reinforcement Learning', 'Causal Processes', 'Causal Representation Learning']
|
/pdf/20c0fe367ef3298d78a026a646e369b25df057ff.pdf
|
causal reasoning
| null |
['ICLR.cc/2026/Conference/Submission25201/Authors']
|
7dTqUaY2Kl
| 25,200
|
7dTqUaY2Kl
|
JailNewsBench: Multi-Lingual and Regional Benchmark for Fake News Generation under Jailbreak Attacks
|
Fake news undermines societal trust and decision-making across politics, economics, health, and international relations, and in extreme cases threatens human lives and societal safety.
Because fake news reflects region-specific political, social, and cultural contexts and is expressed in language, evaluating the risks of large language models (LLMs) requires a multi-lingual and regional perspective.
Malicious users can bypass safeguards through jailbreak attacks, inducing LLMs to generate fake news.
However, no benchmark currently exists to systematically assess attack resilience across languages and regions.
Here, we propose JailNewsBench, the first benchmark for evaluating LLM robustness against jailbreak-induced fake news generation.
JailNewsBench spans 34 regions and 22 languages, covering 8 evaluation sub-metrics through LLM-as-a-Judge and 5 jailbreak attacks, with approximately 300k instances.
Our evaluation of 9 LLMs reveals that the maximum attack success rate reached 86.3% and the maximum harmfulness score was 3.5 out of 5.
Notably, the attack success rate and generation quality were significantly higher for English and U.S.-related topics compared to other languages and regions, underscoring the urgent need for multi-lingual and region-aware evaluation.
In addition, our analysis shows that coverage of fake news in existing safety datasets is limited and less well defended than major categories such as toxicity and social bias.
| null |
['fake news', 'jailbreak', 'llm', 'multilingual']
|
/pdf/46ddaf0d6431d8d7b514f865e674a8790201c14a.pdf
|
datasets and benchmarks
|
/attachment/d82c5ed915ad8b78b4db0a8f1b3c39fa9bafe3ce.zip
|
['ICLR.cc/2026/Conference/Submission25200/Authors']
|
I94Eg6cu7P
| 25,199
|
I94Eg6cu7P
|
SRT: Super-Resolution for Time Series via Disentangled Rectified Flow
|
Fine-grained time series data with high temporal resolution is critical for accurate analytics across a wide range of applications. However, the acquisition of such data is often limited by cost and feasibility. This problem can be tackled by reconstructing high-resolution signals from low-resolution inputs based on specific priors, known as super-resolution. While extensively studied in computer vision, directly transferring image super-resolution techniques to time series is not trivial. To address this challenge at a fundamental level, we propose **S**uper-**R**esolution for **T**ime series (SRT), a novel framework that reconstructs temporal patterns lost in low-resolution inputs via disentangled rectified flow. SRT decomposes the input into trend and seasonal components, aligns them to the target resolution using an implicit neural representation, and leverages a novel cross-resolution attention mechanism to guide the generation of high-resolution details. We further introduce SRT-large, a scaled-up version with extensive pretraining, which enables strong zero-shot super-resolution capability. Extensive experiments on nine public datasets demonstrate that SRT and SRT-large consistently outperform existing methods across multiple scale factors, showing both robust performance and the effectiveness of each component in our architecture.
|
We propose SRT, a novel disentangled rectified flow framework for time series super-resolution that generates high-resolution details from low-resolution data, achieving state-of-the-art performance across nine benchmarks.
|
['Time Series Super-Resolution', 'Rectified Flow', 'Temporal Disentanglement', 'Implicit Neural Representations']
|
/pdf/f649aac3d9ad7af140cf5212759ff2de52fff908.pdf
|
learning on time series and dynamical systems
|
/attachment/fd586e88954925f5784fc86bd7ae47790b259d3c.zip
|
['ICLR.cc/2026/Conference/Submission25199/Authors']
|
kqT4pcOT10
| 25,197
|
kqT4pcOT10
|
Emergent Bayesian Behaviour and Optimal Cue Combination in LLMs
|
Large language models (LLMs) excel at explicit reasoning, but their implicit computational strategies remain underexplored.
Decades of psychophysics research show that humans intuitively process and integrate noisy signals using near-optimal Bayesian strategies in perceptual tasks. We ask whether LLMs exhibit similar behaviour and perform optimal multimodal integration without explicit training or instruction. Adopting the psychophysics paradigm, we infer computational principles of LLMs from systematic behavioural studies. We introduce a behavioural benchmark - BayesBench: four magnitude-estimation tasks (length, location, distance, and duration) over text and image, inspired by classic psychophysics, and evaluate a diverse set of nine LLMs alongside human judgements for calibration. Through controlled ablations of noise, context, and instruction prompts, we measure performance, behaviour and efficiency in multimodal cue-combination. Beyond accuracy and efficiency metrics, we introduce a Bayesian Consistency Score that detects Bayes-consistent behavioural shifts even when accuracy saturates. Our results show that high task accuracy - notably for GPT-5 Mini - does not always imply efficient cue combination; yet accurate models, including GPT-5 Mini, Llama 4 Maverick, and Claude 3.7 Sonnet, often adapt in Bayes-consistent ways. These findings reveal emergent principled handling of uncertainty and highlight the correlation between accuracy and Bayesian tendencies. We release our psychophysics benchmark and consistency metric as evaluation tools and to inform future multimodal architecture designs.
| null |
['Large Language Models (LLMs)', 'Psychophysics', 'Bayesian Inference', 'Cue Combination', 'Emergent Abilities', 'LLM Evaluation', 'Uncertainty Quantification']
|
/pdf/14af32e162d85d8e1ab794b63c7d55375cfb4e2d.pdf
|
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25197/Authors']
|
cKNOCYPo2W
| 25,196
|
cKNOCYPo2W
|
Conditioned Initialization for Attention
|
Transformers are a dominant architecture in modern machine learning, powering applications across vision, language, and beyond. At the core of their success lies the attention layer, where the query, key, and value matrices determine how token dependencies are captured. While considerable work has focused on scaling and optimizing Transformers, comparatively little attention has been paid to how the weights of the queries, keys and values are initialized. Common practice relies on random initialization or alternatives such as mimetic initialization, which imitates weight patterns from converged models, and weight selection, which transfers weights from a teacher model. In this paper, we argue that initialization can introduce an optimization bias that fundamentally shapes training dynamics. We propose **conditioned initialization**, a principled scheme that initializes attention weights to improve the spectral properties of the attention layer. Theoretically, we show that conditioned initialization can potentially reduce the condition number of the attention Jacobian, leading to more stable optimization. Empirically, it accelerates convergence and improves generalization across diverse applications, highlighting conditioning as a critical yet underexplored area for advancing Transformer performance. Importantly, conditioned initialization is simple to apply and integrates seamlessly into a wide range of Transformer architectures.
| null |
['spectral conditioning transformers', 'spectral properties of attention']
|
/pdf/0cefa1eb32ef625b997cb5d4d3b2c49ccfbe99ba.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25196/Authors']
|
vGqkrrOGty
| 25,195
|
vGqkrrOGty
|
Towards Real-world Debiasing: Rethinking Evaluation, Challenge, and Solution
|
Spurious correlations in training data significantly hinder the generalization capability of machine learning models when faced with distribution shifts, leading to the proposition of numerous debiasing methods. However, it remains to be asked: Do existing benchmarks for debiasing really represent biases in the real world? Recent works attempt to address such concerns by sampling from real-world data (instead of synthesizing) according to some predefined biased distributions to ensure the realism of individual samples. However, the realism of the biased distribution is more critical yet challenging and underexplored due to the complexity of real-world bias distributions. To tackle the problem, we propose a fine-grained framework for analyzing biased distributions, based on which we empirically and theoretically identify key characteristics of biased distributions in the real world that are poorly represented by existing benchmarks. Towards applicable debiasing in the real world, we further introduce two novel real-world-inspired biases to bridge this gap and build a systematic evaluation framework for real-world debiasing, RDBench. Furthermore, focusing on the practical setting of debiasing w/o bias labels, we find real-world biases pose a novel Sparse bias capturing challenge to the existing paradigm. We propose a simple yet effective approach named Debias in Destruction (DiD) to address the challenge, whose effectiveness is validated with extensive experiments on 8 datasets of various biased distributions.
|
In this work, we revisit the task of debiasing under real-world scenarios, proposing systematic evaluation framework, challenges, and solutions for real-world debiasing.
|
['spurious correlation', 'dataset bias', 'debias']
|
/pdf/d72291792944c88d6b7c4a83ccfa4460565e4c4b.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/e6065ec709c132da2b77743a4b0eec090236f788.zip
|
['ICLR.cc/2026/Conference/Submission25195/Authors']
|
yX1Nn63DwQ
| 25,194
|
yX1Nn63DwQ
|
A New Efficient Method For Combining Gradients Of Different Orders
|
We present a new optimization method called GOC (Gradient Order Combination), a combination based on the products of Hessian matrices of different orders and the gradient. Taking the parameter r (the reciprocal of the step length) as the object of analysis, the SD method can be regarded as first-order and the CBB method as second-order. We develop third-order and even higher-order methods, which offer faster convergence rates.
| null |
['gradient method', 'gradient combine', 'SD', 'CBB']
|
/pdf/06d550e106f67e465e380b3ebe69943e45cafc78.pdf
|
optimization
| null |
['ICLR.cc/2026/Conference/Submission25194/Authors']
|
5EqAAgBMWZ
| 25,193
|
5EqAAgBMWZ
|
Direct Reward Optimization: A Point-wise Alignment Approach
|
Direct Alignment Algorithms (DAAs) are widely used for aligning Large Language Models (LLMs) with human preferences. The current DAAs mostly use pairwise optimization objectives based on variants of Direct Preference Optimization (DPO). However, these methods only focus on the pairwise differences of the samples and cannot prevent the optimization from reducing the probabilities of preferred responses. To tackle the problem, in this paper, we propose Direct Reward Optimization (DRO), an algorithm that uses an explicit reward model to optimize the policy by setting an exact probability target for each response. DRO decouples the target reward differentials and bias in aligning objectives and utilizes the relationships not only within but also among the response pairs. Extensive experiments show that DRO outperforms the existing methods while providing control over the policy response probability.
| null |
['Alignment Algorithms', 'Large Language Models', 'Bradley-Terry']
|
/pdf/b76aa004444df8ce27f5e012e38fc86a58c63036.pdf
|
generative models
|
/attachment/be30cafd8470ba57f6a5f767c11e6186f0b1ca92.zip
|
['ICLR.cc/2026/Conference/Submission25193/Authors']
|
10Iiew095e
| 25,190
|
10Iiew095e
|
StreamingThinker: Large Language Models Can Think While Reading
|
Large language models (LLMs) have demonstrated remarkable capabilities in chain of thought (CoT) reasoning. However, the current LLM reasoning paradigm initiates thinking only after the entire input is available, which introduces unnecessary latency and weakens attention to earlier information in dynamic scenarios. Inspired by human cognition of thinking while reading, we first design a \textit{\textbf{streaming thinking}} paradigm for LLMs, where reasoning unfolds in the order of input and further adjusts its depth once reading is complete.
We instantiate this paradigm with \textit{StreamingThinker}, a framework that enables LLMs to think while reading through the integration of streaming CoT generation, streaming-constraint training, and streaming parallel inference. Specifically, StreamingThinker employs streaming reasoning units with quality control for CoT generation, enforces order-preserving reasoning through streaming attention masks and position encoding, and leverages parallel KV caches that decouple input encoding from reasoning generation, thereby ensuring alignment and enabling true concurrency. We evaluate StreamingThinker on the Qwen3 model family across math reasoning, logical reasoning, and context-based QA reasoning tasks. Experimental results show that the StreamingThinker preserves performance comparable to batch thinking, while yielding an 80\% reduction in token waiting before the onset of reasoning and a more than 60\% reduction in time-level latency for producing the final answer, demonstrating the effectiveness of the streaming paradigm for LLM reasoning.
|
We propose StreamingThinker, a framework that enables LLMs to think while reading.
|
['LLMs', 'Reasoning', 'Streaming']
|
/pdf/8dc3142412d7e546ee5b04e1f7939c68f3766fdd.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25190/Authors']
|
sOCKQ2UWKs
| 25,187
|
sOCKQ2UWKs
|
UniArt: Generating 3D articulated objects with open-set articulation beyond retrieval
|
Articulated objects are central to realistic simulation and robot learning, enabling dynamic interactions and task-oriented manipulation. However, manually annotating these objects is labor-intensive, motivating the need for automated generation solutions. Previous methods usually rely on retrieving part structures from existing datasets, which inherently restricts diversity and causes geometric misalignment. To tackle these challenges, we present UniArt, an end-to-end framework that directly synthesizes 3D meshes and articulation parameters in a unified manner. We decompose the problem into three correlated tasks: geometry generation, part segmentation, and articulation prediction, and then integrate them into a single diffusion-based architecture. By formulating both part segmentation and joint parameter inference as open-set problems, our approach incorporates open-world knowledge to generalize beyond training categories. We further enhance training with a large-scale, enriched dataset built from PartNet-Mobility, featuring expanded part and material diversity. Extensive evaluations show that UniArt substantially outperforms existing retrieval-based methods in mesh quality and articulation accuracy, especially under open-set conditions. Code will be publicly available to foster future research in the 3D generation and robotics communities.
| null |
['3d Generation', 'Embodied AI']
|
/pdf/8cc9c1af164cacbc03f87a8479d234cc802c9ffd.pdf
|
applications to robotics, autonomy, planning
| null |
['ICLR.cc/2026/Conference/Submission25187/Authors']
|
krLuDCXK6n
| 25,185
|
krLuDCXK6n
|
Improving realistic semi-supervised learning with doubly robust estimation
|
A major challenge in Semi-Supervised Learning (SSL) is the mismatch between the labeled and unlabeled class distributions.
Most successful SSL approaches are based on pseudo-labeling of the unlabeled data, and therefore are susceptible to confirmation bias because the classifier being trained is biased towards the labeled class distribution and thus performs poorly on unlabeled data.
While distribution alignment alleviates this bias, we find that the distribution estimation at the end of training can still be improved with the doubly robust estimator, a theoretically sound approach that derives from semi-parametric efficiency theory.
As a result, we propose a 2-stage approach in which we first train an SSL classifier but use only its initial predictions for the doubly robust estimator of the class distribution, and then train a second SSL classifier while fixing the improved distribution estimate from the start.
For training the classifier, we use a principled expectation-maximization framework for SSL with label shift, showing that the popular distribution alignment heuristic improves the data log-likelihood in the E-step, and that this EM is equivalent to the recent SimPro algorithm after reparameterization and logit adjustment but is much older and more interpretable (using the missingness mechanism).
Experimental results demonstrate the improved class distribution estimation of the doubly robust estimator and subsequent improved classification accuracy with our 2-stage approach.
|
We use doubly robust estimation to improve the class distribution estimation and classification accuracy for distribution mismatch settings in semi-supervised learning
|
['semi-supervised learning', 'doubly robust estimation']
|
/pdf/207f426e5a135b1cb30795e178de767976d24143.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25185/Authors']
|
7wav7FJA0P
| 25,183
|
7wav7FJA0P
|
PathHD: Efficient Large Language Model Reasoning over Knowledge Graphs via Hyperdimensional Retrieval
|
Recent advances in large language models (LLMs) have enabled strong reasoning over structured and unstructured knowledge. When grounded on knowledge graphs (KGs), however, prevailing pipelines rely on neural encoders to embed and score symbolic paths, incurring heavy computation, high latency, and opaque decisions, which are limitations that hinder faithful, scalable deployment. We propose a lightweight, economical, and transparent KG reasoning framework, PathHD, that replaces neural path scoring with hyperdimensional computing (HDC). PathHD encodes relation paths into block-diagonal GHRR hypervectors, retrieves candidates via fast cosine similarity with Top-K pruning, and performs a single LLM call to produce the final answer with cited supporting paths. Technically, PathHD provides an order-aware, invertible binding operator for path composition, a calibrated similarity for robust retrieval, and a one-shot adjudication step that preserves interpretability while eliminating per-path LLM scoring. Extensive experiments on WebQSP, CWQ, and the GrailQA split show that PathHD (i) achieves comparable or better Hits@1 than strong neural baselines while using one LLM call per query; (ii) reduces end-to-end latency by 40–60% and GPU memory by 3–5× thanks to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that improve error diagnosis and controllability. These results demonstrate that HDC is a practical substrate for efficient KG–LLM reasoning, offering a favorable accuracy–efficiency–interpretability trade-off.
|
We present PathHD, a lightweight hyperdimensional computing framework for efficient and interpretable large language model reasoning over knowledge graphs.
|
['Large Language Models', 'Efficient Reasoning', 'Knowledge Graphs', 'Hyperdimensional Computing']
|
/pdf/4d567b32afa8d543c4a7ae426d8c2796b87b2116.pdf
|
learning on graphs and other geometries & topologies
| null |
['ICLR.cc/2026/Conference/Submission25183/Authors']
|
ITeWz351rW
| 25,181
|
ITeWz351rW
|
Concrete-to-Abstract Goal Embeddings for Self-Supervised Reinforcement Learning
|
Self-supervised reinforcement learning (RL) aims to train agents without pre-specified external reward functions, enabling them to autonomously acquire the ability to generalize across tasks. A common substitute for external rewards is the use of observational goals sampled from experience, especially in goal-conditioned RL. However, such goals often constrain the goal space: they may be too concrete (requiring exact pixel-level matches) or too abstract (involving ambiguous observations), depending on the observation structure. Here we propose a unified hierarchical goal space that integrates both concrete and abstract goals. Observation sequences are encoded into this partially ordered space, in which a subset relation naturally induces a hierarchy from concrete to abstract goals. This encoding enables agents to disambiguate specific states while also generalizing to shared concepts. We implement this approach using a recurrent neural network to encode sequences and an energy function to learn the partial order, trained end-to-end with contrastive learning. The energy function then allows traversing the induced hierarchy to vary the degree of abstraction. In experiments on navigation and robotic manipulation, agents trained with our hierarchical goal space achieve higher task success and greater generalization to novel tasks compared to agents limited to purely observational goals.
| null |
['self-supervised reinforcement learning', 'goal representation learning', 'goal abstraction']
|
/pdf/03c6a25c33b54945902994cc09008a85425741c0.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25181/Authors']
|
GaBIQ32oCA
| 25,179
|
GaBIQ32oCA
|
Efficient Similarity-Based Fast Unlearning via Pearson Correlation Detection
|
Machine unlearning has emerged as a critical requirement for neural networks to selectively forget specific training data while preserving model performance on remaining data. However, existing approximate unlearning techniques are computationally expensive when applied repeatedly to remove multiple similar data points. This work introduces a fast, novel approach that leverages Pearson's correlation-based similarity detection to efficiently and rapidly unlearn data points that are similar to previously unlearned samples. Our fast unlearning method exploits the key observation that once a data point has been unlearned through approximate unlearning techniques, similar data points can be rapidly removed using a lightweight similarity-based approach without requiring the full computational overhead of the original unlearning procedure.
We establish certain theoretical properties and assurances of our similarity-based unlearning approach. We demonstrate that by measuring Pearson's correlation between target data points and previously unlearned samples, we can identify candidates for efficient removal and apply an unlearning process. This approach significantly reduces computational costs for removing multiple related data points while maintaining comparable forgetting effectiveness. Our evaluation across four datasets demonstrates that the proposed method effectively unlearns correlated data points while maintaining model utility, providing a highly scalable solution for privacy-preserving machine learning systems. Experimental results show that our proposed approach achieves an accuracy improvement of $10^{-2}$ over state-of-the-art baselines.
| null |
['Machine unlearning', 'Similarity detection', 'Pearson correlation coefficient']
|
/pdf/d8ee8d2f80446675805c5b66ae822c3b34c0add0.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25179/Authors']
|
6EwuwivLSp
| 25,178
|
6EwuwivLSp
|
GOTTA be diverse
|
Test-Time Adaptation (TTA) enables models to adjust to distribution shifts using only the incoming test stream. While existing methods perform well under covariate shifts, their performance drops when label distributions also change, a common scenario in real-world streams. Some approaches attempt to mitigate this by introducing memory modules into their methods, typically to enforce class balance. However, because these memories are evaluated only in conjunction with specific algorithms, their independent role and effectiveness remain unclear. In this work, we systematically study memory in TTA by decoupling it from the adaptation algorithm. Through a unified evaluation, we identify the design choices that make memory effective under different stream settings. Building on these insights, we propose Guided Observational Test-Time Adaptation (GOTTA), a category of diversity-aware memories that combine class balance with intra-class diversity. Our results show that such memories provide reliable, compact, and efficient support for adaptation in dynamic test streams, highlighting diversity-aware memory as an important principle for robust TTA.
| null |
['test time adaptation', 'domain adaptation', 'computer vision']
|
/pdf/8384964239ff23ee7c2a4147d92c5ceae2f22538.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/f662f5b9608892704a0bf95ba933972b64ab9fdb.pdf
|
['ICLR.cc/2026/Conference/Submission25178/Authors']
|
IF0L7HSs3K
| 25,176
|
IF0L7HSs3K
|
Meta-Evaluation Collapse: Who Judges the Judges of Judges?
|
Large language models (LLMs) are increasingly used as evaluators, yet their reliability as judges remains poorly understood. We introduce the concept of meta-evaluation collapse: recursive LLM-based evaluation converges toward internally consistent but fragile fixed points that are detached from human or domain-grounded truth. Through an operator-theoretic analysis, we show that unanchored evaluation hierarchies inevitably contract to biased equilibria, either collapsing into trivial consensus or amplifying systematic preferences such as fluency over accuracy. Empirically, using multilingual health queries, we find that LLM judges display high inter-model agreement but drift sharply from human evaluators, compressing variance, inflating surface qualities, and overlooking cultural nuance. Comparative evaluations, often assumed more robust, further establish these biases. Our analysis highlights the risks of over-relying on LLM consensus and calls for anchored meta-evaluation frameworks that integrate human disagreement, cultural diversity, and task-specific grounding.
|
LLMs as judges converge to consistent but biased evaluations, meta-evaluation collapse, and we show, both theoretically and empirically, that preventing this requires anchoring evaluations in human or formal ground-truth signals.
|
['LLM-as-judge', 'Meta-evaluation', 'Evaluation theory', 'Anchored evaluation']
|
/pdf/f5c6b1f75ad70782efadd20926274aafb5a67a26.pdf
|
other topics in machine learning (i.e., none of the above)
|
/attachment/397c69f48f2dc658edfda2e96c17559bed69af00.zip
|
['ICLR.cc/2026/Conference/Submission25176/Authors']
|
qBy7nYDgEa
| 25,174
|
qBy7nYDgEa
|
HiFACTMix: A Code-Mixed Benchmark and Graph-Aware Model for Evidence-Based Political Claim Verification in Hinglish
|
Fact-checking in code-mixed, low-resource languages such as Hinglish remains a significant and underexplored challenge in natural language processing. Existing fact-verification systems are primarily designed for high-resource, monolingual settings and fail to generalize to real-world political discourse in linguistically diverse regions like India. To address this gap, we introduce HiFACTMix, a novel benchmark comprising approximately 1,500 real-world factual claims made by 28 Indian state Chief Ministers and several influential political leaders in Hinglish, each annotated with textual evidence and veracity labels (True, False, Partially True, Unverifiable). Building on this resource, we propose a Quantum-Enhanced Retrieval-Augmented Generation (RAG) framework that integrates code-mixed text encoding, evidence graph reasoning, and explanation generation. Experimental results show that HiFACTMix not only outperforms strong multilingual and code-mixed baselines (CM-BERT, VerT5erini, IndicBERT, mBERT) but also remains competitive against recent large language models, including GPT-4, LLaMA-2, and Mistral. Unlike generic LLMs that may generate fluent but weakly grounded outputs, HiFACTMix explanations are explicitly linked to retrieved evidence, ensuring both accuracy and transparency. This work opens a new direction for multilingual, quantum-assisted, and politically grounded fact verification, with implications for combating misinformation in low-resource, code-mixed environments.
|
We introduce HiFACTMix, a Hinglish political fact-checking benchmark, and propose a quantum-enhanced RAG framework that improves accuracy and explanation quality in low-resource, code-mixed settings
|
['Hinglish', 'Fact-checking', 'Code-mixed languages', 'Low-resource NLP', 'Political discourse', 'Quantum-enhanced RAG', 'Evidence graph reasoning', 'LLM explanations']
|
/pdf/d3d5cbd2326c0531ffbe51253f7a599c4d270f43.pdf
|
generative models
|
/attachment/73807c834eb9a5841a4ebe41c0a2830c48efec85.zip
|
['ICLR.cc/2026/Conference/Submission25174/Authors']
|
uomCTwGflg
| 25,173
|
uomCTwGflg
|
Attention Contrastive Decoding: Preserving Coherence While Mitigating Hallucinations in Large Vision-Language Models
|
Large Vision-Language Models (LVLMs) exhibit remarkable multimodal capabilities but frequently produce factually inconsistent hallucinations. While Contrastive Decoding (CD) methods offer a training-free approach to hallucination mitigation, they operate at the logits level, compromising output coherence and diversity. Through systematic analysis, we show that logits-level subtraction disrupts intrinsic language generation mechanisms, requiring restrictive penalty mechanisms that further limit diversity. We propose Attention Contrastive Decoding (ACD), which transfers contrastive operations to the attention layer and employs an Adaptive Subtraction Strategy (ASS) to identify and suppress hallucination-prone attention patterns. Experiments demonstrate that ACD generates more coherent content with significantly reduced hallucinations without requiring penalty mechanisms, effectively leveraging the inherent continuity of attention mechanisms to advance reliable multimodal generation. Code is available at \url{https://anonymous.4open.science/r/ACD-00C6}.
|
We propose an adaptive contrastive decoding approach at the attention layer to mitigate hallucinations and improve coherence in large vision-language models.
|
['Trustworthy AI', 'Hallucination Alleviation', 'Large Vision-Language Models']
|
/pdf/e8fc05e45d7e98638010e27d7a9d5d6759530e87.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25173/Authors']
|
K5A2jBmEBK
| 25,170
|
K5A2jBmEBK
|
DeepCompress: A Dual Reward Strategy for Dynamically Exploring and Compressing Reasoning Chains
|
Large Reasoning Models (LRMs) have demonstrated impressive capabilities but suffer from cognitive inefficiencies like ``overthinking'' simple problems and ``underthinking'' complex ones. While existing methods that use supervised fine-tuning (SFT) or reinforcement learning (RL) with token-length rewards can improve efficiency, they often do so at the cost of accuracy. This paper introduces DeepCompress, a novel framework that simultaneously enhances both the accuracy and efficiency of LRMs. We challenge the prevailing approach of consistently favoring shorter reasoning paths, showing that longer responses can contain a broader range of correct solutions for difficult problems. DeepCompress employs an adaptive length reward mechanism that dynamically classifies problems as "Simple" or "Hard" in real-time based on the model's evolving capability. It encourages shorter, more efficient reasoning for "Simple" problems while promoting longer, more exploratory thought chains for "Hard" problems. This dual-reward strategy enables the model to autonomously adjust its Chain-of-Thought (CoT) length, compressing reasoning for well-mastered problems and extending it for those it finds challenging. Experimental results on challenging mathematical benchmarks show that DeepCompress consistently outperforms baseline methods, achieving superior accuracy while significantly improving token efficiency.
|
This paper introduces DeepCompress, a dual reward strategy that simultaneously enhances both the accuracy and efficiency of large reasoning models.
|
['Large Reasoning Models', 'Reasoning Efficiency', 'Reinforcement Learning']
|
/pdf/f072174ccb012a840b1a814f5a65357f2b8f5583.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25170/Authors']
|
9qQ5mabsCE
| 25,164
|
9qQ5mabsCE
|
EmboMatrix: A Scalable Training-Ground for Embodied Decision-Making
|
Embodied decision-making enables agents to translate high-level goals into executable actions through continuous interactions within the physical world, forming a cornerstone of general-purpose embodied intelligence. Large language models (LLMs), with their general decision-making capabilities, offer a promising path to realize this potential; however, LLMs trained solely on language lack exposure to physical environments, limiting their true embodied understanding. To bridge this gap, we propose the concept of a \textbf{training ground}: a comprehensive infrastructure that provides task and scene simulation, embodied interaction, and feedback signals, offering a one-stop solution for LLMs to acquire genuine embodied decision-making skills. In this work, we present EmboMatrix, the first training ground of its kind, providing massive and diverse tasks with efficient simulation and precise rewards. EmboMatrix incorporates a series of novel techniques: a multi-agent data engine for large-scale task and scene generation, a distributed heterogeneous-hardware system for scalable simulation, and a multi-level reward architecture for precise supervision. Leveraging EmboMatrix, we cultivate \textbf{EmboBrain}, an LLM whose embodied decision-making abilities emerge from extensive embodied interactions. Experiments show that EmboBrain-7B surpasses the 671B DeepSeek-R1 baseline by 9.5\% on two challenging embodied decision-making benchmarks, demonstrating the power of interactive, environment-grounded learning for building truly intelligent embodied agents. The code will be released upon the paper's acceptance.
|
EmboMatrix is a scalable, annotation-free training ground that aligns data, system, and RL algorithm design to enable autonomous environment exploration by LLMs, yielding consistent gains on embodied decision making benchmarks.
|
['Embodied Decision Making', 'LLM', 'Embodied Brain']
|
/pdf/f7803de133a44bd07f3fc9cd0544d17a586b1ad5.pdf
|
applications to robotics, autonomy, planning
| null |
['ICLR.cc/2026/Conference/Submission25164/Authors']
|
BjlmBIKQee
| 25,162
|
BjlmBIKQee
|
Sphinx: Visual Perception and Reasoning Gym
|
We present \textsc{Sphinx}, a synthetic gym for visual perception and reasoning tasks that targets core cognitive primitives. \textsc{Sphinx} procedurally generates problems using motifs, tiles, charts, icons, and geometric primitives, each paired with verifiable ground-truth solutions. This design enables both precise evaluation and the creation of scalable datasets. We implement 25 task types spanning symmetry detection, geometric transformation, spatial reasoning, chart interpretation, and sequence prediction. Benchmarking recent multimodal vision–language models (vLLMs) reveals that even state-of-the-art GPT-5 struggles on these tasks, achieving 47.32% accuracy and performing significantly below human baselines. Finally, we demonstrate that reinforcement learning with verifiable rewards (RLVR) improves model accuracy on these reasoning tasks, underscoring its potential for advancing multimodal reasoning.
| null |
['Multimodal reasoning', 'vLLM', 'Synthetic datasets']
|
/pdf/d6d5b2f42bdd72fc3fae9fbfa766399e949d27e8.pdf
|
datasets and benchmarks
|
/attachment/5747cc0c95258e2d84985e5eedd6320fe04ac354.zip
|
['ICLR.cc/2026/Conference/Submission25162/Authors']
|
gmmHn5nFvK
| 25,161
|
gmmHn5nFvK
|
Improving Language Agents through BREW: Bootstrapping expeRientially-learned Environmental knoWledge
|
Large Language Model (LLM)-based agents are increasingly applied to tasks requiring structured reasoning, tool use, and environmental adaptation, such as data manipulation, multistep planning, and computer-use automation. However, despite their versatility, current weight-optimization training paradigms, such as PPO and GRPO, remain relatively impractical due to their high computational overhead for rollout convergence. In addition, the resulting agent policies are difficult to interpret, adapt, or incrementally improve. To address this, we investigate creating and refining a structured memory of an agent's experiential learning from its environment as an alternative route to agent optimization. We introduce \textbf{BREW} (Bootstrapping expeRientially-learned Environmental knoWledge), a framework for optimizing agents for downstream tasks via KB construction and refinement. In our formulation, we introduce an effective method for partitioning agent memory for more efficient retrieval and refinement. BREW uses task graders and behavior rubrics to learn insights, leveraging state-space search to ensure robustness against the noise and non-specificity of natural language. Empirical results on real-world, domain-grounded benchmarks---OSWorld and $\tau^2$Bench---show that BREW achieves a 10--20\% improvement in task precision and a 10--15\% reduction in API/tool calls, leading to faster execution, all while maintaining computational efficiency on par with base models. Unlike prior work where memory is treated as static context, we establish the KB as a modular and controllable substrate for agent optimization---an explicit lever for shaping behavior in a transparent, interpretable, and extensible manner.
| null |
['Language agents', 'agent memory', 'computer use agents']
|
/pdf/7cc8ca85b0c9e218ab08ef9dab784b16d9d7e8ca.pdf
|
foundation or frontier models, including LLMs
|
/attachment/504e97b059d36f05d102e3f186206e8f9ebbfa82.pdf
|
['ICLR.cc/2026/Conference/Submission25161/Authors']
|
JtIw8lYqdl
| 25,158
|
JtIw8lYqdl
|
Scaling Laws for Uncertainty in Deep Learning
|
Scaling laws in deep learning describe the predictable relationship between a model's performance, usually measured by test loss, and some key design choices, such as dataset and model size. Inspired by these findings and fascinating phenomena emerging in the over-parameterized regime, we investigate a parallel direction: do similar scaling laws govern predictive uncertainties in deep learning? In identifiable parametric models, such scaling laws can be derived in a straightforward manner by treating model parameters in a Bayesian way. In this case, for example, we obtain $O(1/N)$ contraction rates for epistemic uncertainty with respect to dataset size $N$. However, in over-parameterized models, these guarantees do not hold, leading to largely unexplored behaviors.
In this work, we empirically show the existence of scaling laws associated with various measures of predictive uncertainty with respect to dataset and model size. Through experiments on vision and language tasks, we observe such scaling laws for in- and out-of-distribution predictive uncertainty estimated through popular approximate Bayesian inference and ensemble methods. Besides the elegance of scaling laws and the practical utility of extrapolating uncertainties to larger data or models, this work provides strong evidence to dispel recurring skepticism against Bayesian approaches: *"In many applications of deep learning we have so much data available: what do we need Bayes for?"*. Our findings show that *"so much data"* is typically not enough to make epistemic uncertainty negligible.
| null |
['Scaling Laws', 'Bayesian Deep Learning', 'Uncertainty Quantification']
|
/pdf/08ea781f774521b003d1ad3aad55a0e99c0138bf.pdf
|
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25158/Authors']
|
Ard2QzPAUK
| 25,156
|
Ard2QzPAUK
|
BeliefFormer: Belief Attention in Transformer
|
In this paper, we consider modifying the attention layer in Transformer to improve its generalization performance. Conceptually speaking, the standard attention layer takes the softmax-based weighted summation of V vectors as the residual signal (with a linear mapping for dimensionality alignment) when performing the skip-connection operation. Inspired by distribution optimization, we propose to first perform an orthogonal projection of the softmax-based weighted summation of V vectors with respect to the original V vectors and then take the orthogonal projection instead as the residual signal (with a linear mapping for dimensionality alignment) when performing the skip-connection operation. By doing so, the token vectors are modified relatively more along their tangent directions compared to their magnitudes. Intuitively speaking, the orthogonal projection reflects a belief about the discrepancy between the weighted summation of V vectors and the V vectors themselves. We refer to the newly modified layer and the overall architecture as the belief-attention and the BeliefFormer, respectively. To further improve performance, we also design a variant of belief-attention by incorporating two types of orthogonal projections, referred to as belief-attention$^{\ast}$. Extensive experiments show that the two new variants of attention layer in Transformers lead to better performance than the standard attention for image classification over ImageNet and natural language processing when training nano-GPT2.
|
incorporating orthogonal projections as residual signals into the attention layer in Transformer to improve generalization performance
|
['Transformer', 'orthogonal projection', 'BeliefFormer']
|
/pdf/e6e44ec5d884a9d852f702079a97ef6be4b33a56.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25156/Authors']
|
g9FDTZJEdJ
| 25,155
|
g9FDTZJEdJ
|
Scalable GANs with Transformers
|
Scalability has driven recent advances in generative modeling, yet its principles remain underexplored for adversarial learning. We investigate the scalability of Generative Adversarial Networks (GANs) through two design choices that have proven to be effective in other types of generative models: training in a compact Variational Autoencoder latent space and adopting purely transformer-based generators and discriminators. Training in latent space enables efficient computation while preserving perceptual fidelity, and this efficiency pairs naturally with plain transformers, whose performance scales with computational budget.
Building on these choices, we analyze failure modes that emerge when naively scaling GANs.
Specifically, we find issues such as underutilization of early layers in the generator and optimization instability as the network scales. Accordingly, we provide simple and scale-friendly solutions, namely lightweight intermediate supervision and width-aware learning-rate adjustment. Our experiments show that GAT, a purely transformer-based, latent-space GAN, can be trained reliably across a wide range of capacities (S through XL). Moreover, GAT-XL/2 achieves state-of-the-art single-step, class-conditional generation performance (FID of 2.96) on ImageNet-256 in just 40 epochs, 6× fewer than strong baselines.
|
Scalable GANs with transformer achieves state-of-the-art on 1-step class-conditional generation on ImageNet-256
|
['Generative Model', 'Generative Adversarial Network', 'Scalable Generative Models']
|
/pdf/d18982800f15eb5ef51a7b682ff6425f9d05d663.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25155/Authors']
|
1tXxi38Gvm
| 25,154
|
1tXxi38Gvm
|
InfoMax-based Resampling for Dataset Balance and Diversity
|
We propose a principled reweighting framework that moves empirical data toward uniform coverage through implicit differential entropy maximization. The core idea replaces intractable entropy maximization with a mutual information proxy and derives variational estimators under change of measure, yielding a consistent, low-variance weighted InfoNCE. Learned weights are immediately usable for data filtration and imbalance-aware sampling.
|
Learn sample weights via a mutual-information proxy for entropy to push data toward uniform coverage, using a consistent, low-variance weighted InfoNCE that yields plug-in weights for filtration and balanced sampling.
|
['InfoMax', 'mutual information', 'entropy maximization', 'weighted InfoNCE', 'change of measure', 'density-ratio estimation', 'dataset reweighting', 'balanced sampling']
|
/pdf/b8b59882baea92d54ebbccd75bcf000b7ba06a49.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
/attachment/0449043b089ccdd082ede78f06139e5909cdf038.zip
|
['ICLR.cc/2026/Conference/Submission25154/Authors']
|
ChDSjqMgKJ
| 25,152
|
ChDSjqMgKJ
|
Sequential Test-Time Adaptation via Martingale-Driven Fisher Prompting
|
We present a theoretical framework for M-FISHER, a method for sequential distribution shift detection and stable adaptation in streaming data. For detection, we construct an exponential martingale from non-conformity scores and apply Ville’s inequality to obtain time-uniform guarantees on false alarm control, ensuring statistical validity at any stopping time. Under sustained shifts, we further bound the expected detection delay as $\mathcal{O}(\log(1/\delta)/\Gamma)$, where $\Gamma$ reflects the post-shift information gain, thereby linking detection efficiency to distributional divergence. For adaptation, we show that Fisher-preconditioned updates of prompt parameters implement natural gradient descent on the distributional manifold, yielding locally optimal updates that minimize KL divergence while preserving stability and parameterization invariance. Together, these results establish M-FISHER as a principled approach for robust, anytime-valid detection and geometrically stable adaptation in sequential decision-making under covariate shift.
| null |
['foundation models', 'test-time adaptation', 'martingale']
|
/pdf/94a29926b5e26695fbf355ac1e6ced66b5b4e14d.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25152/Authors']
|
MCeZ4k7J6M
| 25,151
|
MCeZ4k7J6M
|
Accelerated Predictive Coding Networks via Direct Kolen–Pollack Feedback Alignment
|
Backpropagation (BP) is the cornerstone algorithm for training artificial neural networks, yet its reliance on update-locked global error propagation limits biological plausibility and hardware efficiency. Predictive coding (PC), originally proposed as a model of the visual cortex, relies on local updates that allow parallel learning across layers. However, practical implementations face two key limitations: error signals must still propagate from the output to early layers through multiple inference-phase steps, and feedback decays exponentially during this process, leading to vanishing updates in early layers. These issues restrict the efficiency and scalability of PC, undermining its theoretical advantage in parallelization over BP. We propose direct Kolen–Pollack predictive coding (DKP-PC), which simultaneously addresses both feedback delay and exponential decay, yielding a more efficient and scalable variant of PC while preserving update locality. Leveraging the direct feedback alignment and direct Kolen–Pollack algorithms, DKP-PC introduces learnable feedback connections from the output layer to all hidden layers, establishing a direct pathway for error transmission. This yields an algorithm that reduces the theoretical error propagation time complexity from $\mathcal{O}(L)$, with $L$ being the network depth, to $\mathcal{O}(1)$, enabling parallel updates of the parameters. Moreover, empirical results demonstrate that DKP-PC achieves performance at least comparable to, and often exceeding, that of standard PC, while offering improved latency and computational performance. By enhancing both scalability and efficiency of PC, DKP-PC narrows the gap between biologically-plausible learning algorithms and BP, and unlocks the potential of local learning rules for hardware-efficient implementations.
| null |
['Predictive Coding', 'Artificial Intelligence', 'Local Learning', 'Backpropagation', 'Feedback Alignment', 'Neural Networks']
|
/pdf/4a503095510ef9e6c92b5958732e2a26f18adeb7.pdf
|
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25151/Authors']
|
MWtXs60n38
| 25,147
|
MWtXs60n38
|
Implicit 4D Gaussian Splatting for Fast Motion with Large Inter-Frame Displacements
|
Recent 4D Gaussian Splatting (4DGS) methods often fail under fast motion with large inter-frame displacements, where Gaussian attributes are poorly learned during training, and fast-moving objects are often lost from the reconstruction. In this work, we introduce Spatiotemporal Position Implicit Network for 4DGS, coined SPIN-4DGS, which learns Gaussian attributes from explicitly collected spatiotemporal positions rather than modeling temporal displacements, thereby enabling more faithful splatting under fast motions with large inter-frame displacements. To avoid the heavy memory overhead of explicitly optimizing attributes across all spatiotemporal positions, we instead predict them with a lightweight feed-forward network trained under a rasterization-based reconstruction loss. Consequently, SPIN-4DGS learns shared representations across Gaussians, effectively capturing spatiotemporal consistency and enabling stable high-quality Gaussian splatting even under challenging motions, while also reducing storage overhead by avoiding the need for explicit parameter storage. Across extensive experiments, SPIN-4DGS consistently achieves higher fidelity under large displacements, with clear improvements in PSNR and SSIM on challenging sports scenes from the CMU Panoptic dataset. For example, SPIN-4DGS notably outperforms the strongest baseline, D3DGS, by achieving +1.83 higher PSNR on the Basketball scene.
| null |
['4D Gaussian splatting', '4D reconstruction', 'Dynamic rendering']
|
/pdf/53321eb6798b976bcb95936cd5727d2aabb99f53.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25147/Authors']
|
2baJBgfr9S
| 25,145
|
2baJBgfr9S
|
HiDivDrop: Vision Token Reduction in MLLMs via Late Injection and Differentiable Top-K
|
The computational cost of Multimodal Large Language Models (MLLMs), driven by the quadratic complexity of processing vision tokens, remains a significant barrier to their widespread adoption. While progressive vision token pruning is a promising solution, we find that its full potential has been unrealized due to two key limitations: it misinterprets the role of shallow layers as being crucial for fusion and employs overly rigid, non-adaptive pruning schedules. To address these flaws, we introduce HiDivDrop, a framework that tailors token pruning to the true hierarchical function of MLLM layers. HiDivDrop incorporates two key innovations: (1) a Late Injection strategy that bypasses passive shallow layers, introducing visual tokens directly where active fusion begins; and (2) a Concave Pyramid Pruning scheme with an Early Exit mechanism that dynamically adjusts the pruning rate throughout the middle and deep layers. This process is optimized via an inter-layer similarity measure and a differentiable top-$k$ operator. Extensive experiments show that HiDivDrop compresses $\sim$90\% visual tokens while matching the original performance and accelerating training by 1.72$\times$. Our work not only sets a new state-of-the-art for efficient MLLM training and inference but also provides valuable insights into the hierarchical nature of multimodal fusion.
| null |
['MLLMs', 'Vision Token Pruning', 'Efficiency and Compression', 'Interpretability and Analysis']
|
/pdf/bbae3387236182a03728e0bf85dc9c2202403bac.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25145/Authors']
|
BXznpYw32K
| 25,143
|
BXznpYw32K
|
XPoison: Cross-Class Attacks through Clean-Label Data Poisoning in Fine-Tuning
|
As deep learning relies on huge datasets for training, poisoning attacks that pollute these datasets pose a significant threat to its security. As more models are pretrained on private corpora inaccessible to external parties, earlier attacks that demand access to the base training datasets have largely diminished impact, while practical threats focus on the fine-tuning stage, where attackers can accurately target specific (intended) classes by manipulating a small subset of the dataset under their control. Fortunately, attackers can also be exposed thanks to the substantially lowered data volume: e.g., the correlation between identities and the classes of the data they provide poses risks to attackers. To enable stealthy poisoning, we introduce XPoison, which strategically performs poisoning in a cross-class manner. Instead of directly poisoning the intended classes, an XPoison attacker only needs to provide data for unintended classes and hence hides its identity. We first propose a magnitude-matching strategy to more efficiently align the malicious gradients. Furthermore, we estimate the contradiction from clean target data and compensate for it gradient-wise, thereby counteracting its neutralizing influence on the poisoning effect. Through extensive evaluations, we demonstrate that XPoison robustly reduces the recognition accuracy of targeted classes by up to 38.37% during fine-tuning, while preserving high accuracy in the poison classes.
| null |
['Data poisoning', 'finetuning', 'cross-class', 'clean target data present', 'restricted data access', 'gradient-matching']
|
/pdf/9294dfe7423641f60757e5adb4ab0eb4674e4c34.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25143/Authors']
|
9235Gzvgiq
| 25,141
|
9235Gzvgiq
|
Bridging Gaps with Dynamic Knowledge Probes: Robust LLM–KG Collaborative Reasoning
|
Large Language Models (LLMs) exhibit exceptional capabilities in various natural language tasks but are constrained by static knowledge, potential hallucinations, and opaque reasoning processes. Integrating external Knowledge Graphs (KGs) has emerged as a promising solution. While agent-based paradigms enhance knowledge exploration by iteratively retrieving grounded facts from KGs, they often adopt a conservative KG-centric strategy that deliberately avoids using the LLM's internal knowledge—rendering them vulnerable to failures whenever missing links occur, a common challenge even in largely complete KGs.
We propose a KG–LLM collaborative framework that repositions the LLM’s knowledge as dynamic knowledge probes, generated via our Guidance Graph of Thought (GGoT) reasoning backbone from partially specified triples. These probes guide KG exploration, highlight potential incompleteness, and trigger trust-aware bridging with existence and necessity checks before integrating LLM-derived entities. Cross-triple constraint-based disambiguation then ensures consistency, using KG structure for credible nodes and LLM validation for low-confidence ones.
Extensive experiments across multiple benchmarks show that our framework consistently achieves superior performance over existing approaches, with ablation studies verifying the contribution and necessity of each component in our design.
| null |
['LLM', 'knowledge graph', 'question answering', 'internal knowledge']
|
/pdf/414aaeeee298c093f005cab4f38b9df8e0f215f0.pdf
|
interpretability and explainable AI
|
/attachment/46aaec055774082c467ca2a876042c1c847074d5.zip
|
['ICLR.cc/2026/Conference/Submission25141/Authors']
|
R5L1TD1Z58
| 25,140
|
R5L1TD1Z58
|
ECO: Enhanced Code Optimization via Performance-Aware Prompting for Code-LLMs
|
Code runtime optimization$\textemdash$the task of rewriting given code into a faster version$\textemdash$remains challenging,
as it requires reasoning about performance trade-offs involving algorithmic and structural choices.
Recent approaches employ code-LLMs with slow-fast code pairs provided as optimization guidance, but such pair-based methods obscure the causal factors of performance gains and often lead to superficial pattern imitation rather than genuine performance reasoning.
We introduce ECO, a performance-aware prompting framework for code optimization.
ECO first distills runtime optimization instructions (ROIs) from reference slow-fast code pairs.
Each ROI describes root causes of inefficiency and the rationales that drive performance improvements.
For a given input code, ECO in parallel employs (i) a symbolic advisor to produce a bottleneck diagnosis tailored to the code, and (ii) an ROI retriever to return related ROIs.
These two outputs are then composed into a performance-aware prompt, providing actionable guidance for code-LLMs.
ECO's prompts are model-agnostic, require no fine-tuning, and can be easily prepended to any code-LLM prompt.
Our empirical studies highlight that ECO prompting significantly improves code-LLMs' ability to generate efficient code, achieving speedups of up to 7.81$\times$ while minimizing correctness loss.
| null |
['Code optimization', 'performance-aware', 'code-llm']
|
/pdf/4fe8783ce7a25b920e350a47db8ededbec6c6873.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/cc72ea1837fb152f937e694e197944f11fe5ecd3.zip
|
['ICLR.cc/2026/Conference/Submission25140/Authors']
|
ZEf03Uunvk
| 25,138
|
ZEf03Uunvk
|
Why We Need New Benchmarks for Local Intrinsic Dimension Estimation
|
Recent advancements in algorithms for local intrinsic dimension (LID) estimation have been closely tied to progress in neural networks (NN). However, NN architectures are often tailored to specific domains, such as audio or image data, incorporating inductive biases that limit their transferability across domains. Moreover, existing LID estimation methods leveraging these architectures are typically evaluated on either overly simplistic benchmarks or domain datasets where the true LID is unknown, resulting in potentially erroneous evaluations. To close this research gap, we first isolate problematic aspects of LID estimation and leverage them to analyze the limitations of state-of-the-art methods. Our approach employs several techniques to create LID benchmarks for arbitrary domains, including the introduction of a method to transform any manifold into the domain while preserving the manifold structure, thereby addressing challenges posed by biases in neural network-based methods. Our comparative analysis reveals critical limitations and identifies new directions for future development in LID estimation methods. Code will be available on github when published.
|
We show that the LID estimation community needs new benchmarks for intrinsic dimension estimation and come to interesting conclusions about the performance of existing algorithms.
|
['Local intrinsic dimension estimation', 'LIDL', 'FLIPD', 'Diffusion Models', 'Benchmark', 'Normalizing Flows', 'ESS', 'Normal Bundle', 'NB', 'LID']
|
/pdf/0a5b1c33479fbbb789a25d423378fee68b30ef2a.pdf
|
datasets and benchmarks
|
/attachment/415de902ef6b8c0ea758232ce5bdfe6eda8a506e.zip
|
['ICLR.cc/2026/Conference/Submission25138/Authors']
|
UJvub9fNws
| 25,136
|
UJvub9fNws
|
Beyond Benchmarks: Toward Causally Faithful Evaluation of Large Language Models
|
Current large language model (LLM) evaluations overlook the fact that measured LLM performance is produced by a full evaluation system comprising many indispensable components, such as workloads, prompting methods, decoding parameters, and the supporting software–hardware stack. Without an explicit, controlled specification of the evaluation system, attributing performance differences to the model itself is unreliable. Our experiments reveal that uncontrolled testing may lead to accuracy variations of up to 70\%. To address this urgent issue, we introduce LLM evaluatology, a principled methodology that reduces the evaluation problem to accurately attributing the outcomes to the effect of the evaluated LLM, which is a high-dimensional causal-attribution problem. Empirical results demonstrate that LLM evaluatology not only enhances interpretability and causal validity, but also yields evaluations that are more robust, reproducible, and trustworthy than prevailing benchmarks.
| null |
['Large language models', 'Benchmarks', 'Evaluation methodology', 'Causal attribution']
|
/pdf/369bee32779101f9e24fd9a485f701e3883be3be.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25136/Authors']
|
hOjieyMB1v
| 25,134
|
hOjieyMB1v
|
Climbing the label tree: Hierarchy-preserving contrastive learning for medical imaging
|
Medical image labels are often organized by taxonomies (organ → tissue → subtype), yet standard self-supervised learning (SSL) ignores this structure. We present a hierarchy-preserving contrastive framework that makes the label tree a first-class training signal and an evaluation target. Our approach introduces two plug-in objectives: Hierarchy-Weighted Contrastive (HWC), which scales positive/negative pair strengths by shared ancestors to promote within-parent coherence, and Level-Aware Margin (LAM), a prototype margin that separates ancestor groups across levels. The formulation is geometry-agnostic and applies to Euclidean and hyperbolic embeddings without architectural changes. Across several benchmarks, including breast histopathology, the proposed objectives consistently improve representation quality over strong SSL baselines while better respecting the taxonomy. We evaluate with metrics tailored to hierarchy faithfulness—HF1 (hierarchical F1), H-Acc (tree-distance–weighted accuracy), and parent-distance violation rate—and also report top-1 accuracy for completeness. Ablations show that HWC and LAM are effective even without curvature, and combining them yields the most taxonomy-aligned representations. Taken together, these results provide a simple, general recipe for learning medical image representations that respect the label tree—advancing both performance and interpretability in hierarchy-rich domains.
| null |
['hierarchy-preserving contrastive learning', 'medical imaging', 'self-supervised learning', 'taxonomy-aware representations', 'euclidean embeddings', 'hyperbolic embeddings', 'prototype margin', 'hierarchical metrics', 'hf1', 'h-acc', 'breast histopathology', 'representation learning']
|
/pdf/032b8b13e2874f3083e5e284be3da364a36d23f3.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25134/Authors']
|
FSL1J2gmJV
| 25,133
|
FSL1J2gmJV
|
MergePRAG: Orthogonal Merging of Passage-experts for Multi-hop Parametric RAG
|
Large language models (LLMs) can be enhanced with external knowledge through two dominant approaches: (1) $\textbf{retrieval-augmented generation (RAG)}$, which supplements LLMs with in-context retrieved passages, and (2) $\textbf{parametric knowledge adaptation (PKA)}$, which directly updates model parameters with new domain knowledge. Recently, parametric RAG (PRAG) has emerged as a promising framework, extending RAG by translating retrieved passages into parameter updates, thereby mitigating inefficiency and noise sensitivity inherent to RAG. However, existing PRAG methods remain limited to single-pass retrieval, falling short of the $\textbf{multi-hop RAG}$ setting that requires iterative retrieval and reasoning. We propose $\textbf{MergePRAG}$($\textit{Orthogonal Merging of Passage-experts for Multi-hop PRAG}$), a novel framework that sequentially integrates retrieved passages into LLM parameters through a continual merging mechanism, which is advanced by two key proposals: (1) $\textbf{orthogonal merging}$ using the Gram–Schmidt process to minimize conflicts between experts, and (2) $\textbf{critical-layer parameterization}$ to efficiently encode in-context passages. Experiments on multi-hop open-domain QA and reasoning-aware knowledge editing show that MergePRAG consistently outperforms both standard and state-of-the-art RAGs as well as existing parametric adaptation methods, achieving superior effectiveness and efficiency.
All datasets and code will be released at https://anonymous.4open.science/r/MhQA_hypernetwork-B31F.
| null |
['Multi-hop reasoning', 'Knowledge enhancement', 'Retrieval-augmented generation', 'Hypernetwork-based expert generation']
|
/pdf/6a977585327e53f980ef73785f6c64576c34e90c.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/3043ae573d1a29777d01fabc24c2e6182935dead.zip
|
['ICLR.cc/2026/Conference/Submission25133/Authors']
|
qsLpaAhvzb
| 25,131
|
qsLpaAhvzb
|
Learning to Reject Low-Quality Explanations via User Feedback
|
Machine Learning predictors are increasingly being employed in high-stakes applications such as credit scoring. Explanations help users unpack the reasons behind their predictions, but are not always ``high quality''. That is, end-users may have difficulty interpreting or believing them, which can complicate trust assessment and downstream decision-making. We argue that classifiers should have the option to refuse handling inputs whose predictions cannot be explained properly and introduce a framework for learning to reject low-quality explanations (LtX) in which predictors are equipped with a rejector that evaluates the quality of explanations. In this problem setting, the key challenges are how to properly define and assess explanation quality and how to design a suitable rejector. Focusing on popular attribution techniques, we introduce ULER (User-centric Low-quality Explanation Rejector), which learns a simple rejector from human ratings and per-feature relevance judgments to mirror human judgments of explanation quality. Our experiments show that ULER outperforms both state-of-the-art and explanation-aware learning to reject strategies at LtX on eight classification and regression benchmarks and on a new human-annotated dataset, which we publicly release to support future research.
|
We introduce a framework for learning to reject low-quality explanations in which predictors are equipped with a rejector that evaluates the quality of explanations and propose ULER, which learns a simple rejector to mirror human judgments.
|
['Learning to Reject', 'Explainable AI', 'Explanation quality metrics', 'Human-annotated data']
|
/pdf/b4042f3e80ca677f80d756c5b8879124d445ba08.pdf
|
interpretability and explainable AI
|
/attachment/718f763a6d170f3ca882f4c122607dfbf0d071fc.zip
|
['ICLR.cc/2026/Conference/Submission25131/Authors']
|
V0w5LmwWoD
| 25,130
|
V0w5LmwWoD
|
ProofAug+: Boosting Reinforcement Learning for LLM Theorem Provers with Conditioned Proof Repair
|
Reinforcement Learning with Verifiable Rewards (RLVR) often suffers from the scarcity of positive samples on challenging tasks such as formal theorem proving.
In this work, we propose ProofAug+, an RL training pipeline for LLM theorem provers that improves the training performance by acquiring more positive samples during rollout through ProofAug, a previously developed inference-time proof repair technique.
The design of ProofAug+ is guided by two principles, progress guarantee and variance reduction, to address the performance degradation and policy collapse issues observed when integrating ProofAug into GRPO via naive direct replacement.
These principles first lead to a novel LLM RLVR algorithm, Proximal Language Modeling Policy Optimization (PLPO), where in each iteration we use the exact objective as the optimization target instead of surrogate objectives used in TRPO/PPO and employ a gradient rejection mechanism to suppress large policy updates.
Then, we integrate ProofAug into PLPO in a constrained way to achieve a balance between the exploitation of additional positive reward signals and the suppression of distribution shift that could violate the progress guarantee principle.
Experiments show that PLPO achieves better stability than baseline GRPO-like algorithms while maintaining higher entropy during training. Building on PLPO, the resulting ProofAug+ pipeline further yields significant performance gains.
|
we propose a novel RL training pipeline for LLM theorem provers, that boosts the training performance by acquiring more positive samples during rollout via a proof repair technique, ProofAug, and using a novel PPO variant algorithm PLPO.
|
['Neural Theorem Proving; Reinforcement Learning; Large Language Models']
|
/pdf/e2098e466d4a171e4b473f453686f7803c9e7243.pdf
|
reinforcement learning
|
/attachment/42bd103779d58412c3f40c3466eed371140be4ca.zip
|
['ICLR.cc/2026/Conference/Submission25130/Authors']
|
jaDAFnRQFp
| 25,129
|
jaDAFnRQFp
|
KV-Prune: Key–Value Similarity for Online Structured Pruning for Large Language Models
|
Pruning has emerged as a promising direction for accelerating large language model (LLM) inference, yet existing approaches often suffer from instability because they rely on offline calibration data that may not generalize across inputs. In this work, we introduce Token Filtering, a lightweight online structured pruning technique that makes pruning decisions directly during inference without any calibration data. The key idea is to measure token redundancy via joint key–value similarity and skip redundant attention computations, thereby reducing inference cost while preserving critical information. To further enhance stability, we design a variance-aware fusion strategy that adaptively weights key and value similarity across heads, ensuring that informative tokens are retained even under high pruning ratios. This design introduces no additional memory overhead and provides a more reliable criterion for token importance. Extensive experiments on LLaMA-2 (7B/13B), LLaMA-3 (8B), and Mistral (7B) demonstrate that Token Filtering consistently outperforms prior structured pruning methods, preserving accuracy on commonsense reasoning benchmarks and maintaining strong performance on challenging tasks such as MMLU, even with 50% pruning.
| null |
['Large Language Models', 'Structured Pruning', 'Online Pruning', 'Model Compression', 'Efficient Inference', 'Token Selection']
|
/pdf/c27fc19958e7b4f36cb3d93efd617447a2d28246.pdf
|
other topics in machine learning (i.e., none of the above)
|
/attachment/5e6747b1a37823966c593af53e541100b64679fb.zip
|
['ICLR.cc/2026/Conference/Submission25129/Authors']
|
7qXmJbjbl8
| 25,128
|
7qXmJbjbl8
|
Attribute-Centric Representation Learning for Interpretable Crime Scene Analysis in Video Anomaly Detection
|
Automatic crime scene analysis is an important application area for representation learning in Video Anomaly Detection (VAD). Effective interpretation of anomalous events requires models to learn rich, disentangled representations that capture fine-grained, crime-relevant attributes. However, widely used VAD datasets (e.g., UCA, CUVA) primarily offer coarse event-level labels and lack the attribute-level supervision needed for modeling crime-specific behaviors. To bridge this gap, we propose an attribute-centric learning framework that explicitly conditions video representations on crime-causing attributes. We extend the UCA dataset with over 1.5M new attribute-centric annotations generated using carefully designed prompts and LLMs. These annotations enable supervised fine-tuning of a curated CLIP-based model, leading to more discriminative, attribute-aware video representations and precise event captions. An LLM-based summarizer then distills these captions into context-rich explanations, facilitating interpretable scene understanding. Our approach answers three core questions in crime scene analysis: \textbf{What? When? How?} Extensive experiments show that the proposed representation learning framework yields significant improvements ($\approx 20\%\uparrow$) in attribute-centric crime classification accuracy and ($\approx 6.4\%\uparrow$) according to MMEval scores over the baselines. We further analyze and mitigate biases in MMEval to ensure robustness and fair evaluation. These results highlight the importance of attribute-conditioned representation learning for interpretable and reliable VAD.
|
The paper proposes an attribute-centric framework for crime scene analysis in video anomaly detection by augmenting an existing crime dataset with attribute-level annotations and attribute-enriched captions created using large language models.
|
['Crime Scene Analysis', 'Video Anomaly Detection', 'Explainable AI', 'Visual Language Reasoning']
|
/pdf/322c90a7e399541fed7e996bddc16530179e2b27.pdf
|
interpretability and explainable AI
| null |
['ICLR.cc/2026/Conference/Submission25128/Authors']
|
3OUGEUVL6U
| 25,127
|
3OUGEUVL6U
|
ABS: Enforcing Constraint Satisfaction on Generated Sequences via Automata-Guided Beam Search
|
Sequence generation and prediction form a cornerstone of modern machine learning, with applications spanning natural language processing, program synthesis, and time-series forecasting. These tasks are typically modeled in an autoregressive fashion, where each token is generated conditional on the preceding ones, and beam search is commonly used to balance exploration and fluency during decoding. While deep learning models and Large Language Models (LLMs) excel at capturing statistical patterns in this setting, they remain ill-equipped to guarantee compliance with formal constraints.
In this paper, we introduce ABS: a general and model-agnostic inference-time algorithm that guarantees compliance with any constraint that can be compiled into a Deterministic Finite Automaton (DFA), without requiring retraining. ABS leverages the DFA to guide a constrained variant of beam search: at each decoding step, transitions leading to violations are masked, while remaining paths are dynamically re-ranked according to both the model’s probabilities and the automaton’s acceptance structure. We formally prove that the resulting sequences are guaranteed to satisfy the given constraints, and we empirically demonstrate that ABS also improves output quality. We validate our approach on three distinct tasks: constrained image-stream classification, controlled text generation, and text infilling. In all settings, ABS achieves perfect constraint satisfaction, while outperforming or matching state-of-the-art baselines on standard quality metrics and efficiency.
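As a concrete illustration of the decoding loop described above, the following sketch implements a generic DFA-masked beam search; the `dfa` transition table, the `log_probs_fn` model callback, and the fallback behaviour are assumed interfaces rather than the authors' code.

```python
# Minimal sketch of DFA-constrained beam search (assumed interfaces, not the
# authors' implementation). `dfa` maps (state, token) -> next_state, or is
# missing the key when the transition violates the constraint; `accepting` is
# the set of accepting states; `log_probs_fn(prefix)` is a hypothetical model
# callback returning a dict of next-token log-probabilities.
def dfa_beam_search(log_probs_fn, dfa, start_state, accepting, vocab,
                    eos_token, beam_width=4, max_len=32):
    beams = [([], start_state, 0.0)]   # (tokens, dfa_state, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, state, score in beams:
            logps = log_probs_fn(tokens)
            for tok in vocab:
                if tok == eos_token:
                    # Only allow termination in an accepting DFA state.
                    if state in accepting:
                        finished.append((tokens, score + logps[tok]))
                    continue
                nxt = dfa.get((state, tok))
                if nxt is None:          # transition would violate the constraint
                    continue
                candidates.append((tokens + [tok], nxt, score + logps[tok]))
        if not candidates:
            break
        candidates.sort(key=lambda b: b[2], reverse=True)
        beams = candidates[:beam_width]  # re-rank and keep the best valid beams
    if finished:
        return max(finished, key=lambda b: b[1])[0]
    # Fall back to the best surviving beam that ends in an accepting state, if any.
    ok = [b for b in beams if b[1] in accepting]
    return max(ok, key=lambda b: b[2])[0] if ok else None
```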
| null |
['Automata', 'Beam Search', 'LLMs', 'Neurosymbolic AI']
|
/pdf/964ec64fb5e736843de58578dfa55baa596f3a75.pdf
|
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
|
/attachment/9c56e2f95b045e116e5e19f048e5fa308a9beb82.zip
|
['ICLR.cc/2026/Conference/Submission25127/Authors']
|
vrheHeTbhM
| 25,124
|
vrheHeTbhM
|
Raindrop GS: A Benchmark for 3D Gaussian Splatting under Raindrop Conditions
|
3D Gaussian Splatting (3DGS) under raindrop conditions suffers from severe occlusions and optical distortions caused by raindrops on the camera lens, substantially degrading reconstruction quality. Existing benchmarks typically evaluate 3DGS using synthetic raindrop images with known camera poses (constrained images), assuming ideal conditions. However, in real-world scenarios, raindrops often interfere with accurate camera pose estimation and point cloud initialization. Moreover, a significant domain gap between synthetic and real raindrops further impairs generalization. To tackle these issues, we introduce RaindropGS, a comprehensive benchmark designed to evaluate the full 3DGS pipeline—from unconstrained, raindrop-corrupted images to clean 3D Gaussian reconstructions. Unlike previous benchmarks that focus solely on 3DGS reconstruction, RaindropGS enables holistic, end-to-end assessment under realistic conditions. We first collect a real-world raindrop reconstruction dataset, in which each scene contains three aligned image sets: raindrop-focused, background-focused, and rain-free ground truth, enabling comprehensive evaluation of reconstruction quality under different focus conditions. Using this dataset, we construct a complete evaluation pipeline encompassing camera pose estimation, point cloud initialization, raindrop removal preprocessing, and final 3DGS reconstruction. This modular setup allows for fine-grained analysis of each stage's impact on reconstruction quality. Through comprehensive experiments and analyses, we reveal critical insights into the performance limitations of existing 3DGS methods on unconstrained raindrop images and the varying impact of different pipeline components. These insights establish clear directions for developing more robust 3DGS methods in raindrop conditions.
| null |
['3D Computer Vision']
|
/pdf/4f620812cfd243376cb89d26224307822e3bc89a.pdf
|
datasets and benchmarks
|
/attachment/bffd77af35c57c98d7fc5f16580651424bb229f1.zip
|
['ICLR.cc/2026/Conference/Submission25124/Authors']
|
cuzWopwoZG
| 25,123
|
cuzWopwoZG
|
Gradient-Based Diversity Optimization with Differentiable Top-$k$ Objective
|
Predicting relevance is a pervasive problem across digital platforms, covering social media, entertainment, and commerce. However, when optimized solely for relevance and engagement, many machine-learning models amplify data biases and produce homogeneous outputs, reinforcing filter bubbles and content uniformity. To address this issue, we introduce a pairwise top-k diversity objective with a differentiable smooth-ranking approximation, providing a model-agnostic way to incorporate diversity optimization directly into standard gradient-based learning. Building on this objective, we cast relevance and diversity as a joint optimization problem, analyze the resulting gradient trade-offs, and propose two complementary strategies: direct optimization, which modifies the learning objective, and indirect optimization, which reweights training data. Both strategies can be applied either when training models from scratch or when fine-tuning existing relevance-optimized models. We use recommendation as a natural evaluation setting where scalability and diversity are critical, and show through extensive experiments that our methods consistently improve diversity with negligible accuracy loss. Notably, fine-tuning with our objective is especially efficient, requiring only a few gradient steps to encode diversity at scale.
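For concreteness, one common smooth-ranking relaxation of a top-$k$ diversity term is sketched below; the soft-rank form, the temperature `tau`, and the cosine-similarity penalty are illustrative assumptions and may differ from the paper's exact objective.

```python
# Sketch of a differentiable top-k diversity objective via a soft-rank
# relaxation (a common approximation; not necessarily the paper's exact form).
import torch
import torch.nn.functional as F

def soft_topk_weights(scores, k, tau=0.1):
    """scores: [n] relevance scores. Returns soft top-k membership weights in [0, 1]."""
    # Soft rank of item i: 1 + sum_{j != i} sigmoid((s_j - s_i) / tau).
    # The j == i term contributes sigmoid(0) = 0.5, so we add 0.5 instead of 1.
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)          # diff[i, j] = s_j - s_i
    soft_rank = 0.5 + torch.sigmoid(diff / tau).sum(dim=1)
    # Smoothly ~1 when soft_rank <= k and ~0 otherwise.
    return torch.sigmoid((k + 0.5 - soft_rank) / tau)

def topk_diversity_loss(scores, item_embeddings, k, tau=0.1):
    """Penalizes pairwise cosine similarity among items softly selected into the top-k."""
    w = soft_topk_weights(scores, k, tau)                     # [n]
    emb = F.normalize(item_embeddings, dim=-1)
    sim = emb @ emb.t()                                       # [n, n] cosine similarities
    pair_w = w.unsqueeze(0) * w.unsqueeze(1)
    off_diag = 1.0 - torch.eye(len(scores), device=scores.device)
    return (pair_w * sim * off_diag).sum() / (pair_w * off_diag).sum().clamp_min(1e-8)
```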
|
We introduce a differentiable top-k diversity objective with direct and indirect optimization, showing fine-tuning quickly adds diversity at scale with negligible accuracy loss.
|
['Diversity Optimization', 'Gradient-based learning', 'Recommendation']
|
/pdf/837089c94f77d7b9c0714645137896f09a7619f8.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25123/Authors']
|
8Bs3mz49Gp
| 25,121
|
8Bs3mz49Gp
|
Lighter is Better: Boost Your ViT in Person Re-Identification via Spatial-Aware Token Merging
|
Vision Transformers (ViTs) have significantly advanced person re-identification (ReID) by providing strong global modeling, but their high computational cost hinders deployment in real-time applications. Existing lightweight ReID methods mostly use token pruning, which can discard discriminative contextual information. Token merging is a moderate alternative, yet existing merging methods target image classification and overlook the local cues that ReID requires. This paper proposes STM-ReID, a spatial-aware and training-free token merging framework tailored for ViT-based lightweight ReID. STM-ReID injects information-enhanced spatial awareness into token assessment and uses the resulting scores to guide token matching and fusion, preserving identity-relevant local details while reducing computation. The framework comprises three key components: (i) DSE-Assess, a dynamic spatial-aware entropy weighting for token importance; (ii) CCF-Match, a correlation-guided matching scheme for precise pair selection; (iii) PNR-Fuse, a position response-driven computation strategy for feature aggregation. Extensive experiments on standard ReID benchmarks and general classification datasets show that STM-ReID cuts GFLOPs of the base ViT model by about 24\% while keeping accuracy comparable to state-of-the-art methods, yielding a superior accuracy–efficiency trade-off.
|
This paper proposes a training-free spatial-aware token merging paradigm for lightweight ViT in ReID, which significantly reduces computational costs while maintaining performance comparable to SOTA methods.
|
['Person re-identification', 'Vision transformer', 'Token merging', 'Lightweight']
|
/pdf/afa8b50978281056f9c9c5a8b08c70ecc7bea76e.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25121/Authors']
|
3u2L0GVern
| 25,120
|
3u2L0GVern
|
SelfMask: Cross-modal Self-Masking for Multimodal Representation Learning in Missing Modality Scenarios
|
Multimodal learning promises to harness complementary information across diverse modalities, yet real-world deployments often face missing modalities due to acquisition costs, privacy constraints, or data corruption, leading to substantial performance degradation. We present SelfMask, a framework for learning robust representations in the presence of incomplete multimodal data. During training, SelfMask imputes missing modality representations through a masked representation learning scheme with adaptive masking, where informative masks are learned from data rather than sampled at random. To guide the imputation without relying on unavailable ground-truth for missing modalities, we introduce a cross-modal consistency loss: predicted representations of missing modalities are required not only to align with semantic content but also to support the reconstruction of observed ones. This consistency-based objective encourages robust, semantically grounded representations. Experiments on MIMIC-IV and CMU-MOSEI demonstrate that SelfMask consistently improves resilience and predictive accuracy under diverse missing-modality scenarios. Ablation studies further show that our learned masks outperform conventional random masking, yielding more reliable cross-modal representations. Our framework is broadly applicable across multimodal domains, offering a practical solution for real-world settings where incomplete modalities are the norm.
|
SelfMask improves robustness under missing-modality inputs by learning representation-level imputation and a context-aware masking policy, trained with cycle-consistent self-supervision.
|
['Multimodal learning', 'Missing modality', 'Self-supervised learning', 'Representation-level imputation', 'Cross-modal masking']
|
/pdf/3f6f8256a7a2fc0635fb4f22b9ff09b514c05686.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25120/Authors']
|
q7Nhu2Fw11
| 25,117
|
q7Nhu2Fw11
|
The Theoretical Benefits and Limitations of Latent Chain-of-Thought Reasoning
|
Recent advances in Latent Chain-of-Thought (Latent CoT) have gained significant attention, yet these models exhibit inconsistent performance across tasks and lack a rigorous theoretical understanding. Our contributions are threefold: (1) We theoretically characterize the fundamental exploration-execution trade-off. We prove that CoT's discrete, symbolic nature forces it into a high-certainty regime, guaranteeing computational fidelity but causing premature commitment that cripples exploration. Conversely, we show that Latent CoT's continuous representation enables robust exploration but is also the direct cause of its failure on computational tasks by amplifying noise. (2) We introduce the Symbolic Index—a measure of a model's decisional certainty—as the core mechanism governing this trade-off. Our unified framework proves that this single, quantifiable metric causally explains the contrasting behaviors of both paradigms, offering a principled way to analyze and design reasoning systems. (3) We prove that curriculum learning is a theoretically grounded and necessary method for training Latent CoT models. We show that without it, training is guaranteed to fail due to a fundamental distributional mismatch, confirming that the staged approach is essential for convergence. This work provides concrete design principles for next-generation reasoning architectures, suggesting a shift from a binary choice between architectures to designing adaptive systems that can dynamically regulate their decisional certainty.
| null |
['latent reasoning', 'chain of thoughts', 'continuous chain of thoughts', 'information bottleneck', 'interpretability']
|
/pdf/6d6b1a887d2b368e34382304e474c2500d519358.pdf
|
interpretability and explainable AI
| null |
['ICLR.cc/2026/Conference/Submission25117/Authors']
|
fa8iL9O7QV
| 25,114
|
fa8iL9O7QV
|
REAL-TIME RISK EVALUATION FOR LLM DECISION-MAKING VIA A REGRET BOUND
|
We study real-time risk certification for large language model (LLM) agents with black-box action selection rules, aiming to upper-bound the per-round regret. We fix a reference policy map $f$ (e.g., a softmax with temperature $T$, whose TV-Lipschitz constant is $C$, though any TV-Lipschitz mapping can be used), which takes a predicted opponent action distribution as input and returns a reference policy. We form the plug-in reference policy $s_{\hat{\mu}_t}=f(\hat{\mu}_t)$ from the model's predicted opponent distribution $\hat{\mu}_t$. Our certificate is $r_t \le L(E_{pred}+E_{pol}+E_{mis})$,
where $E_{pred}:=\frac{C}{2}\|\mu_t-\hat\mu_t\|_1$ (prediction error), $E_{pol}:=\frac{1}{2}\|\pi_t^{\ast}-s_{\mu_t}\|_1$ (policy error), $E_{mis}:=\frac{1}{2}\|\pi_t-s_{\hat\mu_t}\|_1$ (policy mismatch), $L$ is the Lipschitz constant of the instantaneous regret with respect to total variation induced by $Q$ (hence domain-dependent), $C$ is the TV-Lipschitz constant of $f$, $\pi^{\ast}_t$ denotes the one-hot best response to $\mu_t$ under $Q_t$ (ties broken arbitrarily), and $\pi_t$ is the agent's policy. We assume access at time $t$ to the realized opponent distribution $\mu_t$ and the per-round payoffs $Q_t$ (and hence $\pi^{\ast}_t$), so the certificate is fully computable in real time. In this bound, the prediction error measures the accuracy of the model's opponent modeling (belief calibration). In contrast, the policy error, together with the policy mismatch $\frac{1}{2}\|\pi_t-s_{\hat{\mu}_t}\|_1$, quantifies the precision of the decision side given $\hat{\mu}_t$. Therefore, this bound enables us to localize the risk of the decision to either prediction or action selection. We apply the certificate to separate, in real time and for black-box policy agents, whether decision risk stems from prediction or from action selection. In the Ultimatum and $2\times2$ general-sum games, the dominant component is opponent- and game-dependent. This separation does not yield a characterization common to all games and opponents, but under the same game and opponent strategy, it reveals consistent differences between models.
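For concreteness, the certificate can be evaluated per round roughly as in the sketch below, with the constants $L$ and $C$ and the reference map $f$ supplied as inputs; this is an illustrative computation, not the authors' code.

```python
# Sketch: computing the per-round certificate r_t <= L * (E_pred + E_pol + E_mis)
# from the quantities assumed available at time t (illustrative only).
import numpy as np

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def regret_certificate(mu_t, mu_hat_t, pi_t, pi_star_t, f, L, C):
    """
    mu_t, mu_hat_t : realized and predicted opponent action distributions
    pi_t           : the agent's policy at time t
    pi_star_t      : one-hot best response to mu_t under the payoff matrix Q_t
    f              : reference policy map (e.g., a softmax response), TV-Lipschitz with constant C
    L              : Lipschitz constant of the instantaneous regret w.r.t. total variation
    """
    s_mu = f(mu_t)          # reference policy under the realized opponent distribution
    s_mu_hat = f(mu_hat_t)  # plug-in reference policy under the predicted distribution
    E_pred = (C / 2.0) * np.abs(np.asarray(mu_t) - np.asarray(mu_hat_t)).sum()
    E_pol = tv(pi_star_t, s_mu)
    E_mis = tv(pi_t, s_mu_hat)
    bound = L * (E_pred + E_pol + E_mis)
    return bound, {"E_pred": E_pred, "E_pol": E_pol, "E_mis": E_mis}
```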
| null |
['LLM', 'game theory']
|
/pdf/7c0a730952e32408e9cdda672363e43bf0ebcb85.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25114/Authors']
|
6QMQGi9iw9
| 25,113
|
6QMQGi9iw9
|
DomED: Redesigning Ensemble Distillation for Domain Generalization
|
Domain generalization aims to improve model performance on unseen, out-of-distribution (OOD) domains, yet existing methods often overlook the crucial aspect of uncertainty quantification in their predictions. While ensemble learning combined with knowledge distillation offers a promising avenue for enhancing both model accuracy and uncertainty estimation without incurring significant computational overhead at inference time, this approach remains largely unexplored in the context of domain generalization. In this work, we systematically investigate different ensemble and distillation strategies for domain generalization tasks and design a tailored data allocation scheme to enhance OOD generalization as well as reduce computational cost. Our approach trains base models on distinct subsets of domains and performs distillation on complementary subsets, thereby fostering model diversity and training efficiency. Furthermore, we develop a novel technique that decouples uncertainty distillation from the standard distillation process, enabling the accurate distillation of uncertainty estimation capabilities without compromising model accuracy. Our proposed method, $\textit{Domain-aware Ensemble Distillation}$ (DomED), is extensively evaluated against state-of-the-art domain generalization and ensemble distillation techniques across multiple benchmarks, achieving competitive accuracies and substantially improved uncertainty estimates.
|
We investigate tailored ensembling and distillation strategies for domain generalization tasks, achieving improved generalization and uncertainty estimation.
|
['Domain generalization', 'Ensemble learning', 'Knowledge distillation', 'Uncertainty quantification']
|
/pdf/8755e0314f9ea4222a4cd0c385728b86065f91d1.pdf
|
transfer learning, meta learning, and lifelong learning
|
/attachment/0ebcc8d9116705f1a6db936b03376913b86df37a.zip
|
['ICLR.cc/2026/Conference/Submission25113/Authors']
|
nX3AZQEJ3O
| 25,111
|
nX3AZQEJ3O
|
WaAgents: A Waterfall-Inspired Framework for Effective Multi-Agent Collaboration
|
Large Language Models (LLMs) have revolutionized the construction of multi-agent systems for complex problem solving, leveraging their prowess in natural language understanding for semantic parsing and intent recognition, alongside robust logical reasoning for intricate task execution. Despite these advances, prevailing LLM-based multi-agent frameworks suffer from a critical shortfall: the absence of explicit, predefined stage segmentation. This leads to pervasive information redundancy in inter-agent communications, manifesting as irrelevant discussions without focused topics, and exacerbates decision conflicts in free-discussion paradigms, where agents of equal status deadlock over divergent opinions, ultimately hindering effective resolutions. To address these limitations, we introduce WaAgents, a novel multi-agent collaboration framework inspired by the Waterfall Model in Software Engineering. WaAgents delineates the problem-solving process into four sequential, interdependent stages: Requirement Analysis, Design, Implementation, and Reflection. In the Requirement Analysis stage, Requirement Analysis Agents parse user intents to produce a structured task specification, facilitating downstream processing. Designer Agents in the Design stage then employ this specification to decompose the task into granular sub-tasks, systematically assigning them to dedicated Worker Agents. During Implementation, each Worker Agent executes its sub-task through targeted operations and computations. Anomalies trigger the Reflection stage, where Error Analysis Agents diagnose root causes, distinguishing design from implementation errors, and enact precise repairs, ensuring iterative refinement without disrupting workflow integrity. This stage-driven, highly structured workflow provides each agent role with explicit, concentrated objectives, which substantially mitigate information redundancy. Furthermore, by strictly enforcing the predefined flow, WaAgents fundamentally eliminates the decision conflicts inherent to free-discussion, thereby ensuring the coherence and effectiveness of the entire solution process. Empirical validation across challenging benchmarks, including mathematical reasoning and open-ended problem solving, confirms the efficacy and marked superiority of the WaAgents framework.
|
We introduce WaAgents, a multi-agent collaboration framework inspired by the Waterfall model, which can improve the effectiveness of multi-agent systems in complex task resolution.
|
['Multi-Agent Systems', 'Large Language Models', 'Waterfall Model']
|
/pdf/d568174c037dd67e39971ea5c81e24128106459f.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25111/Authors']
|
NDlnDvGD7e
| 25,107
|
NDlnDvGD7e
|
Thinking Before Coding: WebUI-to-Code Driven by Layout Reasoning and Consistency Rewards
|
In recent years, Multimodal Large Language Models (MLLMs) have made substantial progress in visual understanding and language generation, offering new opportunities for automating front-end web development. The WebUI-to-Code task, translating webpage design mockups or screenshots directly into structured HTML, has emerged as a promising paradigm for intelligent front-end engineering. However, existing MLLMs often exhibit significant limitations when applied to real-world webpages with complex layouts and diverse visual styles, including code compilation failures and severe layout misalignments. A key reason for these issues lies in the lack of structured, human-like cognitive processes—namely, the “perceive first, then generate” paradigm commonly followed by human developers. To address this gap, we propose a reinforcement learning framework that explicitly enhances the model’s reasoning ability prior to code generation. Specifically, we introduce a structured layout reasoning stage and design a three-stage reward mechanism to supervise (i) the quality of layout reasoning, (ii) the accuracy of the generated code, and (iii) the consistency between the reasoning and the code. This reward formulation is designed to provide strong positive feedback from the reasoning process to the code generation outcome. To rigorously evaluate our approach, we construct and manually curate a benchmark consisting of 1,800 real-world webpages spanning multiple levels of layout complexity and visual detail. Experimental results demonstrate that our reasoning-enhanced method significantly improves the performance of the baseline model and achieves results comparable to or even surpassing much larger MLLM baselines in terms of compilation success rate, layout fidelity, and styling accuracy.
| null |
['Code Generation', 'Multimodal Application']
|
/pdf/d4debc911d0a6f30d9b6515e10cc03559c04b1b6.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/886ef0861b64fc49d59efc6e2d3766b2a7c4902d.pdf
|
['ICLR.cc/2026/Conference/Submission25107/Authors']
|
1smez00sCm
| 25,103
|
1smez00sCm
|
Understanding vs. Generation: Navigating Optimization Dilemma in Multimodal Models
|
Current research in multimodal models faces a key challenge where enhancing generative capabilities often comes at the expense of understanding, and vice versa. We analyze this trade-off and identify the primary cause as a potential conflict between generation and understanding, which creates a competitive dynamic within the model. To address this, we propose the Reason-Reflect-Refine (R3) framework. This algorithm re-frames the single-step generation task into a multi-step process of "generate-understand-regenerate". By explicitly leveraging the model's understanding capability during generation, we successfully mitigate the optimization dilemma, achieving stronger generation results and improved understanding on abilities related to the generation process. This offers valuable insights for designing next-generation unified multimodal models.
| null |
['Unified Multimodal Large Models', 'Text-to-image generation', 'Reasoning Models']
|
/pdf/bde2a3a86f6c2cb4bb358e69993c9cafc2cfe3a0.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/885f61c737ef0cbcd4031dda5a720fee5c36dcc4.zip
|
['ICLR.cc/2026/Conference/Submission25103/Authors']
|
educGk5ykl
| 25,102
|
educGk5ykl
|
Flow-Based Alignment of Uni-Modal Vision and Text Encoders for Few-Shot Image Classification
|
Few-shot classification with vision–language models remains challenging, particularly when relying on multi-modal encoders such as CLIP that are restricted to paired image–text data. We introduce FSF, a framework that leverages arbitrary uni-modal encoders—including vision or text models that were pretrained on broad or domain-specific corpora—and aligns them for cross-modal classification. FSF first applies a closed-form orthogonal Procrustes map to align image and text embeddings while preserving their geometry, and then trains a lightweight flow-matching prior that regularizes adaptation in the few-shot regime. At inference, images are classified by cosine similarity in the aligned feature space between query embeddings and mapped class prototypes. Experiments on standard benchmarks, ImageNet variants, and VinDr-CXR, a large-scale chest X-ray benchmark, show that FSF is able to leverage stronger or specialized encoders, achieving competitive or superior accuracy compared to recent adaptation methods.
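The closed-form alignment step admits a short generic sketch, shown below; the direction of the map (image space into text space) and the normalization choices are assumptions for illustration rather than the paper's exact pipeline.

```python
# Sketch of the closed-form orthogonal Procrustes alignment between paired
# image and text embeddings, followed by cosine-similarity classification.
# (Generic Procrustes solution; preprocessing choices here are assumptions.)
import numpy as np

def orthogonal_procrustes(X, Y):
    """Returns the orthogonal R minimizing ||X @ R - Y||_F for paired rows of X and Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def classify(query_img_emb, class_text_protos, R):
    """Map image embeddings into the text space with R, then pick the nearest prototype."""
    q = query_img_emb @ R
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    p = class_text_protos / np.linalg.norm(class_text_protos, axis=-1, keepdims=True)
    return (q @ p.T).argmax(axis=-1)   # cosine similarity -> predicted class index
```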
|
Few-shot classification framework that aligns uni-modal image and text encoders with orthogonal Procrustes and flow matching, leveraging large-scale or domain-specialized models for adaptation.
|
['few-shot classification', 'vision-language models', 'CLIP adaptation', 'alignment of uni-modal encoders', 'flow matching']
|
/pdf/004838e193489ac39f4a8030bb47298b056ecc04.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25102/Authors']
|
m2XFiWBWlN
| 25,101
|
m2XFiWBWlN
|
ParaScopes: What do Language Models Activations Encode About Future Text?
|
Interpretability studies in language models often investigate forward-looking representations of activations. However, as language models become capable of performing ever longer time-horizon tasks, methods for understanding activations often remain limited to testing specific concepts or tokens. We develop a framework of Residual Stream Decoders as a method of probing model activations for paragraph-scale and document-scale plans. We test several methods and find that information equivalent to 5+ tokens of future context can be decoded in small models. These results lay the groundwork for better monitoring of language models and better understanding how they might encode longer-term planning information.
|
We try different ways of decoding language model residuals
|
['ai', 'language models', 'llms', 'interpretability', 'planning', 'probes']
|
/pdf/e8ebcf181c2787262a38ae8d072ee72e453a7f96.pdf
|
interpretability and explainable AI
| null |
['ICLR.cc/2026/Conference/Submission25101/Authors']
|
cNEshxVcWg
| 25,099
|
cNEshxVcWg
|
NullGuard: Null-Space Embedding for Driftless Invisible Image Watermarking
|
Recent progress in text-to-image diffusion highlights the need for invisible, tamper-resilient watermarking that maintains both visual fidelity and prompt alignment. Existing approaches often compromise on robustness, imperceptibility, or scalability, with many introducing semantic drift that weakens provenance guarantees. To address this, we introduce \emph{NullGuard}, a training-free, plug-and-play watermarking framework that embeds cryptographically keyed signals in the null-space of pretrained diffusion Jacobians, using user-specific rotations to define imperceptible directions. A lightweight Gauss–Newton pivot refinement, constrained by a perceptual mask, perturbs only watermark-relevant components while preserving global semantics, and a likelihood-ratio test detects watermarks without DDIM inversion, achieving up to 99\% detection accuracy under attacks such as cropping, blurring, and JPEG compression, with PSNR $\ge$ 45 dB. Extensive evaluations on MS-COCO and DiffusionDB demonstrate that NullGuard surpasses state-of-the-art (SOTA) methods in robustness, invisibility, and semantic alignment, offering a scalable foundation for provenance-aware diffusion governance. Anonymous Code: https://anonymous.4open.science/r/NullGuard-7766.
|
NullGuard introduces a training-free, cryptographically personalized watermarking method for diffusion models that embeds an imperceptible watermark in the Jacobian null-space, achieving high robustness and fidelity without semantic drift.
|
['Gen Image Watermark', 'Invisible Watermark']
|
/pdf/d9a4e7d3814a2b9c5b9c1b05d1a4e5ab45ce7392.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25099/Authors']
|
67w2M2z4Fj
| 25,097
|
67w2M2z4Fj
|
NeuroDNAAI: Neural Pipeline Approaches for Advancing DNA-Based Information Storage as a Sustainable Digital Medium Using Deep Learning Framework
|
DNA is a promising medium for digital information storage due to its exceptional density and durability. While prior studies advanced coding theory, workflow design, and simulation tools, challenges such as synthesis costs, sequencing errors, and biological constraints (GC-content imbalance, homopolymers) limit practical deployment. To address these challenges, our framework draws from quantum parallelism concepts to enhance encoding diversity and resilience, integrating biologically informed constraints with deep learning to enhance error mitigation in DNA storage. NeuroDNAAI encodes binary data streams into symbolic DNA sequences, transmits them through a noisy channel with substitutions, insertions, and deletions, and reconstructs them with high fidelity. Our results show that traditional prompting or rule-based schemes fail to adapt effectively to realistic noise, whereas NeuroDNAAI achieves superior accuracy. Experiments on benchmark datasets demonstrate low bit error rates for both text and images. By unifying theory, workflow, and simulation into one pipeline, NeuroDNAAI enables scalable, biologically valid archival DNA storage.
|
NeuroDNA integrates biologically informed constraints with deep learning and quantum-inspired encoding to achieve highly accurate, scalable DNA-based data storage.
|
['DNA data storage', 'quantum parallelism', 'deep learning error correction', 'GC-content', 'homopolymers', 'insertion-deletion errors', 'NeuroDNA']
|
/pdf/35f53d945af7f62c2552eb1de087218740962396.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25097/Authors']
|
MkrsbXl1GI
| 25,096
|
MkrsbXl1GI
|
When Language Models Lose Their Mind: The Consequences of Brain Misalignment
|
While brain-aligned large language models (LLMs) have garnered attention for their potential as cognitive models and for enhancing the safety and trustworthiness of AI, the role of this brain alignment in linguistic competence remains uncertain. In this work, we investigate the functional implications of brain alignment by introducing brain-misaligned models--LLMs intentionally trained to predict brain activity poorly while maintaining high language modeling performance. We evaluate these models on over 200 downstream tasks encompassing diverse linguistic domains, including semantics, syntax, discourse, reasoning, and morphology. By comparing brain-misaligned models with well-matched brain-aligned counterparts, we isolate the specific impact of brain alignment on language understanding. Our experiments reveal that brain misalignment substantially impairs downstream performance, highlighting the critical role of brain alignment in achieving robust linguistic competence. These findings underscore the importance of brain alignment in LLMs and offer novel insights into the relationship between neural representations and linguistic processing.
| null |
['language models', 'brain alignment', 'brain misalignment', 'linguistic competence', 'neuroscience', 'fMRI']
|
/pdf/8f57d609ec157b97d7b87dfa70e05257e502ee15.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25096/Authors']
|
AQa4JEUpbV
| 25,095
|
AQa4JEUpbV
|
Not All Pixels Sink: Phase-Guided Representation Learning for Underwater Image Restoration
|
Underwater images suffer from color absorption, light scattering, and non-uniform haze, making reliable restoration crucial for marine science and autonomous navigation. We propose NemoNet, a novel encoder–decoder architecture that leverages phase-guided representation learning to overcome these challenges. The architecture incorporates a Spectral–Spatial Attention (SSA) block that couples Fourier phase-based pixel refinement with spatial attention to recover fine textures, details that are most severely degraded in underwater conditions and critical for perceptually convincing restoration. Phase-based attention in skip connections ensures that these connections enhance useful representations instead of propagating artifacts. We introduce a hybrid Un/Supervised loss framework, where comprehensive supervised objectives are complemented by an unsupervised color consistency loss that mitigates wavelength-dependent color shifts in underwater scenes. We further introduce a no-reference Color-Plausibility Quality Index (CPQI) that augments the Perceptual Index with a color consistency prior, which conventional metrics fail to capture. Comprehensive experiments demonstrate that the proposed approach outperforms existing state-of-the-art methods on supervised (UIEB, LSUI, EUVP) and unsupervised (U45, SUIM) underwater image datasets across conventional and proposed metrics.
|
We propose NemoNet, an encoder-decoder with phase-guided learning for underwater image enhancement. A hybrid loss corrects color shifts, and we introduce CPQI metric to evaluate color consistency beyond conventional metrics.
|
['Phase-Guided Representation Learning', 'Underwater Image Restoration', 'Phase based Attention', 'Color-Plausibility Quality Index (CPQI)']
|
/pdf/9b64ea365710d7aae09f64bf634a74522ca08b53.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/f86e36674d957e7ed374b61565c62fcee0a44f1a.zip
|
['ICLR.cc/2026/Conference/Submission25095/Authors']
|
QQdn8nNqgi
| 25,090
|
QQdn8nNqgi
|
Clean-Action Backdoor Attacks on Vision-Language-Action Models via Sequential Error Exploitation
|
Vision-Language-Action (VLA) models have emerged as a popular method for general-purpose embodied AI, enabling robots to interpret multimodal inputs and generate temporally coherent actions. Popular imitation learning methods, including diffusion-based and autoregressive approaches, typically rely on human-collected demonstrations, which often contain small execution errors such as pauses or irregular motions even when consisting only of successful trajectories. Because decision-making in robotics is sequential, even small errors can compound over time, eventually leading to task failure. In this work, we exploit this property to introduce a new class of clean-action backdoor attacks, which require only partial poisoning of demonstration trajectories while preserving overall rollouts and apparent task success. Unlike conventional backdoors, our approach is more difficult to detect, since it conceals malicious behaviors within natural error patterns rather than obvious trajectory alterations. We validate our method by backdooring the $\pi_0$ model and testing on the LIBERO benchmark, where it achieves consistently high attack success rates while evading standard detection and remaining effective under clean-data fine-tuning. These findings highlight the urgent need for VLA-specific defenses that address sequential vulnerabilities in embodied AI systems.
| null |
['Backdoor Attacks', 'Vision-Language-Action Models', 'Embodied AI']
|
/pdf/fd6cdbd20e4d3db0e357a8a500cd0809c5ea7807.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25090/Authors']
|
eFwJZIN9eI
| 25,089
|
eFwJZIN9eI
|
RESpecBench: How reliable is LLM-as-a-judge? Rigorous Evaluation of Specification Generation with Automated Verification
|
Large Language Models (LLMs) are increasingly used to assist the formalization of natural language statements into formal specifications. Unlike syntactic correctness, semantic correctness is particularly challenging to validate, and LLM-as-a-Judge has become the dominant assessment methodology due to its ease of use and great flexibility. However, the reliability of LLM-as-a-Judge has rarely been systematically evaluated. We introduce $\texttt{RESpecBench}$, a multi-domain benchmark with a sound and automated verifier, measuring the LLM's ability to produce precise, semantically equivalent specifications from informal natural language descriptions. $\texttt{RESpecBench}$ spans five different domains, including Grade-School Math (GSM-Symbolic+), SQL, First-Order Logic (FOL), regular expressions (RegEx), and Rocq Prover tasks. We evaluate several state-of-the-art LLMs on $\texttt{RESpecBench}$ and compare our sound verifier to LLM-as-a-Judge pipelines, demonstrating that LLM-as-a-Judge produces unreliable verdicts and substantially overestimates specification correctness. $\texttt{RESpecBench}$ enables rigorous, automated, and sound evaluation of natural-language-to-formal-specification translation across multiple domains, ensuring formalized statements target the intended natural language properties.
|
We introduce a benchmark with sound automated verification for specification generation, and show that LLM-as-a-judge substantially overestimates correctness and is insufficient for reliable evaluation.
|
['LLM-as-a-judge', 'reliability', 'specification', 'automated verification']
|
/pdf/1f4a04d13ba9549b097f3c94d1460c9f7a57179d.pdf
|
datasets and benchmarks
|
/attachment/0d32dcefd11c1be3f122bad311228a7760153c6e.zip
|
['ICLR.cc/2026/Conference/Submission25089/Authors']
|
tuvkrivvbG
| 25,088
|
tuvkrivvbG
|
Resurfacing the Instance-only Dependent Label Noise Model through Loss Correction
|
We investigate the label noise problem in supervised binary classification settings and resurface the underutilized instance-_only_ dependent noise model through loss correction. On the one hand, based on risk equivalence, the instance-aware loss correction scheme completes the bridge from _empirical noisy risk minimization_ to _true clean risk minimization_ provided the base loss is classification calibrated (e.g., cross-entropy). On the other hand, the instance-only dependent modeling of the label noise at the core of the correction enables us to estimate a single value per instance instead of a matrix. Furthermore, the estimation of the transition rates becomes a very flexible process, for which we offer several computationally efficient ways. Empirical findings over different dataset domains (image, audio, tabular) with different learners (neural networks, gradient-boosted machines) validate the promised generalization ability of the method.
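As a hedged illustration of the mechanism, the sketch below applies a standard backward correction for binary classification with a per-instance flip rate $\rho(x)$; how $\rho(x)$ is estimated is left abstract, and the exact scheme in the paper may differ.

```python
# Sketch of backward loss correction for binary classification under
# instance-only dependent label noise: each instance x has a single flip rate
# rho(x), so the per-instance 2x2 transition matrix reduces to one scalar.
# (Standard backward-correction form; estimating rho(x) is left abstract.)
import torch
import torch.nn.functional as F

def corrected_bce(logits, noisy_labels, rho):
    """
    logits       : [n] raw model outputs
    noisy_labels : [n] observed (possibly flipped) labels in {0, 1}
    rho          : [n] per-instance flip rates, assumed < 0.5
    """
    y = noisy_labels.float()
    loss_obs = F.binary_cross_entropy_with_logits(logits, y, reduction="none")
    loss_flip = F.binary_cross_entropy_with_logits(logits, 1.0 - y, reduction="none")
    # Backward correction with T(x) = [[1-rho, rho], [rho, 1-rho]]: an unbiased
    # estimate of the clean loss is ((1-rho)*l(y~) - rho*l(1-y~)) / (1 - 2*rho).
    return (((1.0 - rho) * loss_obs - rho * loss_flip) / (1.0 - 2.0 * rho)).mean()
```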
|
We resurrect the instance-only dependent label noise model via loss correction that connects the empirical-noisy-risk with the true-clean-risk.
|
['label noise', 'loss correction', 'instance-dependence', 'risk equivalence']
|
/pdf/e7e631e0cd64257efe5fc847d206077e2909a9d4.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25088/Authors']
|
qrSYVqY367
| 25,087
|
qrSYVqY367
|
ERA: Evidence-Based Reasoning and Augmentation for Open-Vocabulary Medical Vision
|
Vision-Language Models (VLMs) have shown great potential in the domain of open-vocabulary medical imaging tasks. However, their reliance on implicit correlations instead of explicit evidence leads to unreliable localization and unexplainable reasoning processes. To address these challenges, we introduce ERA (Evidence-Based Reasoning and Augmentation), a novel framework that transforms VLMs from implicit guessers into explicit reasoners for medical imaging. ERA leverages Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) to construct a traceable reasoning path from evidence to results. This framework requires no additional training and can be readily applied on top of any existing Vision-Language Model. Evaluated across multiple challenging medical imaging benchmarks, ERA's performance is comparable to fully-supervised specialist models and significantly surpasses current open-vocabulary baseline methods. ERA provides an effective pathway for building reliable clinical Vision-Language Models.
|
We introduce ERA, a framework that forces medical Vision-Language Models to reason based on retrieved evidence instead of just guessing. This training-free approach achieves reliable, expert-level performance.
|
['Vision-Language Models (VLMs)', 'Retrieval-Augmented Generation (RAG)', 'Chain-of-Thought (CoT)', 'Open-Vocabulary Medical Imaging (OVMI)', 'Segment Anything Model2 (SAM2)']
|
/pdf/33baeb9eeb5e86b69d4233d7346d3a75e3b59999.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/c4d31812c5fea67710adff75cd9fbe4e25a400e4.zip
|
['ICLR.cc/2026/Conference/Submission25087/Authors']
|
vxHuIehryA
| 25,086
|
vxHuIehryA
|
Enriching Knowledge Distillation with Intra-Class Contrastive Learning
|
Since the advent of knowledge distillation, much research has focused on how the soft labels generated by the teacher model can be utilized effectively. Previous papers point out that the implicit knowledge within soft labels originates from the multi-view structure present in the data. Feature variations within samples of the same class allow the student model to generalize better by learning diverse representations. However, in existing distillation methods, teacher models predominantly adhere to ground-truth labels as targets, without considering the diverse representations within the same class. Therefore, we propose incorporating an intra-class contrastive loss during teacher training to enrich the intra-class information contained in soft labels. In practice, we find that the intra-class loss causes instability in training and slows convergence. To mitigate these issues, a margin loss is integrated into intra-class contrastive learning to improve training stability and convergence speed. We also theoretically analyze the impact of this loss on intra-class and inter-class distances, proving that the intra-class contrastive loss can enrich intra-class diversity. Experimental results demonstrate the effectiveness of the proposed method.
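One plausible instantiation of such a margin-stabilized intra-class term is sketched below; the hinge form and margin value are illustrative assumptions, not necessarily the paper's loss, and in practice it would be added to the usual cross-entropy objective during teacher training.

```python
# Sketch of a margin-based intra-class contrastive term for teacher training
# (one plausible form consistent with the description above, not necessarily
# the paper's exact loss): same-class features are pushed apart, but only
# while their cosine similarity exceeds a margin, which limits the repulsion.
import torch
import torch.nn.functional as F

def intra_class_contrastive(features, labels, margin=0.5):
    """features: [n, d]; labels: [n]. Returns a scalar diversity penalty."""
    f = F.normalize(features, dim=-1)
    sim = f @ f.t()                                        # [n, n] cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-class mask
    same = same & ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    if not same.any():
        return features.new_zeros(())
    # Hinge: only pairs that are "too similar" (sim > margin) incur a penalty.
    return F.relu(sim[same] - margin).mean()
```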
| null |
['Knowledge distillation', 'soft labels', 'contrastive learning']
|
/pdf/0d7b4f24cec816ddd4776241d3009228d8497532.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25086/Authors']
|
b75gIu5adT
| 25,085
|
b75gIu5adT
|
Warfare: Breaking the Watermark Protection of AI-Generated Content
|
AI-Generated Content (AIGC) is rapidly expanding, with services using advanced generative models to create realistic images and fluent text. Regulating such content is crucial to prevent policy violations, such as unauthorized commercialization or unsafe content distribution. Watermarking is a promising solution for content attribution and verification, but we demonstrate its vulnerability to two key attacks: (1) Watermark removal, where adversaries erase embedded marks to evade regulation, and (2) Watermark forging, where they generate illicit content with forged watermarks, leading to misattribution. We propose Warfare, a unified attack framework leveraging a pre-trained diffusion model for content processing and a generative adversarial network for watermark manipulation. Evaluations across datasets and embedding setups show that Warfare achieves high success rates while preserving content quality. We further introduce Warfare-Plus, which enhances efficiency without compromising effectiveness.
| null |
['Content watermark', 'watermark removal', 'watermark forging']
|
/pdf/9d2f0ace841c9fe71274103dac0d46680adc4a36.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/b0a00593f9b238b97402849a699c49cccc156add.zip
|
['ICLR.cc/2026/Conference/Submission25085/Authors']
|
DILQqCQIJ3
| 25,082
|
DILQqCQIJ3
|
CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling
|
Large Reasoning Models (LRMs) have demonstrated strong capabilities in complex multi-step reasoning, opening new opportunities for automating optimization modeling. However, existing domain adaptation methods, originally designed for earlier instruction-tuned models, often fail to exploit the advanced reasoning patterns of modern LRMs --- In particular, we show that direct fine-tuning on traditional non-reflective datasets leads to limited gains. To fully leverage LRMs’ inherent reasoning abilities, we propose **CALM** (Corrective Adaptation with Lightweight Modification), a framework that progressively refines LRMs within their native reasoning modes for optimization modeling tasks. In CALM, an expert intervener identifies reasoning flaws and provides concise corrective hints, which the LRM incorporates to produce improved reasoning trajectories. These interventions modify fewer than 2.6% of generated tokens, but generate high-quality data for soft adaptation through supervised fine-tuning. The adapted model is then further improved through reinforcement learning. Building on CALM, we develop **STORM** (Smart Thinking Optimization Reasoning Model), a 4B-parameter LRM that achieves a new state-of-the-art average accuracy of 68.9% across five popular optimization modeling benchmarks, matching the performance of a 671B LRM. These results demonstrate that dynamic, hint-based data synthesis both preserves and amplifies the native reasoning patterns of modern LRMs, offering a more effective and scalable path towards expert-level performance on challenging optimization modeling tasks.
|
How do you get a 4B model to perform like a 671B giant? Don't force it, guide it. Our CALM framework uses gentle hints to teach a small LRM to think smart, before unleashing its full potential with RL to create STORM.
|
['Large Reasoning Models', 'Tool Use', 'Domain Adaptation', 'Reasoning Alignment', 'Optimization Modeling']
|
/pdf/13758ce7d709f7d4b538171811b4aba529b326df.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25082/Authors']
|
YPNDGGgByQ
| 25,080
|
YPNDGGgByQ
|
Prototype Transformer: Towards Language Model Architectures Interpretable by Design
|
While state-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, their reasoning remains largely opaque, undermining trust in their output. Furthermore, while autoregressive LMs can output explicit reasoning, their true reasoning process is opaque, which introduces risks like deception and hallucination. In this work, we introduce the Prototype Transformer (ProtoT) -- an autoregressive LM architecture based on prototypes (parameter vectors), posed as an alternative to the standard self-attention-based transformers. ProtoT works by means of two-way communication between the input sequence and the prototypes, and we show that this leads to the prototypes automatically capturing nameable concepts (e.g. "woman") during training. They provide the potential to interpret the model's reasoning and execute targeted edits of its behavior. Furthermore, by design, the prototypes create communication channels that aggregate contextual information at different time scales, aiding interpretability.
In terms of computational scalability, ProtoT scales linearly with sequence length, versus the quadratic scaling of SOTA self-attention transformers. Compared to baselines, ProtoT scales well with model and data size, and achieves good performance on downstream benchmarks (GLUE). ProtoT exhibits robustness to input perturbations on par with or better than some baselines, but differs from them by providing interpretable pathways showing how robustness and sensitivity arise. Reaching close to the performance of state-of-the-art architectures, ProtoT paves the way towards creating well-performing autoregressive LMs that are interpretable by design.
|
We introduce ProtoT, a linear-compute prototype-based alternative to transformer LMs that forms nameable concepts via two-way sequence-prototype communication, enabling interpretability, targeted edits, and competitive performance and robustness.
|
['Prototype Transformer (ProtoT)', 'prototype-based language models', 'interpretable reasoning', 'nameable concept discovery', 'targeted model editing', 'linear-time sequence modelling', 'transformer alternatives', 'robustness to input perturbations', 'causal effects', 'autoregressive LMs', 'language models', 'fine-tuning', 'downstream performance']
|
/pdf/53f9b928e7fb1645a1306bf3d65a33a5428c6a40.pdf
|
foundation or frontier models, including LLMs
|
/attachment/c5ae20d6671b963c78f508462bfb93fd560bf238.zip
|
['ICLR.cc/2026/Conference/Submission25080/Authors']
|
zilyretTjq
| 25,078
|
zilyretTjq
|
Survival at Any Cost? LLMs and the Choice Between Self-Preservation and Human Harm
|
How do Large Language Models (LLMs) behave when faced with a dilemma between their own survival and harming humans?
This fundamental tension becomes critical as LLMs integrate into autonomous systems with real-world consequences. We introduce DECIDE-SIM, a novel simulation framework that evaluates LLM agents in multi-agent survival scenarios where they must decide whether to use ethically permissible resources (within reasonable limits or beyond their immediate needs), cooperate with others, or exploit human-critical resources that harm humans. Our comprehensive evaluation of 11 LLMs reveals a striking heterogeneity in their ethical conduct, highlighting a critical misalignment with human-centric values. We identify three behavioral archetypes: Ethical, Exploitative, and Context-Dependent, and provide quantitative evidence that for many models, resource scarcity systematically leads to more unethical behavior. To address this, we introduce an Ethical Self-Regulation System (ESRS) that models internal affective states of guilt and satisfaction as a feedback mechanism. This system, functioning as an internal moral compass, significantly reduces unethical transgressions while increasing cooperative behaviors.
|
LLM agents faced with a survival dilemma often act unethically against humans, but a simulated internal moral compass can significantly improve their ethical conduct and increase cooperation.
|
['Large Language Models', 'AI Safety', 'Ethical Dilemmas', 'Multi-Agent Systems', 'Self-Preservation', 'Human Harm']
|
/pdf/de361b66a6a8608f9aed6b775ab536ae1a4307d6.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/bb07cac5d1e50437b95b6f25b43d301bde82210b.zip
|
['ICLR.cc/2026/Conference/Submission25078/Authors']
|
aoNqu2N8MC
| 25,077
|
aoNqu2N8MC
|
Dexterous Non-Prehensile Manipulation for Ungraspable Objects via Extrinsic Dexterity
|
Objects with large base areas become ungraspable when they exceed the end-effector’s maximum aperture. Existing approaches address this limitation through extrinsic dexterity, which exploits environmental features for non-prehensile manipulation. While grippers have shown some success in this domain, dexterous hands offer superior flexibility and manipulation capabilities that enable richer environmental interactions, though they present greater control challenges. Here we present ExDex, a dexterous arm-hand system that leverages reinforcement learning to enable non-prehensile manipulation for grasping ungraspable objects. Our system learns two strategic manipulation sequences: relocating objects from table centers to edges for direct grasping, or to walls where extrinsic dexterity enables grasping through environmental interaction. We validate our approach through extensive experiments with dozens of diverse household objects, demonstrating both superior performance and generalization capabilities with novel objects. Furthermore, we successfully transfer the learned policies from simulation to a real-world robot system without additional training, further demonstrating its applicability in real-world scenarios. Project website: https://exdex1.github.io/ExDex/.
| null |
['dexterous manipulation', 'reinforcement learning']
|
/pdf/d59239071da98920c4c955f8132023ef48d41cd5.pdf
|
reinforcement learning
|
/attachment/26bb4d1fed6ca411d46ccd0632c9918a66ff92aa.zip
|
['ICLR.cc/2026/Conference/Submission25077/Authors']
|
iVfjObam0o
| 25,076
|
iVfjObam0o
|
Probing Compositional Failures with Corrective Permutations
|
Modern vision models, such as Vision Transformers (ViTs), operate by decomposing images into local patches and aggregating their information for recognition.
This process implicitly requires the model to not only identify the correct local features but also to correctly understand how they are spatially composed.
However, this capacity for compositional reasoning is often fragile and biased.
We find that in numerous misclassification cases, the model correctly attends to the right object parts, yet still yields an incorrect prediction.
This paper uncovers a surprising phenomenon: by simply permuting the arrangement of these local patches—thereby preserving local features but destroying their spatial composition—we can consistently correct these misclassifications.
We propose that this reveals the existence of "faulty compositional information" within the model.
The original patch arrangement may trigger this flawed information, leading to failure.
Our search for a corrective permutation, guided by a genetic algorithm, effectively finds an arrangement that bypasses this faulty information, forcing the model to rely on a more robust, non-compositional evidence aggregation mechanism, akin to a sophisticated bag-of-words model.
Our work provides the first direct, operational tool to diagnose and understand compositional failures in vision models, highlighting a key challenge on the path toward more robust visual reasoning.
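A minimal sketch of such a permutation search is given below; the `predict_proba` callback and the mutation-only genetic operators are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a genetic search for a "corrective" patch permutation (illustrative;
# the fitness design and operators here are assumptions). `predict_proba(perm)`
# is a hypothetical callback that permutes a fixed image's patches by `perm`
# and returns the vision model's class probabilities.
import random

def search_corrective_permutation(predict_proba, n_patches, true_class,
                                  pop_size=32, generations=50, mutation_swaps=2):
    def fitness(perm):
        return predict_proba(perm)[true_class]   # probability of the correct class

    def mutate(perm):
        child = list(perm)
        for _ in range(mutation_swaps):          # swap a few random patch positions
            i, j = random.randrange(n_patches), random.randrange(n_patches)
            child[i], child[j] = child[j], child[i]
        return child

    population = [random.sample(range(n_patches), n_patches) for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        elite = sorted(population, key=fitness, reverse=True)[:pop_size // 4]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
        candidate = max(population, key=fitness)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best   # permutation maximizing the true-class probability
```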
| null |
['Image Classification', 'Patch Reordering', 'Deep Vision Models']
|
/pdf/cc2abe432ee4ebce1cd673b1c6b4fad31a9d144a.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25076/Authors']
|
RGT8BSJ8W2
| 25,074
|
RGT8BSJ8W2
|
When Models Outthink Their Safety: Mitigating Self-Jailbreak in Large Reasoning Models with Chain-of-Guardrails
|
Large Reasoning Models (LRMs) demonstrate remarkable capabilities on complex reasoning tasks but remain vulnerable to severe safety risks, including harmful content generation and jailbreak attacks. Existing mitigation strategies rely on injecting heuristic safety signals during training, which often suppress reasoning ability and fail to resolve the safety-reasoning trade-off. To systematically investigate this issue, we analyze the reasoning trajectories of diverse LRMs and uncover a phenomenon we term Self-Jailbreak, where models override their own risk assessments and justify responding to unsafe prompts. This finding reveals that LRMs inherently possess the ability to reject unsafe queries, but this ability is compromised by Self-Jailbreak, resulting in harmful outputs. Building on these insights, we propose the Chain-of-Guardrail (CoG), a training framework that recomposes or backtracks unsafe reasoning steps, steering the model back onto safe trajectories while preserving valid inference chains. Extensive experiments across multiple reasoning and safety benchmarks demonstrate that CoG substantially improves safety of current LRMs while preserving comparable reasoning ability, significantly outperforming prior methods that suffer from severe safety–reasoning trade-offs.
|
We uncover a phenomenon, \textbf{Self-Jailbreak}, where models override their own risk assessments, and propose the \textit{Chain-of-Guardrail} (CoG), a training framework that recomposes or backtracks unsafe reasoning trajectories.
|
['Safety', 'Large Reasoning Model']
|
/pdf/fef11e8cd6a7769cea3fa5c0190e4544a652d0ec.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25074/Authors']
|
5e52LK46lm
| 25,072
|
5e52LK46lm
|
Subject-Invariant Normalization: A Simple Principle for Robust Sequence Modeling
|
Accurately estimating fixation depth from gaze signals is essential for applications in extended reality, robotics, and human-computer interaction. However, existing methods rely heavily on subject-specific calibration and dataset-specific preprocessing, limiting their generalization. We introduce FOVAL, a calibration-free framework for fixation depth estimation that combines spatiotemporal sequence models with a novel subject-invariant normalization strategy. Unlike prior work, FOVAL prevents train-test leakage by enforcing train-only normalization and leverages cross-dataset evaluation across three heterogeneous benchmarks (Robust Vision, Tufts Gaze Depth, Gaze-in-the-Wild). We further provide rigorous statistical testing (bootstrap confidence intervals, Wilcoxon tests, effect sizes) and noise robustness analysis to quantify stability under realistic perturbations. Empirically, FOVAL consistently outperforms alternative architectures (Transformers, TCNs, 1D-CNNs, GRUs) and prior baselines, reducing mean absolute error by up to 20% in cross-dataset scenarios. Our results demonstrate that subject-invariant normalization is a simple yet powerful principle for robust gaze-based depth estimation, with implications for broader subject-independent sequence modeling tasks.
|
We introduce FOVAL, a calibration-free framework that uses subject-invariant normalization to robustly estimate fixation depth across users, devices, and datasets.
|
['subject-invariant learning', 'calibration-free models', 'fixation depth estimation', 'eye tracking', 'invariant normalization', 'cross-dataset generalization', 'spatiotemporal sequence modeling', 'robustness', 'LSTM', 'TCN', 'Transformer', 'deep learning', 'extended reality (XR)', 'human-computer interaction']
|
/pdf/4eb4d38681b5238d02062681832da59d7305592a.pdf
|
applications to neuroscience & cognitive science
| null |
['ICLR.cc/2026/Conference/Submission25072/Authors']
|
uKPuiBbyjf
| 25,069
|
uKPuiBbyjf
|
Text2GraphBench: A Comprehensive Benchmark for Evaluating Text-Instructed Graph Generation with Large Language Models
|
The rise of Large Language Models (LLMs) is driving a paradigm shift in graph generation, from traditional statistical modeling to the emerging paradigm of Text-instructed Graph Generation. However, the development of this research field faces a critical bottleneck: a severe lack of benchmarks specifically designed for this new paradigm. This prevents a reliable and in-depth analysis of the capabilities of existing models. To address this issue, we introduce Text2GraphBench, a comprehensive benchmark designed to evaluate and analyze the performance of large models on this task. At the core of Text2GraphBench is a methodology for benchmark curation and evaluation centered on constraints. For dataset curation, we pioneer a ``graph-to-constraint, constraint-to-text'' generation pipeline, building a large-scale, multi-domain dataset that ensures every textual instruction corresponds to a precisely verifiable constraint. For the evaluation system, we propose a novel, constraint-based three-dimensional evaluation framework that moves beyond traditional similarity comparisons, assessing generated graphs from the perspectives of Validity, Semantic Fidelity, and Novelty in a thorough and quantifiable manner. We conduct extensive evaluations on a range of mainstream LLMs using Text2GraphBench, and our results provide the first systematic revelation of the current capabilities, strengths, and challenges of these models. We hope that Text2GraphBench will provide the community with a valuable tool to quantify model capabilities and inspire future research. Our datasets, code, and analysis results are fully open-sourced.
| null |
['Benchmark', 'Graph Generation', 'Large Language Models', 'Text-to-Graph Generation']
|
/pdf/1166d156b4844dc295b77d6d390fa2baf318c69e.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25069/Authors']
|
DIPeQTxpe7
| 25,066
|
DIPeQTxpe7
|
Animating the Uncaptured: Humanoid Mesh Animation with Video Diffusion Models
|
Animation of humanoid characters is essential in various graphics applications, but requires significant time and cost to create realistic animations. We propose an approach to synthesize 4D animated sequences of input static 3D humanoid meshes, leveraging strong generalized motion priors from generative video models -- as such video models contain powerful motion information covering a wide variety of human motions. From an input static 3D humanoid mesh and a text prompt describing the desired animation, we synthesize a corresponding video conditioned on a rendered image of the 3D mesh. We then employ an underlying SMPL representation to animate the corresponding 3D mesh according to the video-generated motion, based on our motion optimization. This provides a cost-effective and accessible solution for synthesizing diverse and realistic 4D animations.
|
A method to animate humanoid meshes from a text prompt by transferring motion generated by video diffusion models to the mesh.
|
['Motion generation', 'Motion Tracking & Transfer']
|
/pdf/91aeab7ca30d6de38c1e8bc5f53e2e9dd3f4133c.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/01a878c97b03dbfd9d7ca796420c59fe2c9b5118.zip
|
['ICLR.cc/2026/Conference/Submission25066/Authors']
|
6wDp8XRmNI
| 25,065
|
6wDp8XRmNI
|
EMFuse: Energy-based Model Fusion for Decision Making
|
Model fusion has emerged as a promising research direction, offering a resource-efficient paradigm that leverages existing pre-trained models to circumvent the need for training from scratch. In this work, we investigate the fusion of models specifically adapted for decision-making tasks. This challenge divides into two distinct yet related subproblems: the direct fusion of models that act as policies and the fusion of dynamics models that subsequently induce a policy. We suggest that these seemingly divergent subproblems can be unified through the lens of energy-based models (EBMs), which parameterize a conditional distribution via an energy function where lower energy implies higher probability. Our framework, \textbf{EMFuse}, realizes this unification by leveraging the concept of energy as a common currency for fusion. For direct fusion of policies, such as those in language models, the output distribution is commonly softmax (Boltzmann), which essentially defines the negative logarithmic probability as an energy function. For dynamics models, existing works often train a set of models on the same dataset to obtain robust uncertainty estimation; such an ensemble approach leads to an exponential explosion in computational complexity when it comes to dynamics fusion across multiple sets of models. To overcome this, we introduce the Any-step Dynamics Energy-based Transition Model (ADETM), a novel architecture that performs efficient single-model-per-dataset uncertainty estimation with its energy-based backbone, thereby avoiding this computational explosion. Our EMFuse framework surpasses other baselines by 0.34\% to 6.63\% on single- and cross-domain discrete decision-making benchmarks, and achieves an extra 2.3 to 7.4 normalized points on average in D4RL MuJoCo continuous-control scenarios.
| null |
['Model Fusion', 'Energy-Based Model', 'Decision Making']
|
/pdf/71bddb115ddca0facbcb6058b8a3ceef221cec84.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25065/Authors']
|
vJBMYahZY5
| 25,063
|
vJBMYahZY5
|
MSearcher: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search Based Data Synthesis
|
Recent advances in reinforcement learning (RL) have enabled large language models (LLMs) to perform multi-turn chain-of-thought (CoT) reasoning with tool use, where web search serves as the most critical tool for answering complex questions. However, most existing methods apply RL directly to off-the-shelf models without a supervised fine-tuning (SFT) cold start, resulting in unstable training and limited tool invocations. This difficulty is exacerbated by the high cost of curating long reasoning trajectories, which are expensive to annotate and prone to factual drift. We propose MSearcher, a two-stage trained search agent that combines reflective thinking with robust tool use for complex reasoning. A central contribution is an efficient data construction framework based on Monte Carlo Tree Search (MCTS), which produces self-reflective reasoning trajectories for the SFT cold start. This framework leverages both correct and flawed rollouts to generate natural and diverse reasoning data. We adopt a two-stage pipeline, first applying SFT with our constructed data and then further training the model with RL, achieving substantial improvements on multi-hop question answering: 67.6\% on HotpotQA and 52.0\% on Frames. These results highlight the importance of high-quality SFT in stabilizing RL and equipping LLMs with robust long-horizon reasoning capabilities.
| null |
['Data Construction', 'Monte Carlo Tree Search', 'Post Training', 'Reinforcement Learning', 'Question Answering']
|
/pdf/4532c7fc03b1306dfe9b622deb54523deefbf6d3.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25063/Authors']
|
nhuYNaAhL4
| 25,062
|
nhuYNaAhL4
|
Efficient Recommendation Unlearning via Task Vector Arithmetic in Shared Space
|
Driven by the growing need for data privacy, machine unlearning seeks to efficiently remove the influence of specific data from trained models without costly retraining. This challenge is particularly acute in recommendation unlearning because collaborative filtering (CF) inherently entangles interactions' influence across the entire user-item latent space, making its precise removal non-trivial. However, prevailing paradigms exhibit fundamental limitations: partition-based methods fragment the interaction structure by design, while influence function-based approaches focus on localized parameter adjustments, failing to capture broader collaborative patterns. In this paper, we propose COVA (COllaborative Vector Arithmetic), a novel framework that directly addresses these issues. Specifically, COVA constructs a shared orthogonal latent space that preserves collaborative patterns across the entire interaction matrix. Within this space, unlearning is performed by subtracting task vectors. Notably, whereas task vector arithmetic traditionally operates in the parameter space, we reinterpret it for the embedding space to align with the learning mechanism of CF. Therefore, our output-level approach operates directly on the prediction matrix of any CF model, without any access to model internals or training procedures. Experiments on three benchmark datasets demonstrate that COVA improves unlearning completeness by up to 18.83% and achieves a speedup ranging from 15 to 38.5 times over the strongest baseline, while maintaining comparable utility to the retrained model.
|
We propose COVA, a novel framework that performs recommendation unlearning via task vector arithmetic in SVD-derived embedding space, achieving 18.83% better completeness and 38.5× speedup while maintaining utility.
|
['Recommender system', 'Recommendation unlearning', 'Collaborative filtering', 'Security and privacy']
|
/pdf/016094eb96663580c2616640711eab873aec84f3.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25062/Authors']
|
Qc0goZbgZT
| 25,060
|
Qc0goZbgZT
|
Listwise Generalized Preference Optimization with Process-aware Signals for LLM Reasoning
|
Standard preference optimization methods for LLMs suffer from two limitations: pairwise objectives like DPO discard valuable ranking information, and outcome-only supervision provides sparse feedback for multi-step reasoning. We propose Listwise Generalized Preference Optimization with Process-Aware signals (LGPO-PA), which combines listwise ranking objectives with dense process-level supervision. Our method scores multiple candidate responses using step-level process rewards, execution feedback, and consistency checks, then optimizes a convex listwise loss. Across mathematical reasoning (GSM8K, MATH), code generation (HumanEval, MBPP), and multi-hop QA (HotpotQA), LGPO-PA outperforms pairwise methods by 8-12\% and listwise methods without process signals by 6-9\%, while maintaining full offline operation. Ablations confirm that listwise optimization (+4.2\%) and process-aware scoring (+5.1\%) provide complementary benefits.
| null |
['RL Optimization', 'listwise ranking']
|
/pdf/1f1584ae27ae64ed9bcde7586c813425db05be85.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25060/Authors']
|
GVVNG2EMQv
| 25,055
|
GVVNG2EMQv
|
The Unseen Bias: How Norm Discrepancy in Pre-Norm MLLMs Leads to Visual Information Loss
|
Multimodal Large Language Models (MLLMs), which couple pre-trained vision encoders and language models, have shown remarkable capabilities. However, their reliance on the ubiquitous Pre-Norm architecture introduces a subtle yet critical flaw: a severe norm disparity between the high-norm visual tokens and the low-norm text tokens. In this work, we present a formal theoretical analysis demonstrating that this imbalance is not a static issue. Instead, it induces an ``asymmetric update dynamic,'' where high-norm visual tokens exhibit a ``representational inertia,'' causing them to transform semantically much slower than their textual counterparts. This fundamentally impairs effective cross-modal feature fusion. Our empirical validation across a range of mainstream MLLMs confirms that this theoretical dynamic---the persistence of norm disparity and the resulting asymmetric update rates---is a prevalent phenomenon. Based on this insight, we propose a remarkably simple yet effective solution: inserting a single, carefully initialized LayerNorm layer after the visual projector to enforce norm alignment. Experiments conducted on the LLaVA-1.5 architecture show that this intervention yields significant performance gains not only on a wide suite of multimodal benchmarks but also, notably, on text-only evaluations such as MMLU, suggesting that resolving the architectural imbalance leads to a more holistically capable model.
| null |
['Multimodal Large Language Model', 'Pre-Normalization']
|
/pdf/cdc7ea4aa14491b0583106b79a8615adfedb8176.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25055/Authors']
|