Standard

Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. / Miron, Bratenkov; Bondarenko, Ivan.

Advances in Neural Computation, Machine Learning, and Cognitive Research IX. ed. / Boris Kryzhanovsky; Witali Dunin-Barkowski; Vladimir Redko; Yury Tiumentsev; Valentin V. Klimov. Springer, 2026. p. 47-63. 5. (Studies in Computational Intelligence; Vol. 1241 SCI).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Harvard

Miron, B & Bondarenko, I 2026, Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. in B Kryzhanovsky, W Dunin-Barkowski, V Redko, Y Tiumentsev & VV Klimov (eds), Advances in Neural Computation, Machine Learning, and Cognitive Research IX., 5, Studies in Computational Intelligence, vol. 1241 SCI, Springer, pp. 47-63, XXVII International Conference on Neuroinformatics, Moscow, Russian Federation, 20.10.2025. https://doi.org/10.1007/978-3-032-07690-8_5

APA

Miron, B., & Bondarenko, I. (2026). Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. In B. Kryzhanovsky, W. Dunin-Barkowski, V. Redko, Y. Tiumentsev, & V. V. Klimov (Eds.), Advances in Neural Computation, Machine Learning, and Cognitive Research IX (pp. 47-63). [5] (Studies in Computational Intelligence; Vol. 1241 SCI). Springer. https://doi.org/10.1007/978-3-032-07690-8_5

Vancouver

Miron B, Bondarenko I. Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. In Kryzhanovsky B, Dunin-Barkowski W, Redko V, Tiumentsev Y, Klimov VV, editors, Advances in Neural Computation, Machine Learning, and Cognitive Research IX. Springer. 2026. p. 47-63. 5. (Studies in Computational Intelligence). doi: 10.1007/978-3-032-07690-8_5

Author

Miron, Bratenkov ; Bondarenko, Ivan. / Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. Advances in Neural Computation, Machine Learning, and Cognitive Research IX. editor / Boris Kryzhanovsky ; Witali Dunin-Barkowski ; Vladimir Redko ; Yury Tiumentsev ; Valentin V. Klimov. Springer, 2026. pp. 47-63 (Studies in Computational Intelligence).

BibTeX

@inproceedings{77f893059f3e4cf8a01758a746a17ac7,
title = "Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling",
abstract = "Generalization under data shifts remains a critical challenge in machine learning. The invariant risk minimization (IRM) paradigm enhances robustness by searching for invariant features across environments, but its practical application is constrained by the need to predefine environments. To overcome this limitation, we propose a clustering-based method that enables IRM training without prior environment knowledge by treating clusters as environments. Our experiments show that this approach improves model robustness to data shifts compared to empirical risk minimization (ERM). Specifically, on a weather prediction task, the mean squared error (MSE) was reduced by 10%, while in a language modeling task involving long texts, perplexity improved by up to 75%. Additionally, we introduce an adaptive hyperparameter tuning strategy for the IRM penalty term, which stabilizes training and further enhances robustness. This adaptive IRM achieves an additional 10% MSE improvement for weather prediction and a 460% perplexity gain on long textual inputs compared to classical IRM. An analysis of linear dependence between input variables and targets reveals that adaptive IRM encourages learning more complex, nonlinear invariant features, which underpins its superior generalization. These results demonstrate that combining environment discovery via clustering with adaptive IRM substantially improves model generalization under distributional shifts.",
keywords = "Domain shift, ERM, Empirical Risk Minimization, IRM, Invariant Risk Minimization, Machine learning, OOD, Out of distribution",
author = "Bratenkov Miron and Ivan Bondarenko",
note = "Miron, B., Bondarenko, I. (2026). Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V., Tiumentsev, Y., Klimov, V.V. (eds) Advances in Neural Computation, Machine Learning, and Cognitive Research IX. NEUROINFORMATICS 2025. Studies in Computational Intelligence, vol 1241. Springer, Cham. https://doi.org/10.1007/978-3-032-07690-8_5; XXVII International Conference on Neuroinformatics ; Conference date: 20-10-2025 Through 24-10-2025",
year = "2026",
doi = "10.1007/978-3-032-07690-8_5",
language = "English",
isbn = "978-3-032-07689-2",
series = "Studies in Computational Intelligence",
publisher = "Springer",
pages = "47--63",
editor = "Boris Kryzhanovsky and Witali Dunin-Barkowski and Vladimir Redko and Yury Tiumentsev and Klimov, {Valentin V.}",
booktitle = "Advances in Neural Computation, Machine Learning, and Cognitive Research IX",
address = "Cham, Switzerland",

}

RIS

TY - GEN

T1 - Environment-Agnostic IRM via Unsupervised Clustering and Adaptive Penalty Scaling

AU - Miron, Bratenkov

AU - Bondarenko, Ivan

N1 - Conference code: 27

PY - 2026

Y1 - 2026

N2 - Generalization under data shifts remains a critical challenge in machine learning. The invariant risk minimization (IRM) paradigm enhances robustness by searching for invariant features across environments, but its practical application is constrained by the need to predefine environments. To overcome this limitation, we propose a clustering-based method that enables IRM training without prior environment knowledge by treating clusters as environments. Our experiments show that this approach improves model robustness to data shifts compared to empirical risk minimization (ERM). Specifically, on a weather prediction task, the mean squared error (MSE) was reduced by 10%, while in a language modeling task involving long texts, perplexity improved by up to 75%. Additionally, we introduce an adaptive hyperparameter tuning strategy for the IRM penalty term, which stabilizes training and further enhances robustness. This adaptive IRM achieves an additional 10% MSE improvement for weather prediction and a 460% perplexity gain on long textual inputs compared to classical IRM. An analysis of linear dependence between input variables and targets reveals that adaptive IRM encourages learning more complex, nonlinear invariant features, which underpins its superior generalization. These results demonstrate that combining environment discovery via clustering with adaptive IRM substantially improves model generalization under distributional shifts.

AB - Generalization under data shifts remains a critical challenge in machine learning. The invariant risk minimization (IRM) paradigm enhances robustness by searching for invariant features across environments, but its practical application is constrained by the need to predefine environments. To overcome this limitation, we propose a clustering-based method that enables IRM training without prior environment knowledge by treating clusters as environments. Our experiments show that this approach improves model robustness to data shifts compared to empirical risk minimization (ERM). Specifically, on a weather prediction task, the mean squared error (MSE) was reduced by 10%, while in a language modeling task involving long texts, perplexity improved by up to 75%. Additionally, we introduce an adaptive hyperparameter tuning strategy for the IRM penalty term, which stabilizes training and further enhances robustness. This adaptive IRM achieves an additional 10% MSE improvement for weather prediction and a 460% perplexity gain on long textual inputs compared to classical IRM. An analysis of linear dependence between input variables and targets reveals that adaptive IRM encourages learning more complex, nonlinear invariant features, which underpins its superior generalization. These results demonstrate that combining environment discovery via clustering with adaptive IRM substantially improves model generalization under distributional shifts.

KW - Domain shift

KW - ERM

KW - Empirical Risk Minimization

KW - IRM

KW - Invariant Risk Minimization

KW - Machine learning

KW - OOD

KW - Out of distribution

UR - https://www.scopus.com/pages/publications/105020042436

UR - https://www.mendeley.com/catalogue/bf16e3e5-aa7d-3de8-88e9-aa6ab49b3d9e/

U2 - 10.1007/978-3-032-07690-8_5

DO - 10.1007/978-3-032-07690-8_5

M3 - Conference contribution

SN - 978-3-032-07689-2

T3 - Studies in Computational Intelligence

SP - 47

EP - 63

BT - Advances in Neural Computation, Machine Learning, and Cognitive Research IX

A2 - Kryzhanovsky, Boris

A2 - Dunin-Barkowski, Witali

A2 - Redko, Vladimir

A2 - Tiumentsev, Yury

A2 - Klimov, Valentin V.

PB - Springer

T2 - XXVII International Conference on Neuroinformatics

Y2 - 20 October 2025 through 24 October 2025

ER -

ID: 71986243