Standard

CALM: Continual Associative Learning Model via Sparse Distributed Memory. / Нечесов, Андрей Витальевич; Ruponen, Janne.

In: Technologies, Vol. 13, No. 12, 587, 13.12.2025.

Research output: Contribution to journal › Article › peer-review

Vancouver

Нечесов АВ, Ruponen J. CALM: Continual Associative Learning Model via Sparse Distributed Memory. Technologies. 2025 Dec 13;13(12):587. doi: 10.3390/technologies13120587

BibTeX

@article{62dd81e4396a43f197859b143c3a4d91,
title = "CALM: Continual Associative Learning Model via Sparse Distributed Memory",
abstract = "Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical base and integration logic for the cognitive model seeking to establish continual, lifelong adaptation without retraining by combining SDM system with lightweight dual-transformer modules. The architecture proposes an always-online associative memory for episodic storage (System 1), as well as a pair of asynchronous transformer consolidate experience in the background for uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic sets, confirming a critical radius-threshold phenomenon in SDM recall. These results represent deterministic characterization of SDM dynamics in the component level, preceding the integration in the model level with transformer-based semantic tasks. The CALM framework provides a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior in high interference.",
keywords = "continual learning, associative memory, hybrid architecture, sparse distributed memory (SDM)",
author = "Нечесов, {Андрей Витальевич} and Janne Ruponen",
note = "Nechesov, A.; Ruponen, J. CALM: Continual Associative Learning Model via Sparse Distributed Memory. Technologies 2025, 13, 587. https://doi.org/10.3390/technologies13120587 This work was supported by a grant for research centers, provided by the Ministry of Economic Development of the Russian Federation in accordance with the subsidy agreement with the Novosibirsk State University dated 17 April 2025 No. 139-15-2025-006: IGK 000000C313925P3S0002.",
year = "2025",
month = dec,
day = "13",
doi = "10.3390/technologies13120587",
language = "English",
volume = "13",
journal = "Technologies",
issn = "2227-7080",
publisher = "Multidisciplinary Digital Publishing Institute (MDPI)",
number = "12",

}

RIS

TY - JOUR

T1 - CALM: Continual Associative Learning Model via Sparse Distributed Memory

AU - Нечесов, Андрей Витальевич

AU - Ruponen, Janne

N1 - Nechesov, A.; Ruponen, J. CALM: Continual Associative Learning Model via Sparse Distributed Memory. Technologies 2025, 13, 587. https://doi.org/10.3390/technologies13120587 This work was supported by a grant for research centers, provided by the Ministry of Economic Development of the Russian Federation in accordance with the subsidy agreement with the Novosibirsk State University dated 17 April 2025 No. 139-15-2025-006: IGK 000000C313925P3S0002.

PY - 2025/12/13

Y1 - 2025/12/13

N2 - Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces the Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical basis and integration logic for a cognitive model seeking to achieve continual, lifelong adaptation without retraining by combining an SDM system with lightweight dual-transformer modules. The architecture proposes an always-online associative memory for episodic storage (System 1) and a pair of asynchronous transformers that consolidate experience in the background for uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual-learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic sets, confirming a critical radius-threshold phenomenon in SDM recall. These results provide a deterministic characterization of SDM dynamics at the component level, preceding model-level integration with transformer-based semantic tasks. The CALM framework provides a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior under high interference.

AB - Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces the Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical basis and integration logic for a cognitive model seeking to achieve continual, lifelong adaptation without retraining by combining an SDM system with lightweight dual-transformer modules. The architecture proposes an always-online associative memory for episodic storage (System 1) and a pair of asynchronous transformers that consolidate experience in the background for uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual-learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic sets, confirming a critical radius-threshold phenomenon in SDM recall. These results provide a deterministic characterization of SDM dynamics at the component level, preceding model-level integration with transformer-based semantic tasks. The CALM framework provides a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior under high interference.

KW - continual learning

KW - associative memory

KW - hybrid architecture

KW - sparse distributed memory (SDM)

UR - https://www.scopus.com/pages/publications/105025960137

U2 - 10.3390/technologies13120587

DO - 10.3390/technologies13120587

M3 - Article

VL - 13

JO - Technologies

JF - Technologies

SN - 2227-7080

IS - 12

M1 - 587

ER -
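Note on the abstract's "critical radius-threshold phenomenon": in a Kanerva-style SDM, recall quality changes sharply with the access radius used to activate hard storage locations. The Python sketch below is a minimal illustration of that effect under assumed parameters; it is not the authors' CALM or SDMPreMark code, and the names and values (N_DIM, N_LOC, RADIUS, pattern counts) are illustrative assumptions.

# Minimal Kanerva-style SDM sketch; illustrative only, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

N_DIM = 256   # dimensionality of binary address/data vectors (assumed)
N_LOC = 2000  # number of hard storage locations (assumed)

# Hard locations: fixed random binary addresses, each with integer counters.
hard_addrs = rng.integers(0, 2, size=(N_LOC, N_DIM))
counters = np.zeros((N_LOC, N_DIM), dtype=np.int64)

def activated(addr, radius):
    # Hamming distance from the probe address to every hard location.
    dists = np.sum(hard_addrs != addr, axis=1)
    return dists <= radius

def write(addr, data, radius):
    # Bipolar update: +1 for a 1-bit, -1 for a 0-bit, at every
    # hard location within the access radius of the write address.
    mask = activated(addr, radius)
    counters[mask] += np.where(data == 1, 1, -1)

def read(addr, radius):
    # Majority vote: sum counters over activated locations and
    # threshold at zero to recover a binary vector.
    sums = counters[activated(addr, radius)].sum(axis=0)
    return (sums > 0).astype(int)

RADIUS = 112  # access radius; mean distance between random vectors is N_DIM/2 = 128

# Store random patterns autoassociatively, then probe with noisy cues.
patterns = rng.integers(0, 2, size=(20, N_DIM))
for p in patterns:
    write(p, p, RADIUS)

for flips in (4, 16, 64):
    cue = patterns[0].copy()
    idx = rng.choice(N_DIM, size=flips, replace=False)
    cue[idx] ^= 1  # corrupt the cue by flipping bits
    errors = int(np.sum(read(cue, RADIUS) != patterns[0]))
    print(f"cue noise = {flips:2d} bits -> recall errors = {errors}")

Probing with increasingly noisy cues shows near-perfect recall while the cue stays within the effective access radius and abrupt degradation once it falls outside, which is the kind of threshold behavior the abstract describes at the component level.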
