
Compression Efficiency and Structural Learning as a Computational Model of DLN Cognitive Stages

Wu, A.

2026-02-03 · neuroscience
bioRxiv · doi: 10.64898/2026.02.01.703168

We propose a computational instantiation of three cognitive stages from the Dot-Linear-Network (DLN) framework, grounded in a compression-efficiency thesis. DLN stages are characterized as graph-structured belief-dependency representations used to evaluate options: Dot as no persistent belief graph (reactive policies with negligible internal state), Linear as a null graph over option beliefs (K independent option estimates with no information sharing), and Network as shared latent structure (a bipartite factor graph in which F latent factors connect to K options), augmented by a temporal exposure state and an explicit structural learning cycle (hypothesis → test → update/expand). We distinguish two compression targets, option-factor structure (shared components in expected outcomes) and stakes-factor structure (shared drivers of consequence-bearing exposures), whose intersection yields jointly efficient actions that simultaneously improve expected outcomes and marginal exposure impact. In a bandit-like simulation (100 seeds, K ∈ {20, 50, 100, 200}, F = 5), Network policies dominate Linear policies in cost-adjusted utility at large K, with the empirical crossover occurring much earlier than an analytic cost-only prediction (K* = F + c_meta/c_param), revealing that the advantage is primarily statistical (shrinkage-like estimation gains from factor pooling) rather than purely computational. Under stakes, all non-DLN agents, including Linear-Plus agents with identical factor structure and Network-standard agents with hierarchical Bayesian learning, collapse due to unmodeled cumulative exposure, while Network-DLN maintains positive utility. Within-stage consistency tests (two algorithmically distinct agents per stage) confirm that the collapse pattern is determined by representational topology, not algorithmic choice. These results evaluate internal consistency of a DLN-to-computation mapping under explicit assumptions; they do not validate a developmental theory in humans.
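
Two quantities from the abstract can be made concrete with a small sketch: the cost-only crossover K* = F + c_meta/c_param and the shrinkage-like gain from pooling K option estimates through F shared factors. The Python sketch below is not the paper's simulation; it assumes Gaussian option-factor loadings that are known to the Network-style estimator, a single noisy observation per option, and arbitrary placeholder cost values, purely to illustrate why factor pooling reduces estimation error relative to K independent estimates.

import numpy as np


def crossover_K(F, c_meta, c_param):
    """Cost-only crossover quoted in the abstract: K* = F + c_meta / c_param."""
    return F + c_meta / c_param


def factor_pooling_gain(K=100, F=5, noise=1.0, seeds=100, seed0=0):
    """Toy illustration of the shrinkage-like statistical gain from factor pooling.

    K option means share F latent factors. A Linear-style agent keeps K independent
    estimates; a Network-style agent regresses the same observations onto the
    (here, assumed known) option-factor loadings, pooling information across options.
    """
    rng = np.random.default_rng(seed0)
    lin_mse, net_mse = [], []
    for _ in range(seeds):
        loadings = rng.normal(size=(K, F))        # assumed Gaussian option-factor structure
        factors = rng.normal(size=F)
        true_means = loadings @ factors
        obs = true_means + noise * rng.normal(size=K)   # one noisy reward per option

        linear_est = obs                                 # K independent per-option estimates
        fhat, *_ = np.linalg.lstsq(loadings, obs, rcond=None)  # pooled F-parameter fit
        network_est = loadings @ fhat

        lin_mse.append(np.mean((linear_est - true_means) ** 2))
        net_mse.append(np.mean((network_est - true_means) ** 2))
    return float(np.mean(lin_mse)), float(np.mean(net_mse))


if __name__ == "__main__":
    # cost values are arbitrary placeholders, not taken from the paper
    print("cost-only crossover K* =", crossover_K(F=5, c_meta=50.0, c_param=1.0))
    lin, net = factor_pooling_gain(K=100, F=5)
    print(f"mean squared error  Linear: {lin:.3f}   Network: {net:.3f}")

Under these toy assumptions the pooled estimate's error is roughly F/K of the independent estimate's, which is the kind of statistical (rather than purely computational) advantage the abstract cites to explain the early empirical crossover.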

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Top % | Predicted probability
1 | Nature Communications | 4913 | 8% | 17.3%
2 | PLOS Computational Biology | 1633 | 2% | 14.5%
3 | eLife | 5422 | 7% | 9.0%
4 | Proceedings of the National Academy of Sciences | 2130 | 9% | 7.1%
5 | Nature Human Behaviour | 85 | 0.3% | 6.7%
6 | Communications Psychology | 20 | 0.1% | 6.2%
7 | Scientific Reports | 3102 | 32% | 3.9%
8 | Computational Psychiatry | 12 | 0.1% | 2.6%
9 | Neural Computation | 36 | 0.3% | 1.8%
10 | Nature | 575 | 11% | 1.7%
11 | PLOS ONE | 4510 | 56% | 1.6%
12 | Philosophical Transactions of the Royal Society B | 51 | 3% | 1.6%
13 | Network Neuroscience | 116 | 0.7% | 1.5%
14 | Journal of The Royal Society Interface | 189 | 3% | 1.5%
15 | PNAS Nexus | 147 | 0.4% | 1.5%
16 | Nature Neuroscience | 216 | 5% | 1.3%
17 | Psychological Review | 19 | 0.1% | 1.2%
18 | Cell Systems | 167 | 10% | 1.1%
19 | Entropy | 20 | 0.3% | 0.9%
20 | Nature Computational Science | 50 | 2% | 0.8%
21 | Neural Networks | 32 | 0.9% | 0.7%
22 | Biometrics | 22 | 0.2% | 0.7%
23 | Science | 429 | 20% | 0.7%
24 | Physical Review X | 23 | 0.8% | 0.6%
25 | Cognition | 44 | 0.5% | 0.6%