
LLM-Evolved Regularization Schedules Prevent Posterior Collapse in Latent Factor Analysis via Dynamical Systems

Knight, J.

2026-02-12 · neuroscience
bioRxiv · doi:10.64898/2026.02.10.705076
Abstract

Latent Factor Analysis via Dynamical Systems (LFADS) is a powerful variational autoencoder for inferring neural population dynamics from spike train data. However, LFADS suffers from posterior collapse, in which the learned posterior collapses to the prior, eliminating meaningful latent representations. Current solutions require computationally expensive Population-Based Training (PBT) to tune regularization hyperparameters dynamically. Here, we demonstrate that Large Language Model (LLM)-based program evolution can discover regularization schedules that prevent posterior collapse without PBT. Using FunSearch, an evolutionary algorithm that uses LLMs to generate and refine Python functions, we evolved adaptive regularization schedules that respond to training dynamics. Our best evolved schedule prevents posterior collapse across all tested conditions, maintaining KL divergence 6.5x higher than baseline schedules at 50 epochs (n = 10 seeds each, p < 0.001) and stable above 0.09 through 500 epochs across three Neural Latents Benchmark datasets, while preserving reconstruction quality. This work represents the first application of LLM-based program synthesis to variational autoencoder hyperparameter scheduling, offering a computationally efficient alternative to population-based optimization.
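The abstract does not give the evolved schedule itself, but the search space it describes can be sketched. The following is a purely illustrative, hypothetical example of the kind of Python function FunSearch would mutate: an adaptive KL-weight schedule that reacts to the running KL divergence instead of following a fixed annealing ramp. The function name, thresholds, and heuristic are assumptions, not the authors' result.

```python
# Hypothetical sketch (not the evolved schedule from the paper): an adaptive
# KL-weight schedule of the kind FunSearch would evolve. The weight warms up
# over training and backs off when the posterior's KL divergence drifts
# toward the collapse threshold, relieving pressure on the latent code.
def kl_weight(epoch: int, recent_kl: float,
              target_kl: float = 0.1, base: float = 1.0) -> float:
    """Return the KL regularization weight for the current epoch.

    recent_kl : running estimate of the KL term from recent batches
    target_kl : assumed collapse threshold (cf. the 0.09 floor in the abstract)
    """
    warmup = min(1.0, epoch / 50)   # linear warm-up over the first 50 epochs
    if recent_kl < target_kl:       # posterior collapsing toward the prior:
        return base * warmup * 0.5  # relax the KL penalty to keep it alive
    return base * warmup            # otherwise apply the full warmed-up weight
```

In an LFADS-style training loop this value would scale the KL term of the ELBO each epoch; FunSearch would repeatedly rewrite the body of such a function and keep variants that sustain both KL divergence and reconstruction quality.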

Matching journals

The top 8 journals account for 50% of the predicted probability mass.

Rank  Journal                                          Papers in training set  Percentile  Probability
 1    PLOS Computational Biology                       1633                    Top 2%      14.4%
 2    Nature Computational Science                     50                      Top 0.1%    6.8%
 3    Neural Computation                               36                      Top 0.1%    6.8%
 4    Nature Machine Intelligence                      61                      Top 0.4%    6.3%
 5    Journal of Neural Engineering                    197                     Top 0.5%    4.9%
 6    Nature Communications                            4913                    Top 36%     4.2%
 7    eLife                                            5422                    Top 22%     4.0%
 8    Frontiers in Computational Neuroscience          53                      Top 0.6%    4.0%
----- 50% of probability mass above -----
 9    Proceedings of the National Academy of Sciences  2130                    Top 24%     2.9%
10    Scientific Reports                               3102                    Top 44%     2.7%
11    NeuroImage                                       813                     Top 3%      2.5%
12    Communications Biology                           886                     Top 5%      2.1%
13    Imaging Neuroscience                             242                     Top 2%      2.1%
14    Bioinformatics                                   1061                    Top 7%      1.9%
15    PLOS ONE                                         4510                    Top 50%     1.9%
16    Neural Networks                                  32                      Top 0.3%    1.9%
17    eNeuro                                           389                     Top 5%      1.7%
18    Network Neuroscience                             116                     Top 0.6%    1.7%
19    Cell Reports                                     1338                    Top 24%     1.7%
20    Medical Image Analysis                           33                      Top 0.7%    1.3%
21    iScience                                         1063                    Top 24%     1.0%
22    Patterns                                         70                      Top 2%      1.0%
23    Neuron                                           282                     Top 7%      0.9%
24    Nature Methods                                   336                     Top 6%      0.8%
25    Nature Human Behaviour                           85                      Top 4%      0.8%
26    Human Brain Mapping                              295                     Top 4%      0.8%
27    Nature Neuroscience                              216                     Top 6%      0.8%
28    BMC Bioinformatics                               383                     Top 7%      0.7%
29    PNAS Nexus                                       147                     Top 2%      0.7%
30    Neurocomputing                                   13                      Top 0.6%    0.7%
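The "top 8 journals account for 50%" note can be checked directly from the listed probabilities: a minimal sketch (variable names are mine, the numbers are from the table above) finds the rank at which the cumulative predicted probability first reaches 50%.

```python
# Verify the 50% cutoff claim: sum the listed probabilities in rank order
# and report the first rank whose cumulative mass reaches 50%.
from itertools import accumulate

probs = [14.4, 6.8, 6.8, 6.3, 4.9, 4.2, 4.0, 4.0, 2.9, 2.7,
         2.5, 2.1, 2.1, 1.9, 1.9, 1.9, 1.7, 1.7, 1.7, 1.3,
         1.0, 1.0, 0.9, 0.8, 0.8, 0.8, 0.8, 0.7, 0.7, 0.7]

cutoff_rank = next(i for i, c in enumerate(accumulate(probs), start=1)
                   if c >= 50.0)
print(cutoff_rank)  # -> 8 (cumulative mass is 51.4% at rank 8)
```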