LLM-Evolved Regularization Schedules Prevent Posterior Collapse in Latent Factor Analysis via Dynamical Systems
Knight, J.
Latent Factor Analysis via Dynamical Systems (LFADS) is a powerful variational autoencoder for inferring neural population dynamics from spike train data. However, LFADS suffers from posterior collapse, in which the approximate posterior degenerates to the prior, eliminating meaningful latent representations. Current solutions require computationally expensive Population-Based Training (PBT) to dynamically tune regularization hyperparameters. Here, we demonstrate that Large Language Model (LLM)-based program evolution can discover regularization schedules that prevent posterior collapse without PBT. Using FunSearch, an evolutionary algorithm that uses LLMs to generate and refine Python functions, we evolved adaptive regularization schedules that respond to training dynamics. Our best evolved schedule prevents posterior collapse across all tested conditions, maintaining KL divergence 6.5x higher than baseline schedules at 50 epochs (n = 10 seeds each, p < 0.001) and keeping it stable above 0.09 through 500 epochs across three Neural Latents Benchmark datasets, while preserving reconstruction quality. This work represents the first application of LLM-based program synthesis to variational autoencoder hyperparameter scheduling, offering a computationally efficient alternative to population-based optimization.
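To make the idea concrete, the following is a minimal illustrative sketch in Python of what an adaptive KL-regularization schedule of this kind might look like. It is not the evolved schedule from the paper: the function name, parameters, thresholds, and back-off rule are all hypothetical, chosen only to show how a schedule can respond to training dynamics (here, the running KL divergence) rather than follow a fixed annealing curve.

    def kl_weight_schedule(epoch: int, recent_kl: float,
                           max_weight: float = 1.0,
                           warmup_epochs: int = 50,
                           collapse_threshold: float = 0.09) -> float:
        """Return the KL regularization weight for the current epoch.

        Hypothetical illustration: linear warmup of the KL term, with the
        weight relaxed whenever the running KL divergence drifts toward a
        collapse threshold, so the posterior is never squeezed flat.
        """
        # Linear warmup toward the full KL weight.
        weight = max_weight * min(1.0, epoch / warmup_epochs)
        # If the posterior is approaching the prior (low KL), back off the
        # penalty so the latents can carry information again.
        if recent_kl < 2.0 * collapse_threshold:
            weight *= recent_kl / (2.0 * collapse_threshold)  # factor in [0, 1)
        return weight

    # Hypothetical per-epoch usage inside a VAE training loop:
    # total_loss = recon_loss + kl_weight_schedule(epoch, kl.item()) * kl

In a FunSearch-style setup, it is the body of a function like this that the LLM would repeatedly mutate and that the evolutionary loop would score by training outcome.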