IFAC-PapersOnLine
Elsevier BV
Preprints posted in the last 90 days, ranked by how well they match IFAC-PapersOnLine's content profile, based on 12 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Hunter, P. J.; Dowrick, J. M.; Ai, W.; Nickerson, D. P.; Shafieizadegan, M. H.; Argus, F.
We present a bond graph modelling approach to analysing cell homeostasis that ensures that the conservation laws of physics (conservation of mass, charge, and energy) are satisfied for the interdependent biochemical, electrical, mechanical, and thermal energy storage mechanisms operating within the cell. We apply the bond graph approach to several cell membrane transport mechanisms and then consider how physics constrains intracellular electrolyte homeostasis for enterocytes (the epithelial absorptive cells of the gut). The model includes the electrogenic sodium-potassium ATPase pump (NKA), the glucose transporter (GLUT2), and an inwardly rectifying potassium channel, all in the basolateral membrane, and the electrogenic sodium-driven glucose transporter (SGLT1) in the apical membrane. Glycolysis converts the imported glucose to ATP to drive NKA. For specified levels of sodium, potassium, and glucose in the blood, the model demonstrates how enterocytes absorb sodium and glucose from the gut and transfer glucose to the blood while maintaining the membrane potential and homeostasis of intracellular sodium and potassium. The Gibbs free energy available from ATP hydrolysis ensures that the cell operates as a sodium battery with a high external-to-internal ratio of sodium concentration, providing the energy for many other cellular transport processes. We show that the 3:2 stoichiometry of Na+/K+ exchange in NKA, coupled with 2:1 Na+/glucose cotransport in SGLT1, a 1:2:2 ratio between glucose consumption and ATP and water production in glycolysis, and K+ and glucose efflux through Kir and GLUT2, respectively, provides a balanced system that maintains homeostasis of intracellular Na+, K+, glucose, ATP, and water, and of the membrane potential, under varying levels of glucose transport from the gut to the blood.
We also show how the flux expressions for SLC transporters, ATPase pumps and ion channels can all be expressed in a consistent and thermodynamically valid way.
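The thermodynamically consistent flux form described above, in which a reaction's flux is a difference of exponentials of the forward and reverse chemical potentials, can be sketched in a few lines. This is an illustrative reconstruction of the standard bond-graph reaction (Re) component, not the authors' code; the rate constant and concentrations are hypothetical.

```python
import numpy as np

R, T = 8.314, 310.0  # gas constant (J/mol/K), body temperature (K)

def chemical_potential(mu0, c):
    """Chemical potential mu = mu0 + RT ln(c) for concentration c."""
    return mu0 + R * T * np.log(c)

def reaction_flux(kappa, mu_forward, mu_reverse):
    """Bond-graph Re component: flux as a difference of exponentials of
    the forward and reverse potentials (Marcelin-de Donder form), which
    vanishes exactly at equilibrium and so is thermodynamically valid."""
    return kappa * (np.exp(mu_forward / (R * T)) - np.exp(mu_reverse / (R * T)))

# hypothetical reaction A -> B with equal standard potentials
mu_A = chemical_potential(0.0, 2.0)   # higher concentration of A
mu_B = chemical_potential(0.0, 1.0)
v = reaction_flux(1e-3, mu_A, mu_B)   # positive: reaction runs downhill A -> B
```

Because the flux is built from potentials rather than ad hoc rate laws, detailed balance holds by construction, which is the consistency property the abstract emphasises for SLC transporters, pumps, and channels alike.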
D'Hondt, L.; Afschrift, M.; De Groote, F.
Human walking is intrinsically variable. For example, there is considerable stride-to-stride variability even when walking speed is constant. This variability is due to uncertainty in the sensorimotor system and the environment, and is shaped by both musculoskeletal dynamics (e.g. joint stiffness and damping originating from muscles) and the control strategy used to mitigate the effects of uncertainty. Yet, insight into how sensorimotor noise shapes walking variability is limited due to a lack of experimental methods to assess sensorimotor noise and control strategies during walking. Simulations that account for uncertainty can elucidate how sensorimotor noise affects movement variability but, due to numerical challenges, accounting for sensorimotor noise is not common in simulations of walking. Existing simulations rely on heavily simplified musculoskeletal dynamics (e.g. no muscles), control policies (e.g. pre-defined feedback loops), or sensorimotor noise sources (e.g. motor noise only). Here, we performed stochastic optimal control simulations of walking based on a model with 9 degrees of freedom and 18 muscles to study how the level of sensory and motor noise influences walking. We solved for feedforward muscle excitations and full-state time-varying feedback gains that minimised expected effort while generating periodic, and hence stable, gait patterns. To enable these simulations, we approximated the state distribution with a Gaussian and used an unscented transform to propagate the state covariance. Resulting optimisation problems were solved with direct collocation. Sensorimotor noise level had a small effect on the mean kinematics but shaped kinematic and muscle activity variability as well as expected effort. Although simulations underestimated the magnitude of experimental positional variability, they captured its structure.
In agreement with experimental results, the control policy prioritised limiting variability of centre-of-mass kinematics and minimal swing foot clearance over limiting joint angle variability. Hence, our simulations suggest that effort minimisation underlies these observations.
Author summary: When performing a movement multiple times, each repetition will be slightly different due to random disturbances in the neural signals used to control movement, i.e. sensorimotor noise. Because sensorimotor noise is difficult to measure inside the nervous system of a moving person, computer simulations are used to study movement control. Such simulations have shown that both sensorimotor noise and musculoskeletal mechanics determine how people control arm movements and standing. However, no simulations of walking have systematically evaluated how the sensorimotor noise level influences walking kinematics, because such simulations pose computational challenges. Here, we proposed and used an approach for minimal-effort simulations of walking in the presence of uncertainty. We imposed forward speed and stability but not kinematics. We found that the level of sensorimotor noise had little effect on the mean movement but a strong effect on the variability and the expected effort. The control strategy prioritised reducing the variability of the centre of mass position and swing foot clearance over reducing the variability of individual joint angles, which is also observed in experiments. Interestingly, strict control of centre of mass position and foot clearance in our simulations emerged from minimising effort.
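The covariance-propagation step used in these simulations (approximating the state distribution as Gaussian and pushing it through the dynamics with an unscented transform) can be illustrated in one dimension. This is a generic sketch of the unscented transform, not the authors' implementation; the scaling parameter `lam` is an illustrative default, and the transform is exact only for linear maps.

```python
import numpy as np

def unscented_propagate(mean, var, f, lam=2.0):
    """Propagate a 1-D Gaussian N(mean, var) through a nonlinearity f by
    evaluating f at deterministically chosen sigma points and recombining
    the results with fixed weights (the unscented transform)."""
    n = 1                                   # state dimension
    spread = np.sqrt((n + lam) * var)       # sigma-point offset from the mean
    points = np.array([mean, mean + spread, mean - spread])
    weights = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    y = f(points)
    y_mean = np.dot(weights, y)             # weighted mean of transformed points
    y_var = np.dot(weights, (y - y_mean) ** 2)
    return y_mean, y_var
```

For a linear map the propagated mean and variance are recovered exactly, which is a convenient sanity check before applying the transform to nonlinear musculoskeletal dynamics.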
Chakraborty, P.; Dey, S.; Kundu, R.; Banerjee, M.; Ghosh, S.
Exploring the emergence of spatio-temporal patterns due to nonlinearities in gene expression is a relatively new development. In this work, we explore the effect of resource constraint on a gene regulatory motif from both equilibrium and spatio-temporal standpoints, taking into account protease, a resource class that mediates degradation. We demonstrate that protease-tagged degradation can cause an emergent bistability to form in the system in a steady-state scenario. Instead of a graded linear response in protein synthesis, two saddle-node bifurcations caused by protease competition produce a switch-like response with hysteresis, in which two drastically differing protein concentrations can coexist. We next turn our attention to spatio-temporal analysis: we extend our study to a two-dimensional sheet of cells with diffusible protein molecules and report the stationary patterns. To investigate the origin of these non-homogeneous stationary patterns, we examine the traveling wave solution and observe that the stationary pattern is formed by the traveling wave. Considering that proteases play a major role in the regulation and expression of genes in a variety of diseased scenarios, the repercussions of this spatial patterning caused by protease competition can be extensive in gene regulatory systems.
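The switch-like response bounded by two saddle-node bifurcations can be reproduced in a minimal self-activation model: counting the real steady states as the basal production rate varies reveals the bistable window. The polynomial below is a generic bistable motif chosen for illustration, not the authors' protease-competition model; parameter values are hypothetical.

```python
import numpy as np

def steady_states(a, k=0.5):
    """Real, non-negative steady states of dx/dt = a + x**2/(1+x**2) - k*x,
    a toy self-activating gene with basal production a and degradation k.
    Steady states are the roots of k*x**3 - (a+1)*x**2 + k*x - a."""
    roots = np.roots([k, -(a + 1.0), k, -a])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real >= -1e-9])

# three steady states (bistable, hysteresis) at low basal production,
# one steady state (graded response) at high basal production
low, high = steady_states(0.02), steady_states(0.2)
```

Sweeping `a` up and then back down and following the outer roots traces the hysteresis loop between the two saddle-node points.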
Oh, C.; Wilkie, K. P.
We present the Toroidal Search Algorithm (TSA), a novel population-based metaheuristic optimization method inspired by the topology of a torus. Conventional metaheuristics frequently suffer from boundary stagnation, a phenomenon that severely degrades performance in bounded and high-dimensional search spaces. TSA addresses this limitation by embedding the search domain into a toroidal geometry, thereby eliminating artificial boundaries and enabling continuous cyclic exploration. Beyond boundary handling, TSA uses winding numbers to capture the history of agent movement across periodic dimensions, which are exploited to adaptively refine local search. A modified sigmoid control function regulates the transition between global and local search. Performance of TSA is evaluated on a collection of unimodal and multimodal benchmark functions at various dimensions. It consistently outperforms established metaheuristics. Notably, TSA demonstrates exceptional robustness to increasing dimensionality, maintaining fast convergence and low variance where competing methods deteriorate. To assess real-world applicability, we apply TSA to an inverse problem from mathematical oncology. With both synthetic and clinical data, TSA reliably recovers physiologically plausible parameters with greater stability and predictive accuracy than competing algorithms. These results demonstrate that TSA is a powerful and robust tool for large-scale global optimization in computational modelling applications.
[Striking image: generated with Google Gemini.]
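The toroidal boundary handling described above can be sketched as follows: a coordinate leaving the box re-enters from the opposite side, and the integer number of wraps is the per-step winding count that an agent can accumulate over its trajectory. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def toroidal_step(position, step, lower, upper):
    """Move an agent on a search domain embedded in a torus: coordinates
    that leave [lower, upper) re-enter from the opposite side, and the
    number of wraps (winding increment) is returned per dimension so the
    caller can accumulate winding numbers across iterations."""
    span = upper - lower
    raw = position + step
    winding = np.floor((raw - lower) / span).astype(int)  # how many wraps
    wrapped = lower + np.mod(raw - lower, span)           # re-entered position
    return wrapped, winding
```

Because no agent ever sticks to a boundary, the stagnation mode the abstract describes simply cannot occur in the wrapped coordinates.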
Ergon, R.
A moving average smoothing method for extraction of cycles in time series data is described, with focus on obliquity cycles and fossil data. The proposed method is intended for cases where the environmental driver of phenotypic evolution can be shown to include obliquity cycles, either by power spectrum analysis or simply by inspection of raw or smoothed time series. The method gives improved mean trait predictions and better understanding when applied to stickleback fish fossil data from around 10 million years ago. The ability to extract obliquity cycles will depend on the dynamics of the time series, and the method is thus not universally applicable. It may, however, be possible to adapt the size of the moving window to the problem under study, or possibly to obtain improved predictions by inclusion of a sinusoidal component in the mean trait prediction modeling.
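The core of such a method can be sketched in a few lines: a centered moving average whose window is matched to the cycle period removes the cycle and leaves the trend, and the residual is the extracted cyclic component. This is a generic sketch under the assumption that the sampling interval (and hence the window length corresponding to the ~41 kyr obliquity period) is known, not the author's exact procedure.

```python
import numpy as np

def moving_average(series, window):
    """Centered moving average; choosing the window equal to the cycle
    period (in samples) averages that cycle out, leaving the trend."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def extract_cycle(series, window):
    """Cyclic component = series minus its smoothed trend (edges trimmed,
    since the valid convolution shortens the trend by window-1 samples)."""
    trend = moving_average(series, window)
    offset = (window - 1) // 2
    return series[offset:offset + len(trend)] - trend
```

A constant series yields a zero cycle, which is a quick check that the trend and residual are aligned correctly after trimming.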
Cabeleira, M. T.; Diaz, V.; Ray, S.; Ovenden, N. C.
Calibration of mechanistic cardiovascular models is a central barrier to their use in population analysis and patient-specific simulation, particularly in settings where key physiological variables are unobservable and multiple parameter combinations can reproduce the same haemodynamic targets. In this work, we present Embedded Gradient Descent (EGD), a calibration framework for ODE-based lumped-parameter cardiovascular models in which selected physiological parameters are promoted to dynamic states and driven toward prescribed targets through embedded controller equations. By exploiting the qualitative structure of the governing equations, EGD enforces physiologically consistent parameter-variable relationships, yielding unique calibrated solutions that are robust to initial conditions and scale efficiently with model complexity. The framework is demonstrated using a mechanistic cardiovascular model to generate virtual paediatric populations spanning normal physiology and two septic shock phenotypes (warm and cold shock), achieving low residual error across pressures, flows, and compartmental volumes. The resulting parameter distributions are consistent with known haemodynamic adaptations in paediatric sepsis, including alterations in vascular resistance, compliance, cardiac elastance, and effective blood volume. Importantly, persistent calibration residuals arise only when target combinations are structurally incompatible with the model, providing an explicit and interpretable diagnostic of feasibility limits rather than an optimisation failure. These results establish EGD as a general, scalable calibration strategy for mechanistic cardiovascular models and a practical foundation for virtual population generation and future patient-specific digital twin applications in critical care. 
NEW & NOTEWORTHY: This study introduces a novel embedded gradient descent calibration framework that enables scalable generation of mechanistically interpretable virtual patient populations from ODE-based cardiovascular models. By treating parameter inference as a dynamical extension of the governing equations and calibrating directly against cycle-derived physiological targets, the method preserves physiologically meaningful parameter-variable relationships. Applied to paediatric sepsis, the framework reproduces warm and cold shock phenotypes and exposes infeasible target combinations, while providing efficient calibration and physiological insight.
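The idea of promoting a parameter to a dynamic state driven by an embedded controller equation can be illustrated on a toy one-compartment model. This is a minimal sketch, not the authors' cardiovascular model: the Windkessel-style equation, the gain `eta`, and all parameter values are illustrative assumptions.

```python
def calibrate_resistance(flow, target_pressure, eta=0.5, dt=1e-3, steps=60000):
    """Toy embedded-controller calibration: a one-compartment model
    dP/dt = (Q - P/R)/C is augmented with dR/dt = eta*(P_target - P),
    so the resistance R is itself a state driven toward the value that
    reproduces the target pressure (here the steady state P = Q*R)."""
    P, R, C = 0.0, 1.0, 1.0          # initial pressure, resistance, compliance
    for _ in range(steps):
        dP = (flow - P / R) / C      # compartment dynamics
        dR = eta * (target_pressure - P)  # embedded controller equation
        P += dt * dP
        R = max(R + dt * dR, 1e-2)   # keep resistance physical
    return P, R

P, R = calibrate_resistance(flow=5.0, target_pressure=10.0)  # R -> 10/5 = 2
```

Because the controller is part of the augmented ODE system, calibration here is just simulation to steady state, which is the property that makes the approach robust to initial conditions in the toy case.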
Mukherjee, P.; Mandal, S.
This paper describes MMP, a three-stage framework for systematic quantum optimization of constrained molecular docking problems. The protocol addresses the "formulation bottleneck": the critical challenge of translating constrained optimization problems into valid QUBO (Quadratic Unconstrained Binary Optimization) formulations for quantum solvers. MMP replaces heuristic penalty tuning with data-driven calibration through: (1) classical solution-space analysis to validate fragment libraries before quantum deployment, (2) systematic penalty sweeps to identify optimal "Goldilocks Zone" coefficients, and (3) MAC-QAOA (MMP Adaptive Constraint QAOA) with layer-dependent penalty decay. Preliminary benchmarks on synthetic constrained optimization problems demonstrate 99.7% solution validity at identified elbow points and 25.5% improvement in solution quality over static-penalty QAOA. MMP is hardware-agnostic but designed for near-term devices including Pasqal's Orion Gamma (140+ qubits). The theoretical framework, algorithmic details, and preliminary validation results of the protocol are discussed, establishing a systematic methodology for quantum-augmented optimization workflows for drug discovery. All benchmarks are conducted on synthetic constrained optimization instances that reproduce structural features of docking formulations; application to real molecular docking targets is left for future work.
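The penalty-sweep idea can be demonstrated on the smallest possible constrained QUBO: pick exactly one of several options. Below the penalty coefficient is too small in one case and adequate in the other; this toy instance and its costs are illustrative assumptions, not the paper's docking formulation.

```python
import itertools
import numpy as np

def one_hot_qubo_energy(bits, costs, penalty):
    """Energy of a 'pick exactly one option' QUBO: linear costs plus a
    quadratic penalty lam*(sum(x)-1)**2 encoding the one-hot constraint."""
    x = np.array(bits)
    return float(costs @ x + penalty * (x.sum() - 1) ** 2)

def best_state(costs, penalty):
    """Brute-force the ground state. On real problems this is the quantum
    solver's job; tiny instances let us sweep the penalty and check when
    the ground state becomes a valid (one-hot) solution."""
    n = len(costs)
    states = itertools.product([0, 1], repeat=n)
    return min(states, key=lambda s: one_hot_qubo_energy(s, costs, penalty))

costs = np.array([3.0, 1.0, 2.0])
# penalty below the cheapest cost: the invalid all-zeros state wins;
# penalty comfortably above it: the cheapest one-hot assignment wins
```

Sweeping `penalty` and recording the fraction of valid ground states is exactly the kind of elbow-point analysis that replaces heuristic penalty tuning.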
Greenwood, M.; Reardon, K. F.; Prasad, A.
Reporter cell assays, such as those used to detect estrogenic chemicals, can detect target chemicals at low concentrations and can be used to analyze chemical mixtures without a priori knowledge of the mixture components. However, the outputs of these assays are affected by biological variability, which complicates their interpretation. Here, we describe and demonstrate a workflow that is useful for determining potential sources of biological variability and optimizing the performance of cell-based assays. The workflow involves developing an appropriate mathematical model for a transcriptional activation assay, calibrating it with experimental data, and conducting sensitivity analysis to characterize individual components of the genetic circuit based on their effect on the reporter signal output. This workflow was tested using an estrogen receptor transcriptional activation assay. For this circuit, our analysis predicts that controlling estrogen response element number, promoter strength, and reporter signal degradation rates minimizes reporter output variability. We show that careful model development, calibration, and analysis can offer biologically relevant insights to minimize the variability of cell-based assays and improve genetic circuits for increased sensitivity and dynamic range.
Biane, C.; Moon, K.; Lee, K.; Pauleve, L.
Boolean networks are discrete dynamical models that use Boolean states and logical functions to represent the dynamics of biological systems. A primary application of Boolean networks is to identify controls (e.g., genetic mutations or knockouts) that drive the system toward a desired phenotype. However, existing computational tools often produce inconsistent results because they rely on differing modeling assumptions. To better understand these differences, we survey existing tools and propose a taxonomy of control problems. Our taxonomy unveils hidden coverage relationships among their solutions that arise from these modeling assumptions. We provide a computational framework to empirically assess these relationships by comparing their predicted controls on a suite of artificial and biological models. Finally, we develop a coverage-consistent metric, the mutation co-occurrence score, to prioritize mutations based on their predicted impact on the phenotype. A case study on T-LGL leukemia highlights how an ensemble prediction of the score across multiple tools identifies key mutations associated with apoptosis.
Author summary: Boolean networks let us model gene and protein regulation with simple on/off logic. That simplicity makes them useful for asking a practical question: which controls (e.g., genetic interventions, knockouts, or forced activations) can guarantee a desired cellular phenotype such as survival or apoptosis? In practice, different software tools often return inconsistent control sets, creating a practical barrier to reproducibility and reliability of predictions. Here we provide a comprehensive survey to navigate this landscape. We provide a comparison framework to assess coverage relationships, both theoretically and empirically, among the solutions of control tools. We classify the tools based on our new taxonomy that highlights the core differences among them.
For many tools, we can theoretically determine their coverage relationships based on which components are fixed together to drive the phenotype. These new findings clearly explain subtle differences among solutions produced by various tools. We then develop a coverage-consistent metric for mutations called the mutation co-occurrence score. This new metric helps prioritize control targets based on their predicted impact on the phenotype. We also demonstrate that averaging scores from multiple tools gives reliable predictions. Our code is extensible to future tools and will facilitate the comparison of Boolean network control tools in biological research.
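The ensemble idea of scoring mutations by how often they co-occur in predicted control sets, then averaging across tools, can be sketched as follows. The exact definition of the mutation co-occurrence score in the paper may differ; this is a simplified stand-in with made-up tool predictions.

```python
from collections import Counter

def mutation_scores(control_sets):
    """Fraction of a tool's predicted control sets in which each mutation
    appears: a simple per-tool co-occurrence-style score."""
    counts = Counter(m for cs in control_sets for m in cs)
    return {m: counts[m] / len(control_sets) for m in counts}

def ensemble_score(per_tool_predictions):
    """Average the per-tool scores, so mutations predicted consistently
    by several tools rank highest (ensemble prediction across tools)."""
    tool_scores = [mutation_scores(p) for p in per_tool_predictions]
    mutations = {m for s in tool_scores for m in s}
    return {m: sum(s.get(m, 0.0) for s in tool_scores) / len(tool_scores)
            for m in mutations}

# hypothetical predictions from two tools, each a list of control sets
tools = [[{"A", "B"}, {"A"}], [{"A"}, {"C"}]]
scores = ensemble_score(tools)   # "A" scores highest: predicted by both tools
```

Averaging across tools with different coverage properties damps tool-specific artifacts, which is the practical motivation the case study illustrates.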
Popinga, A. N.; Forman, J.; Svetlov, D.; Vo, H. D.; Munsky, B. E.
Biological data is prone to both intrinsic and extrinsic noise and variability between experimental replicas. That same stochasticity and heterogeneity can carry information about underlying biochemical mechanisms but, if not incorporated in modeling and probabilistic inference, can also bias parameter estimates and misguide predictions and, subsequently, experiment design. Mechanistic inference typically requires lengthy simulations (e.g., the Stochastic Simulation Algorithm (SSA)); approximations to chemical master equation (CME) solutions that lack rigorous error tracking; or deterministic averaging that lacks the complexity necessary to reflect the data. We introduce the Stochastic System Identification Toolkit (SSIT), a fast, flexible, and open-source software package, available on GitHub, that makes use of MATLAB's efficient and diverse computational architecture. The SSIT is designed for building, simulating, and solving chemical reaction models using ODEs, moments, SSA, Finite State Projection truncations of the CME, or hybrid methods; sensitivity analysis and Fisher information quantification; parameter fitting using likelihood- or Bayesian-based methods; handling of experimental noise and measurement errors using probabilistic distortion operators; and sequential experiment design that empowers users to save time and resources while gaining the most information possible out of their data. The SSIT also offers advanced modeling tools, including model reduction methods for increased efficiency and joint fitting of models and datasets with overlapping reactions or parameters. To facilitate ease and speed of use, the SSIT provides a graphical user interface and ready-made, adaptable pipelines that can be run in the background from the command line or on high-performance computing clusters.
We demonstrate features of the SSIT on two experimental datasets: the first consists of published mRNA count data that reflect Saccharomyces cerevisiae yeast cell response to osmotic shock using single-cell single-molecule fluorescence in situ hybridization; the second consists of single-cell RNA sequencing measurements of 151 activating genes in breast cancer cells following treatment with dexamethasone.
Author summary: We present the Stochastic System Identification Toolkit (SSIT) to model, fit, and predict any data that can be interpreted as changing populations or counts through time, including but not limited to single-cell experiments, economics, epidemiology, ecology, sociology, agriculture, and biotechnology. The SSIT was constructed particularly for stochastic modeling, which is important for systems whose states may experience significant fluctuations from mean behavior, thus affecting the inference of the underlying rate parameters and predictions of subsequent behavior. The SSIT provides statistical inference tools for parameter estimation; sensitivity analysis and information calculation; handling of distortions to probability distributions caused by experimental or measurement processes (e.g., dropout in single-cell RNA sequence data and total fluorescence intensities versus spot counting/puncta analysis); and quantitative design of experiments. The SSIT also offers a variety of complex modeling tools, including model reduction methods and fitting of combined models/datasets that share some behavior but remain distinct (e.g., different genes responding to a single stimulus). The SSIT generates pipelines for easy, efficient analyses to run in the MATLAB environment, in the background on the command line, or on high-performance computing clusters, helping users make informed, time- and cost-effective decisions about their next set of experiments.
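The Finite State Projection idea mentioned above (truncating the CME to a finite state space and integrating the resulting linear ODE for the probability vector) can be sketched on a birth-death process. This is a generic illustration in Python rather than the toolkit's MATLAB code, with hypothetical rate constants; explicit Euler stands in for the toolkit's more careful integrators.

```python
import numpy as np

def fsp_birth_death(k=10.0, gamma=1.0, t=1.0, n_max=60, dt=1e-4):
    """Finite State Projection of the birth-death chemical master equation
    dp/dt = A p on the truncated state space {0..n_max}; returns the mean
    copy number at time t (analytically (k/gamma)*(1 - exp(-gamma*t)))."""
    n = np.arange(n_max + 1)
    A = np.zeros((n_max + 1, n_max + 1))
    A[n[1:], n[1:] - 1] = k                    # birth: gain into n from n-1
    A[n[:-1], n[:-1] + 1] = gamma * n[1:]      # degradation: gain from n+1
    A -= np.diag(k * (n < n_max) + gamma * n)  # probability leaving state n
    p = np.zeros(n_max + 1)
    p[0] = 1.0                                 # start with zero molecules
    for _ in range(int(t / dt)):               # explicit Euler time stepping
        p += dt * (A @ p)
    return float(n @ p)                        # mean of the truncated CME
```

Because the truncation keeps all probability inside {0..n_max}, the error of the projection is directly observable as leaked probability mass, which is the rigorous error tracking the abstract contrasts with uncontrolled approximations.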
Mbuguiro, W.; Holt, S. E.; Griffith, L. G.; Gnecco, J. S.; Mac Gabhann, F.
The endometrium and menstrual disorders, such as endometriosis and adenomyosis, are difficult to study, partly because menstruation depends on interactions between multiple cell types through complex molecular mechanisms. To help understand this system, researchers need humanized experimental and computational models that can interrogate how endometrial cell populations impact each other in both physiological and pathological conditions. Here, we use ordinary differential equations (ODEs) to model changes in the rates of endometrial cell proliferation and death in response to hormones, cytokines, and the specific cell types present. To calibrate this computational model, we used previously published experimental datasets from a 3D co-culture platform supporting primary human endometrial epithelial organoids and endometrial stromal cells. Our ODE-based model can simulate the size of endometrial epithelial organoids and the density of stromal cells over time under multiple hormone/cytokine conditions in mono- and co-cultures. We further created a second, partial differential equation (PDE)-based model that simulates the diffusion of molecules added to these 3D cultures and their uptake by proliferating endometrial cells, using the predicted cell densities from the ODE model as inputs to the PDE simulations. We show that the exposure to hormones and cytokines used in the experiments is reasonably homogenous throughout the 3D culture and identify conditions where this would not be true. Altogether, we use these models to quantify the influence of stromal cells on epithelial cell proliferation and vice versa, to identify differences across cells from different donors, and to provide a quantitative assessment of experimental designs.
Pacheco, M.; Gonzalez, E.; Sauter, T.
The increase in size of metabolic network models, especially with the advent of single-cell data, calls for scalable reconstruction and analysis tools. Such models, often used for drug discovery and the analysis of microbial communities, rely on consistency testing and reconstruction algorithms such as FASTCORE and FASTCC. However, with models nowadays comprising hundreds of thousands of reactions, the running times of such algorithms have increased from a few minutes to hours or days, even with high-performance computing. Experiments that require multiple reconstructions, such as parameter tuning or cross-validation, are practically infeasible in very large networks. Here we introduce FASTERCC, a new version of FASTCC that leverages structural information: prior to any feasibility tests, it removes type I and II dead-ends, fixes the orientation of reversible reactions, and corrects the reversibility of reactions that are structurally incapable of carrying flux in both directions. These improvements drastically reduce running time, yielding a median 20-fold speedup over FASTCC for networks with a large number of blocked reactions. The model cleaning performed by FASTERCC also reduces the computational time of downstream analyses, notably cutting that of FASTCORE by up to 50%.
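The structural dead-end removal can be illustrated on a tiny stoichiometric matrix. This is a deliberately simplified sketch, not FASTERCC: it assumes every reaction is irreversible (flux >= 0), whereas the actual algorithm also handles reaction orientation and reversibility correction.

```python
import numpy as np

def structurally_blocked(S):
    """Iteratively find reactions that can never carry steady-state flux,
    assuming all columns of S are irreversible reactions: a metabolite
    that is only produced or only consumed is a dead-end, so every
    reaction touching it is blocked. Repeat to a fixpoint, since each
    removal can create new dead-ends upstream."""
    S = np.asarray(S, dtype=float)
    active = np.ones(S.shape[1], dtype=bool)
    changed = True
    while changed:
        changed = False
        for row in S:
            r = row[active]
            nz = r[r != 0]
            if len(nz) and (np.all(nz > 0) or np.all(nz < 0)):
                active &= ~(row != 0)   # block all reactions using this metabolite
                changed = True
    return np.where(~active)[0]         # indices of blocked reactions

# rows: metabolites A, B, D; columns: R1 (->A), R2 (A->B), R3 (B->), R4 (A->D)
# D has no consumer, so R4 is structurally blocked; the main pathway survives
S = [[1, -1, 0, -1], [0, 1, -1, 0], [0, 0, 0, 1]]
```

Running such purely structural passes first shrinks the LP-based consistency checks that dominate FASTCC-style running times.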
Troitino-Jordedo, D.; Mansouri, A.; Minebois, R.; Querol, A.; Remondini, D.; Balsa-Canto, E.
Context-specific genome-scale metabolic models are critical tools for studying cellular metabolism under dynamic conditions. However, most existing methods for deriving these models are designed for steady-state settings and may fail to preserve reactions required for transient metabolic shifts, thereby limiting their compatibility with dynamic FBA. Here, we present GeNETop, a methodology for deriving context-specific GEMs designed to preserve dynamic compatibility. GeNETop integrates flux variability analysis (FVA), network topology metrics based on the Integrated Value of Influence (IVI), and transcriptomic data to identify reactions that are both flux-flexible and structurally influential. Reactions are prioritized based on variability and maximality indices, while topology and gene expression guide further refinement, reducing dependence on fixed expression thresholds. Using batch fermentation of Saccharomyces cerevisiae as a case study, we evaluate GeNETop against established methods for context-specific metabolic reconstruction. The resulting networks remain dynamically feasible across growth phases, capture key metabolic transitions, reduce non-essential reactions, and maintain computational tractability. Overall, GeNETop enables context-specific metabolic reconstructions that are compatible with dynamic simulations while maintaining computational efficiency. By overcoming key limitations of existing approaches, the method supports a more accurate representation of time-dependent metabolic processes in biotechnology and systems biology.
Author summary: Cellular metabolism relies on complex networks of reactions to process nutrients, generate energy, and build essential compounds for biomass. Context-specific metabolic models aim to represent only the reactions active under a given condition, improving biological realism and reducing computational complexity in flux balance analysis simulations.
However, metabolic activity adapts dynamically to changing environmental conditions, and reactions that are inactive at one stage may become essential at another. Many current reconstruction methods are designed for steady-state conditions and may exclude reactions that are required during metabolic transitions, thereby limiting their ability to describe dynamic behavior. Here, we introduce GeNETop, a novel approach that refines context-specific networks by integrating multiple layers of information. GeNETop identifies the most relevant reactions by considering their flexibility, importance within the network topology, and gene activity levels. In this way, the method generates biologically meaningful models that focus on metabolic pathways relevant under dynamic conditions. We tested GeNETop on yeast fermentation, a key process in food and biofuel production. The resulting models capture metabolic changes over time and enable stable dynamic simulations, supporting improved flux balance analysis of time-dependent metabolic processes.
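The flux variability analysis (FVA) step that GeNETop builds on can be sketched on a toy network: for each reaction, minimise and maximise its flux subject to steady state and bounds. This is a generic textbook FVA using `scipy.optimize.linprog`, not the authors' implementation; the three-reaction pathway is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fva(S, bounds):
    """Flux variability analysis: for each reaction j, min and max v_j
    subject to the steady-state constraint S v = 0 and flux bounds."""
    n = S.shape[1]
    ranges = []
    for j in range(n):
        c = np.zeros(n)
        c[j] = 1.0
        lo = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
        hi = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
        ranges.append((lo.fun, -hi.fun))   # (min flux, max flux)
    return ranges

# toy linear pathway: R1 imports A, R2 converts A -> B, R3 exports B;
# steady state forces v1 = v2 = v3, so all three ranges are [0, 10]
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
ranges = fva(S, [(0.0, 10.0)] * 3)
```

Reactions whose FVA range collapses to a point are inflexible; GeNETop's variability and maximality indices extend exactly this kind of per-reaction range information.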
Alexis, E.; Rowley, C. W.; Avalos, J. L.
Achieving complex multi-species control objectives is essential for engineering advanced autoregulated biomolecular devices. This paper addresses the problem of robust steady-state tracking for outputs defined as multiplicative combinations of biomolecular species concentrations. We first introduce a control architecture realized via chemical reaction networks that steers the product of two target species concentrations in the controlled network to a prescribed value. A robust stability analysis is provided for closed-loop system families with distinct structural characteristics. The proposed framework is also extended to a more general formulation capable of regulating arbitrary monomial outputs involving multiple species. Numerical simulations of representative examples corroborate the theoretical results and illustrate the effectiveness of our approach.
Koshe, A.; Sobhani-Tehrani, E.; Jalaleddini, K.; Motallebzadeh, H.
Spectral similarity is often judged with a single metric such as RMSE, yet this can be misleading: physically different errors can produce similar scores. This is a critical limitation for computational biomechanics, where spectral agreement underpins both model validation and machine-learning loss design. Here, we develop a multi-metric framework for objective spectral biofidelity and test whether it better captures meaningful disagreement across complex frequency-domain responses. We evaluated 12 complementary similarity metrics, including CORA and ISO/TS 18571, using controlled spectral perturbations that mimic common real-world deviations such as resonance shifts, localized spikes, and broadband tilts. We then applied the framework to a finite-element middle-ear model tuned via simulation-based inference (SBI) to assess convergence with training dataset size and robustness to measurement noise across repeated stochastic runs. No single metric performed reliably across all distortion types. Shape-based metrics tracked resonance morphology but could miss vertical scaling, whereas MaxError remained important for narrowband anomalies that smoother metrics underweighted. CORA and ISO/TS 18571 did not consistently outperform simpler metrics. Rank aggregation using Borda count provided a robust consensus across metrics, enabling objective identification of training-data saturation and noise thresholds beyond which similarity rankings became unstable. These results show that spectral biofidelity cannot be reduced to a single norm. A multi-metric consensus provides a clearer and more physically meaningful basis for comparing experimental and simulated spectra, and offers a more defensible foundation for data-fidelity terms in physics-informed and simulation-based machine learning.
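The opening claim, that physically different errors can produce the same RMSE, is easy to demonstrate: a small broadband offset and a single large spike can score identically under RMSE while differing by an order of magnitude in maximum error. The signals below are synthetic illustrations, not the paper's middle-ear spectra.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equal-length spectra."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

baseline = np.zeros(100)
broadband = baseline + 0.1      # small offset at every frequency bin
spike = baseline.copy()
spike[50] = 1.0                 # one large narrowband anomaly

# both distortions have RMSE = 0.1 against the baseline, yet their
# maximum errors differ tenfold (0.1 vs 1.0): RMSE alone cannot tell
# a broadband tilt from a localized spike
```

This is precisely why the framework pairs smooth metrics with MaxError-style ones before aggregating ranks.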
Faulon, J.-L.; Dursoniah, D.; Ahavi, P.; Raynal, A.; Asin-Garcia, E.
Summary: This study presents dAMN, a hybrid neural-mechanistic model that integrates neural networks with genome-scale dynamic flux balance analysis (dFBA) to predict bacterial growth curves across diverse nutrient environments. dAMN uses neural networks to infer dynamic behavior from initial metabolite concentrations, while mechanistic constraints ensure stoichiometric and thermodynamic consistency based on genome-scale metabolic models. dAMN is trained on E. coli and P. putida experimental growth data from media containing various combinations of sugars, amino acids, and nucleobases, and evaluated on two test sets: one for forecasting over time and another for predicting growth dynamics on unseen media. dAMN achieved high predictive power (R² ≥ 0.9), successfully reproducing growth and substrate depletion dynamics including acetate overflow and the glucose-acetate consumption shift for E. coli. An interesting innovation of dAMN is the treatment of the lag phase, enabling realistic adaptation dynamics absent from standard dFBA models. dAMN stands out for its ability to generalize across combinatorial nutrient inputs and produce full growth-curve predictions from minimal input data.
Availability and implementation: The dAMN software, along with the associated models and data, is available at https://github.com/brsynth/dAMN-main-release and via DOI 10.5281/zenodo.17908125
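The dFBA backbone that dAMN extends alternates between bounding uptake fluxes and integrating concentrations forward in time. The sketch below is a minimal dFBA-style loop in which Monod kinetics stands in for the genome-scale FBA subproblem; all parameter values are hypothetical, and there is no lag phase or neural component here.

```python
def dfba(S0=10.0, X0=0.05, Vmax=10.0, Km=0.5, Y=0.4, dt=1e-3, t_end=8.0):
    """Minimal dynamic-FBA-style loop: at each step the substrate uptake
    flux per unit biomass is bounded by Monod kinetics, biomass grows
    with yield Y, and concentrations advance by explicit Euler. The
    quantity X + Y*S is conserved step by step (mass balance check)."""
    S, X = S0, X0
    for _ in range(int(t_end / dt)):
        v = Vmax * S / (Km + S)          # uptake bound (stands in for FBA)
        S = max(S - dt * v * X, 0.0)     # substrate depletion
        X = X + dt * Y * v * X           # biomass growth
    return S, X
```

In dAMN, the neural network modulates these step-wise bounds from the initial medium composition, which is what allows the model to reproduce behaviours like the glucose-acetate shift that a fixed uptake law cannot.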
Ahmed, M.; Akerkouch, L.; Vanyo, A.; Haage, A.; Le, T. B.
Purpose: This work investigates the role of cancer-cell morphology and elasticity in the deformation patterns under shear flow in a micro-channel. Methods: A novel hybrid continuum-particle framework is developed to simulate cancer-cell dynamics. Cell membrane and nucleus geometries are reconstructed from microscopic images and modeled using Dissipative Particle Dynamics, while the surrounding blood plasma is treated as an incompressible Newtonian fluid. Cell-flow interactions are captured via an immersed boundary method. Results: All cancer-cell models exhibited a rapid deformation response within the first 1-2 ms, followed by morphology- and stiffness-dependent shape evolution. The compact morphologies showed strong recovery, whereas the other models evolved toward folded/lobed states with only intermittent partial recovery during shape transitions. Membrane stiffening dominated elongation and compactness loss, while nuclear stiffening modulated deformation excursions and partial recovery. These shape transitions were accompanied by near-field vortex reorganization and traction localization. Similar to the deformation response, the net membrane force exhibited a common start-up rise within 0-0.5 ms followed by relaxation. Compact morphologies produced lower, steadier forces with minimal stiffness dependence; deformation-prone morphologies showed stronger unsteadiness and clearer stiffness modulation. Cross-sectional velocity and vorticity fields showed a dominant x-directed hydrodynamic imbalance and lateral migration. Conclusion: Our results demonstrate that morphology sets the stiffness-modulated deformation patterns, which in turn affect the extracellular flow dynamics and traction. The resulting flow field and traction distribution feed back to influence subsequent deformation and migration. This mechanistic link provides a framework for interpreting circulating-tumor-cell transport in shear-dominated metastatic environments.
Lotfi, M.; Kaderali, L.
Change point detection is critical for identifying structural transitions in time series data. While most existing methods focus on changes in statistical properties of the data such as the mean or variance, many real-world systems are governed by dynamical models in which changes occur in model parameters. We introduce MICA, an algorithm that detects change points by minimizing the discrepancy between simulations of a given dynamical model and observed data. The method integrates binary segmentation with a genetic algorithm to identify both the timing and nature of model parameter changes. MICA simultaneously estimates segment-specific and global parameters alongside change points, offering enhanced flexibility and interpretability. We demonstrate its utility on synthetic datasets and real-world scenarios, including COVID-19 epidemiological modeling under policy interventions and the analysis of generator cooling systems in wind turbines to monitor operational status. While illustrated using differential and difference equation models, MICA is model-agnostic and applicable to any simulatable system, making it broadly useful for applications requiring accurate tracking of structural dynamics.
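The core idea of simulation-discrepancy change-point detection can be sketched with a toy grid search in place of MICA's genetic algorithm (an illustration of the principle only, not the MICA implementation; the growth model and all values are hypothetical):

```python
import numpy as np

def simulate(rates, cp, n, x0=1.0):
    """Simulate x[t] = x[t-1] * rate, where the rate switches from
    rates[0] to rates[1] after index cp (a parameter change point)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = x[t - 1] * (rates[0] if t <= cp else rates[1])
    return x

def detect_change_point(data, rate_grid):
    """Find the change point and segment rates that minimize the squared
    discrepancy between model simulations and the observed series."""
    n = len(data)
    best = (np.inf, None, None)
    for cp in range(2, n - 2):
        for r1 in rate_grid:
            for r2 in rate_grid:
                err = np.sum((simulate((r1, r2), cp, n, data[0]) - data) ** 2)
                if err < best[0]:
                    best = (err, cp, (r1, r2))
    return best[1], best[2]

# Synthetic data: growth rate drops from 1.2 to 0.9 after t = 10.
data = simulate((1.2, 0.9), 10, 20)
cp, rates = detect_change_point(data, np.round(np.arange(0.8, 1.31, 0.05), 2))
```

Note that the change is invisible to mean- or variance-based detectors until well after t = 10; fitting the dynamical model directly localizes the parameter change itself, which is the distinction the abstract draws.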
Chen, Y.
Clavicle fractures often exhibit markedly different clinical outcomes: some patients recover acceptable function despite shortening or displacement, whereas others with apparently similar deformity develop persistent pain, functional loss, or poor healing. To explain this distinction, we propose a minimal nonlinear mechanical model for prognostic analysis of clavicle fractures. The model describes the interaction between fracture-related shortening and compensatory shoulder-girdle posture through a reduced equilibrium equation incorporating stiffness, geometric nonlinearity, and shortening-posture coupling. Within this framework, we analyze equilibrium branches, local stability, and the emergence of critical thresholds. We show that post-fracture destabilization can be interpreted as a fold bifurcation, while more complex parameter dependence gives rise to cusp-type structures and multistability. These bifurcation mechanisms provide a mathematical explanation for sudden deterioration after injury or treatment, as well as for strong inter-individual variability. We further introduce an optimization principle based on a utility functional to guide treatment planning. The analysis predicts that the optimal safe correction should lie strictly below the bifurcation threshold, thereby generating a natural safety margin. Although the model is simplified and has not yet been calibrated against patient data, it nevertheless provides a theoretical framework for understanding why fracture prognosis may deteriorate abruptly near critical mechanical conditions and offers a dynamical-systems interpretation of empirical treatment thresholds used in clinical practice.
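The fold bifurcation invoked above has a standard normal form (generic symbols, not the authors' notation; take the control parameter $\mu$ as an effective correction margin and $x$ as a reduced posture variable):

```latex
\dot{x} = \mu - x^{2},
\qquad
x^{*}_{\pm} = \pm\sqrt{\mu} \quad (\mu > 0).
```

Since $f'(x^{*}_{+}) = -2\sqrt{\mu} < 0$, the branch $x^{*}_{+}$ is stable and $x^{*}_{-}$ is unstable; as $\mu \to 0^{+}$ the two equilibria collide and vanish, so a small parameter change abruptly destroys the stable configuration. This is the mechanism behind the abstract's prediction that the optimal safe correction must lie strictly below the bifurcation threshold, leaving a finite safety margin.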
Bauer, J. E. S.; Alibhai, F. J.; Vatani, P.; Romero, D. A.; Laflamme, M. A.; Amon, C. H.
Purpose: Large quantities of human pluripotent stem cells (hPSCs) are required for clinical applications. 3D suspension cultures are suitable for large-scale manufacturing of hPSCs, but yield, viability, and quality are affected by the hydrodynamic environment. This paper characterizes the hydrodynamic environment inside vertical-wheel bioreactors (VWBRs) as a function of size and agitation rate, measures its effect on cell aggregation and proliferation, and proposes the use of Lagrangian shear-stress and energy-dissipation-rate (EDR) exposures to support scale-up. Methods: In silico: transient, 3D, turbulent flow simulations are conducted for two VWBR sizes (100 and 500 mL) at five agitation rates between 20 and 80 rpm. Trajectories of cell aggregates with sizes from 200 to 1,000 microns are calculated, and shear-stress and EDR exposures are collected along these trajectories. In vitro: ESI-017 hPSCs were cultured in VWBRs for 6 days. Aggregation efficiency and daily fold ratios were calculated from cell counts and the initial inoculation density. Results: Aggregate size, agitation rate, and bioreactor size modulate cell-aggregate exposures to EDR and shear stress, which depart significantly from the maximum or volume-averaged metrics used for scale-up. Combined in vitro/in silico results show that EDR affects aggregation efficiency, cell counts, and aggregate size, and has a small effect on daily fold ratios but a significant effect on total fold ratio. Conclusion: The history of trajectory-based cell-aggregate exposures to EDR provides a better scale-up basis for VWBRs than volume-averaged EDR. Shear stress does not significantly affect hPSC aggregation, proliferation, or expansion in VWBRs under the tested conditions.
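The trajectory-based exposure idea above can be sketched as a post-processing step on simulated EDR histories (a minimal illustration of the Lagrangian-exposure concept, not the authors' pipeline; the sample values, time step, and threshold are hypothetical):

```python
import numpy as np

def exposure_stats(edr_along_trajectory, dt):
    """Summarize the energy-dissipation-rate (EDR) history that one cell
    aggregate experiences along its simulated trajectory, instead of
    reducing the whole reactor to a single volume-averaged or peak value."""
    edr = np.asarray(edr_along_trajectory, dtype=float)
    return {
        "time_avg": float(np.mean(edr)),           # Lagrangian time average
        "peak": float(np.max(edr)),                # worst instant on the path
        "cumulative": float(np.sum(edr) * dt),     # integral of EDR over time
        # fraction of time spent above a damage threshold (hypothetical value)
        "frac_above_threshold": float(np.mean(edr > 1e-2)),
    }

# Hypothetical EDR samples (W/kg) recorded every 1 ms along one trajectory.
edr = [0.001, 0.004, 0.02, 0.05, 0.003, 0.002]
stats = exposure_stats(edr, dt=1e-3)
```

Aggregating such per-trajectory statistics over many seeded aggregates is what distinguishes this exposure history from the volume-averaged EDR the conclusion argues against: two reactors with the same volume average can deliver very different peak and above-threshold exposures.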