Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts
R. Desmond Nzoyem, G. Stevens, A. Sahota, D. A. W. Barton, T. Deakin
ICLR 2025 - First Workshop on Scalable Optimization for Efficient and Adaptive Foundation Models
Keywords: foundation-models, mixture-of-experts, meta-learning

As foundational models reshape scientific discovery, a bottleneck persists in dynamical system reconstruction (DSR): the ability to learn across system hierarchies. Many meta-learning approaches have been applied successfully to single systems, but falter when confronted with sparse, loosely related datasets requiring multiple hierarchies to be learned. Mixture of Experts (MoE) offers a natural paradigm to address these challenges. Despite their potential, we demonstrate that naive MoEs are inadequate for the nuanced demands of hierarchical DSR, largely due to their gradient descent-based gating update mechanism, which leads to slow updates and conflicted routing during training. To overcome this limitation, we introduce MixER: Mixture of Expert Reconstructors, a novel sparse top-1 MoE layer employing a custom gating update algorithm based on K-means and least squares. Extensive experiments validate MixER's capabilities, demonstrating efficient training and scalability to systems of up to ten parametric ordinary differential equations. However, our layer underperforms state-of-the-art meta-learners in high-data regimes, particularly when each expert is constrained to process only a fraction of a dataset composed of highly related data points. Further analysis with synthetic and neuroscientific time series suggests that the quality of the contextual representations generated by MixER is closely linked to the presence of hierarchical structure in the data.
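The gating update the abstract describes replaces gradient-descent gating with a clustering-plus-regression step. Below is a minimal NumPy sketch of that idea, assuming each trajectory is summarised by a context vector; the function name, the plain K-means loop, and the least-squares fit of linear gating weights are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def kmeans_least_squares_gating(contexts, n_experts, n_iters=10, seed=0):
        # contexts: (n_trajectories, d) array of per-trajectory context vectors.
        # Hypothetical sketch of a K-means + least-squares gating update;
        # MixER's published algorithm may differ in detail.
        rng = np.random.default_rng(seed)
        centroids = contexts[rng.choice(len(contexts), size=n_experts, replace=False)].copy()
        for _ in range(n_iters):
            # Hard top-1 assignment: route each context to its nearest centroid
            dists = np.linalg.norm(contexts[:, None, :] - centroids[None, :, :], axis=-1)
            assign = dists.argmin(axis=1)
            # Recompute each centroid as the mean of its assigned contexts
            for k in range(n_experts):
                members = contexts[assign == k]
                if len(members):
                    centroids[k] = members.mean(axis=0)
        # Fit linear gating weights W by least squares so that contexts @ W
        # approximates the one-hot expert assignments found by K-means
        onehot = np.eye(n_experts)[assign]
        W, *_ = np.linalg.lstsq(contexts, onehot, rcond=None)
        return assign, W

At inference, a new context c would be routed to expert argmax(c @ W); again, this is a sketch of the stated ingredients (sparse top-1 routing, K-means, least squares) rather than MixER's exact procedure.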
@inproceedings{nzoyemmixer,
  title={Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts},
  author={{Desmond Nzoyem}, R. and {Stevens}, G. and {Sahota}, A. and {Barton}, D. A. W. and {Deakin}, T.},
  booktitle={First Workshop on Scalable Optimization for Efficient and Adaptive Foundation Models, ICLR 2025},
  year={2025}
}