Keyword: foundation-models
- Euclid Quick Data Release (Q1). Exploring galaxy properties with a multi-modal foundation model
[Figure: UMAP visualisations of the embeddings from AstroPT trained on VIS+NISP+SEDs, with example cutouts and SEDs.]
Astronomy & Astrophysics
Keywords: foundation-models, computer-vision, euclid-consortium, classification

Abstract: Modern astronomical surveys, such as the Euclid mission, produce high-dimensional, multi-modal data sets that include imaging and spectroscopic information for millions of galaxies. These data serve as an ideal benchmark for large, pre-trained multi-modal models, which can leverage vast amounts of unlabelled data. In this work, we present the first exploration of Euclid data with AstroPT, an autoregressive multi-modal foundation model trained on approximately 300 000 optical and infrared Euclid images and spectral energy distributions (SEDs) from the first Euclid Quick Data Release. We compare self-supervised pre-training with baseline fully supervised training across several tasks: galaxy morphology classification; redshift estimation; similarity searches; and outlier detection. Our results show that: (a) AstroPT embeddings are highly informative, correlating with morphology and effectively isolating outliers; (b) including infrared data helps to isolate stars, but degrades the identification of edge-on galaxies, which are better captured by optical images; (c) simple fine-tuning of these embeddings for photometric redshift and stellar mass estimation outperforms a fully supervised approach, even when using only 1% of the training labels; and (d) incorporating SED data into AstroPT via a straightforward multi-modal token-chaining method improves photo-z predictions and allows us to identify potentially more interesting anomalies (such as ringed or interacting galaxies) compared to a model pre-trained solely on imaging data.

@article{Siudek2025EuclidFoundation,
  author    = {{Siudek}, M. and {Huertas-Company}, M. and {Smith}, M. and {Martinez-Solaeche}, G. and {Lanusse}, F. and {Ho}, S. and {Angeloudi}, E. and {Cunha}, P.~A.~C. and {Domínguez Sánchez}, H. and {Dunn}, M. and {Fu}, Y. and {Iglesias-Navarro}, P. and {Junais}, J. and {Knapen}, J.~H. and {Laloux}, B. and {Mezcua}, M. and {Roster}, W. and {Stevens}, G. and {Vega-Ferrero}, J. and the {Euclid Collaboration}},
  title     = "{Euclid Quick Data Release (Q1). Exploring galaxy properties with a multi-modal foundation model}",
  journal   = {Astronomy \& Astrophysics},
  year      = {2025},
  publisher = {EDP Sciences},
  doi       = {10.1051/0004-6361/202554611},
}
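To give a feel for result (c), the sketch below shows what probing frozen embeddings with a label-scarce regression head can look like. It is a minimal illustration only: the random placeholder arrays, the 1% label split, and the scikit-learn MLP head are assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of result (c): fitting a small regression head on frozen
# AstroPT embeddings for photometric redshift, using only ~1% of the labels.
# The arrays below are random placeholders standing in for real embeddings
# and reference redshifts; the MLP head is illustrative, not the paper's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(30_000, 768))     # stand-in for frozen AstroPT embeddings (N, D)
redshifts = rng.uniform(0.0, 2.0, size=30_000)  # stand-in for reference redshifts (N,)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, redshifts, test_size=0.2, random_state=0
)

# Keep only 1% of the training labels to mimic the label-scarce regime.
n_labelled = max(1, int(0.01 * len(X_train)))
X_small, y_small = X_train[:n_labelled], y_train[:n_labelled]

# Small supervised head on top of the frozen embeddings; the backbone itself
# is never updated in this probing setup.
head = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0),
)
head.fit(X_small, y_small)

pred = head.predict(X_test)
# Normalised median absolute deviation, a common photo-z scatter metric.
nmad = 1.4826 * np.median(np.abs(pred - y_test) / (1 + y_test))
print(f"labelled examples: {n_labelled}, NMAD on held-out set: {nmad:.4f}")
```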
- Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts
[Figure: Illustration of vanilla MoE and our proposed MixER layer. (Left) Vanilla MoE setting, where a single input x is passed through a gating network whose outputs enable the router to assign computation to a specific expert (Chen et al., 2022). (Right) Our sparse MixER layer requires a context vector ξ alongside the input x; the gating network computes expert affinities from this context vector. Contrary to MoE, the MixER layer disregards the softmax-weighted output aggregation.]
ICLR 2025 - First Workshop on Scalable Optimization for Efficient and Adaptive Foundation Models
Keywords: foundation-models, mixture-of-experts, meta-learning

Abstract: As foundational models reshape scientific discovery, a bottleneck persists in dynamical system reconstruction (DSR): the ability to learn across system hierarchies. Many meta-learning approaches have been applied successfully to single systems, but falter when confronted with sparse, loosely related datasets that require multiple hierarchies to be learned. Mixture of Experts (MoE) offers a natural paradigm to address these challenges. Despite their potential, we demonstrate that naive MoEs are inadequate for the nuanced demands of hierarchical DSR, largely due to their gradient-descent-based gating update mechanism, which leads to slow updates and conflicted routing during training. To overcome this limitation, we introduce MixER: Mixture of Expert Reconstructors, a novel sparse top-1 MoE layer employing a custom gating update algorithm based on K-means and least squares. Extensive experiments validate MixER's capabilities, demonstrating efficient training and scalability to systems of up to ten parametric ordinary differential equations. However, our layer underperforms state-of-the-art meta-learners in high-data regimes, particularly when each expert is constrained to process only a fraction of a dataset composed of highly related data points. Further analysis with synthetic and neuroscientific time series suggests that the quality of the contextual representations generated by MixER is closely linked to the presence of hierarchical structure in the data.

@inproceedings{nzoyemmixer,
  title     = {Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts},
  author    = {{Desmond Nzoyem}, R. and {Stevens}, G. and {Sahota}, A. and {Barton}, D.~A.~W. and {Deakin}, T.},
  booktitle = {First Workshop on Scalable Optimization for Efficient and Adaptive Foundation Models, ICLR 2025},
  year      = {2025},
}
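To make the routing idea concrete, here is a minimal PyTorch sketch of a context-gated, sparse top-1 layer in the spirit of MixER: the gate scores experts from the context vector ξ rather than from the input x, only the top-scoring expert processes x, and no softmax-weighted mixture of expert outputs is formed. The class name, expert architecture, and plain linear gate are illustrative assumptions; in particular, the paper's K-means and least-squares gating update is not reproduced here.

```python
# Hedged sketch of a sparse top-1, context-gated MoE forward pass.
import torch
import torch.nn as nn


class Top1ContextMoE(nn.Module):
    """Sparse top-1 MoE routed by a context vector (illustrative, not the paper's code)."""

    def __init__(self, dim_x: int, dim_ctx: int, dim_out: int, n_experts: int):
        super().__init__()
        # One small "reconstructor" expert per assumed family of related systems.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim_x, 64), nn.Tanh(), nn.Linear(64, dim_out))
             for _ in range(n_experts)]
        )
        # Gating network: expert affinities come from the context xi, not from x.
        # A plain linear gate stands in for the paper's K-means/least-squares update.
        self.gate = nn.Linear(dim_ctx, n_experts)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        scores = self.gate(ctx)             # (batch, n_experts) expert affinities
        expert_idx = scores.argmax(dim=-1)  # hard top-1 routing
        out = x.new_zeros(x.shape[0], self.experts[0][-1].out_features)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                # Only the selected expert runs; no softmax-weighted aggregation
                # of all expert outputs, unlike a vanilla MoE layer.
                out[mask] = expert(x[mask])
        return out


# Toy usage: 8 samples with state x and per-sample context xi.
layer = Top1ContextMoE(dim_x=3, dim_ctx=5, dim_out=3, n_experts=4)
x, xi = torch.randn(8, 3), torch.randn(8, 5)
print(layer(x, xi).shape)  # torch.Size([8, 3])
```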