2025 IEEE International Conference on AI and Data Analytics
(ICAD 2025)

24 June 2025 - Tufts University School of Engineering Graduate Programs, Medford, Massachusetts USA

Session 1 Workshops

Workshop 1:  TraML Meets AI in Space

Dr. Di Wu Presentation

Abstract:  Benchmarking large language models in space engineering presents unique challenges, particularly regarding model evaluation and interpretability in high-stakes environments. This work introduces an innovative framework based on autoencoder architectures to systematically assess and benchmark AI models used in space applications. By extracting latent representations through optimized autoencoder pipelines, our approach elucidates inherent model behaviors and performance nuances across various architectures. Detailed case studies demonstrate how this method enhances simulation accuracy and risk assessments in complex space scenarios. The framework bridges the gap between deep learning and practical engineering challenges, providing a unified strategy for model evaluation. We discuss both the theoretical foundations and empirical validations of our approach, highlighting its potential to drive advancements in autonomous system design and decision-making for space engineering missions.
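The abstract describes the approach at a high level only. As a rough illustration of the core mechanism it names (extracting latent representations through an autoencoder and using them as a benchmarking signal), here is a minimal PyTorch sketch; every class name, dimension, and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumption): benchmark a model by compressing its output
# features with an autoencoder and scoring how well they reconstruct.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent representation
        return self.decoder(z), z     # reconstruction and latent code

torch.manual_seed(0)
ae = Autoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

# Stand-in for feature vectors produced by the model under evaluation.
features = torch.randn(256, 64)

for _ in range(200):                  # short training loop for illustration
    recon, _ = ae(features)
    loss = nn.functional.mse_loss(recon, features)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Reconstruction error on held-out outputs is one possible benchmark signal.
with torch.no_grad():
    held_out = torch.randn(32, 64)
    recon, z = ae(held_out)
    score = nn.functional.mse_loss(recon, held_out)
    print(f"held-out reconstruction error: {score.item():.4f}")
```

The intuition, under these assumptions: outputs the autoencoder reconstructs poorly lie outside the behavior it has learned to compress, flagging atypical model behavior for closer review.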

Dr. Di Wu:  Di Wu is a faculty member in the Aerospace Engineering Department at Embry-Riddle Aeronautical University and the founder and director of XDLab. He was previously a postdoctoral associate in the AeroAstro ARCLab at MIT. His work is rooted in perturbation theory and dynamical systems, with a broad interest in integrating optimization, ML/AI, and control theory to better understand space situational awareness and space sustainability. He actively collaborates with students and professionals all over the world.

Dr. Amit Jain Presentation

Abstract:  Control policies that generalize across dynamically distinct regimes present significant challenges in complex systems such as spacecraft mission phases, from orbit-raising to rendezvous. While reinforcement learning (RL) has shown promise in individual control tasks, existing approaches often require separate policies for different dynamical regimes, limiting adaptability. This work introduces a transformer-based RL framework that aims to unify control across varying dynamical contexts through a single policy architecture. Building on proximal policy optimization (PPO), our framework replaces conventional recurrent networks with a Gated Transformer-XL (GTrXL) architecture, enabling the agent to maintain the extended temporal context critical for complex sequential decision-making. We validate our approach on canonical control problems (the double integrator and the Van der Pol oscillator) and demonstrate its effectiveness on multi-phase spacecraft trajectory optimization scenarios. Results show near-optimal performance comparable to analytical solutions where available, confirming that the transformer's ability to process long-range dependencies offers particular advantages for autonomous mission planning across dynamically distinct phases.
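For readers unfamiliar with GTrXL, the key architectural change the abstract mentions is replacing the transformer's residual connections with GRU-style gates (the Gated Transformer-XL of Parisotto et al.). The sketch below is a simplified, self-contained illustration of one such gated layer, not the authors' implementation; the PPO machinery, memory caching, relative positional encodings, and causal masking of a full GTrXL are omitted, and all names and sizes are assumptions.

```python
# Simplified sketch (assumption) of one GTrXL-style layer: pre-LayerNorm
# sublayers whose residual connections are replaced by a GRU-style gate.
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """Gated residual: blends sublayer input x with sublayer output y."""
    def __init__(self, dim, bias_init=2.0):
        super().__init__()
        self.Wr, self.Ur = nn.Linear(dim, dim, bias=False), nn.Linear(dim, dim, bias=False)
        self.Wz, self.Uz = nn.Linear(dim, dim, bias=False), nn.Linear(dim, dim, bias=False)
        self.Wg, self.Ug = nn.Linear(dim, dim, bias=False), nn.Linear(dim, dim, bias=False)
        # Positive bias keeps the gate near the identity map early in training.
        self.bg = nn.Parameter(torch.full((dim,), bias_init))

    def forward(self, x, y):
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bg)
        h = torch.tanh(self.Wg(y) + self.Ug(r * x))
        return (1 - z) * x + z * h

class GTrXLBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                 nn.Linear(4 * dim, dim))
        self.gate1, self.gate2 = GRUGate(dim), GRUGate(dim)

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, need_weights=False)  # causal mask omitted
        x = self.gate1(x, torch.relu(a))  # gated residual instead of x + a
        y = self.mlp(self.norm2(x))
        return self.gate2(x, torch.relu(y))

# A policy over observation sequences would stack such blocks and feed the
# last time step's features into actor/critic heads for PPO.
obs_seq = torch.randn(2, 16, 64)       # (batch, time, features)
print(GTrXLBlock()(obs_seq).shape)     # -> torch.Size([2, 16, 64])
```

The positive gate bias is the detail worth noting: it initializes each layer close to an identity map, which is a large part of what makes transformer training stable enough for RL.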

Dr. Amit Jain:  Dr. Amit Jain is a postdoctoral associate in the Department of Aeronautics and Astronautics at MIT, where he focuses on applying artificial intelligence and machine learning techniques to aerospace controls. He earned his master’s and Ph.D. in Aerospace Engineering from Pennsylvania State University.

Workshop 2:  Transformer Neural Networks: A Pedagogical Overview

Dr. Peter Cho Presentation

Abstract:  Over the past few years, Transformers have become the dominant neural network type used in many deep learning applications. In this tutorial, we focus on the Generative Pretrained Transformer (GPT), the network architecture underlying the Large Language Models developed by OpenAI.

A GPT is trained to predict the next word following some input text sequence. As each text token passes through the network, its corresponding embedding vector is refined to encode richer contextual meaning. After being processed by repeated combinations of Attention and Multilayer Perceptron layers inside a large Transformer, the embedding of the final input token ideally soaks up so much general understanding of the textual prompt that the next word in the sequence can be predicted with high confidence.
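To make that flow concrete, here is a toy next-token predictor in the same spirit: stacked (Attention + MLP) blocks with a causal mask, and logits read off the final token position. It is a pedagogical sketch with invented sizes, not OpenAI's GPT.

```python
# Toy sketch of the GPT flow described above: embed tokens, refine them
# through stacked (attention + MLP) blocks, then predict the next token
# from the final position. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim, heads):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        # Causal mask: each position may attend only to earlier positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + a                       # attention refines each embedding
        return x + self.mlp(self.ln2(x))

class TinyGPT(nn.Module):
    def __init__(self, vocab=100, dim=32, heads=4, depth=2, ctx=16):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(ctx, dim)
        self.blocks = nn.Sequential(*[Block(dim, heads) for _ in range(depth)])
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        x = self.tok(ids) + self.pos(torch.arange(ids.size(1)))
        x = self.blocks(x)
        return self.head(x[:, -1])      # logits for the next token only

ids = torch.randint(0, 100, (1, 10))    # a 10-token prompt
print(TinyGPT()(ids).softmax(-1).shape) # distribution over the next token
```

Running the snippet prints the shape of a probability distribution over the toy vocabulary; the "high confidence" prediction the paragraph describes corresponds to that distribution concentrating on a single entry.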

This tutorial intentionally concentrates on conceptual ideas rather than technical details, and it leans heavily on Grant Sanderson's pedagogically outstanding videos. By the tutorial's end, audience members will hopefully gain useful intuitions that let them read the rapidly growing Transformer literature more comfortably.

Dr. Peter Cho:   Peter Cho received a bachelor's degree in physics from Caltech in 1987 and a Ph.D. in theoretical physics from Harvard in 1992. From 1992 to 1998, he worked as a physics postdoc at Caltech and Harvard. Peter joined Lincoln Laboratory in 1998, where his research spanned a broad range of subjects, including space systems, lidar imaging, and photo reconstruction. In 2014, Peter moved to Apple's 3D vision group in Silicon Valley. After working for three years at Apple and another year at an autonomous vehicle startup, Peter joined Analog Garage in 2018. His current research interests center on machine learning in general and computer vision in particular.
