Hi! I am an Associate Professor and Frank M. Freimann Collegiate Professor of Computer Science and Engineering at the University of Notre Dame. I am appointed as a Lucy Family Institute Fellow and serve as the Program Chair of the ND-IBM Tech Ethics Lab. I am also an Amazon Scholar. My research fields are AI and Data Science. I'm interested in developing Strong, Controllable, Multimodal AI for applications in the sciences (e.g., material discovery, math reasoning) and engineering (e.g., physical systems, recommender systems), as well as in education and mental health. Typical approaches include:
- Tuning multimodal AI to gain knowledge and skills, and precisely follow complex instructions;
- Generalizing multimodal AI capabilities with multi-objective reinforcement learning;
- Controlling multimodal AI to act under moral, ethical, or rational guidelines;
- Adapting multimodal AI efficiently for challenging tasks such as personalization and scientific discovery.
I direct the Data Mining towards Decision Making (DM2) Lab. The lab is hiring one PhD student to work on Physical AI, Multimodal AI, AI Safety, and AI Controllability, starting in Fall 2026 or Spring 2027. Feel free to reach out to me if you are interested!
The Foundation Models and Applications (FAML) Lab at the Lucy Family Institute is looking for one postdoc to begin in Fall 2026 or Spring 2027. The research topic emphasizes interdisciplinary collaboration. To apply, please visit this link. Drop me an e-mail if you are interested!
What's New
- February 2026: Welcome Jinduo Guo to join our lab and the CSE PhD Program in Fall 2026!
- January 2026: Jim and I are organizing the Physical AI Working Group, supported by the Data, AI, and Computing (DAC) Initiative at the University of Notre Dame. Let us know if you are interested in the group's activities!
- January 2026: The Scientific AI Winter School at the University of Puerto Rico was a success! We presented our work on learning and reasoning for molecular inverse design.
- December 2025: Welcome Weijiang (Vicky) Li to join our lab and the CSE PhD Program in January 2026! Gang Liu, who has done great work on AI for Science, is on the academic job market!
- October 2025: We're proud to announce that Coefficient Giving is supporting our work on AI safety - Probing-Guided Robust Unlearning!
- June 2025: Zhihan Zhang and Lingbo Tong have successfully passed their dissertation defenses. Congratulations, Dr. Zhang and Dr. Tong!
Latest Publications
- Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting,
TACL, 2026.
- Graph Diffusion Transformers are In-Context Molecular Designers,
ICLR, 2026.
- Dual-Space Smoothness for Robust and Balanced LLM Unlearning,
ICLR, 2026.
- GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning,
ICLR, 2026.
- Context Selection and Rewriting for Video-based Educational Question Generation,
EAAI, 2026.
- Learning Repetition-Invariant Representations for Polymer Informatics,
NeurIPS, 2025.
- Pre-trained Models Perform the Best When Token Distributions Follow Zipf's Law,
EMNLP, 2025.
- Improving Large Language Models Function Calling and Interpretability via Guided-Structured Templates,
EMNLP, 2025.
- Leopard: A Vision Language Model for Text-Rich Multi-Image Tasks,
TMLR, 2025.
- CodeTaxo: Enhancing Taxonomy Expansion with Limited Examples via Code Language Prompts,
Findings of ACL, 2025.
- QG-SMS: Enhancing Test Item Analysis via Student Modeling and Simulation,
ACL, 2025.
- Optimizing Decomposition for Optimal Claim Verification,
ACL, 2025.
- Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models,
ACL, 2025.
- Disentangling Biased Knowledge from Reasoning in Large Language Models via Machine Unlearning,
ACL, 2025.
- Aligning Large Language Models with Implicit Preferences from User-Generated Content,
ACL, 2025.
- Enhancing Mathematical Reasoning in LLMs by Stepwise Correction,
ACL, 2025.
- UniConv: Unifying Retrieval and Response Generation for Large Language Model in Conversation,
ACL, 2025.
- Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench,
NAACL, 2025.
- IHEval: Evaluating Language Models on Following the Instruction Hierarchy,
NAACL, 2025.
- MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems,
NAACL, 2025.
- Benchmarking Language Model Creativity: A Case Study on Code Generation,
NAACL, 2025.
- Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning,
ICLR, 2025.
- Learning Molecular Representation in a Cell,
ICLR, 2025.
Advised PhD Dissertations
Last updated on February 17, 2026.