Biography

My research focuses on designing robust and generalizable AI systems that offer advanced capabilities beneficial to humans, align with human principles, and integrate safely and responsibly into our daily lives. I develop methods that blend AI for multi-agent systems (e.g., reinforcement learning, graph machine learning, generative models) with social learning, economics, and game theory. I use these methods to study the social and economic factors driving human behaviors and interactions, with the aim of building AI systems that engender cooperation among humans, AI, and organizations.

I'm currently a Postdoctoral Associate in the Algorithmic Alignment Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT. I also collaborate with David Parkes at the Harvard School of Engineering and Applied Sciences (SEAS), where I was a Postdoctoral Fellow before joining MIT. I completed my PhD at the Georgia Institute of Technology, where I was incredibly fortunate to be advised by Hongyuan Zha.

I am on the job market for 2024. If you think my experience would be a good fit for your organization or institution, please reach out!

Recent News
  • March 2024: Honored to be selected as a Kavli Fellow and invited to the esteemed 34th Annual Kavli Frontiers of Science Symposium by the National Academy of Sciences.
  • December 2023: Organized the "Melting Pot Contest" competition at NeurIPS 2023 in collaboration with Google DeepMind and the Cooperative AI Foundation. This contest challenged researchers to push the boundaries of multi-agent reinforcement learning for mixed-motive cooperation by evaluating how well agents can adapt their cooperative skills to interact with novel partners in unforeseen situations. Check out more details here and here; the recorded stream of the NeurIPS 2023 contest can be accessed here!
  • June 2023: Our work on "Plug-and-Play Controllable Graph Generation with Diffusion Models" was accepted at the ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling.
  • May 2023: Our work on "Temporal Dynamics-Aware Adversarial Attacks on Discrete-Time Dynamic Graph Models" was accepted at KDD 2023.
  • April 2023: Invited talk on "Foundations for Learning in Multi-agent Ecosystems: Modeling, Imitation and Equilibria" at the University of Southern California.
  • December 2022: Our work on "Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models" was accepted and presented at the NeurIPS 2022 Temporal Graph Learning Workshop.
  • August 2022: I gave an invited talk on "Learning from Interactions in Networked Systems" as part of the Beneficial AI seminar series at the Center for Human-Compatible Artificial Intelligence (CHAI), UC Berkeley.
  • May 2022: Our work on "Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning" was accepted and presented at AAMAS 2022.
  • April 2022: Our work on "CrowdPlay: Crowdsourcing human demonstration data for offline learning in Atari games" was accepted and presented at ICLR 2022.