Khai Loong Aw

I am a Computer Science PhD student at Stanford University. I work on research at the intersection of neuroscience, AI, and psychology.

Previously, I worked on training methods for AI language models that improve their functional similarity to language processing mechanisms in the human brain. In 2023, I worked with Antoine Bosselut and Martin Schrimpf at EPFL. In 2022, I worked with Mariya Toneva at the MPI for Software Systems.

Before that, I worked with Qianru Sun on semantic segmentation in computer vision, focusing on transfer learning and self-supervised learning approaches. I also worked with David Lo, applying machine learning methods to detect bugs in software programs.

Email  /  CV  /  Bio  /  Scholar  /  Twitter  /  GitHub


Research

I work on research at the intersection of neuroscience, cognitive science, and artificial intelligence.

Instruction-tuning Aligns LLMs to the Human Brain
Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut
COLM, 2024
arXiv

We investigate how instruction-tuning affects language models from a neuroscientific perspective, revealing that it generally improves their alignment with human brain activity, with model size and world knowledge playing key roles.

Training language models to summarize narratives improves brain alignment
Khai Loong Aw, Mariya Toneva
ICLR, 2023   (Notable Top 25%)
arXiv / GitHub

We show that training language models to summarize narratives (i.e., deeper understanding of characters, emotions, and relationships) results in richer representations that are more aligned to human brain activity.

Detecting False Alarms from Automatic Static Analysis Tools: How Far are We?
Hong Jin Kang, Khai Loong Aw, David Lo
ICSE, 2022   (Distinguished Paper Nominee)
arXiv / Poster / Video