Khai Loong Aw
I am a Computer Science PhD student at Stanford University.
I am currently working on computer vision and developmental cognitive science in the NeuroAI Lab, advised by Daniel Yamins.
Previously, I worked on training methods for AI language models that improve their functional similarity to language processing mechanisms in the human brain.
In 2023, I worked with Antoine Bosselut and Martin Schrimpf at EPFL.
In 2022, I worked with Mariya Toneva at the MPI for Software Systems.
Email / CV / Bio / Scholar / Twitter / GitHub
3D Scene Understanding Through Local Random Access Sequence Modeling
Wanhee Lee*, Klemen Kotar*, Rahul Venkatesh*, Jared Watrous*, Honglin Chen*, Khai Loong Aw, Daniel Yamins
2025
arXiv
We propose Local Random Access Sequence (LRAS), an autoregressive generative model architecture.
Using optical flow as an intermediate representation, LRAS achieves state-of-the-art novel view synthesis and 3D object manipulation.
Instruction-tuning Aligns LLMs to the Human Brain
Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut
COLM, 2024
arXiv
We investigate how instruction-tuning affects language models from a neuroscientific perspective, revealing that it generally improves their alignment with human brain activity, with model size and world knowledge playing key roles.
Training Language Models to Summarize Narratives Improves Brain Alignment
Khai Loong Aw, Mariya Toneva
ICLR, 2023   (Notable Top 25%)
arXiv / GitHub
We show that training language models to summarize narratives (i.e., to develop a deeper understanding of characters, emotions, and relationships) results in richer representations that are more aligned with human brain activity.
Detecting False Alarms from Automatic Static Analysis Tools: How Far are We?
Hong Jin Kang, Khai Loong Aw, David Lo
ICSE, 2022   (Distinguished Paper Nominee)
arXiv / Poster / Video