Yulu Qin (秦瑜璐)
Hi, I'm a second-year PhD student at Boston University, advised by Prof. Najoung Kim. Previously, I worked as an assistant research scientist in the Human & Machine Intelligence Lab at the NYU Center for Data Science, advised by Prof. Brenden Lake, where my research focused on the intersection of cognitive science, linguistics, and machine learning.
I received my Master's in Computer Science from NYU Courant. For my undergrad, I spent two years at Nankai University (Tianjin, China) and completed my B.A. in Sociology and Computer Information Science at The Ohio State University (Columbus, Ohio).
Email /
Scholar /
CV /
GitHub
Research
I am interested in human language acquisition and how it can inform deep learning models. My past projects include interpreting the internal syntactic and semantic representations of large language models (LLMs).
Vision-and-language training helps deploy taxonomic knowledge but does not fundamentally alter it
Yulu Qin*,
Dheeraj Varghese*,
Adam Dahlgren Lindström,
Lucia Donatelli,
Kanishka Misra†,
Najoung Kim†
Advances in Neural Information Processing Systems (NeurIPS 2025)
project page
/
code
/
poster
We designed a taxonomy-informed GQA benchmark and found that vision-language models outperform their text-only counterparts: vision-and-language training improves how models deploy taxonomic knowledge without fundamentally altering the knowledge itself.
* and † indicate equal contribution
RExBench: Can coding agents autonomously implement AI research extensions?
Nicholas Edwards*,
Yukyung Lee*,
Yujun (Audrey) Mao,
Yulu Qin,
Sebastian Schuster†,
Najoung Kim†
Under Review
code
We designed a benchmark to test whether coding agents can autonomously implement extensions to existing AI research.
* and † indicate equal contribution
A systematic investigation of learnability from single child linguistic input
Yulu Qin,
Wentao Wang,
Brenden Lake
In Proceedings of the 46th Annual Conference of the Cognitive Science Society (CogSci 2024)
We trained LSTM- and Transformer-based models on child-directed speech from a single child to examine what syntactic and semantic knowledge they can learn and how well they generalize.
Layer by Layer: Examining BERT's Syntactic and Semantic Representation
Harsh Dubey,
Yulu Qin,
Rahul Meghwal
Computational Cognitive Modeling 2023
project page
We conducted a comprehensive analysis to examine BERT's inner workings and learned semantic/syntactic representations via probing, visualization, and subject-verb agreement tests.
Big thank you to Jon Barron for the source code of this website.