Student Spotlight – Daphne Zhang
Meet UCLA undergraduate researcher Daphne Zhang!
Daphne majors in Cognitive Science and is part of our Summer Fellows Program.
Her project, “Exploring Large Language Model (LLM) Comprehension: Syntax vs. Semantics,” investigates whether LLMs are purely encapsulated within syntax or are also influenced by semantic content, a question with significant implications for computational linguistics and cognitive science.
How did you first get interested in your research project?
I first became interested in my research project while working in Professor Idan Blank’s lab, where my mentor, Thomas, was exploring Large Language Models (LLMs). My project specifically investigates the relationship between human plausibility ratings and the attention mechanisms within these models. Since attention heads in LLMs are known to be influenced by semantic content, I aim to determine whether they can accurately track semantic plausibility in a way that mirrors human cognition. This research is particularly exciting because it provides insight into how LLMs process meaning and how their performance compares to human reasoning, offering a deeper understanding of both artificial and human intelligence.
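For readers curious what comparing attention patterns with human judgments can look like in practice, here is a minimal, hypothetical sketch. It is not Daphne's actual pipeline: the model (GPT-2 via Hugging Face transformers), the attention statistic (entropy of the final token's attention distribution), and the example sentences and plausibility ratings are all illustrative assumptions.

```python
# Hypothetical sketch: extract attention patterns from a small LLM and
# correlate a simple attention statistic with (illustrative) human
# plausibility ratings. Not the actual study pipeline.
import torch
from transformers import AutoModel, AutoTokenizer
from scipy.stats import spearmanr

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

# Sentences varying in semantic plausibility, with made-up ratings (1-7 scale).
sentences = [
    "The chef chopped the onions.",    # plausible
    "The chef chopped the rumors.",    # odd
    "The chef chopped the symphony.",  # implausible
]
human_ratings = [6.8, 3.1, 1.4]

def final_token_attention_entropy(sentence: str) -> float:
    """Entropy of the final token's attention distribution, averaged over
    all layers and heads (just one of many possible attention statistics)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: one (batch, heads, seq, seq) tensor per layer
    att = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)
    last_row = att[:, :, -1, :]                       # attention from final token
    entropy = -(last_row * (last_row + 1e-12).log()).sum(dim=-1)
    return entropy.mean().item()

scores = [final_token_attention_entropy(s) for s in sentences]
rho, p_value = spearmanr(scores, human_ratings)
print(f"Spearman rho between attention entropy and ratings: {rho:.2f}")
```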
What has been the most exciting aspect of your research so far?
The results turned out great, and I will be able to publish a paper with my mentor.
What has surprised you about your research or the research process?
I was surprised by how many participants completed the study. I ran it through the SONA system, and an online study like this recruits a lot of participants very quickly.
What is one piece of advice you have for other UCLA students thinking about doing research?
Make sure to thoroughly consider the study design before running the experiment, and pilot the study first. That way, you can ensure your effort is spent in the best possible way.
What effect do you hope your research has in your field, at UCLA, in your community, or in the world?
By publishing this research, we aim to demonstrate that Large Language Models (LLMs) are not purely encapsulated within syntax but are also influenced by semantic content. This finding could have significant implications for computational linguistics and cognitive science, challenging existing assumptions about how LLMs process meaning. At UCLA, it could inspire further research at the intersection of AI and human cognition. More broadly, our work could contribute to the development of more interpretable and cognitively aligned AI models, ultimately improving their ability to interact meaningfully with human users.