Robin Shing Moon Chan

PhD Student in HCI and NLP @ ETH Zürich

Email  /  Twitter  /  LinkedIn

About Me

I'm a first-year CS PhD student at the Institute for Visual Computing and the Institute for Machine Learning at ETH Zürich. I'm co-advised by Prof. Menna El-Assady (IVIA) and Prof. Ryan Cotterell (Rycolab). Previously, I obtained a master's degree in data science from ETH Zürich, graduating with a thesis on LLM program synthesis at IBM Research Europe (ZRL).

I'm curious about many things. My main research interest lies at the intersection of visualization and natural language processing. More concretely, I want to better understand how humans and LLMs can interact more effectively.

If you're interested in similar topics, message me. I'm looking for collaborators and motivated master's students to supervise.

Personal bits: some of my favorite books [1, 2, 3] and movies [1, 2, 3]. I like making music [me Christmas caroling with friends].

News
February 27th, 2024 Started a PhD at ETH Zürich, co-advised by Prof. Menna El-Assady and Prof. Ryan Cotterell! 🎉
September 26th, 2023 Katya Mirylenka and I will give a talk at Zurich-NLP, hosted at the ETH AI Center, about our work at IBM Research. RSVP here!
July 9th, 2023 Our paper on counterfactual sample generation was accepted at ACL. I will be presenting it in Toronto in a few days! Check out our blog post about the paper!
Publications
A Theoretical Result on the Inductive Bias of RNN Language Models
Anej Svete, Robin Shing Moon Chan, Ryan Cotterell
arXiv, 2024

Proving that Elman RNNs can optimally represent language models defined by bounded-stack pushdown automata. This sheds light on the inductive biases of RNN LMs: contrary to what the results of Hewitt et al. (2020) might suggest, there is nothing inherently hierarchical about the languages that RNNs can implement efficiently.
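
For context, the Elman recurrence at the heart of the result is the standard RNN language model update; the notation below is generic and not necessarily the paper's:

$$\mathbf{h}_t = \sigma\!\left(\mathbf{U}\,\mathbf{h}_{t-1} + \mathbf{V}\,\mathbf{x}_t + \mathbf{b}\right), \qquad p(y_{t+1} \mid y_{\le t}) = \mathrm{softmax}(\mathbf{E}\,\mathbf{h}_t).$$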

Language Model Expressivity · Formal Language Theory

Which Spurious Correlations Impact Reasoning in NLI Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals
Robin Chan, Afra Amini, Menna El-Assady
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2023
blog post / arXiv

Proposing a mixed-initiative, data-centric approach to generate a rich set of diverse counterfactual NLI samples. Our approach uncovers failure modes and biases of NLI models in a targeted and interactive way.
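
To make the setting concrete, here is a minimal sketch of what a counterfactual NLI pair looks like. This is a hypothetical toy example with made-up sentences and field names, not drawn from the paper or its data:

from dataclasses import dataclass

@dataclass
class NLISample:
    premise: str
    hypothesis: str
    label: str  # one of "entailment", "neutral", "contradiction"

# Original sample (toy example).
original = NLISample(
    premise="A man is playing a guitar on stage.",
    hypothesis="A man is performing music.",
    label="entailment",
)

# Counterfactual: a minimal edit to the hypothesis that flips the label,
# which is what makes such pairs useful for probing spurious correlations.
counterfactual = NLISample(
    premise=original.premise,
    hypothesis="A man is repairing a guitar backstage.",
    label="contradiction",
)

if __name__ == "__main__":
    print(original)
    print(counterfactual)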

Mixed-Initiative Learning · Language Model Biases


Website source. Consider using Leonid Keselman's Jekyll fork of this page.