177 Huntington Avenue
Boston, MA 02115
- MS in Computer Science, Tufts University
- BS in Computer Science, Tufts University
- Hometown: Demarest, New Jersey
- Field of Study: Natural Language Processing
- PhD Advisor: Lu Wang
Lisa Fan is a PhD student in the Natural Language Processing program at Northeastern University’s Khoury College of Computer Sciences, advised by Professor Lu Wang. Lisa is a New England native and earned her BS/MS in Computer Science from Tufts University in Medford, Massachusetts in 2017. Her research focuses on creating more effective ways for computers to process natural language. Lisa is particularly interested in summarization generation, the process by which a computational system automatically summarizes text. Through her studies at Northeastern, she hopes both to better assess existing summarization models and to help develop new ones.
What are the specifics of your graduate education (thus far)?
I’ve been reading up on current literature on topics like summarization generation, multi-task learning, and semantic parsing. In the fall I will begin taking some of the required courses and am excited to start conducting research with Professor Wang.
What are your research interests?
My interests are in Natural Language Processing and Computational Linguistics. I became interested in machine learning and artificial intelligence during my undergraduate studies, and I was drawn to the fact that computers cannot yet perform NLP tasks humans do easily, yet have the potential to far surpass us. I would like the work I produce to solve real-world problems.
What’s one problem you’d like to solve with your research/work?
Because tasks like summarization don’t have a single correct answer, there aren’t many easy ways to empirically measure how well a model is performing. I think extensive reasoning about what makes a summary good or bad can help us both better evaluate existing models and create new ones.
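To make the evaluation difficulty concrete, here is a minimal sketch of a unigram-overlap score in the spirit of ROUGE-1, a common automatic summarization metric. This is an illustrative, hand-rolled version only (real evaluations use proper tokenization, stemming, and often multiple references); the function name and the toy sentences are invented for the example.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified unigram-overlap F1 between a candidate summary and a reference.

    An illustrative stand-in for ROUGE-1; not a drop-in replacement for a
    real evaluation toolkit.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Counter "&" takes the element-wise minimum, i.e. the word overlap.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Two equally acceptable summaries can score very differently against a
# single reference, which is exactly the evaluation problem described above.
ref = "the cat sat on the mat"
print(rouge1_f1("the cat sat on the mat", ref))    # identical wording: 1.0
print(rouge1_f1("a feline rested on a rug", ref))  # fine paraphrase, low score
```

The paraphrase in the last line shares only one word with the reference, so a surface-overlap metric penalizes it heavily even though a human would judge it a reasonable summary.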
What aspect of what you do is most interesting?
I have a three-year-old brother who is currently learning how to speak. In a lot of ways, training an NLP model is similar to how we teach him: we repeat words and actions over and over again and cheer when he repeats after us until, eventually, he learns the connection between the word and the concept. On the other hand, we have to figure out the best way to present that concept to him so that he will understand and learn quickly. Watching the parallel between his development and my work in the lab gives me a more personal connection with what is essentially just lines of code.
What are your research or career goals, going forward?
Right now I’m focused on producing good research as a PhD student. I haven’t given too much thought to what lies beyond that; I don’t want to get ahead of myself!