The workshop will be hosted on December 8, 2022, in Abu Dhabi, UAE, Capital Suite 12B / Zoom on Underline. Timezone: UTC+4
In this talk I’ll cover my experience of identifying and developing research opportunities in a product-focused environment, where the top priority is often user satisfaction. How do we identify research problems? How do we align them with our goal of improving user satisfaction? How can we build a career while working on products? These are some of the topics that we’ll touch upon in this talk.
In this talk, I will share the story behind ELMo — the first NLP system named after a children’s TV star to win a best paper award — and some of the lessons learned along the way that have heavily influenced my research in the years since. When I started the project, I had no formal training in computer science, machine learning, or NLP, only a strong intuition that a language model would provide a universally good starting point for any language task. The project took over a year, with many dead ends and failed experiments, during which I considered abandoning it multiple times. Through this process I learned many things, but most importantly the value of a willingness to fail and of maintaining a methodical approach when pursuing high-risk projects.
Much work at the intersection of NLP and computational social science/cultural analytics focuses on developing algorithmic measuring devices that can be seen as counting some phenomenon in text in order to tell us something about the world in which that text was created. The instrument validity of those devices is critical; and while advances in large language modeling and domain adaptation have enabled models trained on one domain to generalize (to some extent) to another, what is often required is a model optimized for the domain being studied, and for the specific phenomena being measured within it. In this talk, I’ll discuss my own group’s work creating annotated datasets for the study of culture, including both contemporary novels and movies, highlighting a number of the practical, legal, and conceptual challenges that arise.
In this talk, Zhijing Jin (she/her) will share her PhD journey in NLP through stories and lessons learned. Her talk will be accessible to audiences of all backgrounds, from beginners in NLP to senior researchers. She will address common career-path questions such as “How do I get started in NLP?”, “How do I do well in a PhD?”, and “What is a meaningful career?”. The talk will cover her personal stories, research pursuits (NLP for Social Good and CausalNLP), and community contributions (ACL Year-Round Mentorship, etc.). The talk slides are available here.
Everyone has challenges to overcome: some have already faced the most difficult obstacles life had in store for them, while for others a hard road still lies ahead. My life is no exception; in the pursuit of my career goals I’ve faced some obstacles that I’ll share in this talk. But I want to focus on the little happy coincidences in my professional life that have had a significant impact on me, both professionally and personally. I will also share how parts of my personal life have influenced my research curiosity and motivated my research agenda through the years. Lastly, time permitting, I’ll share my perspective on the always controversial notion of “life-work” balance.
Few of us really like writing papers. And few of us are really very good at it. To this end, ACL Student Research Workshops now commonly run a Pre-submission Mentorship Program that gives students the chance to receive feedback on their papers before submitting them for review. The feedback comes from experienced members of the field, in the form of suggestions for improving the paper’s overall organization and writing. It is a program of clear benefit to participants. But what if such a program isn’t on offer? What then? Early in my career, I learned to pretend that a paper I was working on had been written by someone else — someone who was too caught up in the paper to see its shortcomings. This “let’s pretend” approach to self-editing will be my contribution to Stories Shared and Lessons Learned.
In this talk, I’ll reflect on my unusual industry-to-academia path and various unexpectedly difficult things about professorship.
I care about building NLP technologies that can help real-world practitioners gain insights from text, which means I have had the opportunity to collaborate with a variety of domain experts outside our field. It turns out this is really hard. In this talk I’ll reflect on lessons learned from interdisciplinary collaborations, and present some related open challenges to doing great science.
How does someone in NLP start doing interdisciplinary research? Venturing out from NLP to new lands can be daunting: people speak new languages, are interested in new questions, and get confused why we keep talking about Sesame Street characters. I’ll share my story of going off the NLP map, switching from lexical semantics to doing research primarily in interdisciplinary settings like computational social science. Join me for tales from my journey out to these foreign locales and how I have navigated some of the challenges like finding fellow adventurers, summiting the mountains of papers, avoiding the plains of isolation, and finding treasured research problems relevant to both fields.
Bonnie Webber received her PhD from Harvard University and taught at the University of Pennsylvania in Philadelphia for 20 years before joining the School of Informatics at the University of Edinburgh, where she is now professor emeritus. Known for early research on 'cooperative question-answering' and extended research on discourse anaphora and discourse relations, she has served as President of the Association for Computational Linguistics (ACL) and Deputy Chair of the European COST action IS1312, 'TextLink: Structuring Discourse in Multilingual Europe'. Along with Aravind Joshi, Rashmi Prasad, Alan Lee and Eleni Miltsakaki, she is co-developer of the Penn Discourse TreeBank, including the recently released PDTB-3. She is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computational Linguistics (ACL) and the Royal Society of Edinburgh (RSE). In July, she was awarded the ACL Lifetime Achievement Award for 2020. In both the RSE and the ACL, she continues to work towards ensuring that women are recognized for their achievements in the NLP community and in Science and Technology more generally.
Matthew Peters is a Research Scientist at the Allen Institute for Artificial Intelligence, exploring applications of deep neural networks to fundamental questions in natural language processing. Prior to joining AI2, he was the Director of Data Science at a Seattle startup, a research analyst in the finance industry, and a postdoc investigating cloud-climate feedback. He has a PhD in Applied Math from the University of Washington.
Manaal Faruqui is a research scientist at Google working on Google Assistant, where he leads a team of engineers working on problems at the intersection of speech and language processing. His research focuses on conversational dialog systems, representation learning, distributional semantics, multilingual learning, morphology, natural language processing, deep learning, and machine learning.
Emma Strubell is an Assistant Professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, and a part-time Research Scientist at Google AI. Previously she was a Visiting Researcher at Facebook AI Research after earning her doctoral degree in 2019 at the University of Massachusetts Amherst. Her research lies at the intersection of natural language processing and machine learning, with a focus on green (computationally efficient) AI and providing pragmatic solutions to practitioners who wish to gain insights from natural language text. Her work has been recognized with a Madrona AI Impact Award, best paper awards at ACL 2015 and EMNLP 2018, and cited in news outlets including the New York Times and Wall Street Journal.
David Bamman is an associate professor in the School of Information at UC Berkeley, where he works in the areas of natural language processing and cultural analytics, applying NLP and machine learning to empirical questions in the humanities and social sciences. His research focuses on improving the performance of NLP for underserved domains like literature (including LitBank and BookNLP) and exploring the affordances of empirical methods for the study of literature and culture. Before Berkeley, he received his PhD in the School of Computer Science at Carnegie Mellon University and was a senior researcher at the Perseus Project of Tufts University. Bamman's work is supported by the National Endowment for the Humanities, National Science Foundation, an Amazon Research Award, and an NSF CAREER award.
David Jurgens is an assistant professor in the School of Information at the University of Michigan. He holds a PhD from the University of California, Los Angeles and was a postdoctoral scholar in the Department of Computer Science at Stanford University and, before that, at McGill University. His research combines natural language processing, network science, and data science to discover, explain, and predict human behavior in large social systems. His research has been published in top computational social science and natural language processing venues including PNAS, WWW, ACL, ICWSM, EMNLP, and others. His work has won the Cozzarelli Prize from the National Academy of Sciences, the Cialdini Prize from the Society for Personality and Social Psychology, best paper awards at ICWSM and W-NUT, and best paper nominations at ACL and Web Science, and has been featured in news outlets such as the BBC, Time, MIT Technology Review, New Scientist, and Forbes.
Zhijing Jin is a Ph.D. student at the Max Planck Institute & ETH. Her research goals are two-fold: (1) to expand the impact of NLP by promoting NLP for social good, and (2) to improve NLP models by connecting NLP with causal inference. She is co-supervised by Prof Bernhard Schoelkopf at the Max Planck Institute (main supervisor), Prof Rada Mihalcea at the University of Michigan (as a mentor), and Prof Mrinmaya Sachan and Prof Ryan Cotterell (co-supervision through the ELLIS program) at ETH Zürich. She has published at many NLP and AI venues (e.g., AAAI, ACL, EMNLP, NAACL, COLING, AISTATS), and NLP for healthcare venues (e.g., AAHPM, JPSM). Her work has been cited in MIT News, ACM TechNews, WeVolver, VentureBeat, and Synced. She is actively involved in AI for social good, as an organizer of the NLP for Positive Impact Workshop at ACL 2021 and EMNLP 2022, and the RobustML workshop at ICLR 2021. To support the NLP research community, she organizes the ACL Year-Round Mentorship Program. To foster the causality research community, she is the Publications Chair for the 1st conference on Causal Learning and Reasoning (CLeaR), and organizes the Tutorial on CausalNLP at EMNLP 2022. More information is available on her personal website: zhijing-jin.com
Thamar Solorio is Professor of Computer Science at the University of Houston (UH) and an NLP scientist at Bloomberg. She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica, Óptica y Electrónica, in Puebla, Mexico. Her research interests include information extraction from social media data, enabling technology for code-switched data, stylistic modeling of text, and, more recently, multimodal approaches for online content understanding. She is the director and founder of the RiTUAL Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution, and of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association for Computational Linguistics and was PC co-chair for NAACL 2019. She recently joined the team of Editors in Chief for the ACL Rolling Review (ARR) initiative. Her research is currently funded by the NSF and by Adobe.
Colin Raffel is an Assistant Professor in the Department of Computer Science at the University of North Carolina, Chapel Hill. He also spends one day a week as a Faculty Researcher at Hugging Face. Much of his recent research focuses on machine learning algorithms for learning from limited labeled data, including semi-supervised, unsupervised, and transfer learning.