About the QA-SRL Project
We are a group of researchers spanning the University of Washington, Bar-Ilan University, Facebook AI Research, and the Allen Institute for Artificial Intelligence. Our goal is to advance the state of the art in broad-coverage natural language understanding. We believe the way forward is with new datasets that are:
- Crowdsourced: modern machine learning methods require large training sets, so scalability of annotation is a top priority.
- Richly structured: in order to improve over powerful representations learned from unlabeled data, we need a strong, structured supervision signal.
- Extensible: annotation schemas should be flexible enough to accommodate new semantic phenomena without requiring expensive rounds of reannotation or brittle postprocessing rules.
Our research explores a variety of points in the design space spanned by these criteria. The common thread across our projects is using natural language to annotate natural language. This yields interpretable structures that non-experts can annotate at scale, and that have the further advantage of being agnostic to the choice of linguistic formalism.
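To make this concrete, QA-SRL represents each semantic role of a verb as a question-answer pair: the question is phrased in plain English, and the answer is a span of the original sentence. The sketch below shows what such an annotation might look like as a simple data structure; the sentence, field names, and layout are illustrative assumptions, not the schema of any released QA-SRL dataset.

```python
# A minimal, hypothetical rendering of a QA-SRL style annotation:
# each semantic role of a verb is captured by a natural-language
# question whose answer is a span of the original sentence.
# Field names are illustrative, not an official data format.

sentence = "The doctor examined the patient yesterday."

annotation = {
    "verb": "examined",  # the predicate being annotated
    "qa_pairs": [
        # agent role (PropBank would call this ARG0)
        {"question": "Who examined someone?", "answer": "The doctor"},
        # patient role (ARG1)
        {"question": "Who was examined?", "answer": "the patient"},
        # temporal modifier (ARGM-TMP)
        {"question": "When did someone examine someone?", "answer": "yesterday"},
    ],
}

# Because the questions are ordinary English, non-experts can write and
# verify them without learning a role inventory like ARG0/ARG1.
for qa in annotation["qa_pairs"]:
    print(f'{qa["question"]:45} -> {qa["answer"]}')
```

The same structure stays meaningful no matter which syntactic or semantic formalism one prefers, which is what makes the scheme extensible.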
Publications
Inducing Semantic Roles Without Syntax
Julian Michael and Luke Zettlemoyer
Findings of ACL 2021
Large-Scale QA-SRL Parsing
Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer
ACL 2018 (Honorable Mention)
Human-in-the-Loop Parsing
Luheng He, Julian Michael, Mike Lewis, and Luke Zettlemoyer
EMNLP 2016
Specifying and Annotating Reduced Argument Span Via QA-SRL
Gabriel Stanovsky, Meni Adler, and Ido Dagan
ACL 2016