Introducing QA-SRL Bank 2.0

The first large-scale QA-SRL dataset and the first high-quality QA-SRL parser

Explore the data » Paper » Download » Model » Live Demo »

About the QA-SRL Project

We are a group of researchers spanning the University of Washington, Bar-Ilan University, Facebook AI Research, and the Allen Institute for Artificial Intelligence. Our goal is to advance the state of the art in broad-coverage natural language understanding. We believe the way forward is with new datasets that satisfy several key design criteria.

Our research explores a variety of points in the design space spanned by these criteria. The common thread across our projects is using natural language to annotate natural language. This yields interpretable structures that non-experts can annotate at scale, with the further advantage of being agnostic to any particular choice of linguistic formalism.
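To make the idea concrete: QA-SRL captures a verb's arguments as natural-language question-answer pairs rather than formal role labels. The sketch below shows one plausible shape for such an annotation; the sentence, questions, and field names are illustrative assumptions, not records or schema from the released dataset.

```python
# Minimal sketch of a QA-SRL-style annotation: each verb's arguments are
# represented as wh-questions paired with answer spans from the sentence.
# The sentence, QA pairs, and dict keys here are illustrative only.

sentence = "The chef prepared a meal for the guests ."
tokens = sentence.split()

annotation = {
    "verb": "prepared",
    "verb_index": 2,  # position of the verb in the token list
    "qa_pairs": [
        # Questions use QA-SRL's templated wh-question form;
        # answers are contiguous spans of the original sentence.
        {"question": "Who prepared something?", "answers": ["The chef"]},
        {"question": "What did someone prepare?", "answers": ["a meal"]},
        {"question": "Who was something prepared for?", "answers": ["the guests"]},
    ],
}

# Sanity checks: the verb index points at the verb, and every answer
# span is recoverable from the sentence itself.
assert tokens[annotation["verb_index"]] == annotation["verb"]
for qa in annotation["qa_pairs"]:
    for answer in qa["answers"]:
        assert answer in sentence
```

Because both questions and answers are ordinary English, annotations like this can be written and verified by non-experts, without training them in a role-labeling formalism.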


Publications

Large-Scale QA-SRL Parsing
Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer
ACL 2018
PDF Website Code Data Bib

Crowdsourcing Question-Answer Meaning Representations
Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer
NAACL 2018
PDF Code Data Bib
arXiv long version (Nov 2017) PDF Bib

Supervised Open Information Extraction
Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan
NAACL 2018
PDF Code Poster Bib

Human-in-the-Loop Parsing
Luheng He, Julian Michael, Mike Lewis, and Luke Zettlemoyer
EMNLP 2016
S2 PDF Code Slides Bib

Specifying and Annotating Reduced Argument Span Via QA-SRL
Gabriel Stanovsky, Meni Adler, and Ido Dagan
ACL 2016
S2 PDF Talk Slides Bib