Open Information Extraction (OIE) is a well-performing intermediate step for tasks such as summarization, text comprehension, relation extraction, or knowledge base construction. However, there is surprisingly little work on evaluating and comparing different OIE methods. How can the results of existing OIE systems be compared? How domain-independent are they, in particular for idiosyncratic domains such as medical text or fashion? What are the typical error classes, and what is their impact? From an extensive study on a corpus of bio-medical academic papers and three OIE systems, we report first quantitative and qualitative observations. Our insights inform the design of new OIE systems and their interplay with downstream applications, such as relation extraction or chatbots.
27.03.2017 - 16:00