Building Annotated Corpora without Experts
| Authors | |
|---|---|
| Year of publication | 2011 |
| Type | Article in Proceedings |
| Conference | Natural Language Processing, Multilinguality |
| MU Faculty or unit | |
| Citation | |
| Field | Informatics |
| Keywords | corpus annotation, crowdsourcing |
| Description | In this paper, we present a low-cost approach to building a multi-purpose language resource for Czech, based on currently available results of previous work done by various teams. We focus on the first phase, which consists of verifying the validity of automatically discovered syntactic elements in 10 000 sentences by 47 human annotators. Because of the number of annotators and the very limited time for training, existing heavyweight techniques for building annotated corpora were not applicable. We decided to avoid using experts when results differed between annotators. This means that our corpus does not offer definitive answers, but rather raw data and models for deriving a "correct" answer tailored to the user's application. Finally, we discuss the results achieved so far and future plans. |
| Related projects | |
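The description notes that annotator disagreements are kept as raw data rather than resolved by experts, leaving each application to derive its own "correct" answer. A minimal sketch of one plausible derivation, majority voting with an application-specific agreement threshold (the function name, labels, and thresholds below are illustrative assumptions, not taken from the paper):

```python
from collections import Counter

def aggregate(votes, threshold=0.5):
    """Return the majority label if its share of the votes exceeds
    the given threshold, otherwise None (no confident answer).

    `votes` is the list of raw labels one sentence element received
    from the annotators; `threshold` encodes how much agreement a
    particular application demands (hypothetical parameter).
    """
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) > threshold else None

# Three annotators judge whether a detected syntactic element is valid.
# A lenient application accepts simple majority; a strict one may
# require 80% agreement and fall back to treating the item as unresolved.
print(aggregate(["valid", "valid", "invalid"]))                 # 'valid'
print(aggregate(["valid", "valid", "invalid"], threshold=0.8))  # None
```

A stricter threshold trades coverage for precision: fewer elements receive an answer, but those that do carry higher inter-annotator agreement.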