Matches in Nanopublications for { ?s ?p ?o <http://purl.org/np/RA00cli0m3KdV0yhsRQJEYcWxd7H3yUl6OmTSWRVVE4oQ#assertion>. }
- paragraph type Paragraph assertion.
- paragraph hasContent "Two of the authors of this paper (MA, AZ) generated a gold standard for two samples of the crowdsourced triples. To generate the gold standard, each author independently evaluated the triples. After this individual assessment, they compared their results and resolved conflicts via mutual agreement. The first sample evaluated corresponds to the set of triples obtained from the contest and submitted to MTurk. The inter-rater agreement between the authors for this first sample was 0.4523 for object values, 0.5554 for datatypes, and 0.5666 for interlinks. For the second sample, we analyzed a subset of the triples identified as ‘incorrect’ by the crowd in the Find stage. The subset has the same distribution of quality issues and triples as the one assessed in the first sample: 509 triples for object values, 341 for datatypes/language tags, and 223 for interlinks. The inter-rater agreement for this second sample was 0.6363 for object values, 0.8285 for datatypes, and 0.7074 for interlinks. The inter-rater agreement values were calculated using Cohen’s kappa measure [6], designed for measuring agreement between two annotators. Disagreement arose in the object value triples when one of the reviewers marked number values that are rounded up to the next round number as correct. For example, the length of the course of the “1949 Ulster Grand Prix” was 26.5 km in Wikipedia but rounded up to 27 km in DBpedia. In the case of datatypes, most disagreements stemmed from considering the datatype “number” of the value for the property “year” as correct. For the links, those containing unrelated content were marked as correct by one of the reviewers because the link existed in the Wikipedia page." assertion.
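The agreement values quoted in the assertion use Cohen's kappa for two annotators, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal Python sketch of that computation is below; the reviewer labels in the usage example are hypothetical toy data, not the paper's gold standard.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two annotators labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal label distribution.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginals.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[lbl] * freq_b.get(lbl, 0) for lbl in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two reviewers judging five triples.
ma = ["correct", "correct", "incorrect", "correct", "incorrect"]
az = ["correct", "incorrect", "incorrect", "correct", "correct"]
print(cohen_kappa(ma, az))  # 0.1667 for this toy sample
```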