Matches in Nanopublications for { ?s ?p ?o <http://purl.org/np/RAv2mFe7VuJ4y8zRFW5HaMSjxU1kYdQ6yCeuLfptgZvQI#assertion>. }
Showing items 1 to 4 of 4 with 100 items per page.
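As a rough illustration of how the quad pattern above could be resolved programmatically, the sketch below issues the equivalent SPARQL query (the fourth element of the pattern becomes a GRAPH clause) using Python's SPARQLWrapper. The endpoint URL is a placeholder assumption, not a confirmed nanopublication service; only the assertion graph URI is taken from the query itself. The matches it would return are the four items listed below.

```python
# Minimal sketch: run the quad pattern above as a SPARQL query.
# The endpoint URL is a placeholder, not a confirmed service address.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/nanopub/sparql"  # assumption: substitute a real nanopub SPARQL endpoint
ASSERTION = "http://purl.org/np/RAv2mFe7VuJ4y8zRFW5HaMSjxU1kYdQ6yCeuLfptgZvQI#assertion"

sparql = SPARQLWrapper(ENDPOINT)
# The quad pattern "?s ?p ?o <graph>" corresponds to a GRAPH clause in SPARQL.
sparql.setQuery(f"""
    SELECT ?s ?p ?o
    WHERE {{ GRAPH <{ASSERTION}> {{ ?s ?p ?o }} }}
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```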
- paragraph type Paragraph assertion.
- paragraph hasContent "We compared the common 1,073 triples assessed in each crowdsourcing approach against our gold standard and measured precision as well as inter-rater agreement values for each type of task (see Table 4). For the contest-based approach, the tool allowed two participants to evaluate a single resource. In total, there were 268 inter-evaluations for which we calculated the triple-based inter-rater agreement (adjusting the observed agreement with agreement by chance) to be 0.38. For the microtasks, we measured the inter-rater agreement values between a maximum of 5 workers for each type of task using Fleiss’ kappa measure [10]. While the inter-rater agreement between workers for the interlinking was high (0.7396), those for object values and datatypes were moderate to low with 0.5348 and 0.4960, respectively. Table 4 reports on the precision achieved by the LD experts and crowd in each stage. In the following, we present further details on the results for each type of task." assertion.
- RAfMdSyW3nLMbyrR5dtvxawi_JEJ-EO4bAzzBBAwVtqOE introduces table assertion.
- paragraph contains table assertion.
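The quoted paragraph above reports chance-corrected inter-rater agreement using Fleiss’ kappa. As a minimal sketch of that measure, assuming made-up worker verdicts rather than the study's actual judgements, the snippet below aggregates categorical ratings per triple and computes the kappa with statsmodels; it does not reproduce the reported values (0.7396, 0.5348, 0.4960).

```python
# Illustrative only: Fleiss' kappa over invented microtask judgements.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are triples, columns are raters; each cell is a categorical verdict
# (e.g. 0 = "incorrect", 1 = "correct") given by up to 5 workers.
judgements = np.array([
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
])

# aggregate_raters converts rater-level verdicts into per-subject category
# counts, the table format that fleiss_kappa expects.
table, _categories = aggregate_raters(judgements)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```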