Matches in Nanopublications for { <http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#paragraph> ?p ?o ?g. }
Showing items 1 to 2 of 2, with 100 items per page.
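The match pattern in the header is a quad pattern: for the given subject it binds the predicate (?p), the object (?o), and the named graph (?g) containing each statement. A minimal SPARQL sketch of the underlying query, assuming an endpoint that exposes nanopublication graphs as named graphs (the LIMIT mirrors the page size shown above and is illustrative), might be:

```sparql
# Hypothetical reconstruction of the query behind this result page:
# fetch every (predicate, object, graph) quad whose subject is the
# #paragraph resource of this nanopublication.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#paragraph> ?p ?o .
  }
}
LIMIT 100  # page size shown in the listing
```

Each result row is one quad; the two rows below correspond to the type and hasContent statements, both located in the nanopublication's assertion graph.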
- paragraph type Paragraph (graph: assertion).
- paragraph hasContent "In this work, we crowdsource three specific LD quality issues. We did so by building on previous work of ours [43], which analyzed common quality problems encountered in Linked Data sources and classified them according to the extent to which they could be amenable to crowdsourcing. The first research question explored is hence: RQ1: Is it feasible to detect quality issues in LD sets via crowdsourcing mechanisms? This question aims at establishing a general understanding of whether crowdsourcing approaches can be used to find issues in LD sets and, if so, to what degree they are an efficient and effective solution. Secondly, given the option of different crowds, we formulate RQ2: In a crowdsourcing approach, can we employ unskilled lay users to identify quality issues in RDF triple data, or to what extent is expert validation needed and desirable? As a subquestion to RQ2, we also examined which type of crowd is most suitable to detect which type of quality issue (and, conversely, which errors they are prone to make). With these questions, we are interested in (i) learning to what extent we can exploit the cost-efficiency of lay users, or whether the quality of error detection is prohibitively low. We (ii) investigate how well experts generally perform in a crowdsourcing setting and whether and how they outperform lay users. Lastly, (iii) it is of interest whether one of the two distinct approaches performs well in areas that are not a strength of the other method and crowd." (graph: assertion).