Matches in Nanopublications for { ?s <http://purl.org/spar/c4o/hasContent> ?o ?g. }
- paragraph hasContent "To summarize, in order to populate the WiseNET ontology with the a priori environment knowledge, the relevant information from the IFC file needs to be extracted, then shared and connected to the WiseNET ontology using queries, rules and linked data techniques such as Uniform Resource Identifiers (URIs) and RDF. Linked data technology connects data from one data-source to other data-sources; in our case, linked data allows the WiseNET ontology to obtain extra information from ifcowl if required (e.g., the dimensions of a door, its material and the dimensions of a wall)." assertion.
- section-5.2-title hasContent "Population query" assertion.
- section-5-title hasContent "Ontology population from IFC" assertion.
- paragraph hasContent "The building information has already been added to the WiseNET ontology; the next step is to populate the information about the sensors. A complete smart camera network has been installed on the third storey of the I3M building. The smart cameras are based on the Raspberry Pi 3 system [23]." assertion.
- paragraph hasContent "There are two types of information that need to be populated concerning the smart cameras. Firstly, there is the smart camera setup information, which consists of describing the smart cameras and their relation to the built environment. Secondly, there is the detection information, which is produced each time a smart camera performs a detection. The first type is populated once, at system initialization, and is therefore considered a static population. The second type is populated multiple times (each time there is a detection) and is therefore considered a dynamic population." assertion.
- paragraph hasContent "Figure 5 presents the user interface of the software developed for adding and setting up smart cameras in the system. The software helps to perform the following tasks:" assertion.
- paragraph hasContent "There is optional information that may be added using the setup software, such as:" assertion.
- paragraph hasContent "The information concerning the restriction of a space is deduced by adding a rule stating: if a space has a door with a security system other than a keylock, then that space is a restricted area. A space whose doors have keylock systems could also be a restricted space, but this needs to be stated explicitly." assertion.
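The restriction rule above can be sketched in plain Python. This is a minimal sketch assuming a simple dictionary representation of spaces and doors; the field names (`id`, `doors`, `security`) are illustrative, not actual terms of the WiseNET ontology.

```python
def is_restricted(space, explicitly_restricted=frozenset()):
    """Deduce whether a space is a restricted area.

    Rule from the text: a space with a door whose security system is
    anything other than a keylock is restricted; a keylock-only space
    is restricted only if stated explicitly.
    """
    if space["id"] in explicitly_restricted:
        return True
    return any(door["security"] != "keylock" for door in space["doors"])

office = {"id": "Space_1", "doors": [{"security": "keylock"}]}
server_room = {"id": "Space_2", "doors": [{"security": "badge_reader"}]}

print(is_restricted(office))                          # False
print(is_restricted(server_room))                     # True
print(is_restricted(office, frozenset({"Space_1"})))  # True (stated directly)
```

In the actual system this deduction would be expressed as a logic rule over the ontology rather than imperative code; the sketch only shows the intended semantics.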
- paragraph hasContent "The WiseNET smart camera setup software is connected to a SPARQL endpoint that inserts the information set by the user by running the query shown in Listing 5. This step can be seen as a soft camera calibration that requires only the location of the cameras in the building. This differs from many multi-camera based systems, which require overlap between the cameras' fields of view and knowledge of their orientation, leading to a time-consuming and skill-dependent calibration process [36]. An important contribution of using an ontology during the smart camera setup is the automatic suggestion of pertinent elements according to the space. This is achieved by using the static population of the building information." assertion.
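A minimal sketch of how setup software of this kind could assemble such an insertion query. The prefix IRI and the property names (`wisenet:SmartCamera`, `wisenet:locatedIn`) are assumptions for illustration, not the actual vocabulary of Listing 5.

```python
def build_setup_insert(camera_id, space_id):
    """Build a SPARQL INSERT DATA query linking a camera to the space
    it is located in (hypothetical wisenet: vocabulary)."""
    return (
        "PREFIX wisenet: <http://example.org/wisenet#>\n"
        "INSERT DATA {\n"
        f"  wisenet:{camera_id} a wisenet:SmartCamera ;\n"
        f"    wisenet:locatedIn wisenet:{space_id} .\n"
        "}"
    )

query = build_setup_insert("Camera_5", "Space_3")
print(query)
```

The resulting string would then be POSTed to the SPARQL endpoint; only the camera's location is needed, which is what makes the calibration "soft".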
- section-6.1-title hasContent "Static population" assertion.
- paragraph hasContent "As aforementioned, the main functions of the smart cameras are, firstly, to detect pertinent information using different image processing algorithms and, secondly, to extract the knowledge from the images and send it to the central unit." assertion.
- paragraph hasContent "For the Panoptes building application, the main detectable object for the smart cameras is the person. Therefore, three different image processing algorithms have been implemented: person detection, face detection and fall detection. After detecting some pertinent information, the smart camera describes what it "observes" by using the vocabulary defined in the WiseNET ontology, i.e., the smart camera extracts the knowledge of the scene. Adding semantic meaning to what the camera "observes" is a problem known as the semantic gap [39]." assertion.
- paragraph hasContent "For instance, consider the scene "observed" by the smart camera Camera_5 (top-right image in Figure 2), where a person is being detected in the region of interest Roi_5. The smart camera will describe that scene in the following form: cameraID: "Camera_5", ImageAlgorithm: "Person Detection", RegionOfInterest: "Roi_5", xywh: "107,20,30,50", visualDescriptors: "29,31,45", where all the variables correspond to terms defined in the WiseNET ontology (see Tables 3 and 4). The xywh are the coordinates of the detection box (green bounding box in Figure 2). The visual descriptors of the detection box were obtained by using the RGB histogram method, a classic image processing method that consists of taking the most prominent color tone for each channel (R (red), G (green), B (blue)). It is possible to use different visual descriptors, such as a different color space or even physical characteristics (e.g., height, head size and shoulder width) [4]. After describing the scene, the smart camera sends that knowledge to the central unit by using web services. Moreover, when a person is detected, an instant event is created. An instant event is a type of event that occurs at a specific point in time/space. If it is the first time a person is detected in a specific space, then the event ’person in space’ is also created. This is an interval event and, as its name indicates, it occurs over a time interval (i.e., it has a start and an end time). The ’person in space’ event is an array containing the detections of a person in a specific space. Finally, the central API dynamically inserts this knowledge into the ontology. The central API has several functions, such as performing the static and dynamic populations and managing the system reconfiguration. Currently, the reconfiguration task is under development." assertion.
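The RGB histogram descriptor described above (most prominent tone per channel) can be sketched in pure Python over a list of (R, G, B) pixels. A real smart camera would compute this over the pixels of the detection box; the toy pixel list below is illustrative.

```python
from collections import Counter

def rgb_histogram_descriptor(pixels):
    """Most prominent color tone per channel: for each of R, G and B,
    return the value that occurs most often among the given pixels."""
    return tuple(Counter(channel).most_common(1)[0][0]
                 for channel in zip(*pixels))

# Toy detection box: one dominant tone per channel, plus some noise
pixels = [(29, 31, 45), (29, 31, 45), (29, 200, 45), (10, 31, 45)]
print(rgb_histogram_descriptor(pixels))  # (29, 31, 45)
```

The resulting triple of values is what would be sent as visualDescriptors in the scene description.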
- section-6.2-title hasContent "Dynamic population" assertion.
- section-6-title hasContent "Ontology population from smart cameras" assertion.
- paragraph hasContent "This paper focused on creating an ontology model to fuse and re-purpose the different types of information required by a Panoptes building. Once the model is assembled, it needs to be evaluated to verify that it satisfies its intent." assertion.
- paragraph hasContent "Currently, the smart camera network (SCN) has already been deployed and the image processing algorithms have been embedded on it. However, the central API is still under development, so it is not yet possible to evaluate the complete system. Nevertheless, the WiseNET ontology itself can be evaluated. According to Hitzler et al., the accuracy criterion is a central requirement for ontology evaluation [14]. This criterion consists of verifying that the ontology accurately captures the aspects of the modeled domain for which it was designed. The WiseNET ontology development was based on a set of competency questions (see Table 1), therefore the evaluation consists of showing that those questions can be answered by the ontology. Listing 6 presents the queries used to answer some competency questions. Those questions were selected because they involve aspects which are important in a Panoptes building, such as: knowing how many spaces there are in the storeys, which doors a smart camera monitors, how many people are in a space, at what time a person enters/leaves a space and how much time a person stayed in a space. Question 1 (of Listing 6) was answered by getting all the elements aggregated by the building storeys. For answering question 2, the regions of interest (ROIs) in a camera’s field of view and their physical representation are obtained. Question 3 was answered by counting the number of ’person in space’ events in the specific space. Questions 4 and 5 were answered by using the time interval entity of the event ’person in space’. The time interval entity has a beginning, an end and a duration (the duration is given in a temporal unit)." assertion.
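Question 3, for example, amounts to counting the ’person in space’ interval events for a given space that have not yet ended. A plain-Python sketch of that count (the event fields `type`, `space`, `start`, `end` are illustrative, not the actual ontology terms):

```python
def people_in_space(events, space):
    """CQ3: how many people are in a space? Count the 'person in space'
    interval events for that space whose end time is still open."""
    return sum(1 for e in events
               if e["type"] == "PersonInSpace"
               and e["space"] == space
               and e.get("end") is None)

events = [
    {"type": "PersonInSpace", "space": "Space_3", "start": 10, "end": None},
    {"type": "PersonInSpace", "space": "Space_3", "start": 12, "end": 30},
    {"type": "PersonInSpace", "space": "Space_7", "start": 15, "end": None},
]
print(people_in_space(events, "Space_3"))  # 1
```

In the real system the same count is obtained with a SPARQL query over the dynamically populated ontology (Listing 6); the sketch only shows the underlying logic.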
- paragraph hasContent "Regarding the built environment, this paper gave the definition of a Panoptes building, as well as its formalization using an ontology. Two types of population in the Panoptes environment were also defined: static and dynamic. Furthermore, we believe that a classification of smart buildings should be established, given the generality of the current definition of a smart building. This classification may be created by considering the functionalities of the smart building, the devices utilized, or a combination of both. The third type of classification was applied by defining a Panoptes building as a smart building using only cameras and focusing on monitoring the activities of people. Moreover, we consider that applications concerning smart buildings should exploit the built environment information, especially the data obtained from the IFC. We also proposed to enhance the IFC information by adding functional facts about the spaces and information about the security systems in the built environment." assertion.
- paragraph hasContent "Concerning the WiseNET system, we believe that a semantic-based system may present great advantages over classical computer vision and deep learning systems. First of all, the WiseNET system does not need training and testing data like the other systems. Secondly, by sending only the extracted knowledge of the image, the amount of data transmitted through the network is lower than in the other systems. Furthermore, a semantic-based system allows us to efficiently fuse different types of information (as shown in this paper), especially environmental information, which gives it advantages over other types of systems. Those and other advantages will be studied in future work." assertion.
- paragraph hasContent "Regarding privacy protection, it is important to remark that the SCN used in the WiseNET system does not send or save any image; in that way, the privacy of individuals is protected. However, one exception could be made if the ontology infers that an illegal act is occurring; in that case, the central API can use that inferred knowledge to send a message to the smart camera telling it to start recording and to save the images locally (to keep them as proof). Even in that special case, no images are sent through the network, making the system more secure." assertion.
- paragraph hasContent "The queries shown in Listing 6 could be extended to obtain information that enables the reconfiguration of the SCN. The bi-directionality between the SCN and the central unit is a novelty in the semantic web domain, specifically the mechanism of using the ontology to interact with external devices. In general, ontologies are not designed to be used in that way; therefore, a central API needs to be designed to perform two main tasks. Firstly, to receive all the messages from the SCN, synchronize them (by using timestamps) and then populate the ontology. Secondly, to check the knowledge inferred by the ontology and use it to reconfigure the SCN, by triggering a specific action (e.g., recording and triggering an alarm), changing the image processing algorithm or telling the smart camera to focus on a specific ROI. The dependency on the central API leaves an open question about the automatic externalization of the ontology knowledge, i.e., making the ontology automatically output the inferred knowledge." assertion.
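The two central-API tasks can be sketched as a small class. All names here, and the mapping from inferred facts to camera actions, are hypothetical; the real API would write to a triple store rather than a Python list.

```python
import heapq

class CentralAPI:
    """Sketch of the two central-API tasks described in the text."""

    def __init__(self):
        self._inbox = []    # (timestamp, message) pairs from the SCN
        self.ontology = []  # stand-in for the populated triple store

    def receive(self, timestamp, message):
        # Task 1a: collect SCN messages, keyed on their timestamps
        heapq.heappush(self._inbox, (timestamp, message))

    def populate(self):
        # Task 1b: drain the messages in time order into the ontology
        while self._inbox:
            self.ontology.append(heapq.heappop(self._inbox))

    def reconfigure(self, inferred_facts):
        # Task 2: turn inferred knowledge into camera actions
        actions = {"illegal_act": "start_recording",
                   "person_in_roi": "focus_roi"}
        return [actions[f] for f in inferred_facts if f in actions]

api = CentralAPI()
api.receive(12, "detection B")  # messages may arrive out of order
api.receive(10, "detection A")
api.populate()
print(api.ontology)                      # [(10, 'detection A'), (12, 'detection B')]
print(api.reconfigure(["illegal_act"]))  # ['start_recording']
```

The heap gives timestamp-ordered synchronization cheaply; how the inferred knowledge is pushed back out automatically is exactly the open externalization question raised above.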
- section-7-title hasContent "Discussion" assertion.
- paragraph hasContent "Inspired by the legend of Argus, we tried to develop an "all-seeing" smart building, which we have called a Panoptes building. With that motivation, the WiseNET ontology was developed. Its main goal is to fuse the different built environment contextual information with information coming from the smart camera network and other domain information, to allow real-time event detection and system reconfiguration. The purpose of the developed ontology is to create the kernel of a Panoptes building system (i.e., the WiseNET system), rather than working towards publishing another generic ontology. The ontology development procedure was performed using different semantic technologies, and it consisted of: defining a set of questions that the ontology should answer (competency questions); reusing different domain ontologies (DUL, event, ifcowl, person and ssn); creating a set of classes and properties to connect the different domain ontologies and to complete the application knowledge; defining a set of constraints and extending the expressiveness by using logic rules; and finally, populating the ontology with static information (built environment and smart camera setup) and dynamic information (smart camera detections)." assertion.
- paragraph hasContent "The WiseNET system is a semantic-based real-time reasoning system that fuses different sources of data and is expected to overcome the limitations of multi-camera based systems. The WiseNET system selects relevant information from the video streams and adds contextual information to overcome problems of missing information due to false/missed detections. Additionally, it relates events that occurred at different times, without human interaction. It also protects the users’ privacy by not sending nor saving any image, just extracting the knowledge from them. It may also reconfigure the smart camera network according to the inferred knowledge. In short, the WiseNET system enables interoperability of information from different domains, such as the built environment, event information and information coming from the smart camera network. A future goal of the WiseNET system is to offer services to building users according to information coming from a network of sensors deployed in the built environment and contextual information. This is a highly complex task due to the large scope of the building system, which goes from the static physical structure of the built environment to the internal environment in terms of the dynamic building users and the way they interact with the building facilities." assertion.
- paragraph hasContent "Future work will focus on completing the externalization of the ontology knowledge using the central API and on properly evaluating and comparing the semantic-based system against a classical computer vision system and a deep learning system." assertion.
- section-8-title hasContent "Conclusion and prospectives" assertion.
- paragraph hasContent "The authors thank the Conseil Régional de Bourgogne Franche-Comté and the French government for their funding." assertion.
- paragraph hasContent "We would also like to thank Ali Douiyek and Arnaud Rolet for their technical assistance and management of the technical team, as well as E. Grillet, A. Goncalves, D. Barquilla, E. Menassol, L. Lamarque, G. Kasperek and M. Le Goff for their help in the development of the 3D visualization tools of the building." assertion.
- section-acknowledgements-title hasContent "Acknowledgements" assertion.
- paragraph hasContent "Listing 1" assertion.
- paragraph hasContent "Listing 2" assertion.
- paragraph hasContent "Listing 3" assertion.
- paragraph hasContent "Listing 4" assertion.
- paragraph hasContent "Listing 5" assertion.
- paragraph hasContent "Listing 6" assertion.
- paragraph hasContent "Table 1" assertion.
- paragraph hasContent "Table 3" assertion.
- paragraph hasContent "The outcome of this stage – triples judged as ‘incorrect’ – is then assessed in the Verify stage, in which the crowd confirms/denies the presence of quality issues in each RDF triple processed in the previous stage. We define the Verify Stage as follows:" assertion.