Show simple item record

dc.contributor.author	Yadav, Piyush
dc.contributor.author	Salwala, Dhaval
dc.contributor.author	Das, Dibya Prakash
dc.contributor.author	Curry, Edward
dc.date.accessioned	2021-08-13T11:49:11Z
dc.date.issued	2020
dc.identifier.citation	Yadav, Piyush, Salwala, Dhaval, Das, Dibya Prakash, & Curry, Edward. (2020). Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing. International Journal of Semantic Computing, 14(03), 423-455. doi:10.1142/s1793351x20500051	en_IE
dc.identifier.issn	1793-7108
dc.identifier.uri	http://hdl.handle.net/10379/16902
dc.description.abstract	Complex Event Processing (CEP) is an event processing paradigm for performing real-time analytics over streaming data and matching high-level event patterns. Presently, CEP is limited to processing structured data streams; video streams, with their unstructured data model, prevent CEP systems from performing matching over them. This work introduces a graph-based structure for continuously evolving video streams, which enables a CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationship interactions over time and space as edges. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts in the video using an ensemble of deep learning models. A CEP-based state optimization, the VEKG-Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries and are applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with F-Scores ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG was able to reduce 99% and 93% of VEKG nodes and edges, respectively, with 5.19X faster search time, achieving sub-second median latencies of 4–20 ms.	en_IE
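The abstract's core idea (video objects as graph nodes, their spatiotemporal relations as time-stamped edges, and event patterns as queries over that graph) can be illustrated with a minimal sketch. This is not the authors' implementation; the class and field names (`ObjectNode`, `RelationEdge`, `match`, and the example relations such as `"near"`) are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectNode:
    obj_id: str   # tracker-assigned identity, e.g. "car_1" (hypothetical)
    label: str    # semantic class from an object detector, e.g. "car"

@dataclass(frozen=True)
class RelationEdge:
    src: str       # obj_id of the first object
    dst: str       # obj_id of the second object
    relation: str  # spatial/temporal relation label, e.g. "near"
    frame: int     # frame index at which the relation was observed

class VEKG:
    """Toy VEKG-style graph: object nodes plus time-stamped relation edges."""

    def __init__(self):
        self.nodes = {}   # obj_id -> ObjectNode
        self.edges = []   # list of RelationEdge

    def add_object(self, node):
        self.nodes[node.obj_id] = node

    def add_relation(self, edge):
        self.edges.append(edge)

    def match(self, relation, window):
        """Return edges carrying `relation` within a frame window,
        a stand-in for CEP pattern matching over the graph."""
        lo, hi = window
        return [e for e in self.edges
                if e.relation == relation and lo <= e.frame <= hi]

g = VEKG()
g.add_object(ObjectNode("car_1", "car"))
g.add_object(ObjectNode("person_1", "person"))
g.add_relation(RelationEdge("person_1", "car_1", "near", frame=12))
g.add_relation(RelationEdge("person_1", "car_1", "near", frame=13))
print(len(g.match("near", (10, 13))))  # 2
```

A VEKG-TAG-style optimization would then collapse such per-frame edges into one aggregated edge per object pair, summarizing the frames over which the relation held.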
dc.description.sponsorship	This work was supported by Science Foundation Ireland grants SFI/13/RC/2094 and SFI/12/RC/2289 P2.	en_IE
dc.format	application/pdf	en_IE
dc.language.iso	en	en_IE
dc.publisher	World Scientific Publishing	en_IE
dc.relation.ispartof	International Journal Of Semantic Computing	en
dc.rights	CC BY-NC-ND 3.0 IE
dc.rights.uri	https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
dc.subject	Video representation	en_IE
dc.subject	knowledge graphs	en_IE
dc.subject	video streams	en_IE
dc.subject	complex event processing	en_IE
dc.subject	event rules	en_IE
dc.subject	pattern matching	en_IE
dc.subject	spatiotemporal networks	en_IE
dc.title	Knowledge graph driven approach to represent video streams for spatiotemporal event pattern matching in complex event processing	en_IE
dc.type	Article	en_IE
dc.date.updated	2021-08-13T10:20:30Z
dc.identifier.doi	10.1142/S1793351X20500051
dc.local.publishedsource	https://doi.org/10.1142/S1793351X20500051	en_IE
dc.description.peer-reviewed	peer-reviewed
dc.contributor.funder	Science Foundation Ireland	en_IE
dc.description.embargo	2021-09-30
dc.internal.rssid	23844747
dc.local.contact	Piyush Yadav, Insight Center For Data Analytics, Nuig. - Email: p.yadav1@nuigalway.ie
dc.local.copyrightchecked	Yes
dc.local.version	ACCEPTED
dcterms.project	info:eu-repo/grantAgreement/SFI/SFI Research Centres/13/RC/2094/IE/Lero - the Irish Software Research Centre/	en_IE
dcterms.project	info:eu-repo/grantAgreement/SFI/SFI Research Centres/12/RC/2289/IE/INSIGHT - Irelands Big Data and Analytics Research Centre/	en_IE
nui.item.downloads	53

