00:00 | Hello everyone. My name is [inaudible], and I am a professor here in the department of |

00:05 | [inaudible] Science, where I direct research at the imaging laboratory. So my research interests lie |

00:12 | primarily in computer vision and video analysis. We heavily focus on the understanding of human |

00:19 | activity, the understanding of group activities, as well as scene understanding in video imagery, along with |

00:28 | the detection of anomalous and diverse events. We are quite interested in being able to apply |
|
00:35 | these algorithms specifically in this space to real-world scenarios. So, as a result, we |

00:41 | are also interested in looking at the systemic impact on the video imagery, and its |

00:48 | impact specifically on the quality of video analysis. So we focus quite a bit on the |

00:54 | design of algorithms that are robust to artifacts that might be present in the video |

01:00 | imagery. Finally, we are also interested in software development, specifically in the realization of software |
|
01:08 | frameworks where video analysis can be tested at scale, which means that we want to |

01:14 | be able to deploy algorithms that can process video streams that are being transferred at |

01:22 | scale. We are specifically looking at challenges that exist when we are deploying cameras in |

01:29 | real-world infrastructures that are designed primarily for safety-type applications, where they need to |

01:38 | cover large geographical spaces. Oftentimes the infrastructure that supports this kind of video acquisition is |

01:46 | served by a range of network infrastructures, wired as well as wireless, causing a |

01:54 | variety of artifacts to be embedded in the imagery that gets collected, predominantly coming from |

01:59 | network artifacts such as packet loss, and so on and so forth. We also |
|
02:05 | look at how we can generalize these algorithms for broader applicability, because oftentimes in unconstrained |

02:15 | settings it is very difficult to define exactly the task, or which specific analysis needs to be |

02:23 | performed. And as a result, we have to be able to design custom algorithms |

02:29 | on the fly that may be able to be deployed, or to be adopted, for the |

02:36 | types of scenarios that one may encounter. And the third open challenge is |
|
02:42 | the ability to provide the analyzed information in a human-consumable form, especially when |

02:52 | you are talking about hours and hours of video imagery that needs to be analyzed. |

02:56 | The information coming from them can be overwhelming for a human to be able to |

03:02 | understand and take actionable steps on what to do next. So we want to be able to |

03:07 | provide the appropriate kind of information so that humans can make the appropriate decisions on the |

03:14 | information that becomes available through video analysis. That is a critical aspect that we |
|
03:19 | look at as well. So I wanted to quickly highlight some of the |

03:23 | ongoing research efforts. First, on the algorithmic side, as I said, we quite |

03:28 | often look at how we can understand video from the acquired imagery in real |

03:33 | time. We also look at algorithms where we can potentially try and recover some of |

03:40 | the information lost because of network artifacts, so we can actually improve the quality of the |

03:47 | image that we are able to apply our algorithms to. |
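As a rough illustration of the kind of recovery step mentioned here, the sketch below shows one very simple concealment strategy: when a decoded frame is missing or looks heavily corrupted by packet loss, fall back to the last good frame. The stream URL, the corruption heuristic, and the thresholds are assumptions made for the example; this is not the lab's actual method.

```python
# Minimal sketch (illustrative only): conceal dropped or corrupted frames in a
# lossy video stream by falling back to the last successfully decoded frame.
# The stream URL, the corruption heuristic, and the thresholds are assumptions
# for this example, not the lab's actual recovery pipeline.
import cv2

STREAM_URL = "rtsp://example.com/camera1"   # hypothetical camera stream
MIN_STDDEV = 4.0                            # assumed "nearly uniform frame" threshold

def looks_corrupted(frame):
    """Crude heuristic: a nearly uniform frame (e.g. the grey or green smear
    left behind by heavy packet loss) is treated as corrupted."""
    return frame is None or float(frame.std()) < MIN_STDDEV

def concealed_frames(url=STREAM_URL, max_consecutive_failures=30):
    """Yield frames, substituting the last good frame when decoding fails."""
    cap = cv2.VideoCapture(url)
    last_good, failures = None, 0
    while failures < max_consecutive_failures:
        ok, frame = cap.read()
        if ok and not looks_corrupted(frame):
            last_good, failures = frame, 0
        elif last_good is not None:
            failures += 1
            frame = last_good.copy()        # conceal the loss with the last good frame
        else:
            failures += 1
            continue                        # nothing decodable yet, keep waiting
        yield frame
    cap.release()

if __name__ == "__main__":
    for i, frame in enumerate(concealed_frames()):
        # downstream analysis (detection, tracking, ...) would consume `frame` here
        if i >= 300:
            break
```

Real systems would of course do something smarter than frame repetition (motion-compensated concealment, for instance), but the control flow above captures the general shape of the idea.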
|
03:54 | At the same time, we also focus on the design of video-quality-aware algorithms for object detection, object tracking, as |

04:00 | well as object re-identification. |
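Purely to make the detection-and-tracking vocabulary concrete, here is a minimal sketch that chains OpenCV background subtraction with a naive nearest-centroid tracker. The thresholds, the input path, and the matching rule are assumptions for the example; the group's actual detection, tracking, and re-identification models are not described in this talk.

```python
# Minimal sketch (illustrative only): detect moving objects with background
# subtraction and link them across frames with a naive nearest-centroid tracker.
# The thresholds and the input path are assumptions for this example.
import cv2

MIN_AREA = 500           # assumed minimum blob area, in pixels
MAX_MATCH_DIST = 50.0    # assumed maximum centroid distance for a track match

def detect(frame, subtractor):
    """Return centroids of sufficiently large moving blobs in one frame."""
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for contour in contours:
        if cv2.contourArea(contour) >= MIN_AREA:
            x, y, w, h = cv2.boundingRect(contour)
            centroids.append((x + w / 2.0, y + h / 2.0))
    return centroids

def track(video_path="surveillance_clip.mp4"):
    """Assign persistent IDs to detections via nearest-centroid matching."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    tracks, next_id = {}, 0                  # track id -> last known centroid
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for cx, cy in detect(frame, subtractor):
            # match the detection to the closest existing track, if close enough
            best_id, best_dist = None, MAX_MATCH_DIST
            for tid, (px, py) in tracks.items():
                dist = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if dist < best_dist:
                    best_id, best_dist = tid, dist
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
            tracks[best_id] = (cx, cy)
    cap.release()
    print(f"{next_id} distinct tracks observed")

if __name__ == "__main__":
    track()
```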
|
04:06 | And, as I said before, we design algorithms for understanding human activity as well as abnormal events. So these are |

04:13 | the areas that we are actively pursuing in the research group, and of course these are |

04:18 | some fairly challenging problems. So we continue to explore different |

04:25 | approaches for how we can improve algorithms in order to analyze this information. |
|
04:32 | I also want to mention that we have been designing scalable software |

04:37 | frameworks. We specifically leverage cloud architectures as well as |

04:44 | Kubernetes clusters and dockerized deployments to be able to deploy our algorithms, so that not |

04:52 | only can we test the algorithms that we develop in the research group, but we can also make |

04:57 | them available to other users, certainly, but also to other end users that we |

05:03 | often partner with, to be able to apply our algorithms on their own video data. |
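As a concrete sketch of what such a deployment might look like, the service below wraps a placeholder analysis routine behind a small HTTP endpoint; this is the kind of process that would be packaged into a Docker image and replicated on a Kubernetes cluster. The endpoint name, port, and analysis function are assumptions for the example, not the lab's actual framework.

```python
# Minimal sketch (illustrative only): expose a video-analysis routine as a small
# HTTP service so it can be packaged into a Docker image and scaled out as
# replicas behind a Kubernetes Service. The endpoint, port, and the placeholder
# analysis function are assumptions for this example, not the lab's framework.
from flask import Flask, jsonify, request
import cv2
import numpy as np

app = Flask(__name__)

def analyze_frame(frame):
    """Placeholder for a real model (detection, tracking, anomaly scoring, ...)."""
    return {"mean_intensity": float(frame.mean()), "shape": list(frame.shape)}

@app.route("/analyze", methods=["POST"])
def analyze():
    # The client POSTs one encoded frame (e.g. JPEG bytes) in the request body.
    buffer = np.frombuffer(request.get_data(), dtype=np.uint8)
    frame = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
    if frame is None:
        return jsonify({"error": "could not decode frame"}), 400
    return jsonify(analyze_frame(frame))

if __name__ == "__main__":
    # In a containerized deployment this process would sit behind a Kubernetes
    # Service or Ingress; the cluster, not the app, handles the replica count.
    app.run(host="0.0.0.0", port=8080)
```

A partner could then send frames with something like `curl --data-binary @frame.jpg http://<service-host>:8080/analyze` and receive JSON results to feed into their own tooling.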
|
05:08 | I look forward to answering any questions that you have in the coming days. |
|
|