00:02 All right, good morning. Um, today we are going to start a discussion of

00:11 a new topic, which is tracking. So, um, so far we've

00:17 been talking about, um, designing detectors and, in general, trying to

00:25 understand how we build an ability to recognize objects, certainly in the context of

00:33 detection, but also in the context of trying to categorize some of the objects of similar

00:39 appearance. So one of the other open problems, one that has a very broad

00:47 range of applications, is tracking. Tracking could be relevant to, you

00:53 know, essentially detection, and to looking at how a particular object might, you

01:02 know, move in space and time given our observations. So tracking has multiple

01:11 aspects. It's essentially trying to locate, ah, objects that are potentially of interest to

01:17 us. Of course, identifying what those objects might be can be part of the

01:21 problem. But the other aspect, which is actually determining the dynamic configuration,

01:28 that is, how the object is actually moving and changing positions over time, is critical

01:36 to addressing the tracking problem. In a way, it's, ah, the ability of humans

01:41 to follow something with their eyes. Wherever that object goes, we are able to

01:48 see it and track it, ah, in spite of all the challenges that might be

01:58 present in the context of scene visualization, when the objects might actually appear only partially

02:05 in the scene or might be similar to other objects. So it can be

02:11 challenging to identify and recognize a particular object, and so on. So,

02:16 we'll look at each one of these aspects to talk a little bit more about

02:20 the problem. So here's an example of tracking, and what you see is

02:27 a bounding contour around the object we are tracking, in this case a

02:34 person. As the person moves in the scene, our contour responds, and we are

02:39 able to localize, sort of, the outline of the person.

02:47 And in a way, you can think of this problem as saying that, you

02:51 know, we are tracking the center of the object, the object being thought of as one

02:58 point. Or we can also think of this and say, well, I'm really

03:03 tracking the boundary. The boundary is made up of multiple points, and each

03:09 point is something that I'm actually able to match as it's, ah, changing

03:16 position over time, right? So, in a very abstract sense, tracking

03:24 is really about looking at a position and how that position evolves with time, right?

03:32 Now, as I said, there is a broad range of applications for tracking.

03:38 It can be thought of as a low-level algorithm, if you

03:41 will, for many other applications. Things like human-computer interaction, right. So these

03:48 days, um, you know, there is AR, that is, the context

03:54 of augmented reality. You're essentially embedding real objects into a virtual world, if you

04:01 will. In order to understand how a real object is changing, and essentially be

04:07 able to model that in the virtual world, we need to be able to track

04:11 the real object. And in most cases, the tracking is happening with respect to

04:18 the real objects in the scene. A great example of this, you know,

04:23 is one that came about a few years back, when Microsoft introduced the Kinect,

04:30 right? So the Kinect has two types of sensors. You know, it's actually got an

04:35 infrared sensor along with an infrared grid projector, and then, of course, it's got

04:42 a visual sensor as well, a visual camera. And what it allows one to do

04:48 is to essentially track a person based on their movements. It's able to

04:59 sort of allow for interaction in a virtual world, a game, if you

05:07 will. That would be one example. Now, beyond human-computer interaction, augmented

05:13 reality and so on, ah, there are applications in security and surveillance, of

05:19 course. Ah, you know, because, as you can imagine, in public safety

05:23 scenarios, there is a need to try to understand how objects move, to potentially identify

05:30 objects that collide with each other, things like, you know, traffic accidents and so

05:33 on and so forth. Ah, and at the same time, there might be applications

05:39 that relate to security: being able to, um, you know, track a person,

05:45 for example. You know, if people were to, um, you know,

05:51 get into a public fight, you would have to be able to tease out who

05:56 are the people involved, where they moved, maybe before or after a particular

06:03 incident, to identify directions in which they may have traversed a store or street,

06:09 and so on and so forth. So there is a broad range of applications for

06:15 tracking, um, in a variety of scenarios. Now, to get to the core of it, right, I

06:24 mean, we have to understand, from a computer vision standpoint, what tracking

06:29 is really supposed to do, right? So in a very simple sense, the

06:35 idea is that, given an input video, right, if you have a

06:39 scene, uh, let's say, for a scene like this, with people walking down a street, tracking has

06:46 to essentially be able to tell us where the objects of interest are, in this case

06:53 the people, uh, at the current time. And, of course, it has to

06:57 be able to tell us what the particular trajectory of those objects was prior to

07:06 the current time. Right? So it's important that a tracking algorithm

07:12 at its core be able to deliver two pieces of information, which is the current

07:17 position of objects of interest and how they came to that current position over time.

07:24 Right? So that's really what we're trying to build or design in an algorithm

07:30 when we talk about a tracker, or a tracking algorithm. So let's look

07:35 at this problem and break it down into the elements, ah, that may be

07:43 important for us to think about when we think about a tracking algorithm, right?

07:47 So let's look at a, you know, sort of, ah, simple

07:52 scenario. Let's say you have a hallway and a person

07:59 that you're interested in tracking, and that's the only person in the scene for now.

08:03 Okay, so first and foremost, of course, we want to be able to

08:08 detect the object of interest. Now, you know, we've been talking about

08:13 detectors for the past, I guess, few lectures. So you know

08:20 how to go ahead and build a detector. In this context, we're

08:24 talking about a person detector, so we can do that. And we know the

08:28 detector will give us a bounding box around the person. Now, of course, you

08:31 know, we want to be able to think about tracking, where tracking tells us the

08:37 position of the object we're interested in. So, you know, you can think

08:42 about a position in 2D. You can pick any position. Let's say

08:47 we pick the position that is, you know, on the lower edge of the

08:53 bounding box, maybe its center. One could pick any other position: it could be the top

08:57 right of this object, or the centroid of the bounding box could

09:02 be picked as the position. But let's, in this example, use this particular

09:06 point as the position of the object. So now, at some point

09:13 after that, the object will move and will appear in a different

09:19 position. Now we can basically do our detection again on that particular image, and we

09:24 get a bounding box. And similarly, you know, we find the center of the

09:29 lower edge of the bounding box, and that gives us a new position, let's say at time

09:33 t. So p sub t is our new position. Great. So now we're

09:38 able to actually just apply our detector over and over again, to be able to identify

09:45 the object that we want to localize, and each detection gives us the position

09:51 of the object. Ah, and of course, now, if I connect

09:56 all these positions in time, I will end up getting a trajectory. So in

10:01 a way, maybe my tracking problem is solved: it gave me the position of the

10:08 object in every image, it let me connect all these positions and essentially come

10:15 up with, ah, a temporal evolution of points, which tells me how that

10:21 particular object came to its current position at the present time. Right?
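
(A minimal sketch of this tracking-by-detection loop in code might look as follows. This is illustrative only, assuming a hypothetical detector function `detect` that returns bounding boxes as (x, y, w, h) tuples; any of the detectors discussed in earlier lectures could play that role.)

```python
def bbox_position(bbox):
    """Reference point: center of the lower edge of the bounding box."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h)

def track_by_detection(frames, detect):
    """Run the detector on every frame and connect the resulting
    positions into a trajectory p_1, p_2, ..., p_t."""
    trajectory = []
    for frame in frames:
        boxes = detect(frame)     # hypothetical detector
        if boxes:                 # single-object case: one detection expected
            trajectory.append(bbox_position(boxes[0]))
    return trajectory
```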

10:26 Now, in this case, of course, ah, you know, detection is what I'm

10:30 using, so we can call this tracking by detection. Of course, all we

10:34 have is the model that we used while building our detector. Ah,

10:42 let's say, you know, we'd be using, for example, the shape. You

10:46 know, we could have used, uh, anything that we talked about so

10:51 far, right? HOG descriptors or Haar features. Any one of those is what we

10:57 could have used in designing our detector. But at the end, we

11:01 end up being able to do, uh, tracking, in this

11:06 case, tracking by detection, because that's all we're doing. Now, of course, as you

11:11 know, um, you know, detectors are not perfect. One of the things

11:16 that we, uh, have talked about is the fact that, you know,

11:20 detectors can produce false detections. So let's say, ah, for whatever reason,

11:29 we come back to our simple example: a single person in the scene. We're

11:34 doing our detection. We detected the person at position p sub 1. Then, in the

11:40 next frames, we have the person moving. And at some point in time, let's

11:46 say time step k, we got a false detection from our detector, and we

11:51 ended up finding this particular bounding box. And, of course, because we got

11:56 a bounding box, we assume, you know, since we're not

12:01 visually inspecting every detection, we assume the detections are correct. We get the

12:05 detection and, of course, we get the location of the bounding box, at

12:10 the consistent point that we chose, and call it p sub k. Now,

12:15 of course, if I, you know, am simply creating the trajectory

12:19 by connecting all these points, the trajectory I will get is sort of what

12:24 I'm showing here. Um, so we have, ah, an evolution of the object

12:29 with all these points, where potentially one of the points is not quite correct.

12:36 And this can be problematic, because now we don't necessarily have the correct trajectory, the

12:41 true trajectory. The other issue with detectors is, of course, you

12:47 know, that we may be unable to actually detect the object very well, for various reasons.

12:53 One of them, ah, being an obvious one, which is the object

12:57 being occluded. So in this example, let's say there are objects in front, and we

13:00 can't quite see the object of interest, the person, and our detection completely fails.

13:07 So if that happens at a particular time point, we will simply not get

13:12 any detection. Um, so both of these issues related to detectors can kind of

13:22 be problematic, right? One is false detections. The other is misses.

13:27 And in either case, we may not necessarily be able to get a correct

13:32 trajectory, and as a result, our tracking solution is not quite to the standard

13:40 that we may desire. Now, of course, there are other problems with

13:44 object detection, you know, because, for example, we talked about

13:49 this, where, once we train a detector, typically our training is made up of

13:56 exemplar images. And if the exemplars don't capture all the potential variations that we

14:02 might encounter, ah, during testing, our detector may not quite work very

14:08 well, right? And of course, when we're talking about moving objects, there's

14:12 going to be a lot of variation. Pose variations, illumination variations could be two

14:19 common types; there could be others. So in either case, ah, you know,

14:22 detection may or may not be quite up to par. And that translates to,

14:28 ah, the accuracy of tracking as well. Now, the other thing is, of

14:35 course, what happens when we have multiple objects in the scene, right? So let's say,

14:39 instead of having one person that we are tracking, now we have two people that

14:44 we're tracking. Okay, we can still do our detection. And of course, when

14:49 we do that, we'll end up with two detections. Right? So we'll get

14:52 two bounding boxes, because there are two people in the scene. And of course, now

14:57 we have, at some point in time, as the two objects move, to grasp which

15:04 detection belongs to which object. So, which detection would belong to which object

15:10 in this case? So the idea of piecing this out when we have multiple objects

15:16 in the scene, and making sure that we associate the correct detection, at each time

15:23 step, to the correct object that we are tracking: this problem is called data

15:28 association, right? So we have to know which observations, or measurements, would

15:34 go with which particular ID of an object that we're tracking. Now, this can

15:41 be quite an interesting problem. And typically, for data association, one way to

15:49 think about it is that maybe if I, um, you know, I kind of

15:54 know a little bit more about the object that I'm tracking, uh, maybe that

15:59 additional information can help me solve the data association problem. Right. So let's say

16:05 the simplest strategy that we can apply is that, maybe, as we get every new

16:11 frame, remember, for every new frame we are computing the current position of a

16:17 particular object. So one simple way to do data association is to say: why not

16:23 assign the current detection we have to the ID for which the location of

16:34 the current detection is as close as possible to the location of the previous detection?

16:40 Right. So this would be a very simple strategy of saying, choose the detection

16:45 that is closest to the last position. Okay? So if, for example,

16:52 I have, ah, a detection... you know, let's say, you know,

16:59 I'm tracking the red dot, right? And I end up seeing two detections, each

17:05 of these purple dots. The question is, which one is the red dot's ID?

17:12 And you can say, well, you know, let me look at the distance between

17:16 the red and this purple dot, and the red and this purple dot, and

17:21 this one seems to be the closest, so the odds are that this purple dot

17:25 is exactly the same ID as the red dot, and I make those two associations.

17:32 So that's the idea. So we essentially end up doing that. Let's

17:37 say we have the two objects; you know, we find their locations as we

17:42 detect, and as the objects move, you know, we'll end up finding two new bounding

17:47 boxes and, of course, the new locations of the objects. And I say,

17:51 well, you know, obviously, if I look at these, this blue dot and

17:56 this light dot are closer to each other than this blue and that light one, and the

18:00 same for the other pair. So, you know, these two should be associated together as belonging to

18:06 the same object. Great, and that kind of works, of course. Ah, one way

18:13 to think about that, you know, in a formulation, is that, you know,

18:16 if I have, uh, two objects, P and Q, okay? And of

18:22 course, I am creating their trajectories over time. So p_1 to p_(t-1)

18:27 is the trajectory of object P, and q_1 to

18:32 q_(t-1) is the trajectory of object Q. Sorry, this should

18:36 be a q. And then I get two detections, m_1 and m_2, at time t. I'm

18:41 going to do the data association by computing p sub t and q sub t as

18:48 the assignment that minimizes the distance of p_(t-1) from

18:56 m_i plus the distance of q_(t-1) from m_j, right? So

19:03 for all possible choices of m_i and m_j that I

19:07 have, I have to compute this sum of distances, and the assignment that gives the

19:13 smallest sum should give the correct detections to attach to p_(t-1) and q_(t-1)

19:20 in order to get the next positions p_t and q_t. Okay, that's the idea.
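
(A small illustrative sketch of this closest-distance association, written as a brute-force search over all one-to-one pairings; this generalizes the two-object minimization just described and is an assumption of this illustration, not code from the lecture.)

```python
import math
from itertools import permutations

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def associate_by_distance(last_positions, detections):
    """Assign detections to tracked objects by minimizing the total
    distance to each object's previous position. Brute force over all
    one-to-one assignments, which is fine for a handful of objects."""
    best, best_cost = None, float("inf")
    for perm in permutations(detections, len(last_positions)):
        cost = sum(dist(p, m) for p, m in zip(last_positions, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] is the detection assigned to object i
```

(For two objects P and Q, this tries (m_1, m_2) and (m_2, m_1) and keeps whichever pairing gives the smaller sum d(p_(t-1), m_i) + d(q_(t-1), m_j).)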

19:25 Of course, this will only hold true if we do the closest-distance association. Now, of

19:31 course, this kind of works well. But, you know, what we're not

19:36 accounting for is that the objects are moving. So what happens

19:41 if the two objects that we're trying to detect are, let's say,

19:47 moving essentially in a way that they will come closer to each other?

19:53 So if both of them move, sort of, towards the center location between

19:57 the two of them, we may end up getting two detections at the

20:02 next time point that are very close to each other. Now this then

20:05 becomes a lot tougher, right? Because, um, the distances, ah, from

20:12 the previous positions to the next detections, can become very, very similar.

20:19 So, which association do we, you know, sort of... which red dot do we associate with

20:26 which blue dot? Right. So distance alone may not give us a good answer.

20:31 So, um, this strategy of choosing the closest location may not necessarily translate well

20:40 in all applications. Now, imagine we had some more information that we could

20:48 leverage. Right. So as we're tracking the two objects, the two people in

20:53 this case, let's say that one of the objects is actually wearing, or the

21:00 appearance of the object is different between the two objects that we're looking at, right?

21:05 So in this simple example, you know, given this person wearing

21:11 a colored shirt compared to this person, let's say, as we try to solve

21:17 the data association problem, what if we were to look at our detection results and

21:23 say that, well, rather than just using the bounding box with one indicator, which

21:29 is the position of the bounding box, how about, during the data association, we

21:35 also use the information that we have about the object within the bounding box?

21:41 Right? So one example would be to say: why not compute the color

21:45 histogram, let's say, of the pixels in the bounding box, and use the color histogram

21:52 as a representation of the appearance of the object? Great. Now, if

21:57 we use that, then we can say my data association problem can now be posed

22:04 as not only solving for association by the closest distance of the previous object

22:15 position to the new object position, but also by maximizing a good match of the color

22:24 histogram of pixels within the bounding box of the detected object. So let's say the

22:31 color histograms h of P and h of Q are the color histograms for the

22:37 two objects P and Q. Now, of course, what I want to do

22:40 is to match not only the distance, you know, between, ah, p_(t-

22:49 1) and the new detection m_i, and q_(t-1) and the detection

22:53 m_j, but I also want to include the distance, uh, that I'm computing

23:01 for the histogram of P against the histogram of m_i, plus the histogram of Q against

23:09 the histogram of m_j. So we want to make sure that the

23:11 similarity between the histograms during data association is maximized, right? So we have one

23:17 more source of information now, that being, um, an association based on appearance,

23:24 essentially a color match. Now, this can be quite interesting.
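
(A sketch of what this appearance term might look like, using only NumPy; the histogram bin count and the weighting factor alpha are assumptions of this illustration.)

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized color histogram of the pixels inside a bounding box.
    `patch` is an H x W x 3 array of color values in [0, 255]."""
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return np.minimum(h1, h2).sum()

def association_cost(prev_pos, det_pos, prev_hist, det_hist, alpha=1.0):
    """Combined cost: spatial distance, minus a reward for a good
    appearance match. Minimizing this trades off proximity against
    color similarity; alpha weighs the two terms."""
    d = np.hypot(prev_pos[0] - det_pos[0], prev_pos[1] - det_pos[1])
    return d - alpha * histogram_similarity(prev_hist, det_hist)
```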

23:31 Of course, we're still left with the issue of, ah, misses and false alarms when we're doing

23:36 detection, right? So we talked about this before. So I may potentially get a false

23:40 detection, and I may actually not be able to observe one of the objects in

23:45 one particular frame. Right. I mean, this can still be an issue, but

23:51 we have some robustness, in the sense that doing data association is an important

23:57 step to make sure that our tracking algorithms are, you know, more robust, right?

24:07 Now, of course, um, what other kind of information can be used when

24:13 tracking objects? Well, since we're tracking objects, we actually know, temporally,

24:20 the different positions the object has taken over time. Now, as the object is

24:27 moving in space, potentially we have the ability to compute the velocity at which the

24:35 object is moving. So if I know that the object is at a particular

24:41 location at, let's say, time t sub 0, then, of course, the objects, each of them, will have

24:48 some velocity with which they will move and, as a result, potentially come to a

24:54 new position at the next time point, right? So the velocity, let's denote

24:59 it with the vector v. Now, of course, if I know

25:04 that vector v, then I have the ability to kind of know where the object

25:10 will be at a time t sub 1, say, right, going from t_0 to t_1.

25:15 And the way I can know that, or estimate that, is simply by solving, you know,

25:21 the simple equation where we know velocity is just the change in position over

25:27 time. So I can say that x at t_1, which is the

25:32 position at time point t_1, will be equal to the position at t_0,

25:40 you know, plus (t_1 minus t_0) multiplied by the velocity: x(t_1) = x(t_0) + (t_1 - t_0) * v.

25:47 And so this gives me the ability to kind of know where the object will be

25:52 at time point t_1. Well, this is great. Now, if I know

25:57 this, then I don't actually have to search my entire image for where, potentially,

26:05 the object is. It actually gives me a more bounded search space, to

26:11 say: hey, look for the object only within this area of the image. And I

26:17 said this is helpful because it will also help reduce the false detections that I may

26:24 be getting from my detector. So a more confined search space can help us

26:30 eliminate false detections and, as a result, of course, help improve our detection

26:36 accuracy. We get better detections, we are able to do better data association,

26:41 and as a result, hopefully do a better job of tracking.
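
(Continuing the sketch: a constant-velocity prediction with a gated search region. The gating radius here is an assumed, illustrative threshold, not something prescribed in the lecture.)

```python
import math

def predict_position(x_prev, v, dt):
    """Constant-velocity prediction: x(t1) = x(t0) + (t1 - t0) * v."""
    return (x_prev[0] + dt * v[0], x_prev[1] + dt * v[1])

def gate_detections(predicted, detections, radius):
    """Keep only detections inside a search window around the predicted
    position, which both bounds the search and suppresses far-away
    false detections."""
    return [m for m in detections
            if math.hypot(m[0] - predicted[0], m[1] - predicted[1]) <= radius]

def update_velocity(x_new, x_prev, dt):
    """Re-estimate velocity from the two most recent positions."""
    return ((x_new[0] - x_prev[0]) / dt, (x_new[1] - x_prev[1]) / dt)
```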

26:46 Now, as I do this, I can keep updating not only the position of the

26:53 object, but, now, if I know better where the object is, by doing a better

26:58 job of detection, I can also use the new location to go back and

27:04 update my velocity information about the object, so I have a better idea of how

27:10 to predict its next location. So now we have a data association strategy,

27:17 a data association strategy that potentially is becoming more and more powerful, which has gone from

27:24 not only just doing association by proximity, to doing association by appearance, right? So,

27:30 say, matching colors; there could be other features as well that we can use, but it's

27:35 a way of matching information that we extract from the pixels within the bounding box of

27:41 the detected object. And then, of course, ah, you know, now we

27:45 are also saying that we are going to improve our data association by using dynamics,

27:53 right? So not only by the closest-point match, but by using the prediction and

28:01 saying: what is the closest detection to the predicted location, where that object is

28:10 expected to appear, based on the velocity information that we have about the object?

28:16 Okay, so this now becomes quite powerful. And in fact, using this kind of

28:22 data association, we may even be able to deal with partial occlusions or temporary

28:34 disappearances of objects. So let's say we are unable to detect an object at one particular

28:39 time point. But I can still be able to predict, potentially, where that object will

28:47 appear at the next time point. Using my velocity information, I may be able

28:54 to continue doing prediction, using the last known velocity and previously known position of an object,

29:03 even in the case when an object fails to be detected by the detector at a

29:09 particular time point. So this becomes even more robust, right? So here's an

29:15 example, right? Again, we're using something quite simple: we are looking at the

29:22 color information of this particular ball in this example. But if you think about it,

29:28 we are able to actually track, and continue tracking, the ball even when the

29:33 ball actually goes outside of the frame of observation. Right. So this

29:42 is a simple example of how we can do tracking by detection, uh, with

29:47 some simple strategies. Okay, now, if you think about

29:51 our tracking algorithm as it evolved, we are using, one, sort of, the

29:58 detection, right, some kind of, say, shape information about the object of interest.

30:03 Two, we're using the appearance, that is, color information, or the color histogram, of the pixels

30:08 within the bounding box. And then, three, we're also using the dynamics, or some motion

30:16 information, about the object in order to continue tracking objects successfully. Right? So we're

30:23 using multiple types of information in order to solve our tracking problem. Okay, so,

30:30 um, tracking by, by, um, using all this information gives us a much more

30:36 robust, ah, solution to doing tracking. Now, of course, keep in mind, some of

30:42 this is based on prediction, right? Prediction is based on our understanding of the dynamics

30:49 of an object. And prediction, um, is our model. And using detection

30:58 as a reality check, we can still verify what our model is telling us, by essentially

31:07 observing what is happening in the image space. So say we had these two objects

31:11 in the scene. The objects are at a particular location at time point t_0.

31:17 So that's, let's say, the blue dot. Of course, we have our velocity.

31:21 And based on the velocity, we can predict where that object will be at

31:25 time point t_1. But at the same time, given an image that we get at

31:32 time point t_1, we have our detector, and our detector gives us an estimate of a

31:37 bounding box of where the object is. And given that bounding box, we have

31:42 a better estimate of where exactly the object is. So the yellow dots signify the detected

31:48 position. Right. Now, once we have all three, we actually also have the

31:55 ability to know, potentially, what our detection error is, or what our prediction error is.

32:02 And that's given by the difference between the green dot and the yellow dot.

32:07 So the difference between the prediction and the detection is an important piece of information that we

32:12 need to account for as we kind of get ready to apply our tracking framework at

32:19 the next time point. So we need to do some correction when we are

32:25 using dynamics for tracking. So here's the base case. Let's assume that we have

32:31 some initial prior that predicts the state in the absence of any evidence, right?

32:35 There's no observation as yet. But of course, we are instantiating, if

32:42 you will, a couple of variables that give us information about the object. Now, of

32:47 course, what are those variables? Well, in this simple example, we're talking

32:51 about the position and the velocity of the object that we're interested in tracking. Those

32:57 tell us what the state of the object is. Now, when we get

33:03 the first frame, we have our first observation, right? And the minute we

33:08 have an observation, we can update our estimate. We actually get the observed position of the

33:14 object. We're given, ah, some measurement, for which we actually have what we

33:20 consider to be the correct value. So, given the correct value, and given

33:26 our ability to predict that value, this gives us now an ability to correct our estimate at

33:35 the given time point. Right? So, given something that we actually have observed,

33:40 we can now use that information to predict something in the future. Of course, then,

33:46 in the future, the future becomes the present, right? So we have a new

33:49 observation at time point t plus 1. We then have the ability

33:54 to correct our information, and then predict for time point t plus 2. And we

33:59 keep repeating this process forward, over and over. Now, so let's come back to

34:07 looking at this and thinking about this as a very, very simple sort of

34:11 model system. Let's say we have a ball that we're interested in tracking. The ball

34:17 is at position x sub 0. And we say that, you know, it's

34:21 going to move with velocity v sub 0, a constant velocity. Okay. So based on

34:26 that, we certainly can predict where the ball will be at time point t sub

34:32 1. So at time point t_1, it'll be at a position x sub 1. Here's the predicted

34:38 position. Now, of course, when t_1 happens, we say,

34:44 okay, great. Now I have some image, and I expect the ball to be at

34:49 this position. So I can search within that area and say, okay, let me find it.

34:53 And, okay, turns out I found the ball at this location, y sub 1. So now

34:59 this becomes my measurement. Now, of course, given my measurement, I need to

35:03 somehow correct, you know, what I've estimated and assigned to x: my prediction

35:09 x sub 1 and my observation y sub 1, right. So which one is correct?

35:14 I have to somehow reconcile that the x_1 and y_1 values differ.

35:19 Because if I don't correct, I will end up, sort of, drifting, and

35:26 eventually lose track of where the ball is. Right? So I need

35:31 a way of correcting what this estimate is. And the idea of doing this correction

35:36 was proposed, ah, in a framework, a mechanism, first back in the 1960s

35:43 by Rudolf Kalman. And if we apply what he proposed, uh, you

35:51 know, this is confined within the framework of what we call the Kalman filter.

35:55 It's a simple recursive data processing algorithm that was proposed, and the idea

36:01 is to generate optimal estimates of the states, of the quantities, given a set

36:07 of measurements, given a set of observations. Now, optimal in what sense?

36:12 Well, optimal in a linear-system sense, which is, we're talking about state variables that

36:18 can be predicted, uh, within a linear framework based on observations. And,

36:26 of course, it's recursive in the sense that it's an iterated process: the state

36:31 values are updated after every measurement is made. Okay. And we'll look at this

36:36 in the context of our tracking problem. So let's come back to our simple

36:42 tracking problem, right? Tracking a ball. We have the ball, we have the position

36:47 x sub 0, and we have a velocity v sub 0. Okay. Now, of

36:52 course, this is our initialization at time t_0. And we will also have what

36:57 we call a measurement err... sorry, an estimation error. Right. So we will

37:04 call that e_est at time point t_0, the estimation error. And similarly, we will

37:09 instantiate another quantity, the measurement error, which is e_meas.

37:15 Okay. Now, x_0 and v_0 are our state variables. So what's

37:20 going to happen is that, let's say at this time point t minus 1, the case

37:27 is that our ball was at some position x_(t-1), and it got to

37:33 a point x_t at time t, given its velocity v_0. And of course,

37:43 in doing this estimation, we have an estimation error from time point t minus

37:49 1, and this estimation error, you know, that's for

37:55 our rough estimate at time point t minus 1. Now, remember, our motion

37:59 model is very simple, a linear model. It says that x, the position

38:04 at time point t, will be x sub t,

38:09 which is nothing but its evolution from x_(t-1), that being its position at

38:15 the previous step, plus delta t, the time difference, multiplied by the velocity: x_t = x_(t-1) + delta_t * v.

38:22 Now, of course, at time t, we're going to search, but only in

38:26 the vicinity of our estimated point. And let's say my observation tells me that the ball

38:32 is at y sub t. Now, in order to apply the Kalman filter to this

38:39 problem, there are three steps that we'll actually talk about, or address. First

38:43 and foremost, we compute what is called a Kalman gain. The Kalman gain is signified,

38:49 or denoted, by K sub G, and it's equal to the ratio of the estimation

38:57 error at the previous time point divided by that estimation error plus the measurement error:

39:05 K_G = e_est / (e_est + e_meas). Okay, now, what this really tells us is that, in general, the value of K sub

39:12 G is going to be between zero and one. And of course, if the previous

39:18 estimation error is actually zero, then the Kalman gain is zero too, because we're

39:25 computing zero divided by some nonzero quantity. On the other hand,

39:30 if the estimation error is actually much greater than our measurement error, then the

39:35 Kalman gain actually approaches close to one. Okay: if the estimation error from the previous

39:41 time step is much larger than the measurement error, then the Kalman gain will

39:47 be approximately one. Now, once we compute the Kalman gain, the rule

39:56 for the position estimate for the ball, which is x, its position at time

40:03 point t, is basically reconciling between our measurement y sub t and our predicted position at

40:14 time t, right? Which is x_pred at time t, the predicted position, and y

40:20 sub t is our measurement. So the way we reconcile those is: x at time t is the predicted x at that time

40:27 point, plus the difference of the measured position, right, and the predicted position, multiplied

40:36 by the Kalman gain: x_t = x_pred + K_G * (y_t - x_pred). So what does this do? It does a couple of things.

40:41 One, remember, if our estimation error at the previous step is basically zero, right?

40:47 So, in which case, the Kalman gain is zero. In which case,

40:50 of course, our corrected point is exactly the same as our predicted point at time

41:00 t, right? Which basically says that, if my estimation is so good, then

41:04 I don't need to worry about my measurement being different from my prediction. Right?

41:11 So my prediction is the correct thing. On the other hand, if my estimation error

41:17 is actually much larger than my measurement error, then the Kalman gain equals one. In

41:24 which case we're saying that, well, you know, I should really rely on my

41:29 measurement rather than my estimation, because my estimation error has been large. And,

41:36 of course, if it's in between the two, you come up with some sort

41:41 of linear combination of the predicted and observed points, and you'll generally lie somewhere

41:47 in between the predicted and observed points. So we might get a correction, which tells

41:52 us that this might be the actual position of the ball at time point t.

41:58 Okay. You know, of course, once we have resolved this, the final

42:03 step is to compute the new estimation error. And the new estimation error is simply

42:10 given by (1 minus the Kalman gain) multiplied by the previous estimation error: e_est(t) = (1 - K_G) * e_est(t-1). So this is

42:17 how we update the estimation error, and we repeat this process over and over.
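
(A minimal, self-contained sketch of this three-step predict-correct loop in Python, following the scalar form just described. The constant velocity and the fixed measurement error are simplifying assumptions of this illustration; in practice the velocity would be re-estimated, or made part of the state, as discussed earlier.)

```python
def kalman_track_1d(x0, v, measurements, dt, e_est, e_meas):
    """Scalar Kalman-style loop from the lecture:
      predict:  x_pred = x + dt * v
      gain:     K_G    = e_est / (e_est + e_meas)
      correct:  x      = x_pred + K_G * (y - x_pred)
      error:    e_est  = (1 - K_G) * e_est
    Returns the corrected position at every time step."""
    x = x0
    corrected = []
    for y in measurements:
        x_pred = x + dt * v               # prediction step (linear dynamics)
        k_g = e_est / (e_est + e_meas)    # Kalman gain in [0, 1]
        x = x_pred + k_g * (y - x_pred)   # reconcile prediction with measurement
        e_est = (1.0 - k_g) * e_est       # update the estimation error
        corrected.append(x)
    return corrected

# Example: ball starting at x = 0 moving at 1 unit/frame, noisy measurements.
print(kalman_track_1d(0.0, 1.0, [1.2, 1.9, 3.1, 4.0], dt=1.0,
                      e_est=0.5, e_meas=0.5))
```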

42:23 So the Kalman filter actually is very, very simple and at the same time very effective

42:29 in addressing the tracking problem, right? So it requires a prediction step, which

42:36 is: the previous position plus delta t multiplied by the velocity gives us the

42:42 predicted position. Of course, once we have that... once we get

42:47 a measurement at time point t, we update the necessary elements of the Kalman filter,

42:53 which is the Kalman gain, the actual position estimate at time point t,

43:02 and of course, the new estimation error at our, uh, time point.

43:08 And we use these quantities now to go ahead and go back to our prediction for

43:14 the next time point. And we keep repeating this process. So this is what the

43:18 Kalman filter is, and it's used extensively in simple tracking problems, and especially in the cases

43:27 where the dynamics of the object we're tracking are, ah, linear. This is

43:34 important to keep in mind: the dynamics of the object we are tracking have to be

43:38 linear in the context of applying the Kalman filter. All right, so why

43:46 don't we stop here for today? Um, and we'll continue our discussion next time,

43:52 in terms of looking at some of the other elements that are necessary, also...

43:58 or, it's not that they're necessary, but they're also utilized quite extensively as we

44:03 think about how we address the tracking problem in the context of computer vision

44:08 applications. All right. With that, we will stop here for the day.
