Transcript of gr-l06
====================

_0:11_: Hello again, and welcome to lecture six of the GR course. Before we get going, a couple of things about the notes.

_0:25_: One is: thank you to those who filled in the feedback form — some very useful comments there. One of the things that was mentioned was audibility, which I think it's very easy for me not to think about. And I think the room is against us there, because there's quite a low ceiling here, soft fabric material at the back, and a shelf, so there's a bit of a poor impedance match. So I think it is quite possible that I'm yelling down here and you still can't hear very well at the back. I should aim to project; but if I'm failing, then at the back just shout "volume" or something — just yell — and I think we can handle that.

_1:14_: Another thing that was mentioned was the issue of just the volume of material, and how one navigates around it, so I'll recap a bit and remind you. Each chapter has aims and objectives. The aims are the high-level things — you know, the point. The objectives are the things which are also useful but less exciting, and which are assessable, in the sense that they are things I can ask you to do in an exam; they structure what you should have in mind when reading through and reprocessing the notes after the lecture. If you look at the exercises at the back of the chapters, a lot of those are keyed to fairly specific objectives, so you can see that this exercise is in the service of that objective, and look through things with that in mind.

_2:14_: Another issue — and it is a defect of the notes that there is a lot of them — is that it's sometimes a little hard to see what the key thing is. So what I spent some time doing last weekend was adding key points to selected sections of the notes, and in the slides you'll now see, at the end of those sections, a few bullet points with just the highlights. Having said that, I'd encourage you not to look at those first, because I'd encourage you to go and do the same exercise for yourself: picking a section, asking "what am I supposed to be getting out of this?", and writing that down is in itself a very useful exercise — standing back a bit and seeing what the point of it was. But you can see my version in the slides as well.

_3:13_: I've also decided to put the slides up ahead of time rather than behind. There's essentially nothing in the slides that isn't in the notes, but depending on how you like to work — what you'd scribble on — you may want to use them.
_3:29_: And I've also put up, as I said in the Moodle posting, the compendium of all the exercises. The usual advice is that it's not useful to look at the answers too quickly — but you're Honours students, you know that, and you can do what you like with those. You can undermine yourself if you want, and I'm sure you won't. I put that up, and I mention it just to remind you that those things are there. It's not entirely obvious: you have to click on a tiny little arrow to see them. The contents are rearranged a bit just to make the page look less huge; they're all still there — basically three sets of things, in a couple of different formats.

_4:17_: Any questions? OK. There were other useful points in the feedback, which I'll come back to at various points — so that's not the only feedback I'll act on. Other things you think of: email me — it's all good.

_4:43_: OK. Where we got to last time was the end of section 3.1, and we were just encroaching on differentiation when we basically ran out of time. So, if I can find the slide... There's one of those key-point slides — what I thought the key point from that section was. So what I'm going to do this time is talk about how you do differentiation in a basis, in the case where the basis vectors that we're using to create our coordinates are changing over the space. We're first going to do that in flat space, and then we're going to do it in curved space — and the surprising thing is that the second step turns out to be the easier of the two. You'd think it was the other way around.

_5:57_: This is, to some extent, another case where you've done this before, but not in this notation. So there's a slight notational issue, and there's also a slight stepping back and seeing what it was you did before. Because what you did before was things like the Laplacian in spherical polars — a bit of a mess. But it's a bit of a mess because the $r$, $\theta$ and $\phi$ basis vectors are changing as you move over the space, and so when you differentiate the components of a vector in those coordinates you have to do all sorts of extra work to get the coordinate-independent change in the vector as it moves around the space. So this is about differentiating a vector field: you've got a vector field — for example the electric field — and you're asking, how does that change as I move around the space? The Laplacian is part of the answer.

_6:59_: Let's simplify things a bit, and talk instead just about plane polar coordinates — start off simple and work up, with plane polar coordinates.
_7:27_: The plane polar basis vectors are $e_r$ and $e_\theta$, and they are derived from the Cartesian basis vectors. $e_r = \cos\theta\, e_x + \sin\theta\, e_y$ (I have this written down, just to get the signs right), and $e_\theta = -r\sin\theta\, e_x + r\cos\theta\, e_y$. Now, you may say "I don't remember that $r$ being there" — and you don't. That's because the plane polar basis vectors you're used to are orthonormal vectors, where specifically that $r$ is removed in order that both be unit vectors. However, this is in some sense the more natural choice, without that correction. We could go into why, but that $r$ is there deliberately, and inconsequentially: it doesn't really matter how I define my basis vectors; in this case these ones are orthogonal but not orthonormal, for reasons which we can talk about if need be.

_9:02_: Now we want to ask: how do these change as we move around the space? And that's not hard. We can say $\partial e_r/\partial r = 0$ — $e_r$ doesn't vary, it doesn't depend on $r$ at all. And $\partial e_r/\partial\theta$ (there's a funny echo in here, I'm not sure whose speech that is) — anyway, $\partial e_r/\partial\theta$ is going to be $-\cos\theta\, e_x + \sin\theta\, e_y$... the other way round? Thank you: $-\sin\theta\, e_x + \cos\theta\, e_y$, which is just $(1/r)\,e_\theta$.

_10:21_: And so on. We can just walk through those four possibilities, differentiating the radial and tangential basis vectors with respect to $r$ and $\theta$, and get expressions for each. The remaining two are $\partial e_\theta/\partial r = (1/r)\, e_\theta$ and $\partial e_\theta/\partial\theta = -r\, e_r$. And one of the reasons why orthonormal coordinates are good is that you avoid these extra factors of $r$. So, no surprises there.

_11:08_: Now, what happens if we pick a vector $V$ and ask how that changes as we move around — as we move radially? As you move away from the origin, how does the vector $V$ change? That's $\frac{\partial}{\partial r}\left(V^r e_r + V^\theta e_\theta\right)$, which is $\frac{\partial V^r}{\partial r}\, e_r + V^r\, \frac{\partial e_r}{\partial r} + \ldots$ and so on. And notice, by the way, that I'm slipping in a notation here that I'll use occasionally: I'm indexing the basis vector and the component with the symbol $r$ rather than with an index 1, 2, 3. That's a slightly slangy way of indexing the $r$ component and the $\theta$ component; I've put that on the slide as well.

_12:26_: OK. Or we can just write that in index notation as $\frac{\partial V^i}{\partial r}\, e_i + V^i\, \frac{\partial e_i}{\partial r}$. And — oh, question.
_12:53_: [The question is about which coordinate we differentiate with respect to.] With respect to $r$, in this case — but it could be $r$ or $\theta$. So what we can do — maybe more generic, and exactly what I think you're suggesting — is say

$$\frac{\partial V}{\partial x^j} = \frac{\partial V^i}{\partial x^j}\, e_i + V^i\, \frac{\partial e_i}{\partial x^j}.$$

Is that what you meant? Yeah. And we could have written that down from the outset; I'm just sort of easing us into that expression.

_13:51_: OK, but notice: that's a vector. We're asking how that vector changes as you move around, and if the vector starts off like this at this position, and like that at that position, then there's a change in that vector. So the change in the vector is a vector — which is a number times a vector, plus a number times a vector. In other words, this $\partial e_i/\partial x^j$ is, not entirely surprisingly, also a vector.

_14:24_: Which vector is it? It's a vector, so we can express it in terms of the basis vectors. So we write $\partial e_i/\partial x^j$ as some components multiplying the basis vectors, and we write the components as $\Gamma^k{}_{ij}$ (making sure the lower indices go in the order $i$, $j$):

$$\frac{\partial e_i}{\partial x^j} = \Gamma^k{}_{ij}\, e_k.$$

And $\Gamma$ here is called the Christoffel symbol, or Christoffel symbols — no one seems to be quite clear whether it's "symbol" or "symbols". The Christoffel symbols are nothing more than the components of that vector in that basis.

_15:21_: [Is that basis the same as before?] Yes, it is — we're just picking a different index. This is a vector in that space, and so, being a vector in that space, it's expressible in terms of the basis vectors of that space. And we could also have just written that down from scratch — there was nothing stopping us doing that — but this is a way of motivating it.

_15:59_: [So $\Gamma$ is just a number?] Yes, exactly: it's a set of $n\times n\times n$ numbers. [Is it a tensor?] No — this isn't a tensor. It's not the components of a geometrical thing; it's just a set of $n\times n\times n$ numbers. Thank you.

_16:36_: So we can write — if we go back a bit and ask about $\partial e_r/\partial\theta$, which I'd write as $\partial e_1/\partial x^2$, calling $r$ the coordinate $x^1$ and $\theta$ the coordinate $x^2$ — that will be

$$\frac{\partial e_1}{\partial x^2} = \Gamma^1{}_{12}\, e_1 + \Gamma^2{}_{12}\, e_2.$$

I'm just illustrating what that sum looks like. And from above — we can look back a page and discover that $\partial e_r/\partial\theta = 0\cdot e_1 + (1/r)\, e_2$; in other words, $\Gamma^1{}_{12} = 0$ and $\Gamma^2{}_{12} = 1/r$. And that's how we calculate what these Christoffel symbols are, for a particular set of basis vectors.
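A quick symbolic check of that handle-turning — a minimal sketch, not part of the spoken lecture, assuming Python with sympy (the variable names are mine). It differentiates the plane polar basis vectors exactly as above and reads off the $\Gamma^k{}_{ij}$ by re-expressing each derivative in the $\{e_r, e_\theta\}$ basis:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Non-normalised plane polar basis vectors, written in Cartesian (e_x, e_y) components.
e = {
    'r':     sp.Matrix([sp.cos(theta), sp.sin(theta)]),
    'theta': sp.Matrix([-r * sp.sin(theta), r * sp.cos(theta)]),
}
coords = {'r': r, 'theta': theta}

# Matrix whose columns are e_r and e_theta; its inverse re-expresses a vector in that basis.
basis = sp.Matrix.hstack(e['r'], e['theta'])

# d(e_i)/d(x^j), expanded on {e_r, e_theta}: the coefficients are the Gamma^k_ij.
for i, e_i in e.items():
    for j, x_j in coords.items():
        gamma = sp.simplify(basis.inv() * sp.diff(e_i, x_j))
        print(f"Gamma^r_({i}{j}) = {gamma[0]},   Gamma^theta_({i}{j}) = {gamma[1]}")
```

The only non-zero values it prints are $\Gamma^\theta{}_{r\theta} = \Gamma^\theta{}_{\theta r} = 1/r$ and $\Gamma^r{}_{\theta\theta} = -r$ — exactly the values listed next.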
_18:02_: It's a turning-the-handle thing. A bit tedious, but it's the sort of thing that's very easy to do, and several of the questions in the exercises encourage you to just turn that handle. There are no thrills in turning that handle; it's just a matter of practising doing so and not getting lost.

_18:41_: So that means that, for plane polars, $\Gamma^1{}_{12} = 0$, $\Gamma^2{}_{12}$ is the same as $\Gamma^2{}_{21}$, and there's $\Gamma^1{}_{22}$, which you can go and calculate. We'll also sometimes write these as $\Gamma^r{}_{r\theta} = 0$, $\Gamma^\theta{}_{r\theta} = \Gamma^\theta{}_{\theta r} = 1/r$, and $\Gamma^r{}_{\theta\theta} = -r$. Again, this is a slightly slangy notation which I hope is clear: by "$r$" I mean the index corresponding to the $r$ coordinate — these match up.

_19:35_: So — any questions on that? That's mostly notational: another notational section, telling you something you do know, but in different notation.

_19:51_: But it allows us to define the covariant derivative. Because what we have, if we go back a bit, is

$$\frac{\partial V}{\partial x^j} = \frac{\partial V^i}{\partial x^j}\, e_i + V^i\, \frac{\partial e_i}{\partial x^j},$$

but we know that $\partial e_i/\partial x^j$ is equal to $\Gamma^k{}_{ij}\, e_k$. And so if we then decide to relabel that — where there are $i$s write $k$s, and where there are $k$s write $i$s — the second term becomes $V^k\, \Gamma^i{}_{kj}\, e_i$. All I've done there is this: these repeated indexes are dummy indexes, the $i$ and the $k$, so they can be anything, and I've decided to rewrite them simply with different letters. It's an exactly equivalent expression. But what it means is that I can take this $e_i$ out as a common factor and get

$$\frac{\partial V}{\partial x^j} = \left(\frac{\partial V^i}{\partial x^j} + V^k\, \Gamma^i{}_{kj}\right) e_i,$$

and discover that what we have there is a vector, with components given by that bracket.

_21:52_: And I'm going to write those components in a particular way: I'll write them as $V^i{}_{;j}$, so that $\partial V/\partial x^j = V^i{}_{;j}\, e_i$, with

$$V^i{}_{;j} = V^i{}_{,j} + V^k\, \Gamma^i{}_{kj},$$

where the notation $V^i$ with a subscript semicolon $j$ refers specifically to that expression, and the notation just introduced, $V^i$ comma $j$, refers to the plain, usual derivative: $V^i{}_{,j}$ is just $\partial V^i/\partial x^j$. And that's the last notational bad surprise: punctuation in subscripts. I'm sorry — I didn't make it up. And yes, the semicolon is a little tricky to write neatly, I agree; my handwriting improved massively when I started doing this.

_23:12_: I have a quick question for you here in a moment. But first, the key points of the previous section: the basis vectors vary over the space — we knew that — and that variation can be characterised, using this notation, by the Christoffel symbols.
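As a worked instance of that formula — added here for reference, using the plane polar Christoffel symbols just listed:

$$V^r{}_{;\theta} = V^r{}_{,\theta} + V^k\,\Gamma^r{}_{k\theta} = \frac{\partial V^r}{\partial\theta} - r\,V^\theta,
\qquad
V^\theta{}_{;\theta} = V^\theta{}_{,\theta} + V^k\,\Gamma^\theta{}_{k\theta} = \frac{\partial V^\theta}{\partial\theta} + \frac{V^r}{r},$$

and the extra terms, $-r\,V^\theta$ and $V^r/r$, are precisely the corrections that come from the basis vectors rotating as $\theta$ changes.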
_23:38_: So, a quick question. What sort of thing is the term in brackets in that expression — and in that expression? Who'd say it's a scalar? Who'd say it's a vector? Who'd say it's a one-form? A tensor? A matrix? Have a brief chat about it.

_24:46_: OK. With that reflection in mind: who would say that was a scalar? OK — a vector? A one-form? A tensor? A matrix? Two of those answers are, sort of, correct. In one sense, yes, this is a scalar, because if you pick an $i$ and a $j$, then yes, there's a number which corresponds to this. But at the same time, for a reason which I'm about to elaborate on, this turns out to be a tensor as well — or rather, the components of a tensor.

_25:35_: Because what we have here is a thing which linearly depends on the... right — I'm going to go through this in the way I expressed it in my notes, rather than busk a possibly confusing answer. The key thing is that this vector — this vector here, $\partial V/\partial x^j$ — is tied to the basis vector $e_j$: if you made $e_j$ twice as big, you'd make all the components half the size, and that vector would change accordingly. So there's a proportional relationship between those things. So this is a thing which depends on the variation of $V$ around the plane, and whose size depends on the size of the vector which corresponds to $\partial/\partial x^j$ — which is the basis vector.

_27:03_: So what we can do is define a (1,1) tensor, $\nabla V$. It's a (1,1) tensor, so it takes a vector argument and a one-form argument, and we'll define it through its action on a basis vector. I'm going to write it the other way around from what I just said, just to make sure it's consistent with my notes — the distinction is important. The action of that (1,1) tensor, when we give it one basis vector as argument, is $\partial V/\partial x^j$, as a vector:

$$\nabla V(\,\cdot\,, e_j) = \frac{\partial V}{\partial x^j}.$$

_28:12_: So we're saying: let there be a tensor, related to $V$, which we're going to call the covariant derivative tensor. And our definition of that tensor is through this slightly indirect route. Remember that a (1,1) tensor takes a one-form and a vector as arguments. And if we give, as the vector argument to that covariant derivative tensor — I'll get to the one-form in a moment — one of the basis vectors, then by fiat we say the value of that tensor is this vector here; and a vector is something which takes a one-form as argument.
_28:53_: So there is a missing one-form argument in both places. [A question from the audience — the acoustics in this room are not good.] Right. We could pick any basis vector here. So, what we've said — and remember the basis vectors are $e_i = \partial/\partial x^i$ — is: we just pick a basis vector, in this case $e_j$ (it could be any one; we picked $e_j$), and if we apply $e_j$, then we get the derivative of that vector $V$ with respect to the corresponding coordinate — the corresponding coordinate function.

_30:01_: OK. So, what we have here, then: as I've said, this notation with the dots and the missing arguments isn't conventional — there isn't really a completely conventional way of writing these — but one common way of writing this is to write it as $\nabla_{e_j} V$, where we write this vector argument as a subscript. And we often want to abbreviate that, so we end up writing $\nabla_j V$. That tends to be the short version of it, and it just means this tensor, $\nabla V$, with one argument filled in — written down in a slightly funny place, as a subscript.

_31:08_: So we have a tensor $\nabla V$, and we want to ask — stepping back a bit — what are the components, $(i,j)$, of that tensor? What we do is take $\nabla V$ and fill in $\omega^i$ and $e_j$. But we know that that is also written as $\nabla_{e_j} V$, with the $i$th component picked out by filling in the $\omega^i$. So, by filling in this $\omega^i$, we're extracting the $i$th component of this vector here. So, again: it's the $i$th component of $\nabla_{e_j} V$, which we also write as $(\nabla_j V)^i$ — and, in particular, by comparing it with what's on the previous sheet, that is $V^i{}_{;j}$.

_32:26_: So it's important to be clear what we're looking at at the various stages here. $\nabla V$ is a tensor. We're asking: what are the $(i,j)$ components of that tensor? How do we do that? We plug in $\omega^i$ and $e_j$ — the one-form and vector arguments of that thing. But we know what that is: we can, purely notationally, change that into this. The $\omega^i$ is extracting the $i$th component of the result — so it's the $i$th component of this vector, because this, by now, is a vector. We rewrite that, for convenience, not as $\nabla_{e_j}$ but just as $\nabla_j$. And we know, from this, that the other way we write that is with this $V^i{}_{;j}$ — which expands to this. So there's quite a lot packed into that line.
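That line, written out in full (a reconstruction of the chain just described, for reference):

$$(\nabla V)(\omega^i, e_j) \;=\; \omega^i\!\left(\nabla_{e_j} V\right) \;=\; \left(\nabla_j V\right)^i \;=\; V^i{}_{;j} \;=\; V^i{}_{,j} + V^k\,\Gamma^i{}_{kj}.$$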
_33:38_: We'll take a couple of goes through it, but I think it's important that you understand what different things are happening at each of those equals signs. So make sure you have a story in your mind for what that equals sign is doing, what that one is doing, what that one is doing, and what that one is doing.

_33:59_: Question. [Is that a semicolon at the end?] It's a semicolon, yes. [What is the meaning of the semicolon?] This $V^i{}_{;j}$ is this expression here: it's $V^i{}_{,j}$ — where we've introduced the notation that the comma is just the derivative of the $i$th component of the vector with respect to $x^j$, nothing covariant, just the straightforward derivative — plus the other term. And that is the set of components of the (1,1) tensor which is $\nabla V$.

_34:53_: [How is it (1,1)?] With both of those slots empty, there's a one-form slot and a vector slot. If we fill one in, then that thing there becomes a vector. [But that thing has a single one-form argument, so shouldn't it be a (0,1)?] No — it's a (1,0) tensor: it takes a single one-form as argument, whereas this takes a one-form and a vector as arguments. So the $\nabla V$ tensor — that whole thing — is (1,1). It becomes (1,0) once we fill in one of its arguments: what we're left with is one unfilled argument, and therefore it's a vector.

_36:04_: So remember that if you have a (1,1) tensor, it takes one one-form and one vector as arguments. It's sitting there with two holes at the top, ready to have things dropped in, the handle turned, and a number to come out. If we put something into one of those holes, there's only one hole left over. In this case, this thing here, $\nabla V$, is the thing with two holes at the top — a one-form hole and a vector hole — but we have filled one of those holes. So what we're left with, overall, is a thing with one one-form hole open, as it were. In other words, a vector.

_36:57_: Which vector? We're saying it's this vector. And I'm carefully writing here the argument of this vector — it being a vector, it has a single one-form hole. OK. So it's like we've sort of reversed into this definition of what the tensor is. We've said: there shall be a tensor. [A tensor with two slots — a one-form and a vector — and it is empty?] Yes. In other words, this thing here is a vector, which we've illustrated by showing that there's a single, one-form-shaped slot, which we filled in — and we know what that is.
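The hole-counting, summarised for reference:

$$\nabla V \ \text{is a (1,1) tensor};\qquad
\nabla V(\,\cdot\,, e_j) = \nabla_j V \ \text{is a (1,0) tensor, i.e. a vector};\qquad
\nabla V(\omega^i, e_j) = V^i{}_{;j} \ \text{is a number}.$$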
_37:43_: And so this whole business here is doing a bit of a notational dance to discover what the components of this tensor — this (1,1) tensor — are, and that they correspond to this thing that we worked out earlier.

_38:04_: OK. That might need going back through again, but with a bit of digestion you'll have it. It's another sort of thing that you can sit back in the bath and think your way through — I think you have to stare at it for a while and get some illumination that way.

_38:21_: [Just to clarify the notation: if you ever give us a nabla, and then something after the nabla, and then you superscript or subscript something, does that mean you have applied a basis-something to the thing that comes after the nabla — if that makes sense?] I know where you're going, but I think that, in this case, you should think of it like this: nabla here isn't really being an operator. I mean, it's not that it isn't an operator, but in this case there isn't a nabla operator that acts on things. We ask: what is the covariant derivative of $V$ — how can we talk about how $V$ changes as we move around the space? We've said there's a tensor which gives that answer. We could call it $Q$, we could call it $\Omega$, we could call it anything we like; but the name we give it, $\nabla V$, is a two-character name, if you like. [OK, so I can have a one-form, that's nabla of a one-form, and then I have an $i$ subscript because I have added the one-form?] We're going to come to the covariant derivatives of one-forms momentarily. But I think, right now, the useful thing is not to think of this as an operator: this is a two-symbol name for the tensor which corresponds to $V$ — which describes how that vector field varies as you move around the space. So I think the next bit may answer your question.

_40:17_: This is moving less quickly than I had rather hoped, but — another thing that you might see — no "might" about it, you will see it — is a notation like $\nabla_X V$. What does that mean? It means, as we can see here, this tensor $\nabla V$ with this $X$ argument. $X$ is a vector — that's important — $X = X^i e_i$, so that is going to be

$$\nabla_X V = X^i\, \nabla_{e_i} V = X^i\, V^j{}_{;i}\, e_j.$$

What have I done there? So here, with this tensor, its vector argument is written, conventionally, as a subscript.
_41:54_: In other words, it's the vector argument we supplied to this (1,1) tensor. And the tensor is linear in its arguments, so we can take the $X^i$ out and have this; and we know what to do with that — turning the handle here, we get $X^i\, V^j{}_{;i}\, e_j$. So this is the covariant derivative of $V$ with argument $X$, rather than one of the basis vectors. What this is doing is essentially asking: where before we asked how the vector field $V$ varies as you move in the $e_j$ direction, this asks how the vector field varies as you move in a more general, different direction. So we started off talking about the basis-vector directions; this is just a slightly more general remark based on that.

_42:54_: I want to get on to the next section. So: what we've done is to define a tensor field associated with $V$. $V$ is a vector field — it takes a different value at each point in the space — and we've managed to find a tensor field, a (1,1) tensor which takes a different value at different points of the space. We've called that tensor field the covariant derivative, and we've defined it in such a way that the answer it gives us, when we put in its two arguments, is the rate at which the vector field $V$ varies at that point in the space. Remember, with tensors you put in a vector and a one-form and you get a number out. In this case, the answer to the question, when you put these two things in, is: what is the $i$th component of the variation of $V$ as you change $x^j$?

_43:56_: So when I talked at the beginning about this idea that a tensor is a real-valued function of vectors and one-forms — this sort of thing is what I meant: a tensor is "put in a vector and a one-form, turn the handle, and a number comes out". In this case, that's what the answer is.

_44:16_: Now we can also, in the last five minutes... The point of a covariant derivative is that it's coordinate-independent. We've described this using arbitrary coordinates, but the answer is not a coordinate-dependent thing. That's the point of all this: we're able to answer the question "how does $V$ vary?" in a coordinate-independent way. But if you remember, if we had a function — that's a scalar field — then when we obtained the gradient of that scalar field, we got a one-form answer, which — because a scalar is coordinate-independent — is also coordinate-independent. So, for functions, the covariant derivative of a function is just that gradient operator we've seen before.
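In components — added for reference — a scalar field $f$ has no basis-vector term to correct for, so

$$f_{;j} = f_{,j} = \frac{\partial f}{\partial x^j},$$

which is just the gradient one-form we met before.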
_45:40_: I'm not going to go through it — I invite you to look at the discussion; I don't really have time to go through it in detail — but in equations 3.24, 3.25 and 3.26 we discover that the covariant derivative of a one-form is

$$p_{i;j} = p_{i,j} - \Gamma^k{}_{ij}\, p_k,$$

which looks just like the expression for the derivative of a vector, except that there's a minus sign here where there's a plus sign for the vector. There's a quick derivation of that there, which I'm not going to go through. And similarly with tensors of higher rank: you end up with one plus sign and one minus sign for each of the one-form and vector indices. Those expressions are on the next page.

_46:55_: And the last thing to remark is that the Leibniz rule does apply. If we take $p_k V^k$ — the contraction of a one-form with a vector, which is a scalar — then its covariant derivative does indeed end up being

$$\left(p_k V^k\right)_{;j} = p_{k;j}\, V^k + p_k\, V^k{}_{;j},$$

which is exactly the Leibniz rule in that case. I mention that just to reassure you that it still holds: it would be a strange and disturbing derivative operator for which that wasn't true.

_47:41_: So, the key things here — [reading the key-points slide] — mention why the covariant derivative is useful, which we'll pass over, and — yes, that's the point — when we take the derivative of a higher-rank tensor, we get one plus term and one minus term for each of the indexes.

_48:08_: I think it is useful to just quickly go through these key points, because we've covered a lot in this section. It is a bit of a hair-dryer lecture, this one, which is disturbing. But all the time, the question being answered has been a fairly straightforward one: given a vector field, how does it vary? How can you talk about how it varies, as you move around the space, in a coordinate-independent way? And we talked about how there is a tensor — a (1,1) tensor — which gives you the answer to that question. The question then becomes: how do we find the components of that tensor, the $(i,j)$ components? We discovered, by slightly reversing into it, that we could do that by simply considering the steps involved in differentiating the components and the basis vectors of our vector. And we had this interesting, rather strange notation with semicolons, and discovered the covariant derivatives of vectors, of scalars and of one-forms to be something we had sort of seen before. And although I had rather hoped to get on to the next section, I have managed to get through this section without rebellion on your part.
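For reference, the three results from this section gathered in one place (the higher-rank pattern follows the same rule: one $+\Gamma$ term for each upper index and one $-\Gamma$ term for each lower index):

$$f_{;j} = f_{,j},\qquad
V^i{}_{;j} = V^i{}_{,j} + \Gamma^i{}_{kj}\, V^k,\qquad
p_{i;j} = p_{i,j} - \Gamma^k{}_{ij}\, p_k.$$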