Well, folks, this is the first of, I think, two supervisions, but there's also a tutorial. It's never been clear to me what the difference is between these things, because it's not as if we do anything particularly different in each of them. I have at various times tried to think whether there's an exciting thing we could do in this context, and I've never really thought of one that convinced me, because the point of all those exciting things one might do is to find some way of encouraging a conversation.
Because the basic idea is: there are things you don't understand, it's good to ask, and I can try to sort out those problems, which might be from the last lecture, or might be from lecture 1, because there might be a block right at the very beginning of the whole thing, or maybe there's a little bit of tuning at the end that you just smoothed over.
So this is the latest version of doing the useful thing here, which is that I put up a Padlet for questions, and thank you to those who did that. Now, no one "liked" any of them, which is a pity, because in this context that's quite a useful button: it says, you know, this is an idea, I have this question too. So maybe when we do this for the next supervision or tutorial I'll encourage you again to do that, but I think they are a useful way of proceeding.
Because it's always easier to ask a supplementary question than it is to ask the initial question with your hand up. So I think the way of proceeding is to talk about these questions here, which I think illustrate quite small but important road bumps, and I hope we'll use those as a way of asking further questions, digging into other things that are a bit obscure. And if you have any, feel free to add things to the Padlet even while we're talking, or just put your hand up and ask a question the old-fashioned way: "I have this question."
OK, let's go through those: the vectors and one-forms thing. Yes, and thinking about this, the way that I introduce vectors, one-forms and tensors does seem a bit self-referential. It seems as if it's building on itself before it has the opportunity to. And I think that's partly because the definition of a tensor, as something which takes N one-forms and M vectors as arguments and so on, is presented to you before I've said what vectors and one-forms are. So it feels as if there's an inconsistency here, something missing in this definition, and I think it's just a matter of terminology. So let's go through that, in an order which doesn't confuse things. Can we see that? Nope. Oh God.
So let's see. I think the first thing is to say that there exist sets of things which we'll call tensors. And there are multiple sets of these tensors, which have the structure, first of all, that they form vector spaces: each of those sets of tensors, which we'll call the set of (N, M) tensors, is a vector space.
And if you remember from the beginning of Part 2, simply by saying that, I've said quite a lot about what these things called tensors are. OK: I've said you can add them together to get another tensor, you can multiply by a scalar, there exists a unit, there exists an inverse, and there exists a zero. So I've said a lot about these things just by writing that down, OK? And I'm saying that there are multiple such sets: for each of these N and M, and I haven't said what N and M are yet, this thing called the set of (N, M) tensors is a vector space.
And then, for reasons which will emerge as important later on, I'm going to give special names to two of these spaces. The (1, 0) tensors are called vectors, and the (0, 1) tensors are called one-forms. That's just a conventional name I'm giving to those two vector spaces, nothing more. Oh, and the (0, 0) tensors: we'll call the members of that vector space scalars. We've done no maths there; we've just labelled things.
And it's at this point that we can, in a sense, legitimately give the definition that we sort of started off with in the notes: a tensor is a function. Now, even at that point I'm already giving quite a lot of information: a function is a thing which takes things and turns them into other things. That sounds like a baby definition of a function, trivial and silly, but it's a very general thing. We're used to thinking of functions; we learn functions at school as things like x², where you take a number and another number comes out. So it's easy to fall into the rut of thinking of a function as something which manipulates numbers. But a function, in mathematical terms, is a thing which maps things to other things. It's a very general notion. So we're saying a tensor is a function.
Components? So: how to write the components of a tensor in terms of the metric, that's a little way down the line here. We've got about half a chapter to go before we get to that, so we haven't got nearly as far as talking about components yet. But we know that there will be components.
Yeah, yes: in Part 2 we go on to talk about components quite quickly. The fact that we are saying this is a vector space means that we're importing all that technology: we know that we will at some point be talking about components and bases. There will be a basis which spans each of these spaces. So yes, we will be talking about components eventually, because we've talked about vector spaces, but that's still a little while away, because we haven't yet even really got a notion of what a tensor is. It's important to be able to do that, but not yet.
So, a tensor is a function, which is something that turns things into other things. So the question is: what does it turn into what? And the domain, sorry, the range, of all tensors is the real line. So from all the things that a tensor could be, we're now down to things which map to the real line: to R, not R^n, just R. So a tensor is a thing which takes things and turns them into numbers.
What's the domain of tensors? The domain of an (N, M) tensor is, and I'll write it this way so that I can link to that question,

$$\underbrace{T^{0}{}_{1} \times \cdots \times T^{0}{}_{1}}_{N} \;\times\; \underbrace{T^{1}{}_{0} \times \cdots \times T^{1}{}_{0}}_{M}.$$

Now, I live in perpetual terror of getting N and M the wrong way around, OK? So I may correct myself in a moment. But that's a formal way of saying that a tensor not only has the real line as its range, but that the things which go into it, the things it turns into a real number, are these things we've just said we'd call one-forms and vectors.
You said that vectors and one-forms are just functions that take something into a real, yes? But then it seems very weird to say that the domain of a tensor is one-forms, which effectively give real numbers. Surely you can't put a scalar into it?

If I'm following you: what goes into the tensor isn't the result of the application of the one-form, it's the function itself. This is why I'm being quite careful here. This is a function which takes functions as arguments. That's why I was being very general about "things and things": the arguments are functions, not the scalars you get when you apply them.
So what this means: a (1, 0) tensor is called a vector. By this definition, that means that a (1, 0) tensor is something which maps the set (rather informal notation) of (0, 1) tensors, that is, the set of one-forms, to the real line. So a vector is something which maps the set of one-forms to the real line. It's not the result of applying that function, but the function itself. That sounds very strange, a function which takes a function as argument. But in a sense that's why we introduced, and it occurs to me only now, talking about it, that perhaps this is why we introduce this terminology of vectors and one-forms quite promptly: because we want to immediately start thinking of those vectors and one-forms as pointy things.
So in this sense, what the tensor is doing is taking... yeah, that's why we jump, not to this rather formal way of thinking about it, but to the terminology of vectors and one-forms, because at this point we want to immediately start thinking of those as geometrical objects. So in that case, a vector is something which takes a... a vector, no, a one-form, sorry, into a number.
So, does that...? You're still not looking totally convinced.

Because you said that it takes a function, and a scalar is a function. But you can't put a scalar in as an argument? A (0, 0) tensor is a function, yes, but can you put it as an argument to a tensor? You can't, no.
So, sorry, yes: this definition of a tensor says that we're going to restrict ourselves; tensors don't take arbitrary tensors as arguments. We've said not only that the range of a tensor is the real numbers, but that the arguments of tensors are going to be, not arbitrary tensors, but specifically one-forms and vectors. So you don't get to give a (2, 0) tensor as an argument to a tensor. We say so; we declare it. It could be otherwise: you could imagine a structure, and quite probably there are, in the heads of mathematicians, structures where you have what look like tensors which can take other tensors as arguments. We don't do that. So this is quite a narrow definition: we've narrowed the general notion a lot, first by restricting the range to the real line, and secondly by restricting the arguments to only vectors and one-forms.
And we have divided up the tensors into things which take so many of one and so many of the other. So the (2, 0), (2, 1) and (0, 2) tensors are all separate vector spaces; in principle, they have nothing to do with each other.
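To make that narrowing concrete, here is a minimal sketch in Python (my own illustration, not anything from the notes; representing one-forms and vectors by plain component arrays in some agreed basis is an assumption of the sketch). The point is only the signature: a (1, 1) tensor accepts one one-form and one vector and returns a single real number, nothing else.

```python
import numpy as np

def make_tensor(components):
    """A (1,1) tensor built from an N x N array of components (illustrative)."""
    def T(p, A):
        # p: components p_i of a one-form; A: components A^j of a vector.
        # The range is R: one real number comes out, never an array.
        return float(p @ components @ A)
    return T

T = make_tensor(np.array([[1.0, 2.0],
                          [0.0, 3.0]]))
p = np.array([1.0, -1.0])   # a one-form
A = np.array([2.0, 1.0])    # a vector
print(T(p, A))              # 1.0, a single real number
```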
At this point I'm worrying that I'm starting to talk around in circles enough to make this more confusing than it is. I think the thing I want to get over is that, yes, the definition as presented in the notes does appear to be uncomfortably self-referential, and it seems to start talking about vectors before we say what vectors are. And I think what this might illustrate is that if you try to avoid that, by avoiding the word "vector" until a little bit later, you end up with an explanation which is possibly more formally correct, but a bit more confusing. Does that feel better, in a sense? I'm not seeing any nods or shakes of heads, or tears, so...
Since I mentioned this situation here with outer products, I think it's fairly natural to go on and talk about the tensor product, because again, that's another thing that looks more exotic than it really is. It's a way of building up a tensor from other tensors. And the outer product, or tensor product, appears in multiple contexts in mathematics. It basically means just putting two things together. In most of the cases where you take an outer product, you're just jamming things together: you're not interleaving them, you're not doing anything complicated, you just take the two halves, put them next to each other, and deal with the pair together. That's the intuition that is often behind the notion of the tensor product, or outer product, in mathematics.
So in this case: say we had the function f(x) = x², and the function g(x) = 2x + 1, for example. (I'm making this up as I go along, so I may tie myself in knots here.) Each of those is a function which maps the real line to the real line. Nice simple functions, nothing exotic about them.
But what about the function f ⊗ g, as a function? What does that mean? The definition of that outer product in this context, for functions (and it's similar when you use it for other things), is that f ⊗ g is a function which takes two arguments, and what it does to those two arguments is straightforward. It is defined as the first function applied to the first argument, multiplied by the second function applied to the other argument:

$$(f \otimes g)(x, y) = f(x)\,g(y) = x^2\,(2y + 1).$$

And there's nothing more to it than that.
So the outer product in this context, when you apply it to functions, is just a way of combining them, because there are multiple ways you can compose functions. You could talk about f ∘ g, which is (f ∘ g)(x) = f(g(x)), and so on. (Have you seen that notation, f ∘ g? Yeah.) So this is just one of the ways you can combine functions into other functions, and it means nothing more than that.
So: you make a two-argument function from two one-argument functions. Specifically, you apply the first function to the first argument, and the second function to the second argument, and multiply the numbers together.
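Here is that rule transcribed directly into Python, as a sanity check (a minimal sketch; f and g are the example functions from above):

```python
def f(x):
    return x**2        # f: R -> R

def g(y):
    return 2*y + 1     # g: R -> R

def outer(f, g):
    """Build a two-argument function from two one-argument functions:
    (f ⊗ g)(x, y) = f(x) * g(y)."""
    def fg(x, y):
        return f(x) * g(y)   # first function on first argument, second on second
    return fg

fg = outer(f, g)
print(fg(3.0, 1.0))   # x^2 * (2y + 1) = 9 * 3 = 27.0
```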
And similarly, that means that if you have a vector V, which is a function taking a one-form as argument, then V ⊗ p̃, where p̃ is a one-form taking a vector as argument, is a function which can take a one-form and a vector as arguments. Let's call them q̃ and A. By definition, we apply the first function to the first argument, and the second function to the second argument, and multiply the results (these are both numbers) together:

$$(V \otimes \tilde p)(\tilde q, A) = V(\tilde q)\;\tilde p(A).$$
So we're not doing anything more there than before; we're just doing it with specific functions, which are a vector and a one-form. And you can do this (we won't, but you can) with higher-rank tensors as well. So you could have V ⊗ T, and if T is a two-argument tensor, then V ⊗ T would be a three-argument tensor. So this is a way of combining two rank-one objects into a rank-two object.
And what's happening here with the outer product is also that idea of just jamming multiple spaces together to become the domain of the (N, M) tensor. So it's the same notion: a sort of straightforward jamming-together.
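In components, this is just numpy's ordinary outer product (again my own sketch, assuming we represent V and p̃ by component arrays): applying V ⊗ p̃ to (q̃, A) gives exactly the product of the two numbers V(q̃) and p̃(A).

```python
import numpy as np

V = np.array([1.0, 2.0])     # a vector
p = np.array([3.0, -1.0])    # a one-form

Vp = np.outer(V, p)          # components (V ⊗ p)^i_j = V^i p_j

q = np.array([0.5, 1.0])     # a one-form argument
A = np.array([1.0, 1.0])     # a vector argument

lhs = q @ Vp @ A             # (V ⊗ p)(q, A), via the component array
rhs = (q @ V) * (p @ A)      # V(q) * p(A), the two numbers multiplied
print(lhs, rhs)              # 5.0 5.0: the same number both ways
```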
So if we look at that particular question there: what we're doing here is just what we did above, but for the special case where the vectors we're talking about are... why is that blue, I wonder?... the special case where the vectors in question are the basis vectors. So there's this T, and there's a couple of things going on here, one of which is the question of components.
This object, e_l ⊗ e_m ⊗ e_n: that's a function which takes what as arguments? Three one-forms, yes. So let's call them p̃₁, p̃₂ and p̃₃. It is therefore a (3, 0) tensor, purely because it takes three one-forms as arguments; that's the definition. And the value of that function applied to those arguments is

$$(e_l \otimes e_m \otimes e_n)(\tilde p_1, \tilde p_2, \tilde p_3) = e_l(\tilde p_1)\; e_m(\tilde p_2)\; e_n(\tilde p_3).$$

But we're doing a little bit more here, because the tensor we actually want... so we've defined a tensor there, a (3, 0) tensor, for each of every possible value of l, m and n. The tensor that we actually want in this case is a linear combination of those. So it's some multiple of e₁ ⊗ e₁ ⊗ e₁, plus some multiple of e₁ ⊗ e₁ ⊗ e₂, plus some multiple of e₁ ⊗ e₁ ⊗ e₃, and so on. So it's a large sum.
Oh, sorry, the one in the question is different. That's OK; switching to what's actually there, with the one-form in the last place:

$$T = T^{lm}{}_{n}\; e_l \otimes e_m \otimes \tilde\omega^{n},$$

and that's a linear combination of N × N × N terms. So there are N × N × N terms in that sum. (What? Ah, thanks, yes: a tilde on that one.)
And that means that when you apply T to things, we just drop the one-form and vector arguments into the argument positions of each of these things, do the straightforward real-number applications, multiply the results together, and add them up. So there's two things happening here. First of all, in the top line, we're defining this tensor as the sum of N × N × N outer products, with a different coefficient, a different multiple of each of those, which we can choose. And, just because... I mean the same thing in both cases; it's the same thing, yes.
So, rather than writing it out, and to slightly avoid confusion: I keep changing my mind between writing ⊗ and a bare cross, and it looks a bit like an x. So, yeah.
So there's two things happening here. First: at most places this is just simple multiplication of real numbers, so there's nothing exciting there; that's just 2 × 3. And over there, that's our sum of multiple terms, with coefficients given by this matrix here. Essentially, we'll call it a matrix for the moment: the elements T^{lm}{}_n, with the summation convention. Is it a matrix? Well, right there we're thinking of it as a matrix; what it really is, we're going to discover.
OK, so temporarily we're going to think of it as just a matrix: all it is is an array of N × N × N numbers. And then, when we ask what is the value of that tensor, what is the number to which this evaluates: we just drop in one-forms and vectors. If we specifically drop in basis one-forms and basis vectors, then, by the definition of the outer product, we get this. Well, these are simple numerical applications.
The next step here is to recall that our choice of basis one-forms is always going to be dual to our basis vectors, meaning that we choose the basis one-forms such that, in this situation, they will always evaluate to the Kronecker delta:

$$\tilde\omega^{l}(e_i) = \delta^{l}{}_{i},$$

which is 0 unless l equals i, when it is 1. So we can replace each of these terms by δ^{i}{}_{l}, δ^{j}{}_{m}, δ^{n}{}_{k}.
well, say L.
So we add so that is.
It is the summation sign in
front of that which sums for L.
M&N. From one to N. When we do that
for L that term there will be 0.
Except when L is equal to I.
So the only 10 that will survive from that?
Sum is when L is equal to I.
Who do the sum of M exactly the same.
The only term that survives is when M is
equal to J and we'll do the sum over N.
Go in terms of survive is the
one where N is equal to K.
So we discover, by virtue of the definition of the outer product, that when we do specifically this, and give not just arbitrary one-forms and vectors as arguments to the tensor, but specifically basis one-forms and basis vectors, then we turn the handle, and what we get out is the matrix we started off with, up there:

$$T(\tilde\omega^{i}, \tilde\omega^{j}, e_k) = T^{ij}{}_{k}.$$

And it's at this point we see: oh, that's not just a matrix. We're going to say those are the components of the tensor.
And in that context we write this matrix carefully, with the indexes staggered. So although we started off saying, well, it's just a set of numbers, we discovered that this set of numbers can be thought of in a slightly enriched way: as the components, because that's what they are, of the tensor. And all tensors have components: for any tensor, you could do the same thing and get that answer there. That doesn't depend on this construction; it's true for any tensor. What's special about this is that it's a particular way of building up a higher-rank tensor, which we can see to be consistent, because when we do this to that tensor, what we get out is what we started with.
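Here is that turn-the-handle computation done numerically, as a sketch (my own illustration, with einsum standing in for the summation convention; the index placement follows the T^{lm}{}_n example above). Feeding the tensor basis one-forms and a basis vector picks out a single component, which is exactly the consistency just described.

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
T_comp = rng.standard_normal((N, N, N))   # the 'matrix' of numbers T^{lm}_n

# With dual bases, the component arrays of e_k and omega^i are identity
# columns/rows, so omega^l(e_i) = delta^l_i automatically.
basis_vec = np.eye(N)
basis_form = np.eye(N)

def T_apply(p1, p2, A):
    """T(p1, p2, A) = T^{lm}_n p1_l p2_m A^n (sums over l, m, n)."""
    return np.einsum('lmn,l,m,n->', T_comp, p1, p2, A)

i, j, k = 1, 2, 3
print(T_apply(basis_form[i], basis_form[j], basis_vec[:, k]))
print(T_comp[i, j, k])   # the same number: T(omega^i, omega^j, e_k) = T^{ij}_k
```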
Now, let me say this the right way around: not all tensors are the outer product of lower-rank tensors, but all tensors can be decomposed into a sum of outer products like this. Below equation 2.9: that is a sum of N × N × N terms, because these are just indexes which run from 1 to N. So, if you recall:
when we write T = T^{lm}{}_{n} e_l ⊗ e_m ⊗ ω̃^{n}, what we mean is

$$T = \sum_{l=1}^{N}\sum_{m=1}^{N}\sum_{n=1}^{N} T^{lm}{}_{n}\; e_l \otimes e_m \otimes \tilde\omega^{n},$$

because the summation convention is taking the summations for granted. OK, that's confusing, right? Ah, I see, yes: the problem is that I've used n in two different ways. I hadn't even thought of that. So yes, this n here is a dummy index, as opposed to this N here, which is the dimensionality of the vector space. Sorry; I'll make a note to myself to rewrite that.
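A tiny numerical aside on the remark above, that not every tensor is a single outer product even though every tensor is a sum of them (my sketch, nothing from the notes): the component array of a single V ⊗ p̃ is V^i p_j, a matrix of rank 1, so the identity array, for instance, cannot come from one outer product alone, but it is a sum of outer products of basis elements.

```python
import numpy as np

delta = np.eye(2)
print(np.linalg.matrix_rank(delta))     # 2: so not a single outer product

e = np.eye(2)    # basis vectors (as rows)
w = np.eye(2)    # dual basis one-forms (as rows)
rebuilt = sum(np.outer(e[i], w[i]) for i in range(2))
print(np.allclose(rebuilt, delta))      # True: a *sum* of outer products
```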
Questions?

What about vectors from a space which has more dimensions, right? So if the...

Yes, that's another restriction here on our initial definition of tensors as functions which take things to other things. We restricted it in various ways on the previous page, but another restriction is that the vectors you put in as arguments have to have the same dimensionality: they have to be vectors in an N-dimensional vector space, with the same dimensionality as the tensor's. So all of these one-forms and vectors are all in the same four-dimensional vector space.
No, they are all vector spaces: each of the (N, M) sets of tensors corresponds to a vector space, and they must all be built over an underlying space with the same number of dimensions. Alright, this doesn't quite work as I've said it, but that's a good point, which hadn't really occurred to me until now.
In our case: we're doing GR here, not maths. So we are primarily interested in the case where the vector space in question is Minkowski space, or the four-dimensional space-time we're interested in. So I think in some parts of this explanation I may have implicitly assumed that N is equal to four all the way through. But that's an important point: it's yet another way in which, from the wide range of possibilities for tensors, we've narrowed things down, chopped off possibilities here and there; and that's a further constraint.
So if you had a tensor, yes, in three dimensions: how would you evaluate it, once you've chosen bases, right? If you have, like, a choice of three basis vectors?

Well, I think, if I have heard you properly: what you're doing here is, for arbitrary i, j, k... so pick an i, j, k; when you apply those three, ω̃¹, ω̃², e₄, whatever, then what you get is the single number T^{12}{}_{4} here.
So if you were doing this by hand, as it were (you'd never do this by hand), you would work your way through all of the N dimensions here and get the N × N × N numbers here, and that would allow you to reconstruct that tensor. But if you changed your mind about what the basis vectors and basis one-forms were, you'd have to do the calculation all over again, and you'd get different numbers here, which would be right for the different set of basis vectors and one-forms.
I think this might be touching on your question: this is illustrating, in a way, the sense in which the components are basis-dependent. You change your mind about the basis, so you change the arguments, and so you end up getting different numbers here. But the tensor is the same in both cases. And that, in a way, is an answer to the other question, about geometrical objects: the sense in which a tensor has an underlying geometrical object associated with it.
I think there's a couple of things I mean there. One is: when I say that, I mean this is not a basis-dependent thing. Clearly that calculation there would be highly basis-dependent, the point being that if you change your mind about the basis and the one-forms, you change the components you get out. But what this ends up supporting is a notion of the underlying tensor, the T on the left-hand side there, which isn't basis-dependent. And so one sense of "geometrical" is just this somewhat negative sense: by geometrical, I mean not basis-dependent. It has a sort of continuity irrespective of the basis you choose.
But the other sense in which I'm talking about things being geometrical is that, for all these things, in the way this machinery is built up, there is a fairly natural interpretation in terms of pointy things and planes that we can use to think with. So the idea of a vector as a direction with a size is a geometrical notion. The idea of a one-form as a foliation of the space, which has a direction, basically, and a size, is a geometrical notion.
And that matters because where we want to end up is talking about general relativity in an approach where we're focusing on the shape of space-time, and the way we can do that is via a route which doesn't involve getting bogged down in components. The same old-fashioned way of introducing general relativity and differential geometry is, you know, to start with components from the beginning: it says a vector is something which transforms like a vector, and I never liked that definition, because that really does seem self-referential, in the sense that the idea there is... let's not go there, because I don't want to start teaching you that; I think even talking about it is a slightly mad way of approaching this. But that was the same old-fashioned way of doing it.
And then in the 70s, with Misner, Thorne and Wheeler in the vanguard of this change to the way differential geometry and general relativity were taught: they are the ones who emphasized the geometry-first approach, talking about things like tangent vectors and tensors defined in this rather abstract way, privileging them, introducing them first, and then from that discovering components, so that the behaviour of components under transformation, "transforms like a vector", comes out in the wash. It's a consequence, if you like.
can talk about these things
called geometric algebra,
which is.
I don't want to see fashion,
but it's a bit of a hobby in some circles
just now and that's the way it was.
Really. Does see geometry at all
and find a way of talking and and
and talks about the the How you can.
Define an algebra for directions.
How do you combine two directions
to give another direction?
And so on. And it's very lovely
and possibly very powerful.
And there's some folk could bang on the
table and say this is how George be taught,
but you know, there's quite a lot of
mathematical knowledge which has to
be rewritten and rethought and so on,
you know, for that to work.
And this and this approach is OK.
So if you are feeling
under under underemployed,
if you're feeling bored,
then Google Geometric algebra and
you'll find some sort of lecture
notes and things about that.
It's very nice.
Do not distract yourself with that, OK?
But I always mention it, because that is very much going all the way along the line which says geometry first. And how do you calculate with geometrical objects? Components are one way, and that's what we do in this case; it's the way folk do it in this context. So yes, I suppose in that sense I do mean two things by "geometrical". There's this general approach, which is shapes first, directions first. And when I say this is a geometrical object, I mean this is a component-independent object, a basis-independent object.
And yes, that's your question, I think.

Can we go over, like, change of basis? Because I understand, like...

Alright, OK: is there a particular part of the notes that you're thinking of there?

Ah, OK, it's relating to...

OK, so: is it the point before or after that statement that we're talking about here?

Yeah, I think so.

Right, so Section 2.7, let's see; let's go back and see what I say there. Changing bases. OK, so.
Ah, the famous words: "it is easy to show". Right. Thank you. "It is easy to show": I'm going to try to come up with that easy-to-show live, and I'm not convinced I can. Well, how do we do this? Let's have a go and see what happens.
So we have

$$\bar A^{\bar\imath} = \Lambda^{\bar\imath}{}_{i}\, A^{i}, \qquad \bar B^{\bar\jmath} = \Lambda^{\bar\jmath}{}_{j}\, B^{j}.$$

So then, let's say we write T as a (0, 2) tensor, so it takes two vectors. Then T_{ij} will be equal to T(e_i, e_j). Right. OK, and this might work now, if we rewrite that as

$$T_{ij} = T\big(\Lambda^{\bar\imath}{}_{i}\, e_{\bar\imath},\; \Lambda^{\bar\jmath}{}_{j}\, e_{\bar\jmath}\big).$$

What I have done there is simply done this same expansion for the basis vectors e_i and e_j, so e_i here is e_i expressed in the other basis. We have an expression like this, but...
We said, oh yeah, that another restriction on the functions that tensors are is that they're linear in each argument; we said that that's crucial. And what that means is that we can take these numbers out. So we can write that as

$$T_{ij} = \Lambda^{\bar\imath}{}_{i}\,\Lambda^{\bar\jmath}{}_{j}\; T\big(e_{\bar\imath}, e_{\bar\jmath}\big) = \Lambda^{\bar\imath}{}_{i}\,\Lambda^{\bar\jmath}{}_{j}\; T_{\bar\imath\bar\jmath},$$

which I think is what was required.
So all we've done there is use this general property of expanding a vector in terms of another basis, using the transformation matrix there, and then used the linearity to take those numbers out, leaving an expression which we recognize as a way of getting the ī, j̄ components of T in the barred basis. And I haven't dredged my memory for any of these things: in each of these cases, I simply put in the i's and ī's in the only places they could go that were consistent with the summation convention.
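If you want to convince yourself numerically, here is a quick sketch of that result (my illustration; Lam is just an arbitrary invertible matrix standing in for the change of basis, and einsum does the two contractions exactly as the linearity argument says):

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
T_bar = rng.standard_normal((N, N))   # components T_{i-bar j-bar}
Lam = rng.standard_normal((N, N))     # Lambda^{i-bar}_i (assumed invertible)

# T_{ij} = Lambda^{i-bar}_i Lambda^{j-bar}_j T_{i-bar j-bar}
T_unbarred = np.einsum('ai,bj,ab->ij', Lam, Lam, T_bar)

# The same contraction written as matrix algebra, as a cross-check:
print(np.allclose(T_unbarred, Lam.T @ T_bar @ Lam))   # True
```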
Why were we able to take the Λ's out, like in the previous one? OK, one final one; I'd better be quick, sorry.

Right: because part of the definition of tensors was that they were functions which are linear in each argument. And what does linear mean? Just to recap: f is linear means that

$$f(a x) = a\, f(x).$$

So the Λ's here: each of the components of that matrix (it's an N by N matrix) is a number. So in this thing here, each Λ is a number, and therefore can come out.
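Spelled out for the two-slot case (a restatement of that recap in the same notation, nothing new), linearity in each argument says

$$T(aA + bB,\; C) = a\,T(A, C) + b\,T(B, C),$$

and likewise in the second slot; each $\Lambda^{\bar\imath}{}_{i}$ is precisely such a number a, which is why it passes through the tensor.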
OK, I'll see you... I think I see you next Wednesday, and I think we're in a different room.