1:09

In one direction we are going from lower length scales to higher length scales, and information passed in that direction is called homogenization.

And in the other direction we are going from higher length scales to lower length scales, and that information flow is typically called localization.

In homogenization, we are largely dealing with effective properties that need to be communicated to the higher length scale. In localization, on the other hand, we typically concern ourselves with how changes or conditions imposed at the higher length scale affect the spatial distribution of fields at the lower length scale.

Right away we can see that both homogenization and

localization involve substantial computational effort.

1:57

And there is a great need, indeed a critical need, to find efficient ways to accomplish this task.

Also, before we go into too many details, it's important to recognize that homogenization is essentially a special case of localization in many ways, because localization is the harder of the two problems.

And if you can really solve the localization problem efficiently,

you automatically have embedded in there a solution for homogenization.

The theory focuses initially on localization, but as part of developing the localization theory you will recover the homogenization solution.

So let's go into some details, but before we do, we need to establish conventions. The conventions and notations used here are the standard ones of tensor theory, and in particular of solid mechanics.

So for example, a boldface symbol like v here represents a vector, and it has components. The basis vectors are represented by boldface symbols, while the components themselves are represented by normal (non-bold) symbols; they are scalar components. All tensors, vectors and higher-rank tensors alike, can be expressed in terms of such components.

3:21

We also use the Einstein notation, wherein we hardly ever write the summation sign; the summation sign is omitted. The rule there is that whenever an index is repeated, there is an implicit summation over that index. We just don't have to write it; it is understood to be there.

Using the Einstein notation makes writing the equations a little simpler.
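As a concrete illustration (a sketch, not from the lecture itself), the contraction t_i = sigma_ij n_j, the traction on a plane with unit normal n, can be evaluated with NumPy's `einsum`, where the repeated index j is summed implicitly, exactly as in the Einstein convention:

```python
import numpy as np

# A symmetric stress tensor (values are illustrative).
sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])
# Unit normal of a plane.
n = np.array([1.0, 0.0, 0.0])

# t_i = sigma_ij n_j : the repeated index j is summed implicitly.
t = np.einsum('ij,j->i', sigma, n)
```

The index string `'ij,j->i'` mirrors the index notation directly: j appears twice on the left, so it is contracted away.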

For example, the most common field equation in solid mechanics is the equilibrium equation, and the equilibrium equation is expressed as shown here. In it, sigma_ij is the second-rank stress tensor, and b is the body force per unit volume applied on the body.
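The equation itself is on the slide rather than in the transcript; in standard index notation (a reconstruction, using the symbols just defined) the equilibrium equation reads:

```latex
\sigma_{ij,j} + b_i = 0,
\qquad \text{i.e.} \qquad
\frac{\partial \sigma_{ij}}{\partial x_j} + b_i = 0 .
```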

The stress tensor is defined by the schematic; it tells us the traction on three orthogonal planes.

4:19

More precisely, it gives the traction vector on each of three orthogonal planes, and it turns out that the stress tensor is symmetric. That is why you are able to switch the indices i and j.

Now, this is very basic background in terms of notations, conventions, and field equations. If we want to apply this to the homogenization problem, we now look at a heterogeneous microstructure.

In this case, we have two phases, and each phase has a constitutive law, a material law that describes how the stress in that phase is connected to the strain in that phase.

In this case, this would be Hooke's law, the elastic constitutive law.
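In index notation, the elastic constitutive law for each phase p takes the standard form (reconstructed here, since the slide is not reproduced in the transcript):

```latex
\sigma^{(p)}_{ij} = C^{(p)}_{ijkl}\, \varepsilon^{(p)}_{kl},
\qquad p = 1, 2,
```

where C^(p) is the fourth-rank elastic stiffness tensor of phase p.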

For plasticity, one has to suitably modify these equations, and the modified equations are expressed here.

5:39

Now, at the higher length scale, there is a very deep mathematical theory that allows us to formulate the constitutive expression at the higher length scale in a way that looks very similar to the expressions at the lower length scale. Except that the effective stiffness tensor in elasticity is simply not the volume average of the local stiffness tensors.

However the effective stress and effective strain indeed are nothing but

volume averages.
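In symbols, with angle brackets denoting a volume average over the microstructure (a reconstruction of the stated result):

```latex
\bar{\sigma}_{ij} = \langle \sigma_{ij} \rangle,
\qquad
\bar{\varepsilon}_{ij} = \langle \varepsilon_{ij} \rangle,
\qquad
\bar{\sigma}_{ij} = C^{\mathrm{eff}}_{ijkl}\, \bar{\varepsilon}_{kl},
```

while, as noted above, C^eff is in general not equal to ⟨C⟩.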

Again the mathematical theory that leads us to these results is

beyond the scope of this lesson.

And therefore the results are just presented without any derivations.

So the goal in most homogenization problems is to find the effective values, knowing the behavior of the local constituents. In other words, this information is given: C1 and C2 are given, and the microstructure is given.

6:40

As for the solution methodology: there have been many solution methodologies, because it is not an easy problem. But most of them use some sort of perturbation theory. In these perturbation theories, one expresses the field of interest, for example the strain field, as an average plus a perturbation. And of course, the volume average of the perturbation has to go to zero. The bar on top represents a volume-averaged field.

Now, it turns out that once you write this decomposition, one can express the perturbation in terms of a fourth-rank localization tensor applied on the average strain tensor.
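Written out (with symbols assumed here, since the slide is not reproduced), the decomposition and the localization tensor a are:

```latex
\varepsilon_{ij}(\mathbf{x}) = \bar{\varepsilon}_{ij} + \varepsilon'_{ij}(\mathbf{x}),
\qquad
\langle \varepsilon'_{ij} \rangle = 0,
\qquad
\varepsilon'_{ij}(\mathbf{x}) = a_{ijkl}(\mathbf{x})\, \bar{\varepsilon}_{kl}.
```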

This localization tensor is very important to us in homogenization.

It essentially has all the information we need for homogenization.

Because one can show that the effective stiffness tensor can be written in this form: essentially there is a reference value of the stiffness tensor, then the volume average of the perturbation of the stiffness tensor, and then the volume average of the stiffness perturbation projected by the localization tensor. This is an exact result.
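With a reference stiffness C^r and the perturbation C′ = C − C^r, the exact result just described takes the form (reconstructed from the verbal description; it follows directly from averaging σ = (C^r + C′)(ε̄ + ε′) and using ⟨ε′⟩ = 0 and ε′ = a ε̄):

```latex
C^{\mathrm{eff}} = C^{r} + \langle C' \rangle + \langle C'\, a \rangle .
```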

In essence, if you somehow know the localization tensor a, you know the rest of the quantities there, and you should be able to compute C_eff. Again, this result has not been derived here; the derivations are mathematically very dense and detailed, and need to be followed through standard textbooks in this field.

So the challenge of homogenization theory essentially comes down to finding this localization tensor, which is sometimes also called the polarization tensor.

Again, without going into the derivation, it turns out that one way to solve for the polarization tensor is to use the Green's function method. If you use the Green's function method, you get an implicit, or recursive, expression. Because the quantity of interest a appears on both sides of the equation, it is not easy to solve for it explicitly. In some way, you have to guess the value of a, feed it into this equation, get a new value, and then use it again in an iterative manner until you find the solution to this implicit equation.
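The guess-substitute-repeat strategy just described is a fixed-point iteration. A minimal sketch on a scalar stand-in equation follows; the function F and its numbers are hypothetical illustrations, not the actual Green's-function kernel:

```python
def solve_fixed_point(F, a0, tol=1e-10, max_iter=1000):
    """Iterate a_{k+1} = F(a_k) until successive values agree to tol."""
    a = a0
    for _ in range(max_iter):
        a_new = F(a)
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy implicit equation a = 1 + 0.5*a (exact solution a = 2):
a = solve_fixed_point(lambda a: 1.0 + 0.5 * a, a0=0.0)
```

The iteration converges here because the map contracts (the factor 0.5 is less than 1), which parallels the requirement below that the reference medium be chosen well.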

So that's the theory. Once you have the expression for a, we can put it back into the effective stiffness expression that we had on the previous slide and recover this expression. Because the equation here is an implicit equation, one way to write this expression is to keep substituting it into itself in a recursive manner, and essentially recover a series.

This is an infinite series; it has an infinite number of terms. But if the reference medium for C is selected properly, this could be a convergent series, at least in particular problems. If it is a convergent series, we have the advantage that we can truncate it at some point, and then we have an explicit expression.

Now let's look at one of these terms in detail, because we are interested in using this expression.

So if you look at this term, by its definition it is a volume integral. One can recast this term in this manner. The difference between the two expressions is that one would be called a Riemann integral and the other a Lebesgue integral.

10:44

So it's the same term expressed in two completely different ways. This allows us to use the 2-point statistics concept we learned in previous sections. For example, you should recognize that this is nothing but the 2-point statistics expressed in continuous form. It is a probability density, but when you multiply it by dh dh′, it becomes the 2-point statistics: the probability associated with finding local states h and h′ at two points separated by t, within the intervals dh and dh′.

Then, down here, there is a second expression, the discretized version. Once we have a discretized version, it carries the information in a form we know how to use, because the 2-point statistics appear here in a discrete fashion, using the symbols that we are familiar with, that we have been learning in this class.

And the rest of the terms, the terms that are not included here, just end up in these coefficients.

The advantage of this particular expression is that this term captures the underlying physics for a given material system. It depends only on the material system; it does not depend on the microstructure.

12:19

Because the terms that depend on the microstructure are here; they have all come into this part. So we have a set of coefficients that depend only on the underlying physics, not on the microstructure, and then a set of coefficients that exclusively capture the microstructure attributes and not the physics.

So this expression has a tremendous advantage. The advantage of the expression we showed on the previous slide is that now we can express the term of interest simply in this fashion, where the alpha_i are nothing but the principal components.

Again, remember that the transformation from the 2-point statistics to the principal components alpha_i is a linear transformation. So when it is a linear transformation, the expression still looks like a linear expression.

Now, that was only for one term, but we also know that every term in that series has a very similar expression. So if we combine all those expressions that are linear in the principal components, the effective property should also have an expression that is simply linear in the principal components.

A very complicated homogenization theory can thus be cast in this simple form. Where, again, to remind you: the A coefficients depend purely on the physics, they capture the physics, and the alpha coefficients capture the microstructure.

13:50

So the goals of the data science protocols formulated in this class are to identify the important terms that make the dominant contributions to these equations.

Because, again, remember that this could be a very large series. There could be many terms in the summation, because there are many, many possible combinations of n-point statistics that can enter this equation. We really would like an efficient protocol that will sort through all the combinations and find the most important, the most dominant, terms. And once we know what those terms are, we also need a protocol to estimate the values of these coefficients. We're going to call these coefficients influence coefficients. We want a protocol to obtain the values of these coefficients through some sort of regression technique or some sort of machine learning technique.
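As a sketch of that calibration step, the coefficients of a model of the form P_eff ≈ A_0 + Σ_i A_i α_i can be estimated by ordinary least squares; the data, shapes, and coefficient values below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pcs = 50, 3

# Principal-component scores for each microstructure (synthetic).
alpha = rng.normal(size=(n_samples, n_pcs))
# Hypothetical "true" coefficients: intercept followed by one per PC.
true_A = np.array([2.0, 0.5, -1.0, 0.25])
# Noise-free effective property generated from the linear model.
P_eff = true_A[0] + alpha @ true_A[1:]

# Design matrix with an intercept column, then ordinary least squares.
X = np.hstack([np.ones((n_samples, 1)), alpha])
A_hat, *_ = np.linalg.lstsq(X, P_eff, rcond=None)
```

In practice one would add regularization or cross-validation, and a term-selection step to pick the dominant terms, but the core estimation is this regression.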

In summary, the homogenization approach that we will follow in this class has three main steps. In the first step, we generate a calibration dataset. In this dataset, we have a representation of the microstructure and the effective, homogenized property of interest. One row is therefore one data point.

15:08

And there would be J such data points, so that's your calibration dataset.

We generate this dataset, and in step two we reduce the dimensionality of the microstructure representation. We now have a principal component representation of the microstructure, where R tilde is significantly smaller than R, and therefore this is a much better dataset to work with.

And finally, in step three, we establish a function between the principal components and the effective property, with the principal components as the input and the property as the output.
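Step two, the principal component representation, can be sketched with a plain SVD-based PCA; the array shapes and random data below are purely illustrative stand-ins for the J microstructure statistics of length R:

```python
import numpy as np

rng = np.random.default_rng(1)
J, R = 40, 500                 # J microstructures, R statistics each
F = rng.normal(size=(J, R))    # rows: (e.g. 2-point) statistics per sample

# Center the data, then take the SVD to get the principal components.
F_centered = F - F.mean(axis=0)
U, s, Vt = np.linalg.svd(F_centered, full_matrices=False)

R_tilde = 5                              # keep R_tilde << R components
alpha = U[:, :R_tilde] * s[:R_tilde]     # PC scores: reduced representation
```

Each row of `alpha` is the low-dimensional representation of one microstructure, ready to serve as the regression input of step three.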

15:54

In summary, we have learned that the concept of homogenization is useful in getting effective properties. We also learned that using perturbation theory and Green's functions produces a series expansion for homogenization, which can be very useful. The main use of this homogenization concept is in establishing structure-property linkages in multiscale materials design.

In the data science approach, this homogenization can be broken down into three steps.

And these steps involve generating the calibration dataset,

getting a reduced order representation of the microstructures.

And then finally,

finding the structure-property linkage in a low-dimensional form.

Low-dimensional, but very useful form.

Thank you.

[MUSIC]