So, it's not really helpful for science in general in some ways.

Another dimension of universality is universality of methods.

For example, classical financial models such as Geometric Brownian Motion,

originated in physical models of Brownian motion.

These models turn out to be mathematically equivalent to quantum mechanics.

This means, among other things, that we can use in finance many computational methods that work for quantum physics.
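As a minimal illustration (all parameter values below are arbitrary, chosen only for the demo), Geometric Brownian Motion can be simulated exactly on a time grid, and the Monte Carlo mean can be checked against the analytical expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, S0 = 0.05, 0.2, 100.0       # illustrative drift, volatility, initial price
T, n_steps, n_paths = 1.0, 252, 10000
dt = T / n_steps

# Exact GBM update: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
Z = rng.standard_normal((n_paths, n_steps))
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S_T = S0 * np.exp(log_increments.sum(axis=1))

# Monte Carlo mean should be close to the analytical E[S_T] = S0 * exp(mu * T)
print(S_T.mean(), S0 * np.exp(mu * T))
```

The same simulation machinery carries over directly to more general diffusions, which is part of the universality being discussed here.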

Also, some methods of applied mathematics,

such as singular perturbation theory and large deviation theory,

correspond to the so-called quasi-classical or WKB approximation of quantum mechanics.

Universality and unification of

different methods is also a current topic in machine learning itself.

For example, many choices of

regularization can be thought of as specific Bayesian priors.
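For instance, L2 (ridge) regularization corresponds to a zero-mean Gaussian prior on the weights: the ridge solution coincides with the MAP estimate under that prior. A minimal sketch on synthetic data (penalty strength and data sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(200)

lam = 0.5  # L2 penalty strength == precision of the Gaussian prior (unit noise variance)

# Closed-form ridge solution: argmin_w ||y - Xw||^2 + lam * ||w||^2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# MAP estimate: minimize the negative log posterior by gradient descent.
# -log p(w | y) = 0.5 ||y - Xw||^2 + 0.5 * lam * ||w||^2 + const
w = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (X @ w - y) + lam * w
    w -= 1e-3 * grad

print(np.allclose(w, w_ridge, atol=1e-6))  # the two estimates coincide
```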

Now, once we include regularization, any clear separating line between parametric and non-parametric approaches becomes blurred.

In this sense, I actually do not believe in truly model independent approaches.

Any regularization immediately brings model dependence back to some extent.

In the same way, modern ML approaches often become pragmatic rather than dogmatic in choosing between Bayesian and classical statistical methods for each particular problem.

Next comes the deep part.

It became a trend recently in the literature to develop

deep learning versions of various classical machine learning algorithms.

For example, people developed Deep Kalman Filters,

Deep Variational Bayes, and other deep algorithms.

So as we discussed a few times,

Deep Learning is very powerful but it also has tons of limitations.

In particular, it's very data and energy hungry.

It's also hard to interpret sometimes,

and therefore is risky to use.

Finally, it may not be the shortest path to victory.

We spoke about Reinforcement Learning and about how it can

offer, at least conceptually, a different paradigm.

Reinforcement learning focuses on the main task of optimizing the final objective.

You may indeed have many situations in which the environment is highly complex and hard to model, yet the action policy is very simple.

Therefore, the question of which paradigm is most appropriate, the forecast-focused methods of ML or the performance-focused methods of RL, is still open, at least to me.

Finally, as we saw in this course,

methods of Reinforcement Learning can be used not only to compute specific quantities, such as an optimal execution policy, but also to infer the dynamics of the market as a whole.

This produces RL-inspired market models for stock price dynamics.

Machine learning and reinforcement learning can be used as both a language and tools.

I often hear stories about how people in finance are

sometimes skeptical about using machine learning in their work.

They say, "Prove to me that it works."

While I understand what they mean by that, I think such replies are misguided.

To me, it sounds like "prove to me that the second-order Taylor expansion works."

The second-order Taylor expansion is one of

the most useful and universal tools of analysis in mathematics,

physics, and applied science.

It always works, provided the function we approximate is smooth and the third-order and higher terms can be neglected.
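For instance, here is a quick numerical check of that claim for a smooth function (the exponential around zero, with its known derivatives plugged in):

```python
import math

def taylor2(f, x0, h, d1, d2):
    """Second-order Taylor approximation of f around x0, evaluated at x0 + h."""
    return f(x0) + d1 * h + 0.5 * d2 * h * h

# exp is smooth: at x0 = 0 we have f(0) = 1, f'(0) = 1, f''(0) = 1
x0, h = 0.0, 0.1
approx = taylor2(math.exp, x0, h, 1.0, 1.0)
exact = math.exp(x0 + h)

# The error is of order h^3 / 6, roughly 1.7e-4 for h = 0.1
print(abs(exact - approx))
```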

The same holds for machine learning and reinforcement learning.

They always work, provided the data does not contradict their main assumptions, such as IID sampling or approximate stationarity of the data.

So, these are the same assumptions as used in classical statistics.

I leave aside for a moment non-IID cases such as, for example, time series.

So, because classical statistical methods are just special cases of more general machine learning methods, they give you a kind of lower bound for your machine learning algorithms.

Now, all you need to do is to increase this lower bound,

pretty much like the EM algorithm that we used a few times in this specialization.
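As a reminder of how that works, here is a minimal EM sketch for a two-component Gaussian mixture on synthetic data (all parameters illustrative): each iteration maximizes a lower bound on the log-likelihood, so the log-likelihood itself never decreases.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 1-D data from two well-separated Gaussians (illustrative parameters)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.8, 300)])

mu = np.array([-1.0, 1.0])          # initial means
sig = np.array([1.0, 1.0])          # initial standard deviations
weights = np.array([0.5, 0.5])      # initial mixture weights

def loglik(x, mu, sig, weights):
    dens = weights * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return np.log(dens.sum(axis=1)).sum()

ll_prev = loglik(x, mu, sig, weights)
for _ in range(50):
    # E-step: posterior responsibilities of each component for each point
    dens = weights * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    n_k = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    weights = n_k / len(x)
    ll = loglik(x, mu, sig, weights)
    assert ll >= ll_prev - 1e-9  # monotone improvement of the lower bound
    ll_prev = ll

print(np.sort(mu))  # recovered means, close to the true -2.0 and 3.0
```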

This is where you can experiment with different ML algorithms,

all the way up to deep learning,

deep reinforcement learning and so on,

as long as you have enough data.

Yet, most of the time in real life,

you have limited and noisy data,

data that are not quite stationary, have missing values, and so on.

This is where some prior information, or general guiding principles, can, in fact, be very useful.

So, because both finance and machine learning have so many similarities to physics,

and to a large extent are rooted in physics,

I thought it would make sense to summarize our course in some way as follows.

Many methods that we use in both finance and machine learning

have roots in physics and if we track these roots,

it often helps to better understand the assumptions of models.

In particular, classical financial models,

such as arithmetic or geometric Brownian motion, originate from physics.

They can be viewed as linear approximations of more general non-linear dynamics of Langevin type.
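To make this concrete, here is a sketch of Langevin-type dynamics with a hypothetical double-well potential (an illustrative example, not the specific model of this course), simulated with the Euler-Maruyama scheme; linearizing the drift around one equilibrium recovers a linear Ornstein-Uhlenbeck model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical potential U(x) = x**4/4 - x**2/2, with drift -U'(x) = x - x**3.
# Linearizing the drift around the equilibrium x* = 1 gives the linear
# Ornstein-Uhlenbeck model dx = -2 (x - 1) dt + sigma dW.
sigma, dt, n = 0.2, 0.01, 20000

x = 1.0
path = np.empty(n)
for i in range(n):
    # Euler-Maruyama step for the non-linear Langevin dynamics
    x += (x - x ** 3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    path[i] = x

# With weak noise the path stays near the well at x = 1, where the linear
# (OU) approximation is accurate; with stronger noise the path would hop
# between the wells, behavior that no linear model can capture.
print(path.mean())
```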

In this course, we considered a simple non-equilibrium and non-linear model of market dynamics that I called the Quantum Equilibrium-Disequilibrium model.

This may be only a toy model, but the more general point here is that the right financial model cannot be a linear model; it has to be non-linear.

Non-linear models lead to far richer dynamics than linear models.

In particular, they may even lead to

stochastic chaos and other highly complex dynamics.

Another useful lesson is about choosing the right parametric family of models for learning.

If the right model is non-linear in the state,

but we instead try to look for non-linearities in predictors,

we might miss the whole story altogether.

A few other highlights from physics that I think are very useful, and possibly underutilized in both finance and machine learning, are approaches based on the free energy.

I remind you that this notion originated in statistical physics,

and since then found many applications,

first in supervised learning,

and then in reinforcement learning,

and then in self-organizing systems, and biology.
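As a minimal numerical sketch of the construction behind these methods, the free energy in its log-sum-exp, "soft maximum" form (with illustrative values, e.g. action values in an RL setting) interpolates between a hard maximum at low temperature and a blend of all values at high temperature:

```python
import numpy as np

def free_energy(q, T):
    """Soft maximum F_T(q) = T * log(sum_a exp(q_a / T)), via a stable log-sum-exp."""
    m = q.max()
    return m + T * np.log(np.exp((q - m) / T).sum())

q = np.array([1.0, 2.0, 3.0])  # illustrative values, e.g. action values
for T in (1.0, 0.1, 0.01):
    print(T, free_energy(q, T))
# As the "temperature" T decreases, the free energy approaches max(q) = 3.0,
# while larger T blends in the contributions of the other values.
```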

Another very useful concept from physics is symmetry.

In physics, symmetries and conservation laws drive an amazingly large number of physical phenomena, including, in particular, transitions between different phases of matter.

Another highly useful concept is analyticity in complex domains.

Physicists are very comfortable with taking their models into the complex plane of variables or parameters.

They do it all the time for example in quantum mechanics,

or when modeling phase transitions.

It might be quite a useful method for the analysis of financial and machine learning models too.

Now, a very nice paper by Pedro Ortega and Daniel Braun has suggested that

thermodynamics can be interpreted as a theory of decision-making under information costs.

They believe that this is a very useful approach

that links to some other very interesting topics,

such as perception-action cycles,

briefly mentioned before, and that it can be used to do better feature selection.

I think this will be all for our course and this specialization.

I thank you for your time and hope you will find the things that you learned in these courses useful.

In one of his interviews, Andrew Ng said that one piece of feedback he got from students of his course was that the most useful thing some of them learned was MATLAB.

So, to paraphrase him, I would say that even if Jupyter Notebooks turn out to be your most useful takeaway from this specialization, your time was not wasted.

I hope that there is more than just this that you can take away and start using tomorrow.

Most concepts in finance and machine learning are simple,

but sometimes this simplicity is not straightforward to see right away.

It can be masked by tons of factors including overly complex math,

or too many irrelevant details,

or by the absence of unifying principles.

But most of these unifying principles are simple in the end,

and moreover there are lots of tools for ML and RL that are freely available.

So, once you know what you want to do and what you can do with machine learning,

you can start your next interesting project

on ML in finance right after completing this course.

So, good luck with your future projects and good luck with your course project.

Thank you again and all the best.