Welcome back. Now that we have the data in the format needed to pass through to our Keras model, let's go about actually setting up the Keras model itself. What we're going to do here is create a function that allows us to play around with a different number of cell units. It takes away a bit of flexibility in regards to creating deeper networks, but for the purposes of what we're working with here it should work well, and we're still going to be able to work through the process of leveraging the Sequential model to build out our recurrent neural nets.

We're going to pass into this function our train_x, our train_y, the number of cell units in our network, and then the number of epochs we want to train for. We initialize our model as just a Sequential model, then add on our SimpleRNN cell, with the cell units being the number we pass into our function. The input shape is going to come from train_x: shape 0 is the number of samples, shape 1 is the number of time steps, and since we purposely said we are only working with one feature, the input shape is the number of time steps by one. Then we add on a dense layer, because at the end of the day we want to predict a single value, and that value shouldn't have any restricted range. So we're not going to pass it through a sigmoid function; we let it be a continuous value.

We then compile, with the loss we want to measure being mean squared error, since we are working with those continuous values, and the optimizer is going to be Adam. Then we can just fit on our train_x and train_y for the epochs specified, keeping the batch size at 64. I'm actually going to change the verbose argument here. This will depend on your preferences: it's whether or not you want to see the progress as it starts to fit your model.
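A minimal sketch of the model-building function described above. The function and argument names (`build_simple_rnn`, `cell_units`, and so on) are my assumptions, not necessarily the notebook's exact identifiers:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

def build_simple_rnn(train_x, train_y, cell_units, epochs):
    model = Sequential()
    # input_shape = (time steps, features); train_x has shape
    # (samples, time steps, 1), so shape[1] gives the time steps
    model.add(SimpleRNN(cell_units, input_shape=(train_x.shape[1], 1)))
    # a single continuous output, so no sigmoid squashing the range
    model.add(Dense(1))
    model.compile(loss="mean_squared_error", optimizer="adam")
    # verbose=1 prints per-epoch progress; switch to 0 to suppress it
    model.fit(train_x, train_y, epochs=epochs, batch_size=64, verbose=1)
    return model
```

Calling something like `build_simple_rnn(train_x, train_y, cell_units=10, epochs=10)` then matches the quick first training run shown in the video.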
I like to see that progress, so I'm going to switch that to one. Now we have our simple RNN, and we can call that for just 10 epochs, with cell units equal to 10, and it will start to train here. It's a quick train, and we see that it worked.

But again, in order to create the predictions that we actually want, what we're going to need to do is create one prediction. Once we have that prediction, we create another sequence that uses it: if we're looking at 12 time steps, the new sequence uses that prediction as well as the prior 11 values, so that we still have a sequence of 12. Then we predict the next value, at which point we're using 10 from our initial values and the two new predicted values. If that doesn't make much sense as I talk through it, hopefully it makes a bit more sense as we walk through the code.

So in order to create our prediction, we need to pass in the initial x that we're going to start making predictions with, the number of time steps out that we actually want to predict, and the model that's already been trained. We're going to ensure that our x_init is the right shape: if we go back up here, you'll recall that x_init is actually of shape 12, and we need it to be of shape 1 by 12 by 1. It still needs to be three-dimensional, as we have been working with throughout our Keras examples, so we reshape it. Then our preds start off as an empty list. Then, for i in range of the number of steps, for as long as we want our predictions to go, we call model.predict, first on our initial x_init, and we append our new prediction. So now we get a prediction of a single value, and then we change what our x_init actually is. The way that we do that is we replace those first 11 values.
So this is all the way through to the last value: we replace those first 11 values with the second through to the end, so the second through the 12th. Then for the 12th value, we replace it with our new prediction. Hopefully that helps clear up this idea of ensuring that we're still working with a sequence of 12, but removing the first value and adding on our new prediction. We continuously do this, so that we can keep coming up with new sequences and predicting the next time step. We repeat this through the number of steps that we have, every time appending that prediction onto our list of preds. Then we have our full prediction, and we make an array of the correct shape.

Then, to plot that out, we just need our x_init, the y, our actual model, and then whatever title you want. Our y preds are going to use that predict function we just created, passing in the test x_init, the number of steps, which we want equal to the length of y so that we can actually see how well we did, and our model. Then, in order to differentiate between the x_init and the actual predicted values, we set up our start ranges: the x_init range runs from one through to the length of x_init, and the predict range starts at the end of x_init and runs through the number of test hours that we want. That test_hours was actually defined earlier, so we could probably make this function a bit cleaner by setting it equal to the length of y. Then we plot against those ranges: the first part of our plot is just those initial values we're starting from, and the predict range has both the actual values and the predicted values. Then we have our legend, which will just be initial series, target series, and predictions, to let us know which line refers to which. So we can call predict and plot.
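The rolling, one-step-ahead loop described above can be sketched as follows. Again, the function and variable names are my assumptions; the key ideas are reshaping the 12-value seed to `(1, 12, 1)` and sliding the window forward with each new prediction:

```python
import numpy as np

def predict_forward(x_init, n_steps, model):
    # reshape the seed sequence into (1, time_steps, 1): one sample,
    # all its time steps, one feature; copy so we don't mutate the caller's array
    x = np.asarray(x_init, dtype=float).reshape(1, -1, 1).copy()
    preds = []
    for _ in range(n_steps):
        # predict the single next value from the current window
        pred = float(model.predict(x, verbose=0)[0, 0])
        preds.append(pred)
        # slide the window: the first 11 values become the 2nd through 12th,
        # and the 12th slot is filled with the new prediction
        x[0, :-1, 0] = x[0, 1:, 0]
        x[0, -1, 0] = pred
    return np.array(preds)
```

Each iteration feeds the model a full-length sequence that mixes the remaining original values with the predictions made so far, exactly as narrated above.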
We see here that the blue is our initial series, that's our test x_init; the yellow, or orange, is the values we're actually trying to predict; and the green is our actual predictions. We see that we didn't actually do very well. That's probably due to the fact that our cell units may not have been large enough, since we worked with 10 cell units, and the number of epochs that we ran probably didn't allow us to optimize our model. So now we're going to run it again, increasing the number of cell units and increasing the number of epochs. When I run this, it'll start training. This will take a bit of time, so I'm going to pause the video and we'll come back once it's done running.

Now, looking at the plot compared to before, we see that we were able to do much better, with that green line falling much more closely to the target series. Then we get the model summary, with the number of parameters that we trained, and we see that we had 30 cell units and that single output.

Now that we have this system in place, with all the functions that we created, we can run through that same process for the PM measure from a different county. We run this, we can look at the plot for the past 42 days, and we can set up our data so that it is Keras-friendly, passing in the number of days we want, the input hours, and the test hours. Then we can actually run this again for 1,200 epochs. Again, I'll pause the video and come back once this is done. Once we do that, we can get to the bottom and again see the predicted plot, given a different series. Here we can see how well we were able to model the series, and you could test this across multiple test sets to see how well you did, but we see that it's able to pick up some of that trend moving forward, as well as a bit of where it plateaus. Now, the last thing that I want to show you is fitting the LSTM model.
All you need to do, compared to the RNN model, is switch out the cell that you're working with. We just change the SimpleRNN to LSTM, and everything else here is the same. There are no differences between what we did before and what we have here, aside from the fact that we are specifying LSTM. Then we can leverage all the same functions that we used before, and now our model is going to fit an LSTM rather than the RNN.

We're going to run this for quite some time. Here I also set verbose equal to zero, so I'm not going to see all those progress lines as we did before. Then we can call predict and plot with the model that we fit as the LSTM; we just pass in the model and still use that same predict and plot function. I'm going to run this. It's running for 3,000 epochs, so it's going to take quite a bit of time; it might take some time on your end as well, and we'll see the results once we come back.

That actually took quite some time to run. We do see here that we are testing out many more hours than we did before, with test hours equal to 96. But the reason it took so long is that we're running 3,000 epochs with 70 cell units, plus LSTMs in general are going to be more complex, so do be aware that it takes quite some time. We can get the summary here and see that we are training many more parameters, and that closes out actually building our time series and building out our predictions here.

Now, we worked with very simple models. If you are interested in using deep learning for time series, I would suggest playing around further: work with some longer chunks. You don't need your sequences to be equal to 12 hours; they could be longer, and maybe you can find more complex patterns if you're not just looking at 12-hour chunks. You can also increase the cell units and train for more epochs.
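As a sketch of how small the change is, here is the same builder function with the cell swapped out, under the same assumed names as before; only the imported layer and the one `model.add` line differ from the RNN version:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_lstm(train_x, train_y, cell_units, epochs):
    model = Sequential()
    # identical structure to the SimpleRNN builder: only the cell type changes
    model.add(LSTM(cell_units, input_shape=(train_x.shape[1], 1)))
    model.add(Dense(1))
    model.compile(loss="mean_squared_error", optimizer="adam")
    # verbose=0 this time, suppressing the per-epoch progress lines
    model.fit(train_x, train_y, epochs=epochs, batch_size=64, verbose=0)
    return model
```

Because the input and output shapes are unchanged, the trained model drops straight into the same rolling-prediction and plotting functions used for the RNN.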
Then also, something that I mentioned earlier in the lesson is that you can actually add on features as well, matched up across the same time steps. But if you do, make sure that you adjust the input shape here to include the number of features you want to work with. That closes out our notebook on leveraging deep learning for time series. In the next video, we get back to lecture and change topics over to survival analysis. We'll make clear exactly what that is when I see you back in lecture. I'll see you there.