This concludes the course on Functional Program Design. Over the past four weeks, you have learned how to combine the fundamental elements of functional programming into more interesting and advanced higher-level combinations, which in turn are useful tools for the design of more advanced programs and systems.

In particular, you have learned about lazy evaluation and how you can use it to define infinite data structures. You have also learned about the very important distinction between computations and values, and the fact that in functional programming computations can themselves be values. That manifests itself in many different contexts: for instance, a random value is really a computation that gives you a new value every time you demand one, and a signal is really a computation that can be sampled at a particular point in time. An important way to abstract over the properties of a computation was monads, and you have seen monads applied in many different contexts, from randomness to delays to general effects.

An important strand running through many modules of this course was the interaction and relationship between functional programming and mutable state. You have seen what happens when you mix the two: in particular, the substitution model is no longer valid, and the question of when two values are equal becomes a lot more subtle than it was before. You have seen some of the power of mutation in the example of digital circuit simulation, and also how to encapsulate and contain that power using a range of techniques. The first was laziness: a lazy value essentially encapsulates a single mutation from undefined to defined, but you never manipulate that mutation directly; you only observe it indirectly through the lazy value. The second way to encapsulate state mutation was functional reactive programming. There again, signals are defined internally by mutable variables, but from the outside you cannot see that; you just interact with signals, which are essentially time-varying values. And finally, you have encountered monads as a useful way to encapsulate state mutation and to declare already in the type where you have side effects and where your code stays purely functional.

In the last week of this course, you have seen futures from a consumer perspective: essentially, events that happen at some later point in time. Of course, the question comes up: who will actually compute the thing that gets evaluated and delivered later? The answer is, invariably, some sort of parallel computation using threads. So it is important to learn how these threads work, and also how to abstract over them, because threads by themselves are a very low-level and dangerous model. Useful abstractions besides futures are reactive streams and also actors. The second question then is: how can we use multithreading to gain more efficiency, using parallel data structures and parallel algorithms? And if you want to scale parallelism further than what a single computer can do, you arrive at distributed programming. A particularly important topic for distributed programming is data analysis and big data; for instance, the very popular Spark platform for data analysis is written in Scala and can be programmed in Scala. It turns out that the idea of collections, which you have already seen in sequential form and which you will later see in parallel form, scales right up to distributed collections. So in a sense, Spark can be seen as a framework for distributed Scala collections.
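To make a few of these ideas concrete, here are some small code sketches. First, lazy evaluation and infinite data structures. This sketch uses the standard library's LazyList (in older Scala versions the same idea was embodied by the now-deprecated Stream type): only the elements you actually demand are ever computed.

```scala
// An infinite structure, made possible by lazy evaluation:
// elements are computed only when they are demanded.
val naturals: LazyList[Int] = LazyList.from(0)

// Forcing a finite prefix evaluates just those ten elements.
val firstTen: List[Int] = naturals.take(10).toList
// => List(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
```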
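Next, computations as values and monads in one sketch. A random "value" is really a computation that you sample; the names here (Generator, generate) are illustrative, chosen to mirror the generators discussed in the course, and the exact API may differ.

```scala
// A random Int is not a number but a computation that yields a
// (possibly different) number each time it is sampled. map and
// flatMap give Generator a monadic interface, so generators compose
// like any other monadic computation.
trait Generator[+T] { self =>
  def generate: T

  def map[S](f: T => S): Generator[S] = new Generator[S] {
    def generate: S = f(self.generate)
  }

  def flatMap[S](f: T => Generator[S]): Generator[S] = new Generator[S] {
    def generate: S = f(self.generate).generate
  }
}

val integers: Generator[Int] = new Generator[Int] {
  private val rand = new java.util.Random
  def generate: Int = rand.nextInt()
}

// Derived generators are built purely by composition.
val booleans: Generator[Boolean] = integers.map(_ >= 0)
val pairs: Generator[(Int, Boolean)] =
  integers.flatMap(i => booleans.map(b => (i, b)))
```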
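The signal idea can be sketched in a heavily simplified, pull-based form: the mutable variable lives inside the Var, and clients only ever sample it. This is an illustrative simplification, not the course's implementation, which additionally tracks dependencies and propagates updates.

```scala
// From the outside, a signal is just a time-varying value you sample.
class Signal[T](expr: => T) {
  def apply(): T = expr  // re-evaluates the defining expression
}

// Internally a Var is a mutable cell, but that mutation is hidden
// behind the sampling interface.
class Var[T](init: T) extends Signal[T](init) {
  private var current: T = init
  override def apply(): T = current
  def update(v: T): Unit = current = v
}

val base = new Var(100)
val doubled = new Signal(base() * 2)  // a derived, time-varying value
base() = 25                           // sugar for base.update(25)
assert(doubled() == 50)               // reflects the new value of base
```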
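Finally, futures from the consumer perspective: the value arrives later, computed by some thread of an execution context. A minimal sketch using the standard library:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// The body runs on some thread of the global execution context;
// the consumer just sees a value that becomes available later.
val answer: Future[Int] = Future {
  21 * 2  // stand-in for a long-running computation
}

// React to the result without blocking.
answer.foreach(n => println(s"got $n"))

// Blocking is acceptable only in a self-contained demo like this one.
println(Await.result(answer, 2.seconds))
```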
You'll find out more about all of this in future courses of the specialization. And I hope I'll see you back for them in class.