Stochastic systems analysis and simulation (ESE 303) is a class that explores stochastic systems, which we could loosely define as anything random that changes in time. Stochastic systems are at the core of a number of engineering disciplines, for example communication systems and machine learning. They also find application elsewhere, including social systems, markets, molecular biology and epidemiology.
The goal of the class is to learn how to model, analyze and simulate stochastic systems. With respect to analysis we distinguish between what we could call theoretical and experimental analysis. By theoretical analysis we refer to a set of tools that let us discover and understand properties of the system. These analyses can take us only so far and are usually complemented with numerical analysis of experimental outcomes. Although we use the word experiment, more often than not we simulate the stochastic system on a computer and analyze the outcomes of these virtual experiments.
To get a better feel for what this class is about you might want to check the slides for the first two lectures, which study a simple stochastic system. This example considers a certain game in a certain casino where your chances of winning are slightly better than your chances of losing. More specifically, you place 1-dollar bets; with probability p you are paid 1 dollar and with probability 1 - p you lose your bet. The good news is that p > 1/2. The catch is that you have to keep playing forever or until you lose all your money. If you have w dollars, is it a good idea to play this game or not? You have to balance the fact that, while you are more likely to win than lose, there is always a chance of getting unlucky a few times and losing all your money. And since you have to keep playing forever, the latter possibility cannot be ignored heedlessly. To try things on your own here is the Matlab code to simulate one experiment. This other code repeats the experiment a hundred times, and the numerical analysis of outcomes is undertaken here.
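If you prefer to experiment outside Matlab, the game can be sketched in a few lines of Python. This is a minimal sketch, not the class's code: the win probability p = 0.55, the initial wealth of 10 dollars, and the cap on the number of bets (a stand-in for "playing forever") are all illustrative choices.

```python
import random

def play(wealth, p=0.55, max_bets=10_000, seed=None):
    """Simulate the casino game: 1-dollar bets won with probability p.

    Play until ruin (wealth hits 0) or until max_bets bets have been
    placed, a finite stand-in for 'playing forever'. Returns final wealth.
    """
    rng = random.Random(seed)
    for _ in range(max_bets):
        if wealth == 0:
            break  # ruined: the game is over for good
        wealth += 1 if rng.random() < p else -1
    return wealth

# Repeat the experiment many times and count how often we are ruined.
outcomes = [play(wealth=10, seed=k) for k in range(100)]
ruined = sum(1 for w in outcomes if w == 0)
print(f"ruined in {ruined} of {len(outcomes)} experiments")
```

Raising the initial wealth or the edge p - 1/2 makes ruin less likely, which is exactly the trade-off the lectures analyze.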
The class is roughly divided into four blocks. The first block is a quick review of probability. As part of this block we study some commonly used probability distributions, including the normal, uniform, binomial and exponential. We will also talk about the definition of limits in probability theory. We'll close this block with the introduction of the concept of a stochastic process. This will consume six lectures.
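As a small taste of this toolkit, all four distributions mentioned above can be sampled with Python's standard library (a sketch for experimentation on your own; the class itself works in Matlab, and the parameter values here are arbitrary):

```python
import random

rng = random.Random(0)

# One sample from each distribution reviewed in the first block.
normal_sample = rng.gauss(0.0, 1.0)          # normal with mean 0, std 1
uniform_sample = rng.uniform(0.0, 1.0)       # uniform on [0, 1]
binomial_sample = sum(rng.random() < 0.5 for _ in range(10))  # binomial(10, 1/2)
exponential_sample = rng.expovariate(1.0)    # exponential with rate 1

print(normal_sample, uniform_sample, binomial_sample, exponential_sample)
```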
Not to get too technical about it, a stochastic process is a function that assigns a function to a random event (compare this with the definition of a random variable as a function that assigns a value to a random event). These are complicated entities, and we usually restrict our attention to cases that have a more tractable description. In the second block of the class we encounter the first such tractable class: Markov chains (MC). MCs involve a discrete time index n and a time-varying state X_n taking values in a finite or countable space. The defining property of MCs is that the probability distribution of X_{n+1} is unequivocally determined if X_n is known. This is called the memoryless property, as it implies that the past history of the process, that is, what happened until time n, is irrelevant for the future of the process (what will happen after time n). The block on MCs will close with a description of the algorithm used by Google to rank web pages. We will use nine lectures for this block. You can download slides for the lectures and Matlab code for examples covered in class.
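The memoryless property makes MCs easy to simulate: to draw X_{n+1} you only need X_n and a row of transition probabilities. Here is a minimal Python sketch with a made-up two-state chain (the transition matrix is illustrative, not an example from the class):

```python
import random

# Transition matrix: P[i][j] = probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, rng):
    """Draw the next state X_{n+1} using only the current state X_n."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(n_steps, x0=0, seed=0):
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

path = simulate(100_000)
# For this chain the long-run fraction of time in state 0 is 5/6.
print(sum(s == 0 for s in path) / len(path))
```

Note that `step` never looks at the earlier history of `path`; that is the memoryless property in code.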
We will then consider a continuous time index t, but still restrict our attention to finite or countable states X(t). We will also keep the memoryless property, which in this case is stated as requiring the probability distribution of X(t) given X(s) for any s < t to be independent of X(u) for all u < s. Processes with this property are unimaginatively called continuous time Markov chains (CTMC). Albeit more mathematically challenging, CTMCs are very common in practice. In rough terms, we can say that any system that can be deterministically modeled with a differential equation can be stochastically modeled as a CTMC. We will close this block with two examples: (i) a description of queueing systems and their application to communication networks; (ii) simulation of chemical reactions with small numbers of reactants and their application in biochemistry. We will devote twelve lectures to this topic.
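The simplest queueing system of this kind is the M/M/1 queue: customers arrive at one rate and are served at another, the state X(t) is the number in the system, and holding times are exponential. The following Python sketch (illustrative rates, not the class's code) simulates it by alternating exponential holding times with jumps that depend only on the current state:

```python
import random

def simulate_mm1(arrival_rate, service_rate, t_end, seed=0):
    """Simulate an M/M/1 queue, a simple CTMC whose state is the number
    in the system. Holding times are exponential and the next jump depends
    only on the current state: the continuous time memoryless property.
    Returns the time-average number in the system."""
    rng = random.Random(seed)
    t, state = 0.0, 0
    area = 0.0  # time-integral of the state
    while t < t_end:
        rate = arrival_rate + (service_rate if state > 0 else 0.0)
        hold = min(rng.expovariate(rate), t_end - t)  # exponential holding time
        area += state * hold
        t += hold
        if t >= t_end:
            break
        # Next jump is an arrival with probability arrival_rate / rate.
        if rng.random() < arrival_rate / rate:
            state += 1
        else:
            state -= 1
    return area / t_end

# With arrival rate 0.5 and service rate 1, queueing theory predicts a
# time-average of rho / (1 - rho) = 1 customer in the system.
print(simulate_mm1(0.5, 1.0, t_end=100_000))
```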
The natural progression of the class is to eliminate the restriction to a countable number of states and the memoryless property. This will be covered in the fourth block of the class on stationary stochastic processes (SSP). Simple examples of SSPs, including Brownian motion, geometric Brownian motion and white noise, will be introduced. The most important tools in the analysis of SSPs are the autocorrelation function and the power spectral density, the latter of which is a generalization of the Fourier transform to random settings. This block will close with a discussion of the Black–Scholes model for option pricing. Nine lectures will be used for these digressions.
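Brownian motion, the first of these examples, is straightforward to simulate: sample independent Gaussian increments whose variance equals the time step. A minimal Python sketch, with an empirical check of the defining property Var[B(t)] = t (the discretization and the number of paths are illustrative choices):

```python
import random
import math

def brownian_path(t_end, n_steps, seed=0):
    """Sample standard Brownian motion on [0, t_end].

    Increments over disjoint intervals are independent normals with
    variance equal to the length of the interval.
    """
    rng = random.Random(seed)
    dt = t_end / n_steps
    path = [0.0]  # Brownian motion starts at 0
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

# Empirical check: across many paths the variance of B(t_end) is near t_end.
t_end, n_paths = 1.0, 5000
finals = [brownian_path(t_end, 100, seed=k)[-1] for k in range(n_paths)]
mean = sum(finals) / n_paths
var = sum((x - mean) ** 2 for x in finals) / n_paths
print(var)  # close to t_end = 1.0
```

Exponentiating such a path (after adding a drift term) gives geometric Brownian motion, the price model underlying the Black–Scholes discussion at the end of the block.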