So today we’ll be talking about stochastic processes, and in particular, the stochastic process of the Markov chain. Maybe I should have covered the Markov chain before I covered Markov Chain Monte Carlo, but hmmmm … well, let’s just cover the Markov chain now. It’s basically how you can determine the probabilities of something happening based on your current status.

So for example, say I’m basking in the sun in Hawaii at the moment, but tomorrow, there is a 33% chance I end up at an A-list Hollywood party, a 33% chance I’m taking in the opera at the Met in New York City, and a 34% chance I’ll be working at a computer repair store in Indianapolis. Maybe not as fancy as the other options, but not bad. Not bad at all compared to some other places where I could end up. Might actually be nice if there’s a Chipotle next door. But moving on … let’s say I’m in LA instead and have an equal chance of ending up in Hawaii, NYC, or Indianapolis the next day. Or I’m in NYC and have an equal chance of ending up in Hawaii, LA, or Indianapolis the next day. In these cases, Hawaii, LA, and NYC are transient states since there’s always a probability of ending up in another place the next day depending on where you are now.

Now, say I’m working at that computer store in Indianapolis and assume that I stay there, having no chance of ending up in another realm the next day. So we would call Indianapolis an absorbing state. Now why would I ever choose to stay in that okay place and not go to any of those other fabulous places? Well, you’ll just have to find out from Tina’s reasoning when you read Final Orders. What? I’m pimpin’ again? You can tell again? I still have to work on my smoothness, huh? Well, anyway, here’s to hoping I end up in a realm in Acapulco like this little kitty in a sombrero.
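If you like to see these things in code, here’s a tiny Python sketch of the four-state chain above. The probabilities are the ones from the example; the state names, function names, and simulation length are just my own choices for illustration.

```python
import random

# Transition probabilities for the four "realms" in the example.
# From Hawaii you never stay put; Indianapolis is absorbing.
P = {
    "Hawaii":       {"LA": 0.33, "NYC": 0.33, "Indianapolis": 0.34},
    "LA":           {"Hawaii": 1/3, "NYC": 1/3, "Indianapolis": 1/3},
    "NYC":          {"Hawaii": 1/3, "LA": 1/3, "Indianapolis": 1/3},
    "Indianapolis": {"Indianapolis": 1.0},  # absorbing: once here, stay here
}

def step(state, rng=random):
    """Take one step of the chain from `state`."""
    destinations = list(P[state])
    weights = [P[state][d] for d in destinations]
    return rng.choices(destinations, weights=weights)[0]

def simulate(start, days, seed=None):
    """Follow the chain for `days` steps and return the whole path."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(days):
        path.append(step(path[-1], rng))
    return path

path = simulate("Hawaii", days=30, seed=42)
print(path)
```

Because Indianapolis is absorbing, any long enough run of this simulation will almost surely end up (and stay) there, no matter how much time it spends bouncing among the three transient states first.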

Yeah, that’s not happening either.  Oh well, I’ll go for the next best thing — Chipotle for lunch!