Imputing new worlds with multiversal appeal

Monthly Archives: May 2014

So today I’m talking about the Newton-Raphson algorithm and it is when we update our current guess by subtracting the ratio of a particular function over the derivative of that function and I lost you already, haven’t I?  Hmmm.  Well in other words, it tells you where you will end up if there is any change in your current situation.  So say, as a little girl on a trip to downtown Chicago, which we natives eloquently define as the area south of Lincoln Park, I am taken to the Adler Planetarium by my dad.  Seeing all that planet and star and science-y stuff might get me to thinking about becoming a physicist or something.  Never mind that my physics grades in college were somewhere on par with my organic chemistry grades and maybe I shouldn’t be writing science fiction books on theoretical physics and inter-dimensional travel but never mind that … moving on.  But let’s say there was a change of plans and my dad was too busy on his business trip doing business-y stuff to take me to the planetarium so instead my mom took me to the Art Institute.  And seeing all the paintings and sculptures and artsy stuff, I then decided to become an art teacher when I grew up.  So there you have it — the gist of the Newton-Raphson algorithm, where the outcome depends on the sensitivity to the change in our plans.  Just like with Jane, I mean, with me during my trip to downtown Chicago.  Wait … it’s too late, isn’t it?  You heard me say Jane, didn’t you?  And you suspect I’m talking about her alternative childhoods I wrote about in Revised Orders, don’t you?  Um … okay.  Still need more work on my subtlety.  Ah, well.  Until next time.  Speaking of the Art Institute though, here’s one of my favorite paintings from there, from C.M. Coolidge’s Dogs Playing Poker series.


I mean, it’s dogs.  It’s poker.  What more do you want?
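And if you’d rather see the update rule as code than as my planetarium story, here’s a minimal sketch in Python (the square-root example is just my pick for illustration): we keep subtracting f(x)/f'(x) from the current guess until the updates get tiny.

```python
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=100):
    """Repeat x_new = x - f(x) / f'(x) until the update is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. the square root of 2.
root = newton_raphson(lambda x: x ** 2 - 2, lambda x: 2 * x, x0=1.0)
print(round(root, 6))  # → 1.414214
```

And here’s the planetarium-versus-Art-Institute part: start at x0 = -1.0 instead and the very same algorithm converges to the other root, -1.414214.  Where you start changes where you end up.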

So today we’ll be talking about stochastic processes, and in particular, the stochastic process of the Markov chain, and maybe I should have covered the Markov chain before I covered Markov Chain Monte Carlo but hmmmm … well, let’s just cover the Markov chain now.  It’s basically how you can determine the probabilities of something happening based only on your current status.  So for example, say I’m basking in the sun in Hawaii at the moment, but tomorrow, there is a 33% chance I end up at an A-list Hollywood party, a 33% chance I’m taking in the opera at the Met in New York City, and a 34% chance I’ll be working at a computer repair store in Indianapolis.  Maybe not as fancy as the other options, but not bad.  Not bad at all compared to some other places where I could end up.  Might actually be nice too if there’s a Chipotle next door.  But moving on … let’s say I’m in LA instead and have an equal chance of ending up in Hawaii, NYC, or Indianapolis the next day.  Or I’m in NYC and have an equal chance of ending up in Hawaii, LA, or Indianapolis the next day.  In these cases, Hawaii, LA, and NYC are transient states since there’s always a probability of ending up in another place the next day depending on where you are now.  Now, say I’m working at that computer store in Indianapolis and assume that I stay there, having no chance of ending up in another realm the next day.  So we would call Indianapolis an absorbing state.  Now why would I ever choose to stay in that okay place and not go to any of those other fabulous places?  Well, you’ll just have to find out from Tina’s reasoning when you read Final Orders.  What?  I’m pimpin’ again?  You can tell again?  I still have to work on my smoothness, huh?  Well, anyway, here’s to hoping I end up in a realm in Acapulco like this little kitty in a sombrero.

phoebe sombrero

Yeah, that’s not happening either.  Oh well, I’ll go for the next best thing — Chipotle for lunch!
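Before lunch, though, here’s the Hawaii/LA/NYC/Indianapolis setup as actual numbers, in a minimal Python sketch (the transition probabilities are the ones from the post): pushing the probability distribution forward day by day shows the whole chain eventually stuck in the absorbing state.

```python
# Transition probabilities from the post: Hawaii, LA, and NYC are
# transient, Indianapolis is absorbing (once there, you stay put).
states = ["Hawaii", "LA", "NYC", "Indianapolis"]
P = {
    "Hawaii":       {"LA": 0.33, "NYC": 0.33, "Indianapolis": 0.34},
    "LA":           {"Hawaii": 1 / 3, "NYC": 1 / 3, "Indianapolis": 1 / 3},
    "NYC":          {"Hawaii": 1 / 3, "LA": 1 / 3, "Indianapolis": 1 / 3},
    "Indianapolis": {"Indianapolis": 1.0},
}

# Start in Hawaii and push the distribution over states forward 50 days.
dist = {s: 0.0 for s in states}
dist["Hawaii"] = 1.0
for day in range(50):
    new_dist = {s: 0.0 for s in states}
    for here, moves in P.items():
        for there, prob in moves.items():
            new_dist[there] += dist[here] * prob
    dist = new_dist

print(round(dist["Indianapolis"], 4))  # → 1.0 (the chain gets absorbed)
```

Notice that tomorrow’s distribution only ever depends on where you are today, not on how you got there — that’s the Markov property in one line of bookkeeping.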



So I wanted to start something about Gibbs sampling with a catchy title and so I stole the APS March meeting’s dance party theme of Gettin’ Higgy with it, which I take it they stole from Will Smith’s Gettin’ Jiggy with it.  And when I googled Gettin’ Jiggy with it, I came up with a lot of pictures of dancing dogs and cats.  Thus, the picture above.  There you go.  Yes, I’m still working on my delivery.  But anyway, as I mentioned last time, Gibbs sampling is a specialized form of the Metropolis-Hastings algorithm and the idea is that we sample values from conditional distributions rather than from the marginal distributions.  So basically we have a bunch of variables, making up a multivariate distribution (multivariate = multi variable, get it?  Okay, so that’s not my best work, I’ll admit) and we want to get the 411 on one variable based on the info we have from the other variables.  So like we have this character in my books named Tina.  In some dimensions, she’s living the high life in big mansions, wearing flashy clothes, going to all these A-list parties in Maui or LA or Manhattan and don’t you want to be her right now?  I don’t blame you.  I kinda want to be her right now too.  In some realms though, she lives a relatively quiet life working at a computer repair store or a printer shop somewhere in the Midwest or on the East Coast.  So how can we determine the probability of which world she is in and what life she is leading?  Well, there is one major factor, or should I say, another character, that allows us to determine that probability more clearly.  But I’m not saying what … or who … just yet.  But I will say that without knowing if she is with this thing or person, then, depending on the context in which she is introduced, it may be tougher to determine which dimension she is in.  So that’s basically Gibbs sampling, sampling based on conditional probabilities.  But that’s all for now.
Tune in next time when I discuss um … uh … well, still have to think of a topic.  But I’ll try to make it good … promise!  Until then … let’s just boogie, woogie, woogie, Gibby down.  Yes, I do realize that was just awful too.  I’ll work on that as well.
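Actually, one more thing before the boogie: here’s a minimal sketch of conditional sampling in Python, assuming a standard bivariate normal with correlation 0.8 as the target (my choice, purely for illustration).  We never draw from the joint distribution directly; we alternate between drawing each variable from its conditional given the other’s current value, and the chain still recovers the joint behavior.

```python
import random

def gibbs_bivariate_normal(rho=0.8, n_samples=20000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    alternate between drawing each variable from its conditional
    distribution given the current value of the other."""
    rng = random.Random(seed)
    cond_sd = (1 - rho ** 2) ** 0.5
    x = y = 0.0
    xs, ys = [], []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, cond_sd)  # x | y ~ N(rho * y, 1 - rho^2)
        y = rng.gauss(rho * x, cond_sd)  # y | x ~ N(rho * x, 1 - rho^2)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = gibbs_bivariate_normal()
n = len(xs)
corr = sum(a * b for a, b in zip(xs, ys)) / n  # means ~0 and sds ~1 here
print(round(corr, 2))  # should land close to rho = 0.8
```

The two `rng.gauss` lines are the whole trick: each one is a conditional draw, just like updating what we know about Tina once we know something about that other character.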


So I decided to post a pic from the Metropolis movie here to grab your interest.  Hope it worked.  Did it?  Hope so.  But I dunno.  Actually, never cared for that movie but then again, what do I know about fine cinema?  Like I thought Anna Faris was robbed of an Oscar for her work in House Bunny and the cute, sleeping baby in Ill Manors was robbed for being, well, a cute, sleeping baby.  Again, what do I know?  I dunno.  But onto today’s topic of the Metropolis-Hastings algorithm.  So this is an MCMC method (yes, MCMC like we covered here), where we sample values and decide to accept or reject those values depending on whether or not a probability is less than the ratio of the distribution we are interested in, evaluated at the proposed value versus the current one.  And the probability used in this comparison is drawn from a uniform distribution between 0 and 1 (so that any value between those two numbers can be equally drawn).  Now you’re lost, aren’t you?  That’s okay.  Let’s move on to the example then.  That should help you.  And me too as I’m kind of lost by what I just wrote.  Wait … what?  Um … you didn’t hear that, okay?

So anyway, say we consider Maggie Upton/Zelov again and she just enters the children’s ward in one dimension where she’s in Atlanta.  But as she opens the door, there is an intersection of dimensions and she just entered a dimension where she again is a kid doctor but in, say, St. Louis instead.  Or in Detroit.  Or in Indianapolis.  Or she just remains in the realm in Atlanta.  So we could give any of those possibilities an equal chance of happening.  So we could accept any of those events as happening based on a probability that we randomly draw.  By the way, an intersection of dimensions is described in my book as being accompanied by a humming noise and a flash of lightning and a whole big thing — you’ll just have to read the books.  But moving on — say she goes through the door and comes out from a luxurious Hawaiian place as Tina, another character in my book.  Now, as nice as that would sound for Maggie, we would reject that possibility as the ratio involving the probabilities of turning into another character in another dimension is zero.  What?  Why?  Because that’s how I wrote my story, that’s why!  Because I said so.  Look, if you want to write a book where one character can change into another character, go right ahead, but that’s not how I wrote my story.  And if you check it out, you’ll know why.  But anyway, that’s roughly what the Metropolis-Hastings algorithm is.  I hope.  I sent a link to this blog to my comp stat/Bayesian professor/dissertation advisor so if something’s not right here that he reads, I’m sure he’ll let me know.  But anyway, that’s all for now but join me next time when I cover the Gibbs sampler, which is a specialized case of the Metropolis-Hastings algorithm.  Or is the Metropolis-Hastings algorithm a generalized case of Gibbs or … well, we’ll talk about that next time.
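For the brave, here’s a minimal random-walk Metropolis sketch in Python (a standard normal target is my pick for illustration, known only up to its normalizing constant): propose a move, accept it when a Uniform(0, 1) draw is below the ratio of target densities, and otherwise stay put, the way Maggie sometimes just stays in Atlanta.

```python
import math
import random

def metropolis_hastings(log_target, n_samples=20000, step=1.0, seed=1):
    """Random-walk Metropolis: propose a nearby value, then accept it
    when a Uniform(0, 1) draw falls below the ratio of target densities
    (done on the log scale here for numerical safety)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal  # accept the move
        samples.append(x)  # a rejection keeps the current value
    return samples

# Target: a standard normal density, up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 1), round(var, 1))  # mean near 0, variance near 1
```

Note that the normalizing constant cancels in the ratio, which is exactly why this algorithm is so handy for posterior distributions we can only write down up to a constant.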

So what is there to say about the random walk?  Well, it is something that I learned about in Computational Statistics class for one.  And, by the way, my Computational Stats professor was also my Bayesian statistics professor and he also turned out to be my dissertation advisor.  And he turned out to be my dissertation advisor because he happened to be on faculty at the school where I decided to pursue my doctorate degree.  And I decided to pursue my doctorate degree in biostatistics because way back in college (okay, never mind the ‘way back’ part), I became interested in biostatistics after first looking into bioinformatics because I did better in my statistics class than in my C++ class.  And I was looking into bioinformatics after considering medical school, because, how should I say this, my organic chemistry grades were not exactly on par with what was expected in medical school.  So anyway, what the heck is a random walk?  Well, I just gave you an example!  It is a series of events that happen one after another based on probabilities of the current state.  So it’s kind of like an MCMC except that with MCMC we often use the correlation between successive observations, which gives us more ability to predict where the chain is headed.  A random walk, on the other hand, is more like a Missing Persons song.  Like this song.  It’s in your head now, isn’t it?  Yeah, hate when that happens too.  Sorry about that.  Anyway, is there really no way to predict the outcomes of a random walk?  Like if I actually aced my organic chemistry class, would I actually be a pediatrician instead?  Say at a major medical center in Atlanta or Philadelphia or Houston?  Kind of like another character in my books known as Maggie Upton/Zelov who’s a kid doctor in different cities, the city depending on which dimension she’s in?  What?  I’m pimping my trilogy again?  You could tell?  Crap — need to work on my smoothness.
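Here’s the simplest possible version in code, a symmetric walk where every step is +1 or -1 with equal probability (a toy example, not anything from my dissertation): the next position depends only on the current one, and the endpoint is very hard to call in advance.

```python
import random

def random_walk(n_steps, seed=None):
    """Simple symmetric random walk: from wherever you are right now,
    take a step of +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

# Two walks with different seeds wander off to different endpoints,
# which is exactly why a random walk is so hard to predict.
print(random_walk(100, seed=1)[-1], random_walk(100, seed=2)[-1])
```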
But anyway, yes, there could be ways where we could examine where a walk is going and whether or not it will converge, as I discussed before.  But we’ll get into that later.  Meanwhile, I tried to humor myself by imagining myself as a pediatrician, so when I googled ‘kid doctor’, I got this.


Yeah, that’s exactly how I imagined myself also.  Well, see ya next time — and hopefully, I’ll get this song out of my head by then too.

So we’re covering randomness today.  And Wikipedia (yeah, not very original, I admit, but it works) defines randomness as “lack of pattern or predictability in events”.  And how do we know how random or how predictable something is?  Well, one way is to look at the distribution of the data and the probabilities associated with it.  So let’s say tomorrow as I’m getting ready for work, I can either put on my blue suit, my dark pink suit, or my peach-colored suit that OMG looks exactly like the St. John at Neiman Marcus (never mind I got it at JCPenney for a tenth of the price).  Although I’m partial to the last one, let’s put an equal probability on me picking any one of them to wear to work the next morning.  In that case, my probabilities follow a flat distribution like the uniform distribution and we could say that such a scenario would be an almost completely if not completely random case since you probably couldn’t predict what I’m going to wear.  Okay, I’m lying and you know I’m going for the peach-colored one but again, let’s assume equal probabilities for the sake of our example.  Now, say you wanted to predict what time I get to work and you knew I usually get in around 8, give or take 15 minutes.  You would put higher probabilities on times between 7:45 and 8:15 in that case and lower probabilities on times earlier or later than that interval.  And your distribution of probabilities might look symmetric like a normal distribution.  So in that scenario, you would probably be right if you predicted that I indeed arrive at the office between 7:45 and 8:15.  So, while there is randomness involved, it’s much less than in my wardrobe scenario, my partiality to the peach-colored suit notwithstanding.

Now, for something completely different, let’s say I’m not going into the office tomorrow because … wait for it … George Lucas is flying me into Los Angeles where we will be discussing a possible Order of the Dimensions movie deal.  Now, the distribution of probabilities associated with that is most likely very skewed, like a gamma distribution would be, where the probability of that not happening is much, much higher than of it happening.  So if you predicted that it’s not gonna happen, you were probably, most likely, unfortunately right.  So we could say that this scenario would constitute the least random case of all the cases I described.  So there you have it.  Just a bit about randomness and how it can be related to the distribution of probabilities.  Join me next time as I continue this discussion, talking about the random walk, after which we may introduce the Metropolis-Hastings algorithm and how it has nothing to do with the Metropolis movie.  Okay, maybe it does.  We’ll see.  Now what to wear to work next?  Hmmm …



See?  Just like the St. John at Neiman Marcus!  Well, okay, it would more likely look like this if I looked like that model — but you get the picture.
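To make the three scenarios concrete, here’s a small Python sketch (the 7.5-minute standard deviation and the unit gamma are values I made up to match the story): flat probabilities for the suits, a bell curve for the arrival times, and a right-skewed gamma for the movie deal.

```python
import random

rng = random.Random(2014)
n = 30000

# Wardrobe: three suits with equal probability, a flat (uniform) case,
# so the outcome is about as unpredictable as it gets.
suits = ["blue", "dark pink", "peach"]
peach_share = sum(rng.choice(suits) == "peach" for _ in range(n)) / n

# Arrival time: normal around 8:00 with an assumed 7.5-minute standard
# deviation, so most of the mass lands between 7:45 and 8:15.
arrivals = [rng.gauss(8.0, 0.125) for _ in range(n)]
on_time = sum(7.75 <= t <= 8.25 for t in arrivals) / n

# Movie deal: a right-skewed gamma, where the probability mass piles up
# near zero, so predicting "not happening" is almost always right.
deals = sorted(rng.gammavariate(1.0, 1.0) for _ in range(n))
median_deal, mean_deal = deals[n // 2], sum(deals) / n

print(round(peach_share, 2), round(on_time, 2), median_deal < mean_deal)
```

The last check (median below the mean) is a quick fingerprint of that right skew: most draws sit well below the average, just like most tomorrows don’t involve George Lucas.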

Well, first of all, you’re probably wondering what MCMC is, aren’t you?  Hmmm.  So I guess I’ll explain that first.  MCMC methods, or Markov Chain Monte Carlo methods, involve sampling from a posterior distribution.  And what the heck is a posterior distribution?  It’s a distribution derived from a prior distribution and something called the likelihood obtained from our data.  And what the heck is a prior distribution?  Are you still awake?  Are you still with me?  Just stay with me a bit longer and we’ll get to something cool.  I promise!  So a prior distribution gives you the 411 on any parameters you have based on any prior information you have.  But once you get more data, the way you see the parameters could change, so your posterior distribution may change.  And that’s the basis of Bayesian inference, by the way.  Now, how the heck does any of this relate to my books or the multiverse theory in general?
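That prior-to-posterior update is easiest to see with a toy coin-flip example (the Beta-Binomial pair is the classic conjugate choice, and the numbers here are made up): the prior’s Beta parameters just get bumped by the observed successes and failures.

```python
# Conjugate Beta-Binomial update: the posterior is the prior's Beta
# parameters plus the observed successes and failures.
def update_beta(prior_a, prior_b, successes, failures):
    return prior_a + successes, prior_b + failures

# Prior: Beta(2, 2), a mild belief that the coin is fair (mean 0.5).
a, b = 2, 2
print(a / (a + b))  # prior mean: 0.5

# Then we see data: 8 heads in 10 flips, and the posterior shifts.
a, b = update_beta(a, b, successes=8, failures=2)
print(a / (a + b))  # posterior mean: 10/14, about 0.714
```

Same parameter, same prior information, but once the data arrive, the way we see the parameter changes, which is the whole Bayesian idea in four lines.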

Well, I remember having dinner with my friend, Lynn, once and she was saying that one of the problems she had with the multiverse theory is that we simply cannot exist in all dimensions all the time.  And you know what?  She’s right!  How’s that, you say?  Well, I’ll tell you right now, I say.  Great, you say.  Okay then, I say.  So let’s say Person A gets together with Person B in one dimension and they have a Kid Z.  Why Z, you ask?  Well, I’m getting to that too, I say.  Okay, you say.  Now, let’s say Person A doesn’t get together with Person B in another dimension but with Person C.  Because we already know that A and B are Kid Z’s parents, we can deduce that he or she could not exist if A got together with C instead.  So A and C could have Kid X or Kid Y together, but not Kid Z.  In this instance, the parents make up the prior information and the probability of which kid will be born in a certain dimension could then be determined from the updated posterior distribution of information.  Like in my books, there is this character, Anton Zelov.  By the way, remember that name.  You’ll hear it all over the place in a few years.  And remember the guy who will play the role of Anton Zelov.  I believe Robert Pattinson might send him a fruit basket with a note saying, “God bless you, man!  They’re your problem now!”  No, I shouldn’t say that.  That’s sorta mean to fangirls.  Might be true, but mean.  Personally, I would love all my fangirls … if I ever get any, that is.  I’m a fangirl myself of Brian Greene, Lisa Randall, and Michio Kaku.  And of my husband, Joe Manganiello, of course.  Husband?  What?  Wait … was that last one out loud?  Um … anyway, there are some dimensions where Anton’s together with one chick and they have a son.  And there are other dimensions where he’s with another woman with whom he has one or two daughters, one or two depending on the realm they’re in.
And there are dimensions where he is married to the protagonist (of the first book, anyway), Jane Kremowski, but they have no children together.  So it’s sorta like an inter-dimensional Maury Povich episode.  Now, once we have our posterior distribution, we could randomly draw values according to the probabilities associated with that distribution.  But how about we cover randomness next time.  Now, if only I could randomly enter that dimension situated in Bermuda.  Did I mention how rough this Chicago winter has been?  Ah well — I guess I’ll just have to settle for this image now.



Until next time — wishing you margarita wishes and pina colada dreams!