Tuesday, February 5, 2013

Algorithmic probability: an explanation for programmers

Suppose you do a double slit experiment using a low-luminosity light source and a night vision device for the viewer.

You see a flash of light - a photon has hit the detector. Then you see another flash. If you mark the flashes on grid paper, over time you will see something like the image on the right:

Suppose you want to predict your future observations. Doing so is called 'induction' - inferring future observations from past observations.

A fairly general, formalized way to do so is the following. You have a hypothesis pool, initially consisting of all possible hypotheses. Each hypothesis is a computer program that outputs a movie (like computer demos). As you look through the night vision device and see flashes of light, you discard every hypothesis whose movie did not match your observations so far.
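The filtering step can be sketched in a few lines of Python. This is a toy model, not real Solomonoff induction: the "interpreter" here is a made-up stand-in (it just repeats the program's bits cyclically), where the real construction would run each bit string on a universal Turing machine.

```python
# Toy sketch of the hypothesis pool: every bit string up to some
# length is a "demo"; a made-up interpreter turns it into an output
# stream, and we keep only the demos whose output matches the
# observations so far.

def run(program_bits, n_out):
    """Hypothetical stand-in for a real interpreter: repeat the
    program's bits cyclically to produce n_out output bits."""
    return [program_bits[i % len(program_bits)] for i in range(n_out)]

def surviving_demos(observed, max_len):
    pool = []
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            bits = [(code >> i) & 1 for i in range(length)]
            if run(bits, len(observed)) == list(observed):
                pool.append(bits)
    return pool

# After observing 0,1,0,1 only demos whose (cyclic) output starts
# with that sequence survive:
print(surviving_demos([0, 1, 0, 1], max_len=4))  # [[0, 1], [0, 1, 0, 1]]
```

Note that both a short demo and a longer one survive; the next paragraph explains why the shorter one should count for more.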

To predict the observations, you can look at what the movies show after the current time. But those movies will show different things - how do you pick the most likely one? Note that among the 1024-bit demos (yes, you can write a graphics program in 128 bytes), a 1023-bit demo will appear twice - once followed by 1 and once followed by 0 (it does not matter what comes after a program); a 1000-bit demo will appear roughly 16 million times, and so on. Thus the movies resulting from shorter demos will be more common - the commonality of a movie will be 2^-l, where l is the bit length of the demo.
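The counting argument above is simple enough to check directly: an l-bit demo appears once for each way of filling in the remaining don't-care bits.

```python
# Counting check for the 2^-l prior: among all 1024-bit strings,
# a demo whose meaningful content is l bits appears once per way
# of filling in the remaining 1024 - l "don't care" bits.

TOTAL_BITS = 1024

def appearances(l):
    return 2 ** (TOTAL_BITS - l)

print(appearances(1023))   # 2
print(appearances(1000))   # 16777216, roughly 16 million
# The relative weight of an l-bit demo is
# appearances(l) / 2**TOTAL_BITS, i.e. exactly 2**-l.
```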

This is the basic idea behind algorithmic probability, with the distinction that algorithmic probability uses a Turing machine (to which your computer is roughly equivalent, apart from its limited memory).

It is interesting to speculate about what those demos may be calculating. Very simple demos can look like a sequence of draw_point(132,631); skipframes(21); draw_point(392,117); and so on, hard-coding the points. Demos of this kind which haven't been thrown away yet grow in size in proportion to the number of points seen on the screen.

We can do better. The distribution of points in the double slit experiment is not uniform. Demos that map a larger fraction of random bit sequences to the more common points on the screen, and a smaller fraction to the less common points, will most often produce the observed distribution. For instance, to produce a Gaussian distribution, you can count the bits set to 1 in a long sequence of random bits.
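A minimal sketch of that bit-counting trick: the count of 1s in n fair bits is binomially distributed, which for large n is close to a Gaussian centred at n/2 with spread sqrt(n)/2.

```python
import random
from collections import Counter

# A demo that turns a stream of random bits into an approximately
# Gaussian sample: count the 1s in a long run of fair bits.

def gaussian_ish_sample(n_bits, rng):
    return sum(rng.getrandbits(1) for _ in range(n_bits))

rng = random.Random(0)
samples = [gaussian_ish_sample(100, rng) for _ in range(10000)]
counts = Counter(samples)

# The histogram peaks near 50 with spread ~ sqrt(100)/2 = 5:
for k in range(40, 61, 5):
    print(k, counts[k])
```

No probability is ever written down explicitly; the shape of the distribution comes entirely from how many bit strings map to each count.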

Demos of this sort can resemble quantum mechanics, where the solution would be obtained by calculating complex amplitudes at the image points and then converting them into a probability distribution by applying the Born rule.
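As a hedged sketch of what such a demo might look like: sum two complex phases (one per slit, small-angle approximation), square the magnitude (Born rule), and use the random bits to pick one flash. The slit separation, wavelength, and screen distance below are arbitrary illustration values, not taken from any real setup.

```python
import cmath
import random

def born_probabilities(positions, slit_sep=1.0, wavelength=0.1, screen_dist=5.0):
    """Two-slit interference pattern: amplitude at each screen
    position is the sum of two unit phases, one per path."""
    probs = []
    for x in positions:
        d1 = ((x - slit_sep / 2) ** 2 + screen_dist ** 2) ** 0.5
        d2 = ((x + slit_sep / 2) ** 2 + screen_dist ** 2) ** 0.5
        amp = cmath.exp(2j * cmath.pi * d1 / wavelength) + \
              cmath.exp(2j * cmath.pi * d2 / wavelength)
        probs.append(abs(amp) ** 2)          # Born rule
    total = sum(probs)
    return [p / total for p in probs]

positions = [i / 100 for i in range(-200, 201)]
probs = born_probabilities(positions)

# The demo's final step: spend random bits to pick ONE flash position.
rng = random.Random(1)
flash = rng.choices(positions, weights=probs)[0]
print(flash)
```

The last two lines are the part the next paragraphs dwell on: the demo does not output the distribution, it outputs a single flash drawn from it.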

Can we do better still? Can we discard the part where we pick just one point? Can we just output a screen of probabilities?

Not within our framework. First off, Turing machines do not output or process real numbers at all; real numbers have to be approximately encoded in some manner. (If you ever hear of a Solomonoff induction code 'assigning' probabilities, this works through the code converting subsequent random bit strings into specific observations, producing, in the limit, more of some observations than others.) Second, we haven't observed a screen of probabilities: when you toss a coin once, you don't see a 50% probability of it landing heads, you observe heads (or tails). What we have actually seen was flashes of light at well-defined points. The part of the demo that picks a single point to draw a flash at is as important as the part that finds the probability distribution of those points. Those parts are not even separate in our Gaussian distribution example, where the probability distribution is never explicitly computed but instead arises from counting the set bits.
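The parenthetical point about codes 'assigning' probabilities is worth a concrete example: a program can give an outcome probability 3/4 without ever representing the number 3/4, simply by mapping three of the four equally likely two-bit strings to that outcome.

```python
# How a code "assigns" a probability without printing real numbers:
# it consumes random bits and maps more of the possible bit strings
# to one outcome than to the other.  Here two random bits become
# "heads" unless both are 1, so heads gets 3 of the 4 equally
# likely strings -- probability 3/4, never computed explicitly.

def coin_from_bits(b0, b1):
    return "tails" if (b0, b1) == (1, 1) else "heads"

outcomes = [coin_from_bits(b0, b1) for b0 in (0, 1) for b1 in (0, 1)]
print(outcomes.count("heads") / len(outcomes))   # 0.75
```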

Can we output a multitude of predictions with one code and then look through them for the correct ones? Not without losing predictive power - we would end up with a very short demo that outputs the sequence from a counter, eventually emitting every possible video sequence and therefore predicting nothing.

[This blog post is a work in progress; todo is to take photos of double slit experiment using a laser pointer, and to make other illustrations as appropriate]

Saturday, February 2, 2013

On cryopreservation of Kim Suozzi


With the inevitable end in sight – and with the cancer continuing to spread throughout her brain – Kim made the brave choice to refuse food and fluids. Even so, it took around 11 days before her body stopped functioning. Around 6:00 am on Thursday January 17, 2013, Alcor was alerted that Kim had stopped breathing. Because Kim’s steadfast boyfriend and family had located Kim just a few minutes away from Alcor, Medical Response Director Aaron Drake arrived almost immediately, followed minutes later by Max More, then two well-trained Alcor volunteers. As soon as a hospice nurse had pronounced clinical death, we began our standard procedures. Stabilization, transport, surgery, and perfusion all went smoothly. A full case report will be forthcoming.
The hidden horror of this defies imagination. There's suffering, there's anxiety, there's fear, and there's the burden of choice - anxiously reading about the procedures, second-guessing yourself: are you deluding yourself, are you grasping at straws, or are you being rational? When is the tipping point at which you'll refuse life support? This kind of decision hurts. I really hope that she simply believed it to be worth a shot and did not have to suffer from the ambiguity of the evidence, but I don't know, and it's scary to imagine facing such choices.

Then, if she is awakened - sadly, it seems exceedingly likely that almost everyone she knew will be dead or aged beyond recognition - it is exceedingly likely that she will largely not be herself. No one has ever checked whether the vitrification solution reaches the whole of a human brain; the parts it does not reach will be shredded, and in the parts it does reach, proteins denature. Tossing a book of pictures into a bucket of solvents is not a good way to preserve its contents when you don't know what the paint is made of, especially if the parts of the book not reached by the solvents are then shredded.

If you're an altruist, well, there are a lot of people to save via more reliable, less mentally painful, vastly cheaper methods to which the majority of the population lacks access. Altruism doesn't explain cryonics promotion. Selfishness does: you believe in cryonics and want confirmation, cryonics is your trade so you pitch it, you're signed up and need volunteers to test methods for your own freezing, or you want to feel exclusive and special... A lot of motives, but altruism is not one of them. Promoting cryonics, like promoting any expensive, unproven, highly dubious medical procedure, is not a good thing to do. And as beliefs go, "you'll almost certainly die but there's a chance you won't" is not the most comforting one.

Speaking of brains: currently, among other things, I develop software for viewing serial block-face microscopy data, on a contract. This is my private opinion, of course - I am not a neurologist; my main specialization is graphics, and I look at neurons to tune and test the software. I don't quite know what all the little bits around them are - I look at one and wonder, what is it? Then I re-read a description by one of the other people on the project, and think, ohh, I think it's a mitochondrion inside a dendrite. And then I wonder: why is it here? Does it matter where it is? What is this thing that connects it to the wall? Is it some weird imaging artifact? I do not claim to speak for everyone. I'm doing my part, which, among other things, can help figure out how to preserve brains or how to digitize them.

And my opinion, in TL;DR form, is: "do not promote cryonics for use by humans now". If you want to promote something, malaria vaccines are a good idea; if you want to defend something controversial, there's DDT to kill mosquitoes; if you want to defend something overly fancy, there's the mosquito-shooting laser. And that's just malaria. There are plenty of other diseases with well-proven cures that aren't available to everyone.

When we have a better understanding of the brain, preservation will almost certainly be cheap and chemical rather than cryogenic. Cryogenic preservation requires pumping the brain full of a vitrification solution concentrated enough to prevent ice from forming even at the slow cooling speed of an object as big as a human brain. Those concentrations denature proteins, distort things, and likely detach things. It is more rational to find the right set of chemical fixatives than to use solvents at denaturing concentrations - especially if the solvents do not even reach the whole brain. Liquid nitrogen is wrong too: it is much too cold, different parts of the brain have different thermal expansion coefficients, and the whole thing cracks as it cools from the glass transition temperature down to liquid nitrogen temperature. One could write pages about such issues.