Monday, December 31, 2012

Yudkowsky's friendly AI, and signalling.

Here's Eliezer Yudkowsky, a somewhat well-known blogger, outlining his beliefs about artificial intelligence.

The author's unawareness of just how much he falls into the categories that he himself rejects is rather amusing. For instance, at one point he says that good guys would not implement communism in an AI.

Yet what about him? He grew up in a democracy, with parents who thought they knew better than he did what he'd want when he grew up (I think most of us can recall parents saying that you'll appreciate something they force you to do once you're older). And his FAI concept follows a mixture of those principles, with the addition of a very simplistic form of utilitarianism. Politically, it is of course about as neutral as a hypothetical perfect Chinese socialist AI. Most people at that point would notice that they are implementing a political system, and would try to argue why their political AI is better than the hypothetical socialist one; not Yudkowsky. He appears blissfully unaware that his political views are political at all; he won't even cooperate with your argument by defending his political views, because he doesn't see them as needing a defense.

More of that here, where he disses with great contempt other AI researchers of competence comparable to what would naturally be assumed of him.

This appears to be a kind of signalling. Dissing a category that you naturally belong to is a very easy and dirty way to convince some of your listeners, and perhaps yourself, that you do not belong to it. Think how much hard work someone would have to do to positively convince you that he is above the typical AI crackpot, let alone a visionary with a valid plan for a safer AI! He'd have to invent something like an algorithm for a self-driving car, a computer vision system, or something else that crackpots can't do. He'd have to have a track record as a visionary: something that actually works, or several such things. That level of ability is quite rare, and that's why it is hard to demonstrate. But one can simply bypass all this and diss typical AI researchers, and a few followers will assume exceptional competence in the field of AI.