Monday, December 31, 2012

Yudkowsky's friendly AI, and signalling.

Here's Eliezer Yudkowsky, a somewhat well-known blogger, outlining his beliefs about artificial intelligence.

The author's unawareness of just how much he falls into the categories that he himself rejects is rather amusing. For instance, at one point he says that good guys would not implement communism into an AI.

Yet what about him? He grew up in a democracy, with parents who thought they knew better than he did what he'd want when he grew up (I think most of us can recall parents saying you'll appreciate something they force you to do, once you're grown). His FAI concept follows a mixture of those principles, with the addition of a very simplistic form of utilitarianism. Politically, it is of course about as neutral as a hypothetical Chinese perfect-socialism AI. Most people at that point would notice that they are implementing a political system, and would try to argue why their political AI is better than the hypothetical socialist one; not Yudkowsky. This guy appears blissfully unaware that his political views are political at all; he won't even cooperate with your argument by defending his political views, because he doesn't recognize that he holds any.

More of that here, where he disses with great contempt other AI researchers of competence comparable to what would naturally be assumed of him.

This appears to be a kind of signalling. Dissing a category that you naturally belong to is a very easy, dirty way to convince some of your listeners, and perhaps yourself, that you do not belong to it. Think how much hard work someone would have to do to positively convince you that he is above the typical AI crackpot, let alone a visionary with a valid plan for safer AI! He'd have to invent something like an algorithm for a self-driving car, a computer vision system, or something else that crackpots can't do. He'd have to have a track record as a visionary: something that actually works, or several such things. That level of ability is quite rare, which is why it is hard to demonstrate. But one can simply bypass all that and diss typical AI researchers instead, and a few followers will assume exceptional competence in the field of AI.

1 comment:

  1. I couldn't agree more. This guy's writings belong more in the Annals of Abnormal Psychology than in any putative outline of practical AI. He has other crank beliefs as well, chiefly among them his "theories" on human nutrition. I have no doubt this is motivated by a Kurzweilian desire to live long enough to merge with the AIs he's proposing, via some sort of cybernetic augmentation or even direct mind "uploading". His "understanding", if I can deign to call it that, of Gödel's theorems is also driven by the same perversion, in that he denies well-known consequences of said theorems, i.e., refusing to acknowledge that the human mind may not be a Turing machine and might not be replicable via algorithms of any type. Even reading EY's convoluted and disorganized ramblings is enough to give me a dull headache after about five minutes. The word "illucid" comes to mind. All of this would be bad enough on its own, but the fact that MIRI (the cult he co-founded) is funded to the tune of over a million dollars a year is proof enough that if you can tell desperate people what they want to hear and play the role of the eccentric genius, then you too can be a successful con artist!

    ReplyDelete