Monday, December 26, 2016

Superintelligence


This talk by Maciej Cegłowski, which reached me through a chain of links, is one of the most interesting things I've seen on the web in a long time. I warn you that it's long, but it's not boring. The topic is superintelligence, the idea that computer-based intelligence is growing so fast that it will soon be dominant in our society. The proponents of this idea think it is a wonderful thing. Our author is not so sure.

Here are some short excerpts:
AI Cosplay
The most harmful social effect of AI anxiety is something I call AI cosplay. People who are genuinely persuaded that AI is real and imminent begin behaving like their fantasy of what a hyperintelligent AI would do.
In his book, Bostrom lists six things an AI would have to master to take over the world:
  • Intelligence Amplification
  • Strategizing
  • Social manipulation
  • Hacking
  • Technology research
  • Economic productivity
If you look at AI believers in Silicon Valley, this is the quasi-sociopathic checklist they themselves seem to be working from.
Sam Altman, the man who runs YCombinator, is my favorite example of this archetype. He seems entranced by the idea of reinventing the world from scratch, maximizing impact and personal productivity. He has assigned teams to work on reinventing cities, and is doing secret behind-the-scenes political work to swing the election.
Such cloak-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don't like to be manipulated. You can't tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.
I've even seen people in the so-called rationalist community refer to people who they don't think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.
So I work in an industry where the self-professed rationalists are the craziest ones of all. It's getting me down.

Incentivizing Crazy

This whole field of "study" incentivizes crazy.

One of the hallmarks of deep thinking in AI risk is that the more outlandish your ideas, the more credibility it gives you among other enthusiasts. It shows that you have the courage to follow these trains of thought all the way to the last station.
Ray Kurzweil, who believes he will never die, has been a Google employee for several years now and is presumably working on that problem.

There are a lot of people in Silicon Valley working on truly crazy projects under the cover of money.

Religion 2.0
What it really is is a form of religion. People have called a belief in a technological Singularity the "nerd Apocalypse", and it's true.

It's a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith.
The AI has all the attributes of God: it's omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.

Like in any religion, there's even a feeling of urgency. You have to act now! The fate of the world is in the balance!

And of course, they need money!

Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

