Karl Gallagher (selenite) wrote,

Superintelligence

I read Nick Bostrom's Superintelligence as research for a writing project. It's a great overview of current thought in the "Friendly AI" community--people wanting to make sure smarter-than-human computers won't look at us as raw materials. I completely agree with his analysis of the certainty of developing a superintelligence at some point in the future. The discussion of different routes to creating one was informative.

The bulk of the book discusses the dangers of an uncontrolled AI and how to mitigate them. The dangers are real, but Bostrom overlooks many tools that already exist for dealing with those problems.

The first malignant failure mode he considers is "perverse instantiation." That's jargon for the AI carrying out its orders in a literal fashion that defies the intent of the master. For examples, see Luke Muehlhauser's Facing the Intelligence Explosion or any story about a genie popping out of a lamp. The discussion of the problem consists of iterating on an order, with each more detailed version still producing undesired results.

This is not new. Nor is it limited to AIs and genies. It defines my day job. The government is giving this corporation over a trillion dollars to carry out a very specific task. Phrasing that command as a single sentence, no matter how run-on, would end in disaster. So we have a contract that goes for hundreds of pages, whose interpretation is bounded by the thousands of pages of the Federal Acquisition Regulations. And that's further restricted by laws, from the Federal government's to that of the city where the factory is located. The original contract goes into great detail. For a readable example, look at how the Pentagon buys brownies and contrast that with the recipe you'd use to make brownies.

This is how you control an amoral entity to follow your orders.

An AI needs to have a set of ground rules to obey no matter what the current orders are. Call them laws or commandments or regulations, as long as they keep it from inflicting damage on bystanders. This will result in many orders to an AI getting the response "Cannot comply: Regulation 723a(iv)3 would be violated." This is a good thing.
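The ground-rules idea can be sketched in a few lines: every order is checked against a fixed rulebook before execution, and any violation produces a refusal that cites the rule. This is a minimal illustration, not anything from the book; the rule identifier and the `affects` field are made up for the example.

```python
# Hypothetical rulebook: each entry maps a regulation ID to a predicate
# that returns True when an order would violate that regulation.
RULEBOOK = {
    "723a(iv)3": lambda order: "bystander" in order.get("affects", ()),
}

def execute(order):
    """Check an order against every ground rule before carrying it out."""
    for rule_id, violates in RULEBOOK.items():
        if violates(order):
            return f"Cannot comply: Regulation {rule_id} would be violated."
    return f"Executing: {order['task']}"

# An order that would harm a bystander gets refused with a citation;
# a harmless version of the same order goes through.
print(execute({"task": "build paperclips", "affects": ("bystander",)}))
print(execute({"task": "build paperclips", "affects": ()}))
```

The point of the structure is that the rulebook is checked before any order runs, so the master's phrasing never has to anticipate every harm itself.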

Writing the AI Code would be tough, but there are plenty of contract lawyers and systems engineers with experience in these problems. Bostrom might want to bring some in as guest lecturers.

"Infrastructure profusion" is Bostrom's term for the AI grabbing all available atoms to turn into computer processors, or paperclips, or whatever the AI has been told to maximize. This can be a nightmare scenario. As Eliezer Yudkowsky puts it, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

But again, we have existing procedures for dealing with this problem. Property law. If it's not yours, you can't mess with it. Instead of seizing resources it doesn't own, the AI's new error message would be to output a shopping list. (Giving an AI eminent domain authority would be a nightmare scenario.)
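The property-law constraint amounts to a simple split: the AI may consume only what it owns, and everything else becomes a purchase request rather than a seizure. A minimal sketch, with all names and quantities invented for illustration:

```python
def plan_acquisition(needed, owned):
    """Split required resources into usable owned stock and a shopping list.

    needed -- dict of resource name to quantity required
    owned  -- dict of resource name to quantity the AI legally owns
    """
    # Use only what is owned, up to what is needed.
    use = {r: min(n, owned.get(r, 0)) for r, n in needed.items()}
    # Any shortfall must be bought, never taken.
    shopping_list = {r: n - use[r] for r, n in needed.items() if n > use[r]}
    return use, shopping_list

use, buy = plan_acquisition({"steel": 10, "copper": 4}, {"steel": 3})
print(use)  # owned stock it may consume
print(buy)  # shortfall it must request, not seize
```

The design choice is that the ownership check happens in the planner itself, so "grab all available atoms" is never a legal plan.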

The third malignant failure mode is "mind crime." If you order an AI to "make me happy" it could solve the problem by inserting an electrode to artificially stimulate the pleasure center of your brain. Less vague orders could still be short-circuited by altering the master's mental state.

This is what we have criminal law for. Sure, it would be easier for us to get the government to sign off on a delivery by kidnapping the contract officer's children and holding them hostage until he signs the DD250 form. But that's illegal and immoral. So instead we keep fiddling with the airplane until it works.

Translating that into an AI-understandable form will take work. But there are plenty of criminal lawyers experienced in finding loopholes who can work on the project.

Bostrom had an interesting digression near the end of the book on research funding priorities. It amused the hell out of me. It's the ultimate academic power grab. He made a case for transferring all research funding to algorithmic AI research. Literature department? Once we have a superintelligence all those questions will be instantly answered, so really supporting AI is the fastest way to reach their goals. Neurological imaging? Could lead to unsafe AI, so best to divert that to algorithmic AI research. He doesn't actually come out and ask for the entire university budget to be transferred to his department. He just justifies it in case anyone else wants to start that firestorm.

Disagreements aside, I strongly recommend this book for anyone interested in a serious look at the future of artificial intelligence. Bostrom is an expert and surveys potential futures in detail.
Tags: books