
Scientists Aren’t Sure Humans Could Control Superintelligent Machines, Even if We Built Them


Humans build a computer, or some other artificial intelligence, that grows too smart and wily for us to control, and the resulting war between humans and machines all but destroys the earth. It's been the plot of many a science fiction film.

Is it likely, though? Would humans win? Are we doomed to eventually slave away for the superintelligent robots we accidentally designed and unleashed?

Recent projections say…most likely, yes.


A new study out of Germany's Max Planck Institute for Human Development found that any "superintelligent" AI would be impossible for humans to contain.

A superintelligent AI, according to the Berlin-based institute, is one that exceeds human intelligence and can teach itself new things that humans cannot fully grasp.

Mathematicians, for example, already lean on machine learning to chase down outliers in famous proofs, and scientists use machines smarter than they are to identify molecules that could be candidates for treating disease.


The bottom line is that it just makes sense to use computers to get through billions of calculations in a few days, whereas the same work could take human brains a decade or more.
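For a rough sense of that scale, here's the back-of-envelope arithmetic in Python. The human rate below is my own generous assumption, not a figure from the study:

```python
# Back-of-envelope check of the "decade or more" claim.
# The rate is an illustrative assumption, not a figure from the study.
calculations = 1_000_000_000          # a billion calculations
per_second = 1                        # a human doing one calculation per second
seconds_per_year = 60 * 60 * 24 * 365

years = calculations / per_second / seconds_per_year
print(f"About {years:.0f} years of nonstop work")   # ~32 years
```

Even at one calculation per second, around the clock with no sleep, a billion calculations would eat up about 32 years of a human life.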

That said, the existence of these machines has bothered researchers for some time, for obvious reasons, and the press release from the Planck study revealed that they’re right to be concerned.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Isaac Asimov’s Three Laws of Robotics have become instrumental to how we think we could protect ourselves from rogue AI, or AI that begins to come up with evil plans of its own. The laws dictate that a robot may not harm a human being, or through inaction allow one to come to harm, but humans remain fearful of this type of AI being able to teach itself whatever it wants.

Because, of course, we don’t have any real way to enforce these laws across the board.

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”

In short, we don’t have the brain capacity to contain a superintelligent AI, and we also have no assurance that we’ll be able to figure out how to talk to the dang thing, or meet it on any kind of battlefield we’ve ever imagined.
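The logic behind that quote is essentially Alan Turing's famous halting problem. Here's a minimal Python sketch of the classic diagonalization argument the paper builds on; the names `is_harmful` and `troublemaker` are my own illustrations, not code from the study:

```python
# Sketch of the diagonalization behind the paper's argument.
# Suppose, for contradiction, a perfect containment checker existed:
# is_harmful(program, data) returns True exactly when program(data)
# would go on to harm humans. The paper reduces this to the halting
# problem, which Turing proved no algorithm can decide.

def is_harmful(program, data):
    """Hypothetical perfect containment oracle. No such function can exist."""
    raise NotImplementedError("undecidable in general")

def troublemaker(program):
    if is_harmful(program, program):   # ask the oracle about the program
        return                         # ...then behave, if it predicted harm
    while True:                        # ...or "do harm" forever, if it
        pass                           #    predicted the program was safe

# Now consider is_harmful(troublemaker, troublemaker):
#   * If it returns True, troublemaker returns immediately -- harmless.
#   * If it returns False, troublemaker loops "harmfully" forever.
# Either answer contradicts the oracle's own prediction.
```

Whatever the checker answers, the troublemaker does the opposite, which is what the authors mean when they say strict containment runs into "fundamental limits inherent to computing itself."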

The programming language itself won’t be made by humans, so will we be able to understand it? To communicate?


All of this sounds scary, but hopefully the findings will give scientists the chance to come up with some sort of backup plan.

If it doesn’t work, well…maybe start making nice with your Roomba now. You never know when you might be cleaning its floors instead of the other way around.