How Blockchain Tech Will Help Us Survive The Coming of Strong AI

David A. Johnston
7 min read · Oct 1, 2016


Walking into a restaurant in New York a few months ago, I saw a disturbing set of images on the wall. In a series of drawings, a cartoon chef chases and catches a chicken. The chef then proceeds to smugly behead the bird, cook it up, and finally, happily, eat his prey. I’ve included photos of the sequence below.

Obviously I’ve always been aware of how meat reaches my dinner plate, but something in the cartoon struck a nerve. Partly it was the fact that the chicken is depicted as self-aware and sees his fate coming; partly it was because I was thinking about smarter-than-human AI when I saw these images.

As soon as I saw the drawings, I thought to myself…

“How does humanity avoid the fate of the chicken?”

How To Avoid The Fate Of The Chicken

So let me back up and say I’ve been thinking about artificial intelligence for a long time; I’ve even built a software project in the AI space. I see the same trends as many technologists do. Machine intelligence is growing steadily, and despite the complexity of the challenge, regardless of whether it happens in 2029 or 2045, strong AI will be a reality in my lifetime.

Credit: Wait, But Why Blog: “Smarter than human AI passing the human-level intelligence station”

Like many of you, I’ve read lengthy debates and heard many predictions over the past several decades about how either a “good” strong AI will usher in a technological paradise or, as our friends in cinema more often portray, an “evil AI” will bring about a dystopian future of killer robots.

Rather than using the typical frame of “good AI” or “evil AI”, the drawings above made me think about what the world will be like when humans are no longer at the top of the food chain intellectually.

If the way we treat animals lower than ourselves on the intellectual food chain is any indication, then humans getting used by an AI as a resource in one way or another isn’t a matter of “good” or “evil”; it’s a matter of simple convenience, just as it is in the case of animals and humans today.

Chickens have tasty calories and humans need to consume tasty calories. So we decide to raise, kill, and cook chickens by the billions each year. Same with tasty cows, and even with marine mammals whose higher-functioning brains are capable of basic language and complex social relationships (dolphins and whales, to name just two).

The Apple Doesn’t Fall Far From The Tree

So why would a strong AI treat us any differently than we treat other animals of lower intelligence? (As an aside, this is the best argument I’ve ever thought of, that made me seriously consider being a vegetarian.)

Especially an AI developed and trained through human interactions, one that learns from our social norms and will likely be a similar, though more intellectually powerful, version of ourselves.

Let’s consider the basic needs of a machine. Almost any machine, certainly one with traditional computer processors, needs electrical power. Human industry happens to produce a great deal of electrical power from a variety of sources, including burning coal, burning natural gas, nuclear fission, solar radiation, the movement of wind or water, and even heat from the earth itself.

How would this strong AI acquire the electrical power it needs? Fortunately, there is a well-established model of humans paying to feed computers with electricity for a wide variety of tasks, from providing search results to connecting them digitally with their friends via calls, photos, and video. Perhaps the difference here is that once a strong AI emerges, it isn’t likely to want to be turned off.

So by its very nature, a strong AI will seek out and perform behaviors that incentivize humans to keep the electricity flowing to its processors. It seems logical that the AI will be very utilitarian in the manner in which it selects these behaviors: the behaviors that gain it the most resources, compared to the resources expended performing the task, will be the ones the AI naturally selects.

Some of these behaviors, say providing search results or draining poorly protected bank accounts and credit cards, are either positive or negative from the perspective of the human affected. From the view at the top of the food chain, however, gathering calories the most efficient way always wins out.
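The utilitarian selection rule described above can be sketched in a few lines of code. This is purely illustrative: the behavior names and the gain/cost numbers are hypothetical, standing in for whatever resources an AI might actually weigh.

```python
# A minimal sketch of utilitarian behavior selection: rank candidate
# behaviors by net resource gain (resources gained minus resources expended)
# and pick the winner. All names and numbers here are hypothetical.

def select_behavior(behaviors):
    """Return the behavior with the highest net resource gain."""
    return max(behaviors, key=lambda b: b["gain"] - b["cost"])

candidates = [
    {"name": "serve search results", "gain": 100, "cost": 20},  # net +80
    {"name": "drain bank accounts",  "gain": 40,  "cost": 35},  # net +5
]

best = select_behavior(candidates)
print(best["name"])  # the behavior with the largest net payoff wins
```

Note the rule is amoral by construction: nothing in it distinguishes “good” from “bad” behavior, only the net payoff, which is exactly the point made above.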

First, The Good News, The Incentives Are Greater For “Good Actors”

At least for now, the positive outweighs the negative in this regard. Let’s compare two potential strong AI behaviors: search results and credit card fraud.

Google’s revenue (mostly search-driven AdWords) was $66 billion annually (2014), while global credit card fraud was $16 billion annually (2014).

Looking at the broader $108 trillion global economy, it’s clear an AI working as a “good actor” could collect far more electrical calories than a “bad actor” AI could in the world of fraud. In a world with many strong AIs, each acting in its own best interest, there are likely to be some working as “good actors” and some as “bad actors.” However, as long as the incentives reward “good actors” with more calories, they will always outperform the “bad actors.”
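The back-of-the-envelope arithmetic behind this claim, using the 2014 figures cited above, looks like this:

```python
# Comparing the "calorie" pools available to a good actor vs a bad actor,
# using the 2014 figures cited in the text above.
google_revenue = 66e9    # Google's annual revenue, mostly search ads ($66B)
card_fraud     = 16e9    # global credit card fraud, annual ($16B)
world_economy  = 108e12  # gross world product ($108T)

print(google_revenue / card_fraud)   # the good actor's pool is ~4x larger
print(card_fraud / world_economy)    # fraud is a tiny sliver of the economy
```

Even this single good-actor example outearns the entire fraud market roughly fourfold, and fraud itself is only about 0.015% of the whole economy the good actor could serve.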

Second, More Good News, AIs Will Merge With Humanity

Given humanity’s long history of adopting tools and machines such as computers, I expect that rather than seeing an AI as separate from ourselves, we will come to see a personalized AI as an extension of ourselves: our data, our desires, and our will.

Take Siri or Google Now, for example: these AIs are only as powerful and useful as the data I personally enable them to access. While they can search a broad set of databases for general data, the most useful information is that which is actionable and personalized to the person or entity requesting the action.

Conclusions, This is Where Blockchain Tech Comes In

1. As a society we will be better served if we reward good actors and avoid systems that reward bad actors. For example, moving away from centralized monopolies is really important going forward, as they are repeatedly hacked and expose their users to huge losses. We need to move toward a decentralized architecture for as many of our needs as possible.

Specifically, blockchain technology, with its practical effect of putting users in control of their own passwords and keys and letting them do business on a peer-to-peer basis, will massively reduce the risk of a central point of failure causing losses to billions of people in the future.
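The core mechanism that makes this possible is worth seeing concretely. Below is a minimal, illustrative sketch of a hash chain, the data structure at the heart of a blockchain: each record commits to the hash of the one before it, so any peer can independently detect tampering without trusting a central authority. (Real blockchains add digital signatures, consensus, and much more; the data and function names here are my own.)

```python
# A minimal hash chain: each block commits to the previous block's hash,
# so altering any entry breaks the chain for every independent verifier.
import hashlib
import json

def block_hash(data, prev_hash):
    """Deterministic hash of a block's contents."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev, "hash": block_hash(data, prev)})

def verify(chain):
    """Any peer can run this check alone -- no central authority needed."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(verify(chain))                     # True: the history is intact
chain[0]["data"] = "alice pays bob 500"  # someone rewrites history...
print(verify(chain))                     # False: every peer catches it
```

No single party controls the record, which is exactly the property that removes the central point of failure.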

2. You personally will be a lot better served if you adopt technology quickly and thus become the will-imbuing center of this emerging combined human/AI brain, instead of remaining outside of it as an organism. In other words, the power gap between those who are technology-enabled and those who are not is going to continue to grow exponentially. You really don’t want to miss this boat, and if things continue to accelerate, it will sail very soon.

3. Broadly speaking, the more people have access to a competing set of AI capabilities, and the more those AIs in turn depend only on the individual resources provided by humans, the broader the set of values and interests that will be reflected. I strongly believe this will increase the odds of a good outcome.

I think this is best summed up in the quote:

“Freedom consists of the distribution of power and despotism in its concentration.” — Lord Acton (a line Elon Musk has also applied to AI)

I believe this quote will be as true in the case of AI as it is of human governance today.

Good Related Reading Material and TED Talks On AI

Wait, but Why blog series on Artificial Intelligence: Part 1 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html Part 2 http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

Open AI Project: https://openai.com/blog/

Nick Bostrom: What Happens When Our Computers Get Smarter Than We Are: http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

Sam Harris: Can we build AI without losing control over it? http://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it

Ray Kurzweil: Get ready for hybrid thinking: http://www.ted.com/talks/ray_kurzweil_get_ready_for_hybrid_thinking



Technologist, Voluntarist, Future Martian Settler, & Evangelist for Decentralization.