Maybe “fear” is the wrong word. The wrong frame of mind. What do we do when we fear something? We run from it, or we try to fight it. As we confront fast-progressing, increasingly sophisticated artificial intelligence, is either of these the right response?
What Is A.I.?
The definition of Artificial Intelligence is hotly debated. The simplest and most widely accepted definition we found is this: technology that imitates human thought to solve problems. It’s like a human intellect, but it’s man-made, thus artificial. The intellect is real; the creation is artificial.
Next time you see a “list of recommended items” on Amazon, you’re seeing A.I. software at work. Next time you ask Siri or Alexa for the weather forecast in Toronto or Montreal or New York, you’ll be using it. Your virtual assistant has to listen, process, and speak. This is A.I. — and it’s all around us.
But think about a more complex application. Think about teaching a driverless car the rules of the road. Astro Teller, director of Google X Labs, asks us to “Imagine you’re part of the self-driving car team.” Your job: to teach it to see, categorize, and respond to other cars, pedestrians, obstacles, and objects on and near the road — like bicycles.
What is a Bike?
“You have to figure out what’s a bicycle, and when they make a hand gesture, whether they’re signalling a turn.” But what’s a bike? What color is a bike? How big are the wheels? Is a unicycle a bike? A tandem model? What if it has a big basket or a tow-behind trailer for the kids? Is it still a bike? ... You get the idea.
Now imagine writing a list of rules that answer these questions (and so many more). Impossible! There are too many variations, too many possibilities and exceptions. So Teller has to teach the machine to learn what a bike is. His team shows the computer tens of thousands of images of bicycles and tens of thousands of images of things that are not bicycles. Over and over.
Then the machine learns; it writes its own rules.
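The idea of learning from labeled examples rather than hand-written rules can be sketched in a few lines of code. This is a deliberately toy illustration, not anything like Teller’s actual system: the feature vectors, labels, and the simple nearest-centroid rule are all assumptions made up for the example.

```python
# Toy "learning by example": instead of hand-writing rules for what a
# bicycle is, we give the program labeled examples and let it derive
# its own decision rule. All features and labels here are hypothetical.

def train_centroids(examples):
    """Average the feature vectors of each label into a 'centroid'."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose learned centroid is closest to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical features: [wheel count, has pedals, frame height in metres]
training_data = [
    ([2, 1, 1.0], "bicycle"),
    ([2, 1, 0.9], "bicycle"),
    ([1, 1, 1.1], "unicycle"),
    ([4, 0, 1.4], "car"),
]
model = train_centroids(training_data)
print(classify(model, [2, 1, 0.95]))  # prints "bicycle"
```

Nobody told the program what a bicycle is; it inferred a rule from the examples. Real systems use far richer features and models, but the principle is the same.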
The Opportunities of Machine Learning and Generative Design
With generative design capabilities, Teller and other A.I. experts teach machines to teach themselves to accomplish tasks we can’t. Instead of delivering one answer to a question (“Is this a bicycle?”), A.I. takes the question and provides a nearly infinite array of possible answers.
A.I. is being implemented in all sorts of expected and unexpected places:
At home. From relatively simple virtual assistants to energy-efficient lighting and fuel systems, A.I. is already making homes smarter.
On the road. Waymo, owned by Google parent company Alphabet, was just granted the first permit to test driverless cars on California public roads. Starting in Silicon Valley, naturally.
In labs. Google’s DeepMind Health works with clinicians, researchers, and patients to attack real-world, real-time healthcare problems. With machine learning and systems neuroscience, they build powerful algorithms into neural networks that mimic the human brain.
In hospitals. At New York University’s Langone Medical Center, A.I. will help predict which patients are likely to develop sepsis and alert doctors to heart distress. The technology is already used in hospitals to achieve earlier, more accurate diagnoses and develop targeted treatment plans.
In areas of severe need. The World Bank is leveraging A.I. to predict drought and famine in Africa so they can divert resources to areas that will be hardest hit.
A.I. is progressing, evolving, growing, and learning steadily, and the potential applications are staggering. Fighting (and even ending) poverty, famine, and cancer is not outside the realm of possibility.
Let us take a moment to simply thank God for the wonderful advancements in modern technology. A.I. can be leveraged for evil no doubt (as we will address below), but the potential for good is enormous. We need to strategically and wisely leverage this technology for good. For Jesus.
The Dark Side
Yet we are inevitably confronted with these pivotal questions: as A.I. progresses, will it become too smart? Too autonomous? Too human? And on what moral framework does A.I. make decisions?
More than 8000 people, including Stephen Hawking, Noam Chomsky, and Elon Musk, signed an open letter, stating, “Because of the great potential of A.I., it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in A.I. research makes it timely to focus research not only on making A.I. more capable, but also on maximizing the societal benefits of A.I.”
While there are concerns about the economic, societal, and ethical pitfalls, there are also emerging questions of faith. Kevin Kelly, author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, says, “If humans were to create free-willed beings, absolutely every single aspect of traditional theology would be challenged and have to be reinterpreted in some capacity.”
The vast majority of programmers working with A.I. want to use it for good. They agree that A.I. needs to embody our best human values. We don’t want to see Microsoft’s Tay anymore!
The follow-up question is always, “So, whose values?” Value systems can differ vastly from one country to another, one ethnicity to another, one neighbourhood to another, and one individual to another.
And if you’re thinking, “Well, there is at least a basic human moral code. Love your neighbour. Don’t murder. Don’t hate,” it’s not that simple. For example, how you define love is most likely different from how I define love. Your version of hate is probably different from mine.
And the internet is universal, with no geographical, ethical, or religious boundaries. Putting up a giant wall at the border is simply not possible with A.I.
And herein lies the great opportunity for the Global Christian Church. Sure, the problem before us is deeply complex, requiring universal definitions and genuine unity. But that is an opportunity the Global Church should not only welcome, but play a pivotal role in shaping.
A nonprofit called A.I. and Faith recently formed in Seattle, Washington. It is an interfaith community that exists so that faith communities might help shape the development of A.I. in ways that are deeply ethical and life-affirming. It hopes to inform and create dialogue with institutions and with leading thinkers and builders in A.I., in order to play a key role in shaping the narrative. It’s a fantastic start.
Yet more of these communities are needed: forming, thinking, challenging, and becoming spaces the global tech community can look to for guidance and suggestions.
The Christian community does not come to mind when the global tech community sets the parameters by which their A.I. systems make decisions. And with few Christians working in A.I., the voice that asks, “What would Jesus do?” is not being heard.
The time is now for men and women of faith in tech to rise up and speak up. To show the world that a Christian moral foundation, though still abused by sinful humanity, best reflects the way humanity is meant to live as laid out by its Designer.
So, fear or wonder?
I believe we have equal reason to fear and to wonder at the future of Artificial Intelligence.
My fear, however, does not rest primarily on A.I. itself but rather on the lack of Christian voice in its design.
My wonder is heightened each time I hear of yet another incredible use of A.I. to serve the poor, help the local church, and spread the Gospel. It is heightened when I meet individuals who are thinking deeply about its use and effects.
My underlying question is this: will the Global Church run and hide or enter the battle and play a role in shaping the future once again?