Should you be concerned about facial recognition technology?

Imagine a society where a criminal could be tracked down almost immediately by police and taken into custody, all by using his face as identification. Or a world where you no longer need to carry your ID card, insurance information, or even your credit or debit cards because you could pay for your meal using facial recognition, as many do today in the eastern city of Hangzhou, China.

While all this might sound futuristic and far-fetched, it’s already creating a stir in our society. Governments and businesses around the world are thinking through innovative uses of facial recognition technology and are entering into lively debates over its merits and drawbacks. And its uses seem to grow more controversial each day.

A few weeks ago, for example, San Francisco became the first major American city to ban the use of facial recognition surveillance technology. While supporters argue it could help police build safer communities, opponents counter that it can reinforce bias and discrimination, as well as invade residents’ privacy. China offers an example of its more troubling uses: the government has used the technology to strengthen the authoritarian hand of the Communist Party, cracking down on dissidents, especially members of certain faith communities such as the Uighur Muslims.

In light of the ban in San Francisco and the abuses of facial recognition worldwide, the U.S. House of Representatives is hosting a hearing to think through this technology and how it might affect the future of security, surveillance, and privacy. Should companies be able to develop, promote, and sell this often-controversial technology? What privacy measures must we develop in order to use facial recognition wisely, in a way that shows respect for every human being and his or her privacy? And how do we balance the real tension between security and basic human freedoms such as freedom of speech and assembly?

Real-time tracking

We live in a world where every step we take and every purchase we make is tracked, analyzed, and stored, and that data might be used in ways we can’t imagine right now. While we already enjoy many benefits of these data-driven tools, privacy issues abound in the age of artificial intelligence (AI) and facial recognition.

AI is the underlying technology that allows cameras to perform facial recognition. Facial scans can be done proactively by users who sign up for or use certain commercial services, but they can also be done through mass video capture without a person’s knowledge. Some airports are even using facial recognition for check-in, pre-screening, and boarding passes. But who holds that data, and how might it be used in ways that would undermine the public’s trust?
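For readers curious what a facial scan actually consists of, the sketch below shows the basic enroll-and-match pattern using the open-source face_recognition Python library. This is an illustrative assumption on my part, not the system any particular airport or vendor uses, and the image file names are hypothetical placeholders.

import face_recognition  # open-source library; install with: pip install face_recognition

# Enrollment: reduce a known face to a 128-number encoding (the stored biometric data).
known_image = face_recognition.load_image_file("resident.jpg")  # hypothetical file
known_encoding = face_recognition.face_encodings(known_image)[0]

# Recognition: compare every face found in a new camera frame against the stored encoding.
frame = face_recognition.load_image_file("camera_frame.jpg")  # hypothetical file
for encoding in face_recognition.face_encodings(frame):
    is_match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"Match: {is_match} (distance {distance:.2f})")

The detail worth noticing is that the face becomes a small list of numbers that can be copied, sold, or matched against any future image. Whoever stores that encoding, whether a company, an airport, or a government, holds the biometric data in question.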

And facial recognition is not just used on the public level. We already unlock our phones with our faces, and many people have home security cameras like my family’s Google Nest Hello video doorbell, which detects faces, learns over time for greater accuracy, and even announces visitors when they arrive. We use facial recognition every day, but many of us haven’t begun to think through the ethical and moral implications of this powerful technology.

What’s the path forward?

This technology can and will be abused, but is that reason enough for city supervisors in San Francisco to ban it? Do the bad applications outweigh the good? When we approach tools like facial recognition and AI, we often do so from a place of fear or misunderstanding. It is true that this technology can be misused and wielded in ways that digitize humanity, treating our neighbors as pieces of data rather than flesh-and-blood image-bearers. A Big Brother state and overly invasive marketing schemes are not distant fears; they are present-day realities. This is already true for the Uighur Muslims in China and will likely become true elsewhere in the near future. But rather than overreact and miss how this technology can be harnessed well, we must engage the debates rationally. So, is an outright ban the right path forward?

On one hand, it’s easy to understand why San Francisco city supervisors sought to ban facial recognition surveillance technology, but a true path forward is much more difficult than a blanket ban that misses the nuances of reality. Similar bans have recently been sought on other technologies, such as autonomous weapons. Because we tend to react quickly to new possibilities in technology, we can miss the need to do the hard work of regulating it and protecting our basic freedoms.

There will be many points of view when it comes to the best way to deploy and utilize this revolutionary technology. And this is a good thing for our society. Lively debate over important matters is vital to a flourishing society, which is why I am encouraged that the House is beginning to engage this technology. But these discussions must not be limited to the halls of Congress or to local governments. They must take place in the hallways and living rooms of homes throughout our nation, because technology doesn’t wait for us to decide how to use it properly before its impact is felt widely.

Facial recognition can be a bit overwhelming for those without a deep understanding of how it works and the issues at stake. But engaging these matters doesn’t require a government position or an advanced degree in computer science or surveillance. As we enter these discussions, we must not lose sight of the fact that this technology is here to stay, and no ban will stop the development and use of these tools. We must decide as a society what value we place on feeling secure while maintaining our privacy. We must decide how to use these technologies with wisdom and care, rather than punting the issue down the road to future generations.

Denying basic human dignity to our neighbors in the name of security is not an adequate basis for developing and utilizing facial recognition. While this technology may lead to safer communities, it can also lead to the kind of surveillance state we already see in other nations. The challenge before us, especially as Christians, is to develop clear guidelines driven by fundamental principles rather than responding to issues as they arise. And we must make sure to stand for the vulnerable and voiceless among us and across the world. Nothing short of human dignity is at stake.

For more information about the data and privacy issues surrounding the use of technologies like artificial intelligence and facial recognition, read a new statement of principles developed by the ERLC and top Evangelical leaders at ERLC.com/AI.

This article originally appeared at ERLC.com