A Redemptive Thesis for Artificial Intelligence

A call to repair and redeem through AI.

Praxis
The Praxis Journal

--

Discussion at the Redemptive AI Forum in New York City.

The Praxis Redemptive Imagination Studio is a “design and innovation studio” for redemptive entrepreneurship that aims to discover and articulate the major issues of our time and creatively respond with venture-building, community-forming, content-shaping activity. Our initial list of 34 Opportunities for Redemptive Imagination (ORIs) acts as the organizing scheme for the Studio. This ORI Thesis is a working document to spark the imagination of redemptive entrepreneurs and builders in artificial intelligence.

This is a thesis for how redemptive entrepreneurs — venture builders and venture-oriented investors who seek to combat exploitation, adopt ethical practices as a baseline, and pursue restoration of broken systems through creative and sacrificial innovation — can approach the suite of technologies commonly called “modern AI” (from now on, simply “AI”).

Six Assumptions

We begin with the following assumptions. We are well aware that each requires a measure of faith — that is, each is based on a certain amount of evidence that warrants our belief, though none can be proven beyond a reasonable doubt.

1. The enduring image

Human beings are, and for the duration of the human story will uniquely remain, entrusted with bearing the image of God.

Our ability to discover physical and mathematical patterns in our cosmos, and to develop science and technology that make the most of those patterns, is one of the intended fruits of this image-bearing vocation, meant for the glory of God, the good of our fellow human beings, and the flourishing of creation itself.

2. A pattern of deceptions

In the course of the human story we have become captive to a pattern of deceptions that have corrupted the divine image, compromised our ability to pursue or even know what is best for us, and distorted our application of mathematics and science.

We have acted as if we can live independently of God, and the life of love that God offers and commands, with no harm and indeed great benefit to ourselves (this is the most fundamental pattern, known as the sin of pride). More specifically, the “modern” world has been founded on the quest to secure good things for ourselves through some form of pure technique that does not require relationship — with others or with God (this is the ancient allure of and quest for magic, uniquely enabled in modernity by what we call technology). Likewise, modern economies have effectively subordinated all other goods, to the extent they are acknowledged at all, to the pursuit of financial wealth that purports to give us abundance without dependence (the seduction of Mammon).

Insofar as we are all caught up in these patterns of false belief and behavior, God is dishonored, human beings are degraded and violated, and creation is exploited and diminished.

3. Very good and also very distorted

AI, like other scientific and technical advances, is part of the “very good” world that human beings are meant to steward and extend.

It is a significant advance on much previous technology in the way it recapitulates the patterns of learning and cognition that arose in the course of the development of life (especially the nervous system and the brain). In the case of Large Language Models and similar systems, it is also able to incorporate (via training data) much of the vast achievement of human culture. In this way it is potentially a profound and fruitful extension of human image-bearing, and like other major cultural achievements (such as the invention of writing) it can be expected to unlock good potentialities of the created and cultivated world that were previously inaccessible.

But it is also, inevitably, subject to the patterns of deception — including the patterns it has absorbed from its training data — that will tend to bend its outputs and its users in corrupting directions.

4. As consequential as the Internet — or electricity — or agriculture

While the scope of AI’s full potential is not yet clear, it is reasonable to believe that it is as consequential a technological development as the Internet (developed and deployed 1990–2010). But we should consider the likelihood that it will prove as consequential as electricity (1850–1950), and the possibility that it will prove as consequential as agriculture (the “neolithic revolution,” beginning roughly 12,000 years before the present day).

Insofar as all of these were the result of image bearers extending the “very good” world, they created genuine common wealth that continues to benefit humanity and creation; but all of them were also subordinated to foolish and prideful visions, leading to significant damage to human beings, human societies, and the created world; and almost all of their most significant consequences could not have been foreseen by their early inventors and innovators.

We can expect all this to be true of AI as well.

5. Asymmetrical risk — even without a singularity

There are no good reasons, including no good technical reasons, to believe that AI will somehow usher in a “singularity” in which human beings are replaced in their unique role and responsibility for one another and for the cosmos — even as there is every reason to believe that AI, like all technology, will vastly outstrip human capabilities in specific areas.

Fantasies or fears of AI “replacing” us are misplaced (though AI will almost certainly come to replace some tasks and activities currently performed by human beings). But concerns that AI will be harnessed to exploitative ends, or will be deployed in ways heedless of its unintended consequences, are well founded.

And like certain other useful but asymmetrically dangerous technologies, such as nuclear fission, AI may be capable of unleashing vast destruction (as, for example, in the discovery and design of highly virulent biological weapons).

6. The fantasy of the superhuman

While AI may or may not prove to be as consequential as the most dramatic technological developments of human history, it carries unique risks because of its close and genuine kinship to one distinctive way human beings interact with the world (“intelligence”) and its ability to mimic (though probably not genuinely possess) other distinctive human characteristics including personality and purposefulness.

Misunderstood or misapplied, AI may hold unique potential for the destructive triumph of pride, magic, and Mammon. These risks apply even if AI proves technically less capable than we may imagine, because the mere fantasy of creating “superhuman” capabilities, and of inventing alternatives to human beings, is sufficient to distort relationships, economies, and societies.

Six Redemptive Directions

With these assumptions in mind, we offer the following guidance to those called to build ventures that extend AI in redemptive ways — meaning not just within ethical boundaries, but actually seeking to repair some of the damage done by previous waves of technology.

This guidance is meant to operate primarily at the venture level. We recognize that many important decisions about AI will be taken at the level of government and policy (such as regulation of the sources and scope of training data), and those regulatory frameworks will in turn constrain and enable decisions made by a handful of very large infrastructure providers (the companies developing and training foundation models). While we hope that some redemptive actors will have real influence “from the top” on policy and may build infrastructure at very large scale, most entrepreneurs exercise their greatest influence “from below” by shifting the direction of innovation through specific applications.

Like the Internet, electricity, and agriculture, AI is a general-purpose technology that can be harnessed to many ends. Redemptive entrepreneurs can lead the way in demonstrating that AI can be deployed — in fact, is best deployed — in ways that dethrone pride, magic, and Mammon and that elevate the dignity of human beings and their capacity to flourish as image bearers in the world.

1. Redemptive AI will inform but not replace human agency.

One of AI’s fundamental capabilities is its ability to operate as a “prediction machine” that can inform human decisions and choices. But AI’s predictive powers are not being deployed in a neutral environment. Many human beings, in too many dimensions of life, are already limited in their ability to make free and wise choices by unjust markets, inflexible and quasi-mechanical systems, entrenched bureaucracies, and repressive regimes. AI could easily be deployed on behalf of any of these social forces to further suppress genuine human freedom and responsibility.

Redemptive AI will actively repair and restore human agency rather than further concentrating or diffusing it. It will provide the inputs and incentives to make better decisions, but it will not pretend to relieve human persons of their responsibility to choose wise paths, especially in areas (e.g., policing, the extension of financial credit, the evaluation of employees, or the management of natural resources) that can only be responsibly undertaken by persons conscious of the dignity of human beings and the created world entrusted to us.

2. Redemptive AI will develop rather than diminish human cognitive capacity and extend rather than replace education.

The current reality is that education — the means by which human societies prepare people to make meaningful and lasting contributions to their common life — is inequitable in most modern nations, especially the United States, and is failing to develop the full capabilities of many people. Even those who “succeed” on the terms of our current educational systems are continually tempted by current technology and media to spend their waking life in “the shallows.”

It is obvious that AI could be deployed to accelerate these trends, by providing the means for students to fake competence in a subject, by providing ready-made “answers” to both technical and complex questions, or simply by offering an even more customized and irresistible stream of consumable entertainment. Such a direction would deprive most human beings of the opportunity to become genuinely informed and creative participants in culture.

Redemptive AI can make massive contributions to education and lifelong cognitive growth by appropriately scaffolding, supporting, and sequencing the difficult tasks involved in becoming an educated person who possesses both skill and wisdom.

3. Redemptive AI will respect and advance human embodiment.

In sharp contrast to many currents of modern behavior and belief, we believe that having bodies is “a feature, not a bug” of being human. The first few generations of computer technology have abetted a damaging trend toward disembodiment, privileging sedentary mental activity while encouraging if not forcing people to neglect their design as creatures who learn, work, and think best when moving purposefully through the world together.

Compared to the systems widely available today, AI has the potential to interact much more dynamically with human beings using their full sensory and physical capacities (such as through audio interfaces that allow people to stay engaged with their embodied environment rather than screen-based interfaces that draw them away from it), while also dramatically assisting people who lack one or more typical capacities to participate more fully in the world (such as through brain-computer interfaces for those who have lost neuromuscular capabilities through paralysis).

4. Redemptive AI will serve personal relationships rather than replace them.

AI shows great promise for being able to fluently interact with the relational dimension of human life (as when Large Language Models are prompted to take on the persona of a chatbot). The clear and present danger is that this fluency will be exploited, perhaps at the willing and eager behest of users, to provide deeply persuasive simulations of relationship. Such simulations will have the power of many addictive substances and behaviors, in that they will directly harness the reward systems of the human mind-body-soul complex while delivering no real benefits and degrading or entirely erasing users’ ability to choose real life.

Redemptive AI, while benefiting from its sensitivity to relationships, will never present itself as a person, will not offer to substitute itself for persons or personal relationships, and will not purport to relieve its users of the burdens of genuinely caring for and being cared for by other persons. Instead it will facilitate more relationally healthy pathways for human life.

(For example, many employers currently schedule contingent workers’ shifts in ways that are supremely indifferent to those workers’ family responsibilities; an AI “aware” of workers’ family commitments could be deployed to create far more relationally optimal work schedules while also matching or exceeding the economic efficiency of current solutions.)
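To make the scheduling scenario above concrete, here is a minimal, purely illustrative sketch of how a relationally aware scheduler might be framed: each candidate assignment of workers to shifts is scored on the hours it covers (a stand-in for economic efficiency) and penalized for every hour that collides with a worker’s stated family commitments. The worker names, commitment windows, and weighting are hypothetical, and nothing here is prescribed by the thesis itself.

```python
# Illustrative sketch only: a toy scorer that prefers shift assignments
# compatible with workers' stated family commitments while still covering
# the shifts an employer needs. All names, hours, and weights are hypothetical.

from dataclasses import dataclass, field
from itertools import permutations


@dataclass
class Worker:
    name: str
    # Hours of the day (0-23) the worker has voluntarily blocked out
    # for family commitments.
    family_hours: set = field(default_factory=set)


@dataclass
class Shift:
    label: str
    hours: range  # e.g. range(15, 19) for a 3pm-7pm shift


def assignment_score(assignment, relational_weight=2.0):
    """Score a full schedule: +1 per covered shift hour, minus a penalty
    for every hour that collides with a family commitment."""
    score = 0.0
    for worker, shift in assignment:
        score += len(shift.hours)  # economic value: hours covered
        conflicts = sum(1 for h in shift.hours if h in worker.family_hours)
        score -= relational_weight * conflicts
    return score


def best_assignment(workers, shifts):
    """Exhaustively try one-worker-per-shift assignments (fine for toy sizes)
    and return the schedule with the highest combined score."""
    best, best_score = None, float("-inf")
    for perm in permutations(workers, len(shifts)):
        assignment = list(zip(perm, shifts))
        score = assignment_score(assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score


if __name__ == "__main__":
    workers = [
        Worker("A", family_hours=set(range(15, 19))),  # school pickup, evening
        Worker("B", family_hours=set(range(7, 9))),    # morning drop-off
        Worker("C"),
    ]
    shifts = [
        Shift("morning", range(7, 12)),
        Shift("afternoon", range(12, 17)),
        Shift("evening", range(17, 22)),
    ]
    schedule, score = best_assignment(workers, shifts)
    for worker, shift in schedule:
        print(f"{shift.label}: {worker.name}")
```

In a real system the efficiency term would come from actual demand forecasts, and the commitment data would be disclosed voluntarily by workers, consistent with the privacy direction in section 5 below.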

5. Redemptive AI will restore trust in human institutions by protecting privacy and advancing transparency.

Too many systems today are opaque about their own operations, concealing their inner workings from the public, while relying on extraordinary levels of surveillance and data-gathering about persons. The emerging reality is one in which systems have no transparency at all while persons and their data are rendered “transparent” to corporations, advertisers, and nation-states.

Without redemptive development, including technical breakthroughs, AI will exacerbate both of these trends, because as currently designed it is an inherently opaque system, capable of gathering and representing huge amounts of data about individuals, and making that data fully available only to very large-scale owners and operators. What is actually needed is a substantial reversal in which institutions and the systems they deploy become more transparent, while persons and their individual information become more protected.

Redemptive AI will be designed to give more clarity, not less, about how institutions operate, while ensuring that individuals retain the dignity of being known through their own choices and disclosures, not through a constant and unchosen stream of surreptitiously collected and analyzed data.

(Consider the likelihood of governments wishing to “pre-incriminate,” with the help of data analysis, those presumed likely to break the law. Redemptive AI, while assisting in ascertaining the truth about criminal behavior, will extend the protection against self-incrimination by only providing public justice systems with information about actual criminal behavior, not merely purported patterns in data. At the same time, redemptive AI that operates with high transparency may be able to dramatically reduce uncertainty about the evidence offered in criminal trials, preventing unjust convictions and increasing confidence in the public justice system.)

6. Redemptive AI will benefit the global majority rather than enrich and entrench a narrow minority.

Current pathways to the most powerful AI systems are extraordinarily capital- and energy-intensive, lending themselves to concentration in the hands of a few resource-rich corporations located in a handful of countries. Depending on how AI services are delivered and priced, this does not necessarily mean that AI cannot benefit the majority of human beings — if it ultimately can be provided at very low marginal cost, it can have a very beneficial effect for low-income users.

But without specific redemptive innovation, it is almost certain that the greatest benefits of AI will flow to the already wealthiest and most powerful corners of the world, not least because they are already most entrenched in the data economy (compare, for example, the amount of training data available in English to that in languages spoken only by small groups of people).

Redemptive AI will differentially find ways to unlock value at “the bottom of the pyramid” — and will pursue innovations that accomplish that goal without ensnaring the world’s poor ever more deeply in a kind of datafication of their lives that disrupts human connection and largely benefits only the owners of the largest pools of data.

A Call to Repair and Redeem Through AI

At this very early stage in the history of AI, it is extremely tempting for venture builders and investors to adopt a gold-rush, land-grab mentality, racing to claim a stake in the technology by swiftly building infrastructure and applications that promise quick financial returns, assuming (if these questions are considered at all) that ethical reflection and protections can come incrementally, once capital is secured and profit is made.

But too many recent waves of technology — social media being a particularly vivid example — have followed this pattern, delivering some benefits but also consolidating power in unaccountable large-scale institutions, substituting thin forms of existence for true human flourishing, and extracting huge costs in physical, relational, spiritual, and social well-being. If it is true, as Yuval Harari, Tristan Harris, and Aza Raskin have suggested, that algorithmic social media was humanity’s first large-scale encounter with “AI,” the rise of far more powerful and flexible algorithms is hardly something to be treated as an ethically inconsequential opportunity for massive profits.

We believe redemptive entrepreneurs, while certainly pursuing breakthroughs and moving at the speed of expertise, will build into the very foundation of their products a vision not just of leveraging what the existing system of technology has produced, but of repairing what it has damaged. Redemptive AI can contribute to the ultimate redemptive mission: to liberate human beings to live fully as what they truly are, incomparable, irreplaceable image bearers of the Creator who made all things in and for love.

This ORI Thesis was authored by Praxis Partner Andy Crouch, in collaboration with Praxis Venture Partner Mark Sears and Praxis Co-Founder & CEO Dave Blanchard, as well as attendees at the Redemptive AI Forum.
