(Three) Theological Questions for the Age of Algorithms
The vertigo I experience from the rapid pace at which digital algorithms are changing our world is often matched by my bewilderment at how other people react to these developments.
Whether it’s Large Language Models like GPT-4 or Bard, the metaverse and virtual reality, or blockchain and “trustless” contracting, I have encountered a stunning variety of attitudes towards these technologies in classrooms, seminars, and boardrooms (and, of course, over cocktails). I have found this range of responses, to say the least, perplexing.
But no longer. I now believe that I can predict your attitudes to these (and future) advancements in algorithmic technologies — based on how you answer just three questions.
These questions do not necessarily have right answers, or at least the relevant sciences have not yet told us what they are. Yet this does not seem to prevent people from holding very strong convictions about what the right answers must be, convictions approaching faith. That’s why I call these religious questions.
Question 1: Does the human mind do anything more than information processing?
In other words, do you believe that what our brains do is more than just the manipulation of data and information? If you answer “Yes”, you see the difference between AI and humans as a chasm, one that can never be bridged. This may lead either to confidence that humans can never be replaced, or to concern that a rival alien species of comparable or superior competence is evolving at breakneck speed before our eyes. Confident or concerned, your attitude rests on an underlying form of speciesism. You are probably wary of talk of a “bionic” future in which the distinction between humans and algorithms vanishes through technology implants and organic computing. You are more likely to think of AI as a tool, not a teammate.
On the other hand, if you answer “No”, you might see AI as a natural progression of our understanding of the world. You might still be concerned about humans being surpassed in their ability to process information, but you are likely to be more tolerant of a future in which humans and algorithms coexist as equals, one in which AI may have rights too. After all, if we are all just information processors, making moral distinctions among us becomes harder.
Question 2: Do human beings have free will?
Those who say “Yes” typically believe in libertarian free will, the idea that humans are the uncaused causes of their actions. They are likely to take a more optimistic view of what policy and regulation can do to prevent the harmful effects of the rapid development of digital algorithms. Those who say “No”, who believe free will is an illusion, are more inclined to worry that machines could be programmed to make decisions for us, or that our actions could be predicted and manipulated by algorithms. If we are fully deterministic systems, then algorithms have a better chance of predicting what we will do (perhaps better than we can ourselves).
Question 3: Can we be sure that we do not live in a simulation?
Nick Bostrom’s arguments notwithstanding, people still vary widely in their responses to this question. Those who say “Yes” care about authenticity and worry that digital algorithms capable of simulating humans (AI) or reality itself (VR) will fool us into losing touch with the genuine. Those who say “No”, in contrast, are less concerned about the attack on reality that such digital technologies may mount; after all, if we are already living in a simulation, what’s the harm in creating more simulations (and simulacra) within this one?
While the answers to questions 1 and 2 may be correlated (those who answer “Yes” to question 1 may also do so for question 2), it is not logically necessary that they be. Systems that do something more than process information may or may not be fully deterministic. Nor are the responses people offer always logically well founded: one does not need free will to formulate useful policies. And I believe one can answer “Yes” or “No” to question 3 without being constrained in how to answer the other two.
These questions are hardly new; philosophers have been playing around with ideas about materialism, determinism, and solipsism for centuries. But I find them surprisingly relevant to diagnosing human attitudes in the age of algorithms.
Let me put it another way: horoscope columnists work with 12 zodiac signs. I think my 8-bucket classification, obtained by forcing binary (Yes/No) responses to three questions, may well have at least as much explanatory power.
Postscript
I asked three leading Large Language Models the following question: “Is the author being tongue in cheek with this essay?”
Bing: “It’s hard to say for sure whether the author is being tongue in cheek in this essay. The author does use humour and sarcasm at times, but it’s unclear whether this is meant to be taken seriously or not. The essay is written in a serious tone and the author seems to be genuinely interested in exploring people’s attitudes towards algorithmic technologies.”
Bard: “It is difficult to say for sure whether the author is being tongue-in-cheek in this essay. The author does make some humorous observations, such as the comparison of people who believe that we live in a simulation to those who believe in astrology. However, the author also makes some serious points about the ethical implications of algorithmic technologies.”
GPT4: “The author’s tone appears to be serious rather than tongue-in-cheek, as they present a unique framework for understanding attitudes towards technological advancements. The essay introduces three profound philosophical questions and hypothesizes how responses to these questions might influence one’s perspective on technology. That said, there are elements that could be seen as a bit playful, such as the comparison of the framework to a horoscope, and the general tone could be seen as exploratory rather than assertive, suggesting the author is open to the possibility that the framework may not capture all the nuances of human attitudes towards technology. However, it’s essential to remember that interpretation of tone can often be subjective, and different readers might have varying perceptions of the author’s intentions. Overall, it seems that the author is earnest in their attempt to shed light on the complexities of human attitudes towards technological advancements.”
I am not so sure myself, anymore.