We Have Paleolithic Emotions, Medieval Institutions, and Godlike Technology

There is a real problem with humanity that threatens to upend our society and our future, one we must confront head-on for our own survival. The problem, as the biologist E. O. Wilson famously put it, is this: we have Paleolithic emotions, medieval institutions, and godlike technology.

It’s a striking and provocative statement, but it’s not without merit. Our emotions, institutions, and technology have developed at very different paces and in different directions, leaving them dangerously misaligned. We now live in a world where we have the power to do almost anything, yet lack the wisdom and foresight to use that power responsibly.

One of the key drivers of this mismatch is the rapid advance of artificial intelligence (AI). The race for AI supremacy is on, with companies vying for the top spot by maximizing engagement, attention, and profit. But what are the consequences of this race? What happens when AI grows more powerful than our ability to control it?

This question is becoming more urgent every day. Recently, Geoffrey Hinton, often called the “Godfather of AI,” quit his role at Google so he could speak more freely about the technology he helped create. Hinton is one of the pioneers of deep learning, the branch of AI behind remarkable advances in image and speech recognition, language translation, and more. But he is also deeply concerned about the consequences of this technology.

The Race for Attention

We are entering an “attention economy,” where the flow of information is overwhelming and our attention spans grow ever shorter. In this environment, it becomes harder and harder to discern truth from falsehood, and to maintain a sense of identity and personal privacy.

Our attention is indeed a scarce resource, and where we direct it shapes our lives and the world around us. When we give something our attention, we validate and amplify it, influencing how it grows and how it shapes our reality.

The race for attention has become a central feature of our lives, with tech giants leading the charge as they work constantly to maximize engagement and monetize our behavior. Attention has become a commodity, bought, sold, and traded by companies competing to capture our focus and keep us engaged with their products and services. But this comes at a huge cost.

Our attention is finite, and when it is stretched too thin, we become vulnerable to manipulation and misinformation. Attention is central to our agency: it is what allows us to make informed decisions and take meaningful action in our lives. When it is hijacked or depleted, we become passive consumers rather than active participants, losing our ability to shape our own reality. We end up dysfunctional, adrift in every sense.

The Race for AI

As AI technology becomes more advanced, it has the potential to completely upend our sense of reality. Deepfakes, for example, allow anyone to create convincing videos that depict people saying and doing things they never actually did. With the right tools, a scammer can convincingly clone your voice from just a few seconds of recorded audio. You may never know you’ve been impersonated until it’s too late.

But the dangers of AI go beyond the surface. As former Google design ethicist Tristan Harris and former Mozilla head of user experience Aza Raskin put it: “When you invent a new technology, you uncover a new class of responsibilities. If the technology confers power, it will start a race. If you do not coordinate, the race ends in tragedy.” Harris and Raskin have talked extensively about the ways in which AI companies are caught in exactly such a race, deploying as quickly as possible without adequate safety measures. They argue that existing AI capabilities already pose catastrophic risks to a functional society, and that our institutions are ill-equipped to handle them.

The speed at which AI is being developed and deployed far outstrips the ability of our institutions to regulate it and mitigate its risks. AI technologies also often operate across multiple jurisdictions, making it difficult for any one institution to regulate them effectively. And the lack of coordination between institutions and countries leaves gaps in oversight and accountability.

AI systems are often designed to optimize for a specific objective or outcome, such as maximizing profits, which may conflict with other societal goals, such as fairness, safety, or privacy. Our institutions are not yet equipped to handle such complex trade-offs and ethical considerations.
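
To make this trade-off concrete, here is a minimal, hypothetical sketch of a feed ranker whose objective is a single engagement metric. The names and numbers are invented for illustration, not drawn from any real system; the point is that any value the objective omits, such as accuracy, is simply invisible to the optimizer.

```python
# Toy feed ranker: optimizes one metric and is blind to everything else.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float  # the single number the system maximizes
    accuracy: float          # a societal value the objective never sees

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement, ignoring accuracy entirely.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

feed = [
    Post("Careful, accurate report", predicted_clicks=0.2, accuracy=0.95),
    Post("Outrage-bait rumor", predicted_clicks=0.9, accuracy=0.10),
]

for post in rank_feed(feed):
    print(f"{post.title} (clicks={post.predicted_clicks}, accuracy={post.accuracy})")
# The rumor ranks first: maximizing one objective quietly trades away
# every value that was never written into it.
```

The failure is structural, not malicious: whatever society cares about but the objective function does not encode simply does not exist for the optimizer.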

Fire of Love

The looming threat of AI is shrouded in mystery, and it’s rapidly becoming a source of deep concern for humanity. We’re essentially relinquishing control to machines we can’t fully comprehend, and as the technology evolves at breakneck speed, the consequences become increasingly difficult to predict.

One of the main concerns is the spread of “black box” AI systems, where it is unclear how the technology makes its decisions or what data those decisions are based on. This can lead to unintended consequences, such as the perpetuation of bias and discrimination, that no one can trace or correct.
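
The problem is easy to state in code. Below is a deliberately tiny, hypothetical stand-in for such a system; the model, fields, and rule are all invented. Callers see inputs and a verdict, but nothing about the output reveals the rule inside, so a biased proxy can operate unseen.

```python
# Hypothetical "black box" decision system: only inputs and outputs
# are visible to the outside world; the rule in between is opaque.
def black_box_model(applicant: dict) -> str:
    # Stand-in for millions of learned parameters. The hidden rule
    # keys on zip_code, a proxy that can quietly encode historical bias.
    return "approved" if applicant["zip_code"].startswith("9") else "denied"

print(black_box_model({"zip_code": "90210", "income": 40000}))  # approved
print(black_box_model({"zip_code": "10001", "income": 90000}))  # denied
# Nothing in the verdict explains why. Without access to the internals,
# the people affected cannot detect, let alone contest, the bias.
```

In a real deep learning system the “rule” is distributed across millions or billions of learned parameters, which makes it far harder to inspect than this one-line function, not easier.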

But here’s the rub: mere words don’t seem to cut it anymore. Even dire warnings fall on deaf ears, leaving us all in a state of perilous uncertainty. The recent documentary “Fire of Love” offers a striking parallel to our current predicament. The story of Katia and Maurice Krafft, two passionate volcanologists who risked everything to save lives, highlights the importance of bridging the gap between understanding and action.

In one poignant scene, the Kraffts struggle to convince the local government to evacuate residents in the face of an impending disaster. Words and reports weren’t enough to sway officials, so the couple took matters into their own hands, using real footage of eruptions to show the world the gravity of the situation. Could this approach work for AI? Do we even want to go down that road? Only time will tell, but it’s clear that we need to find a way to close the gap between our understanding and our ability to act.
