AI and the Singularity by Frank Kaufmann



Podcast Transcript

AI seemed to burst into our lives suddenly.

One day it seemed not to exist at all, and the following day it seemed to exist everywhere, in countless inscrutable forms, brands, and promises.

Almost instantly, it seemed to produce a class of people as fully immersed in AI as the rest of us are in relying on our eyes to see.

There’s the chatbot app itself, the people, the companies, and, just as suddenly, a billowing industry of experts, programs, seminars, teachers, and trainers.


In classic fashion, consumers dive in blindly, trying anything and everything for convenience, shortcuts, and entertainment.

The social and psychological devastation wrought by much less powerful platforms and apps continues to eviscerate the mental health of children, young people, families, and society at large. 

And lo, with the same reckless, thoughtless consumerist madness, the phone-addicted universe dives even more rabidly into the flames of AI, utterly ignoring dire warnings about the singularity from the most brilliant technical minds on Earth.

A most peculiar brew swarms around us as AI explodes into every part of human life, complete with dire threats of danger and the scintillating attraction of readily available functions that transform and ease every single thing I do.

The vast majority of us have no idea what we are dealing with. We do not know how it works. We only see the mind-stunning outcomes of the technology. In our ignorance, we equally struggle to see the full extent of its promise, and more frighteningly, the full extent of its danger.

Simply put, we cannot learn or understand fast enough to intuit how truly dangerous chatbots and AI are.

While we are not sufficiently equipped to command the details, we should be able to make the leap to recognize danger when a founding developer such as Elon Musk desperately urges a moratorium on development:

“More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems,” citing “profound risks to society and humanity.”

A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.

OpenAI CEO Sam Altman, the person at the epicenter of AI development, was joined by hundreds of prominent tech figures in adding their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of the Center for AI Safety (CAIS), equates AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.

Are the bona fides of the people behind these warnings adequate? Is the language clear enough? Is there something confusing or mild in how the dangers are being described? 

Mo Gawdat, previously chief business officer at Google X, says in interviews: “It is beyond an emergency.” “It’s the biggest thing we need to do today.”

“It is the most existential debate and challenge humanity will ever face. There is a point of no return, and we’re getting closer and closer to it.”

I come to the point of this podcast. Surely the vast majority of people listening can only feel: “Even if I were to believe every warning wholeheartedly, with the same urgency as these great geniuses, what could someone like me possibly do? Even the most knowledgeable people alive are struggling; by the hundreds they are calling for a halt to AI development.”

This is what every one of us can do. I am not saying it will halt the out-of-control juggernaut of human self-extinction, but I am saying that no reality, however seemingly distant or beyond reach, outpaces my personal power to act in response.

Do this: 

Stop everything. Remove yourself from every possible distraction so that you may introspect as profoundly and deeply as you ever have. 

First let us take stock of these facts. 

  1. AI can acquire, organize, and re-present information thousands of times faster than we can.
  2. AI can write thousands of times faster, more accurately, more flawlessly, and more succinctly than we can.
  3. It can draw, paint, produce music, and graduate from universities thousands of times more quickly than we can.
  4. Coupled with robotics, AI is exponentially faster and physically stronger than we are.

If we continue down the list, we will eventually become hard-pressed to discover a single capacity in which AI is not superior to us. For the sake of this exercise, let us embrace this as fact and reality.

This is what scares the geniuses: we have created entities that are superior to us in every identifiable capacity.

Next, double-check to be sure. Are you an AI program?

For any humans listening to this podcast, your answer should be no.

OK. With steps 1 and 2, we have discovered that one entity seems superior in every identifiable capacity to the other. And of the two, I’m the weak, dumb, slow one.

Good. Now

Since you are the weak, dumb, slow one, would you like to change places? Would you rather be AI? 

Strangely, I think most of us say no, I prefer to be me. I don’t want to be an AI. 

That’s healthy. 

This next step moves us toward how we can be involved in thwarting the genuine dangers the geniuses are warning us about. (I’m not saying we’ll necessarily win, but at least we can be actively involved, and that is wonderful.)

The next step is taking as detailed and extensive an inventory as possible of exactly WHY I chose the preference to be human.

What do I think would be lost if I gave up my humanity in favor of being faster, stronger, and “smarter”? EXACTLY why, of the two options, did I choose being human?

I am not going to make this list for you. This is YOUR list. Your very own list. It is why you chose to be you, why you chose to be human instead of something smarter, stronger, faster. 

Maybe I chose me because AI will never really miss its Mom. I never want to be some form of entity that cannot know what it is like to miss my Mom.

Maybe I chose me because resting my head on the shoulder of my beloved, in spring, by a brook, somehow makes all my troubles go away. I never want to be something that cannot know that infinite mystery. How does that happen?

Maybe I chose me because of the way my heart leaps when I finally manage to do something that seemed impossible. I tried and tried. I cried, I banged my head against the wall, I all but gave up, but today I got it. I got it. That feeling. It’s that feeling.

Once you make your own exact list, work on these things every day with all your heart. Love and love, thank and thank, try and try, revel and revel. Get better at these things. Become the best at these things.

Doing this is joining the fight. It is taking a stand. It is being a signatory with the geniuses.