DANIEL VAUGHAN: ChatGPT brings us into the Age of AI

By Daniel Vaughan
February 27, 2023

For most of history, it hasn't been easy to recognize when humanity entered a new era. When Rome rose to power, it was hard to tell it would control so much of the world for as long as it did. And when a significant invention arrives, its impact isn't always apparent at first. In the last 75 years or so, we've experienced two such moments.

First, the invention of the computer gave us the computer age. Next, the internet brought a new level of interconnectivity whose impacts we still feel daily. Now we're entering a third phase: artificial intelligence. Humans have tinkered with various forms of machine learning, automation, and basic artificial intelligence for decades, but the arrival of ChatGPT has sped up the acceptance of AI and mainstreamed it in ways we've never seen.

In an essay for the Wall Street Journal, Henry Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher, dean of MIT's Schwarzman College of Computing, argue that ChatGPT's AI will "transform the human cognitive process as it has not been shaken up since the invention of printing."

A different way of thinking.

Their argument is this: for most of human history, knowledge was built one way. Humans learned something about the world and then struggled to pass it along to the next generation. The printing press changed this. We could write down and memorialize what we learned, future generations could build on it, and people could share what they learned widely through the written word.

The process of reading forced us to learn from other humans, and we used that to advance technology and knowledge in general. Artificial intelligence reverses that process. A human enters a question into the AI program, in this case ChatGPT, and gets an answer back. But you don't know how the AI arrived at that answer or how to get there yourself. You're given knowledge but not understanding.

The harms are real... as are the benefits.

Kissinger, Schmidt, and Huttenlocher argue that the fallout could be harmful. If humans use AI to replace our minds' ability to understand, our abilities will atrophy:

To the extent that we use our brains less and our machines more, humans may lose some abilities. Our own critical thinking, writing and (in the context of text-to-image programs like Dall-E and Stability.AI) design abilities may atrophy. The impact of generative AI on education could show up in the decline of future leaders' ability to discriminate between what they intuit and what they absorb mechanically. Or it could result in leaders who learn their negotiation methods with machines and their military strategy with evolutions of generative AI rather than humans at the terminals of computers.

Of course, the reverse is also true. If we use AI to augment human understanding of the world, our productivity will reach new heights. But it's fair to ask how much future generations will come to rely on these AI tools as they improve.

One example from the essay: "Doctors worry that deep-learning models used to assess medical imaging for diagnostic purposes, among other tasks, may replace their function. At what point will doctors no longer feel comfortable questioning the answers their software gives them?"

This is a fair concern. We could easily reach a point where doctors feel pressured by insurers and the threat of malpractice lawsuits to trust AI over their own instincts, and other fields could experience the same thing. The threat of AI replacing human judgment is real, as is the prospect of AI taking the place of human understanding.

The new age is upon us.

In the conclusion to the essay, Kissinger et al. say, "we have a novel and spectacular achievement that stands as a glory to the human mind as AI. We have not yet evolved a destination for it. As we become Homo technicus, we hold an imperative to define the purpose of our species. It is up to us to provide real answers."

They're right in that regard. We are still working out a destination for artificial intelligence. But it's unquestionably true that we've walked into a new era: the computer and internet ages are giving way to the age of artificial intelligence. Science fiction is littered with examples of good and bad forms of AI.

But at its core, AI will always reflect human thinking and the data we feed it. Many of the AI programs being built struggle with things humans instantly recognize, such as racism and bias. And AI does not have morality the way humans do; it doesn't know or understand what is "good" or "bad." Even the messages ChatGPT writes declaring those things wrong are put there by human checks, not by an AI moral code.

Where we go from here is still being determined. But what I do know is that there's no stopping the direction we're headed in. The technology race between the United States and China will only hasten the development of artificial intelligence programs. The Age of AI has arrived. We're not ready for it, but we'll have to deal with it all the same.

" A free people [claim] their rights, as derived from the laws of nature."
Thomas Jefferson
© 2015 - 2024 Conservative Institute. All Rights Reserved.