How large language models will transform the economy

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials this week started blocking the impressive but controversial writing tool, which can generate paragraphs of human-like text. (AP Photo/Peter Morgan)

Writing is hard and time-consuming. Thanks to ChatGPT, it just became easier and faster. Rudimentary chatbots have been writing simple articles, such as sports game summaries, for years. ChatGPT, from artificial intelligence (AI) supernova OpenAI, leaps far beyond this: trained on billions of internet texts, it can generate passable prose from prompts and questions. It can even write software code. But it’s a large language model (LLM), so it’s still not great at math and many other things. It makes funny mistakes.

But it’s powerful enough to begin transforming dozens of tasks and businesses. And it doesn’t need to be perfect to be the source of unimaginable mischief too.

We’ll begin with the good news. Joel Mokyr’s book The Lever of Riches: Technological Creativity and Economic Progress springs to mind. Just as computers augmented many human capabilities, leading to boundless new products and even industries, ChatGPT can augment an even more human activity—writing. The quantity of text will explode. But will the quality of text—and of the embedded ideas—follow?


That’s a bigger question.

Mark P. Mills calls this “useful computing” and notes the computer power dedicated to training AI models has grown 300,000-fold over the past six years. However, the idea that AI is approaching human-level general intelligence is wrong for many reasons; among them is its voracious consumption of energy and space compared to the elegant efficiency of the human brain.

Behind the scenes, ChatGPT is based on fancy pattern-matching math. But it’s also a new user interface, one that might subsume large portions of the functions of Amazon’s Alexa, Apple’s Siri, and Google’s Search, thus expanding and democratizing the world of “search.” Search not only helps us learn; it has also massively augmented our memory, expanding our lowly biological data storage. ChatGPT’s “search plus” can also amplify and democratize human output.

Regular people will get access to previously arcane tools. WolframAlpha has already highlighted the path to turning language into math. As Andrej Karpathy says, “The hottest new programming language is English.”

ChatGPT will amplify machine output too. Lots of ink has been spilled, yet again, about the jobs ChatGPT will render obsolete. Teams are already replacing the back end of software stacks with LLMs. ChatGPT can turn unstructured data into structured data (i.e., create a database from a data jumble).
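That last point is worth making concrete. A minimal sketch of how an LLM can structure a data jumble: the prompt asks the model to reply in JSON, and the reply is parsed into records. The prompt wording and the model call here are hypothetical; the call is mocked with a canned reply, since the exact API is beside the point.

```python
import json

# Hypothetical prompt asking the model to emit machine-readable JSON.
PROMPT_TEMPLATE = (
    "Extract each person's name, city, and age from the text below. "
    "Reply with only a JSON array of objects with keys name, city, age.\n\n{text}"
)

def structure(text: str, llm_reply: str) -> list[dict]:
    """Parse the (assumed well-formed) JSON the model returns for `text`.

    In a real pipeline, `llm_reply` would come from an API call made with
    PROMPT_TEMPLATE.format(text=text); here the caller supplies it directly.
    """
    return json.loads(llm_reply)

jumble = "Ana, 34, lives in Lima. Bo is 29 and based in Oslo."
# What a cooperative model might plausibly send back for `jumble`:
mock_reply = (
    '[{"name": "Ana", "city": "Lima", "age": 34},'
    ' {"name": "Bo", "city": "Oslo", "age": 29}]'
)

records = structure(jumble, mock_reply)
print(records[0]["city"])  # Lima
```

The interesting work happens inside the model; the surrounding code is just a prompt and a JSON parser. A production version would also need to handle malformed replies, which models do produce.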

But used creatively, ChatGPT could benefit nearly anyone and be a central component of a new productivity boom. It might debug software for coders. Researchers can become exponentially more productive. Writers and managers can generate new ideas and outlines, allowing them to focus on what they do best. It can offload busywork and paperwork. People who don’t like to write but who have other amazing talents—in art, mechanics, sports, or sales—can now turn their ideas and personalities into written output.

Now for the bad news. Such a powerful and versatile tool will also amplify harmful things, including both disinformation and censorship. Marc Andreessen, as usual, is correct: “The censorship pressure applied to social media over the last decade pales in comparison to the censorship pressure that will be applied to AI.”

Think about today’s chief sources of disinformation, dispensed at even greater speeds globally. Then turn these falsities into millions of articles, video clips, tweets, and TikToks. Now you don’t need human trolls to amplify propaganda. The entire process is automated.

The time and energy required to police these bots will also increase, and Elon Musk’s Twitter 2.0 is already on the job. GPTZero is a new tool that sniffs out AI-based text. Anti-spam counter-technologies will arise to rebut the erroneous material.

Another potential danger of the ChatGPT era will be the amplification of an already dangerous trend: the abandonment—indeed, often the abolition—of human judgment.

Some of AI’s great contributions may, for example, come in health care. But only if we allow physicians, patients, scientists, and entrepreneurs to use AI as they see fit.

A tragedy of the COVID era was top-down, one-size-fits-all medicine-by-decree, which discouraged experimentation, learning, and individual risk-reward calculations. It leveraged the imperious relationship of Medicare and giant health systems over individual doctors and patients. It accelerated the sad decline of doctors into mere box checkers employing protocols written by politicized medical associations.

The rise of AI must thus be met with new rights and institutions that ensure it is a decentralizing force, as opposed to a consolidator of power. Who programs the AI? Which information is the AI allowed to consider?

OpenAI founder Sam Altman says it well:

You should be able to write up a few pages of here’s what I want, here are my values, here’s how I want the AI to behave, and it reads it and thinks about it and acts exactly how you want, because it should be your AI.

Will Altman follow his wise words with action?


This article originally appeared in the AEIdeas blog and is reprinted with kind permission from the American Enterprise Institute.



Original Author: Bret Swanson
