Economic Outlook

Artificial Intelligence (AI): Focus on the Economic Impacts

By MICHAEL PATON

Toward the end of 2022 a computer program known as ChatGPT—a form of artificial intelligence—burst into the public consciousness with the possibility of revolutionizing communications. Artificial intelligence, broadly defined, is the simulation of human intelligence by computers.

ChatGPT is software that can generate text and hold realistic conversations with human partners because the program has been trained using trillions of words, most taken from the Internet. In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions. In this way, a computer program that is fed examples of text can learn to generate lifelike exchanges with people, or become an image recognition tool that can learn to identify and describe objects and images. New, rapidly improving “generative AI” techniques can create realistic text, images, music and other media.
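
To make that training process concrete, the short Python sketch below is an illustration, not something drawn from the article: it assumes the scikit-learn library and a handful of invented toy sentences. A tiny classifier ingests labeled examples of text, learns which words correlate with which labels, and then predicts labels for sentences it has never seen.

```python
# A minimal, illustrative sketch of learning from labeled training data.
# Assumes scikit-learn is installed; the texts and labels are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled training examples: each sentence carries a label.
texts = [
    "the cat sat on the mat",
    "dogs love to fetch sticks",
    "stocks rallied after the earnings report",
    "bond yields fell sharply",
]
labels = ["animals", "animals", "markets", "markets"]

# Ingest the labeled data and learn word-label correlations.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Use the learned patterns to predict labels for unseen sentences.
print(model.predict(["the cat chased a ball"]))             # ['animals']
print(model.predict(["bond prices and earnings reports"]))  # ['markets']
```

Scaled up from a dozen toy sentences to trillions of words and vastly larger models, the same pattern-learning idea underlies systems such as ChatGPT.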

The power of AI comes from its use of machine learning, a branch of computational statistics that focuses on designing algorithms that can automatically and iteratively build analytical models from new data without explicitly programming the solution. Essentially, it is a tool of prediction in the statistical sense: taking information you have and using it to fill in information you do not have. Many in the scientific community believe that ChatGPT is only the beginning of new and more powerful artificial intelligence programs.
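
Seen as prediction in that statistical sense, the idea fits in a few lines. The sketch below is again an illustration rather than anything from the article: it assumes Python with NumPy and scikit-learn and uses wholly invented numbers, fitting a simple model to the information we have and using it to fill in a value we do not have.

```python
# A minimal, illustrative sketch of machine learning as statistical prediction.
# Assumes NumPy and scikit-learn; the data points are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Information we have: observed inputs and outcomes (toy numbers).
years_of_experience = np.array([[1.0], [2.0], [3.0], [5.0], [8.0]])
annual_output = np.array([10.2, 12.1, 13.9, 18.1, 23.8])

# Build an analytical model from the data rather than hand-coding a rule.
model = LinearRegression().fit(years_of_experience, annual_output)

# Information we do not have: the model fills in the missing value.
print(model.predict(np.array([[6.0]])))  # roughly 20, following the learned pattern
```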

What does this mean for the economy? A study by Goldman Sachs suggested that widespread AI adoption would drive a 7%, or $7 trillion, increase in annual global gross domestic product over a 10-year period. Other studies point to an annual three-percentage-point rise in productivity for companies that are able to adopt the technology. Still other economists believe that increases in global output will eventually reach an astonishing 30% a year, given that this form of AI is only in its infancy.

There’s certainly danger in hyping up a new technology that ends up having marginal macroeconomic effects, but history does show periods when major technological changes transformed the economy. The adoption of the electric motor starting in the late 1800s and of the personal computer in the 1970s proved transformational. A Massachusetts Institute of Technology paper last year found that roughly 60% of employment as of 2018 was in occupations that didn’t exist in 1940.

However, as a report from The Economist points out, the first lesson from history is that even the most powerful new technologies take time to change an economy. James Watt patented his steam engine in 1769, but steam power did not overtake water as a source of industrial horsepower until the 1830s in Britain and the 1860s in America. In Britain, the contribution of steam to productivity growth peaked after 1850, nearly a century after Watt’s patent. In the case of electrification, the key technical advances had all been accomplished before 1880, yet American productivity growth actually slowed from 1888 to 1907. Nearly three decades after the first silicon integrated circuits were invented, computers could hardly be seen in the productivity statistics. It was not until the mid-1990s that a computer-powered productivity boom finally emerged in the U.S.

Questions about the effects of AI on economic growth often take a back seat to concerns about the consequences for workers. Here, history’s messages are mixed. There is good news: despite revolutionary mechanization, economic change and a volley of political criticism, fears of mass technological unemployment have never been fully realized. The bad news: technology can and does take a toll on individual occupations, in ways that can prove socially disruptive. Early in the Industrial Revolution, mechanization dramatically increased demand for relatively unskilled workers but crushed the earnings of the craftsmen who had done much of the work before, which is why some chose to join machine-smashing Luddite movements. More recently, in the 1980s and 1990s, automation of routine work on factory floors and in offices displaced many workers of modest means while boosting employment for both high- and low-skilled workers.

While the jury will be out for some time, AI is a fast-evolving technology with great potential to enhance worker productivity, make firms more efficient, and spur innovation in new products and services. At the same time, AI can also be used to automate existing jobs, exacerbate inequality, and lead to discrimination against workers. While previous technological advances in automation have tended to affect “routine” tasks, AI has the potential to automate “non-routine” tasks, exposing large new swaths of the workforce to potential disruption. The challenge for policymakers is to foster progress and innovation in AI while shielding workers and consumers from the harms that could arise.

About the author: Michael J. Paton is a portfolio manager at Tocqueville Asset Management L.P. He joined Tocqueville in 2004. He manages balanced portfolios and is a member of the fixed-income team. He can be reached at 212-698-0800 or by email at [email protected].
