
Will AI replace jobs? Call us skeptics

Communications Team

Communications and Social Media

May 27, 2024 - 7 minutes of reading

https://www.moveapps.cl/en/blog/will-ai-replace-employment-call-us-skeptics/


We have all heard the warnings and the promises. Artificial intelligence (AI) will completely transform work and put millions of jobs at risk.

 

[Image: Robot performing tasks in a laboratory]

AI, far from replacing jobs, could generate more tasks and new opportunities for workers.

We have been studying the world of work for decades and we believe there are many reasons to doubt that this labor transformation will happen, no matter how much technology improves.

The big claims about AI assume that if something is possible in theory, then it will happen in practice. That’s a big leap. Modern work is complex, and most jobs involve much more than the kinds of things AI is good at: summarizing text and generating output based on instructions. And whatever work it does perform, AI needs human supervision and control to produce useful results.

Nor has anyone demonstrated a realistic need for AI at scale. We have long been able to produce magnificent orchestral music electronically, yet symphony orchestras still exist; commercial airplanes have been able to fly largely on their own for decades, yet we still have pilots. And trucking companies never replaced thousands of drivers with autonomous trucks, as some experts predicted a few years ago.

To be clear, AI (in particular, large language models, or LLMs, such as ChatGPT) can do useful things that increase worker productivity. But, if anything, AI will generate more tasks for human workers than it is likely to eliminate, as we found when we reviewed current research on the effects of AI and spoke with vendors who are developing it and employers who use it.

Consider these common claims made for LLMs, and what our research suggests is most likely to happen.

LLMs can handle simple communication tasks with customers and partners.

Yes, LLMs can handle some basic interactions with people. But many of these simple tasks are no longer in the hands of workers anyway. Most basic business correspondence, for example, already consists of lawyer-approved form letters, and call-center employees already follow standard scripts when talking to customers.

It is true that technology companies could develop better chatbots that help customers autonomously, potentially putting jobs at risk. But will companies actually buy this technology? Research suggests, for example, that it may be more cost-effective for some companies to make customers jump through hoops to get their complaints resolved, because some people will simply drop the matter rather than endure the hassle.

In addition, autonomous customer support brings uncertainties that companies are unlikely to want to take on. Firms certainly don’t want chatbots “working on their own” and proposing novel, unwanted solutions to customer problems, as Air Canada discovered when its chatbot promised a customer a fare discount the airline didn’t actually offer.

LLMs are very good at summarizing extensive research and literature.

True, if we want to know the political trends in Uruguay, for example, an LLM can give us an answer in a matter of seconds. But that task doesn’t come up very often in most jobs, and the AI usually provides the same information a Google search would turn up. Worse, the AI may rely on unreliable sources or simply make things up.

So someone who knows what a good report looks like has to evaluate the result. The more of those experts you have, the less you need the AI’s output; the more you need the output (because you don’t have those experts), the greater the risk of relying on it.

In addition, different LLMs do not give the same answers to the same question, and asking one LLM the same question at different times also yields different results. The end result is a duel of ChatGPT outputs: if I don’t like the implications of my boss’s AI-generated report on Uruguay, I can produce my own report that reaches a different conclusion. The vice president then receives both and has to judge which is correct, with no way of knowing why they differ. Again, that means relying on a human expert and, as before, the more such experts you have, the less you need the AI’s output.
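To see the mechanism at work, here is a minimal sketch using the OpenAI Python SDK; the model name and the prompt are our own illustrative assumptions, and any chat model sampled at an ordinary temperature behaves the same way:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Summarize the current political trends in Uruguay in two sentences."

answers = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever you use
        messages=[{"role": "user", "content": question}],
        temperature=1.0,  # ordinary sampling: answers vary from run to run
    )
    answers.append(response.choices[0].message.content)

# The two "reports" will usually differ in emphasis and sometimes in substance,
# which is why a human expert still has to adjudicate between them.
print(answers[0])
print("---")
print(answers[1])

Setting the temperature to 0 reduces this run-to-run variation but does not fully eliminate it in practice, and it does nothing about two different LLMs disagreeing with each other.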

LLMs can make sense of the massive amounts of data that companies and organizations collect today.

That’s true, with one big caveat: LLMs can only perform complex analysis if a human has first organized the data in a form LLMs can read. Doing that work can be overwhelming, which is why AI has already fallen well short of earlier expectations. Right now, according to our research, only 11% of data scientists report that they can get their own organization’s data into the shape needed to produce useful answers.

To get an idea of what the job entails, suppose we want to know why employees leave our company. We have text from exit interviews in one data set, information from performance evaluations in another, and company performance data in a third. Making these sources compatible means, for example, using the same definitions of terms and the same numerical scales. And the data may be spread across different vendors’ systems, which means a lot of back and forth just to obtain it.
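As a minimal sketch of what that harmonization involves (all file names, column names and scales here are hypothetical), consider three exports that must agree on a key and a rating scale before anyone, human or LLM, can analyze them together:

import pandas as pd

# Three sources, each exported from a different system or vendor.
exits = pd.read_csv("exit_interviews.csv")        # employee_id, reason_text
reviews = pd.read_csv("performance_reviews.csv")  # emp_id, rating_1_to_5
company = pd.read_csv("company_performance.csv")  # EmployeeID, score_pct (0-100)

# Step 1: agree on a single key name across all sources.
reviews = reviews.rename(columns={"emp_id": "employee_id"})
company = company.rename(columns={"EmployeeID": "employee_id"})

# Step 2: agree on a single numerical scale (here, 0 to 1) for the ratings.
reviews["rating"] = (reviews["rating_1_to_5"] - 1) / 4
company["company_score"] = company["score_pct"] / 100

# Step 3: only now is there one coherent table to analyze.
merged = (exits
          .merge(reviews[["employee_id", "rating"]], on="employee_id")
          .merge(company[["employee_id", "company_score"]], on="employee_id"))
print(merged.head())

Even this toy version hides the hard part: deciding that a 4 out of 5 in one system means the same thing as an 80% in another is a judgment call that only someone who knows the data can make.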

LLMs can take over the drafting or coding tasks that are essential to some jobs.

The unstated assumption here is that people in those jobs do nothing but write and code, ignoring all the other tasks the jobs involve. Consider computer programming. In reality, programmers spend only about a third of their time writing code; one estimate puts half of their time on administrative tasks, with the rest spent working out customer requirements, solving problems and so on. And although early evidence suggests that LLM programming tools speed up code writing at the start of a project, human programmers must clean up that code later in the process, which claws back some of the time saved.

But even if LLMs completely took over the one-third of the work devoted to pure programming, we still couldn’t cut away one-third of each individual programmer. The only way to reduce the number of programmers would be for their work to be fully interchangeable and centrally assigned, like a typing pool. Instead, most programmers are spread across projects where the knowledge of what to do and how to do it is project-specific. We can’t drop in another programmer just to write 10% of the code.

It’s great that LLMs can make existing jobs more productive. The time saved lets employees do other things; we expect they will use it to catch up with the overwhelming amount of work many of them already have, rather than to take on more tasks. The use of LLMs also creates new tasks: engineers learning how to coax credible results from the models, experts judging whether those results are reasonable and, above all, data managers and engineers wrangling the enormous amount of unused data we already have.

All this will not come cheap for users, either. The data also suggests that LLM providers, which are losing large amounts of money now, could raise their prices in the future. How much an increasingly cost-conscious corporate world will be willing to pay is an open question. But if companies do pay, the technology will likely create more jobs than it eliminates, as virtually every technology before it has done.

 

Source: The Wall Street Journal.

