
Utopia Talk / Politics / Thousands of CEOs admit AI had no impact
|
murder
rank | Mon Apr 20 02:28:46

Thousands of CEOs admit AI has had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago

In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: Following the advent of the transistors, microprocessors, integrated circuits, and memory chips of the 1960s, economists and companies expected these new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% between 1948 and 1973 to 1.1% after 1973. Newfangled computers were at times producing too much information, generating agonizingly detailed reports and printing them on reams of paper. What had promised to be a boon to workplace productivity was for several years a bust.

This unexpected outcome became known as Solow's productivity paradox. "You can see the computer age everywhere but in the productivity statistics," Solow wrote in a New York Times Book Review article in 1987.

Data on how C-suite executives are—or aren't—using AI shows history repeating itself, complicating the similar promises economists and Big Tech founders made about the technology's impact on the workplace and economy. According to a Financial Times analysis, 374 companies in the S&P 500 mentioned AI in earnings calls from September 2024 to 2025—most describing the technology's implementation as entirely positive—yet those positive adoptions aren't being reflected in broader productivity gains.

A study published in February by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms that responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations.

While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted. Firms' expectations of AI's workplace and economic impact nonetheless remained substantial: executives forecast AI will increase productivity by 1.4% and increase output by 0.8% over the next three years. While firms expected a 0.7% cut to employment over this period, individual employees surveyed anticipated a 0.5% increase in employment.

Is AI actually making people more productive?

In 2023, MIT researchers claimed AI implementation could increase a worker's performance by nearly 40% compared to workers who didn't use the technology. But emerging data failing to show these promised productivity gains has led economists to wonder when—or if—AI will offer a return on corporate investments, which swelled to more than $250 billion in 2024.

"AI is everywhere except in the incoming macroeconomic data," Apollo chief economist Torsten Slok wrote in a blog post, invoking Solow's observation from nearly 40 years ago. "Today, you don't see AI in the employment data, productivity data, or inflation data." Slok added that outside of the Magnificent Seven, there are "no signs of AI in profit margins or earnings expectations."

Slok cited a slew of academic studies on AI and productivity, painting a contradictory picture about the utility of the technology. Last November, the Federal Reserve Bank of St. Louis reported in its State of Generative AI Adoption report that it had observed a 1.9% increase in excess cumulative productivity growth since the late-2022 introduction of ChatGPT. A 2024 MIT study, however, projected a more modest 0.5% increase in productivity over the next decade. "I don't think we should belittle 0.5% in 10 years. That's better than zero," study author and Nobel laureate Daron Acemoglu said at the time. "But it's just disappointing relative to the promises that people in the industry and in tech journalism are making."

Other emerging research can offer reasons why: Workforce solutions firm ManpowerGroup's 2026 Global Talent Barometer found that across nearly 14,000 workers in 19 countries, workers' regular AI use increased 13% in 2025, but confidence in the technology's utility plummeted 18%, indicating persistent distrust.

AI adoption can even be counterproductive past a certain point, according to a study conducted by Boston Consulting Group, leading to "AI brain fry." In a survey of 1,488 full-time U.S.-based workers, respondents reported increased productivity when using three or fewer AI tools, but self-reported productivity plummeted when respondents used four or more, with workers saying they felt brain fog or made more small mistakes as a result of technology overuse.

Nickle LaMoreaux, IBM's chief human resources officer, said this year the tech giant would triple its number of young hires, suggesting that despite AI's ability to automate some of the required tasks, displacing entry-level workers would create a dearth of middle managers down the line, endangering the company's leadership pipeline.

What could reverse AI's productivity pattern?

To be sure, this productivity pattern could reverse. The IT boom of the 1970s and '80s eventually gave way to a surge of productivity in the 1990s and early 2000s, including a 1.5% increase in productivity growth from 1995 to 2005 following decades of slump.

Economist and Stanford University Digital Economy Lab director Erik Brynjolfsson noted in a Financial Times op-ed that the trend may already be reversing. He observed that fourth-quarter GDP was tracking up 3.7% even as last week's jobs report revised job gains down to just 181,000, suggesting a productivity surge. His own analysis indicated a U.S. productivity jump of 2.7% last year, which he attributed to a transition from AI investment to reaping the benefits of the technology.

Former Pimco CEO and economist Mohamed El-Erian also noted that job growth and GDP growth continue to decouple, in part as a result of continued AI adoption, a phenomenon similar to what occurred in the 1990s with office automation.

Some productivity increases may be hiding in plain sight. A study led by the Stanford Institute for Economic Policy Research, using internet browsing data from 200,000 U.S. households, found that generative AI increased the efficiency of online tasks like job hunting, travel planning, and shopping by between 76% and 176%. However, researchers found the time AI users saved on chores was spent hanging out with friends or watching television rather than on work or developing new skills.

Slok saw the future impact of AI as potentially resembling a "J-curve": an initial slowdown in performance and results, followed by an exponential surge. Whether AI's productivity gains follow this pattern, he said, will depend on the value created by AI.

So far, AI's path has already diverged from its IT predecessor's. Slok noted that in the 1980s, an innovator in the IT space had monopoly pricing power until competitors could create similar products. Today, however, AI tools are readily accessible because "fierce competition" among large language model builders is driving down prices. Therefore, Slok posited, the future of AI productivity will depend on companies' interest in taking advantage of the technology and continuing to incorporate it into their workplaces. "In other words, from a macro perspective, the value creation is not the product," Slok said, "but how generative AI is used and implemented in different sectors in the economy."

https://fo...productivity-employment-study/ |
|
Sam Adams
rank | Mon Apr 20 06:47:19 AI is currently nowhere near as useful as a good human at most jobs. So ya. |
|
TheChildren
rank | Mon Apr 20 08:58:38 u got actually indians aka "ai" just admit fools is useless garbage. mostly scams like da indians scam centas |
|
Asgard
rank | Mon Apr 20 10:20:09 It depends. It is very useful for me. I work with Jira, and every time someone sends me a Jira ticket that's defined incorrectly, it's hell to adjust it manually. I gave AI a guide on how to properly set up a Jira ticket for my team, one time, and now I just tell it which ticket to fix and it fixes it by itself. Saves me 10 minutes a day; that's a lot of time cumulatively. That's just one small example. But no, it can't replace people, it can only help, for now. |
|
jergul
rank | Mon Apr 20 10:55:26 I have spent some energy working out how to get past AI to a human. The digital variant of "let me speak to your manager". Thankfully, it is generally not that hard. |
|
jergul
rank | Mon Apr 20 10:55:56 So negative productivity in that limited sense. |
|
williamthebastard
rank | Tue Apr 21 15:48:45 LLMs really are not much more than improved search engines: instead of having to click through a series of links and buttons to pursue a query, that longer process has been baked into fewer clicks or just one. And most importantly, it's been endowed with a pretty passable conversational tone that made the first generation of complete mugs like Pillz think their phones had become self-aware. That's what really sold it to a gullible audience: the all-new customer-friendly user interface packaging. |
|
williamthebastard
rank | Tue Apr 21 15:56:50 They might just as well have programmed a dumb blonde program that just says "I'm so pwetty!" and put all that research money into making a slow computer look really realistic instead, and the Pillz generation would be going "Omg, it's self-aware!" |
|
Pillz
rank | Tue Apr 21 16:19:13 Wtb, the gay neo Nazi sex worker, pretending as though he knows anything Lol |
|
Seb
rank | Tue Apr 21 17:28:51 It's useful for fuzzy automation. If the problem can be done with RPA or a rules engine, LLMs can be a potentially less brittle option. It's not terrible at summarising or extracting key information from a document (particularly if there's something specific in the document you are looking for). Mostly it's overrated and I don't think it's going to get dramatically better. Fundamentally it is not intelligent so much as able to access knowledge (not just information) held within its training data. When people talk shit like "it will be able to come up with new physics", as some of those Accelerationists used to mumble, it will never be able to do that. |
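The brittleness being compared here is easy to demonstrate with a toy sketch: a rules-engine extractor only handles the phrasings its rules anticipate, and that gap is exactly what an LLM-based extractor is pitched as papering over. A minimal Python illustration (all names and document strings invented for the example, not anyone's actual pipeline):

```python
import re

# Toy "rules engine" for pulling a total out of a document.
# It works only for phrasings the rules anticipate.
RULES = [
    re.compile(r"Total:\s*\$(\d+\.\d{2})"),
    re.compile(r"Amount due:\s*\$(\d+\.\d{2})"),
]

def extract_total(text):
    """Return the total found by the first matching rule, else None."""
    for rule in RULES:
        m = rule.search(text)
        if m:
            return m.group(1)
    return None  # unseen phrasing -> the rules engine silently fails

print(extract_total("Invoice #12 Total: $40.00"))         # -> 40.00
print(extract_total("You owe us forty dollars ($40.00)"))  # -> None
```

The second call is where an LLM-based extractor is the "less brittle" option: it can tolerate phrasing variations no rule anticipated, at the cost of being slower and occasionally wrong, whereas the regex rules fail loudly and predictably.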
|
Seb
rank | Tue Apr 21 17:30:15 Accessing knowledge isn't intelligence. Intelligence creates new knowledge. I don't think LLMs can do that. |
|
williamthebastard
rank | Tue Apr 21 17:39:22 Certainly useful for what it actually does. I probably produce better work now using LLMs constantly, but not faster. Sometimes slower. And the conversational, flattering, Hal speech program they included as their main feature wore thin after a while. |