The stock markets recently experienced a dip, with concerns over the prospects for the current artificial intelligence (AI) boom playing a role. As of the end of July:
Investors are sending mixed signals regarding their appetite for tech stocks, as the growing debate over the artificial intelligence boom, and a US clampdown on chip exports to China, raise questions over the direction of growth for key companies.
There were fears of a fresh sell-off after the US-listed shares in the chip maker Nvidia dropped 7% overnight, amid concerns that excitement over companies at the forefront of AI development had been overblown.
Nvidia, which has been the biggest beneficiary of the AI boom, has now fallen by 20% from last month’s peak. The semiconductor designer Arm, which has also benefited from the AI hype, ended the session down 6%.
Shares in Microsoft were down almost 3% in after-hours trading, after it revealed that growth at its cloud division, Azure, had slowed as it struggled to keep up with AI-related demands.1
“Tech bubbles” have been happening for a long time – longer than the phrase “Silicon Valley” has been around. When railroads were still at the cutting edge of innovative technology, the collapse of the Jay Cooke & Company bank’s speculative investments in Northern Pacific Railway bonds set off what is now remembered as the Panic of 1873, which quickly spread to Europe and other parts of the world.2
Navneet Alang published some reflections on the AI hype in The Walrus in May, with a focus on the large language models (LLMs) used by chatbots.3 He touches on a number of issues, including how to define AI “intelligence” in relation to the human kind, and how definitions of intelligence, thinking, mind, self-awareness, and meaning are more than a little complicated even for humans. (The Walrus headline is “AI Is a False God.”)
Sometimes the line between explaining an impression of what AI is about and promoting confusing concepts about what it is can be very blurred. Alang writes:
The sense of there being a thinking thing behind AI chatbots is also driven by the now common wisdom that we don’t know exactly how AI systems work. What’s called the black box problem is often framed as mysticism—the robots are so far ahead or so alien that they are doing something we can’t comprehend. That is true, but not quite in the way it sounds. New York University professor Leif Weatherby suggests that the models are processing so many permutations of data that it is impossible for a single person to wrap their head around it. The mysticism of AI isn’t a hidden or inscrutable mind behind the curtain; it’s to do with scale and brute power. [my emphasis]
Mysticism here is more or less equated with something that’s not easy to understand. But there’s nothing exotic about the notion that computers can perform lots of calculations on data, and do it faster than humans working with spreadsheets and pencils. That’s, you know, what computers do.
Alang’s description of attending a Microsoft event earlier this year meant to tout the wonders of AI – i.e., to promote the investor buzz around the whole field – is interesting and entertaining. He writes as though he is surprised that tech boosters promise utopian results from their latest program or gadget – not exactly a new development! But he provides an important qualification to the undifferentiated hype:
Yet for all the high-minded talk of what AI might one day do, much of what artificial intelligence appeared to do best was entirely quotidian: taking financial statements and reconciling figures, making security practices more responsive and efficient, transcribing and summarizing meetings, triaging emails more efficiently. What that emphasis on day-to-day tasks suggested is that AI isn’t so much going to produce a grand new world as, depending on your perspective, make what exists now slightly more efficient—or, rather, intensify and solidify the structure of the present. Yes, some parts of your job might be easier, but what seems likely is that those automated tasks will in turn simply be part of more work. [my emphasis]
That’s a helpful perspective, as long as we recognize that a lot of “slightly more efficient” starts to look like a “new world” after a while, though not necessarily a grander one.
He also tells a good when-accomplished-historian-meets-a-famous-tech-bro-oligarch story:
Some Silicon Valley businessmen have taken tech solutionism to an extreme.
Actually, taking “tech solutionism to an extreme” is almost a synonym for Silicon Valley. An academic friend of mine in the Bay Area told me about being somewhat bewildered by some of her techie acquaintances, who would talk enthusiastically about the “revolutionary” new approach they were coming up with. But when you got down to specifics about the program they were working on – it was a new video game. With bazillions of dollars in venture capital chasing new possibilities, that kind of hype becomes part of daily life. The venture capitalists, though, are working with a model in which a large portion of the startups in which they invest will never turn out to be profitable. Their business model is to make money on the ones that are. So tech entrepreneurs learn how to make convincing pitches. Continuing with the story:
It is these AI accelerationists whose ideas are the most terrifying. Marc Andreessen was intimately involved in the creation of the first web browsers and is now a billionaire venture capitalist who has taken up a mission to fight against the “woke mind virus” and generally embrace capitalism and libertarianism. In a screed published last year, titled “The Techno-Optimist Manifesto,” Andreessen outlined his belief that “there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.” When writer Rick Perlstein attended a dinner at Andreessen’s $34 million (US) home in California, he found a group adamantly opposed to regulation or any kind of constraint on tech (in a tweet at the end of 2023, Andreessen called regulation of AI “the new foundation of totalitarianism”). When Perlstein related the whole experience to a colleague, he “noted a similarity to a student of his who insisted that all the age-old problems historians worried over would soon obviously be solved by better computers, and thus considered the entire humanistic enterprise faintly ridiculous.”
There is no shortage of people who confuse “technology” with “magic.” Like the solutions to climate change that will somehow magically be produced by Technology – an argument advanced by groups like oil lobbyists who don’t want any solutions that don’t involve relying on fossil fuels from now to eternity. Actual, practical, non-magical solutions like batteries, wind farms, and solar panels don’t count as the utopian answer that the mysterious force known as Technology always has on the horizon – just not right now.
It’s also worth noting that some tech oligarchs use the AI Doomsday talk to promote regulations that they prefer, not because the Skynet of the Terminator movies is about to take over, but because they want to restrict competition that might dilute their monopoly market control. It’s always a good idea to pay attention to the oligarch behind the curtain.4
Alang offers a decent framework for parsing superficial “tech” fantasies:
A common understanding of technology is that it is a tool. You have a task you need to do, and tech helps you accomplish it. But there are some significant technologies—shelter, the printing press, the nuclear bomb or the rocket, the internet—that almost “re-render” the world and thus change something about how we conceive of both ourselves and reality. It’s not a mere evolution. After the arrival of the book, and with it the capacity to document complex knowledge and disseminate information outside of the established gatekeepers, the ground of reality itself changed.
AI occupies a strange position, in that it likely represents one of those sea changes in technology but is at the same time overhyped. The idea that AI will lead us to some grand utopia is deeply flawed. Technology does, in fact, turn over new ground, but what was there in the soil doesn’t merely go away.
Large language models are also a long way from Mr. Data levels of thinking and learning proficiency. A recent Nature article discusses the risk of “model collapse” for LLMs, i.e., “what may happen to GPT-{n} [chatbots] once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear.”5
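The tail-loss dynamic the Nature authors describe can be illustrated with a toy simulation. This is my own hypothetical sketch, not the paper’s actual setup: each “generation” is trained only on a finite sample drawn from the previous generation’s output, and rare items that happen not to be sampled are gone for good, so the distribution’s tail steadily erodes.

```python
import random

random.seed(0)

# Generation 0: a long-tailed "vocabulary" – a few common tokens
# and many rare ones that each appear only once.
population = (["common"] * 500 + ["uncommon"] * 100 +
              [f"rare_{i}" for i in range(100)])

distinct_counts = []
for generation in range(20):
    distinct_counts.append(len(set(population)))
    # The next generation sees only a finite resample (with
    # replacement) of the previous one. Resampling can never
    # introduce new types, but it routinely drops rare ones.
    population = random.choices(population, k=len(population))

print(distinct_counts)  # number of distinct types shrinks, generation by generation
```

Because sampling can only lose types and never recover them, the count of distinct tokens declines monotonically – a crude analogue of the “irreversible defects” and disappearing tails the article describes.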
Alang’s article is a helpful Big Picture look at the current AI discussion.
But while no article or book can cover every nuance of a topic like this, I would stress a couple of other features of AI that are important to keep in mind. One is that AI computations and syntheses of information are not modelled on the human brain, although some developments in AI have given scientists better insight into how human brains function.
The other is the energy problem that billionaire tech bros would mostly like to dismiss, because it disturbs their fantasies about achieving immortality by transferring their brains’ software into androids or whatever. AI consumes a tremendous amount of energy compared to human brains. The biological versions are orders of magnitude more energy-efficient than the current AI devices.
Makortoff, Kalyeena & Jolly, Jasper (2024): Mixed signals on tech stocks amid debate over viability of AI boom. The Guardian 07/31/2024. <https://www.theguardian.com/business/article/2024/jul/31/mixed-signals-on-tech-stocks-amid-debate-over-viability-of-ai-boom> (Accessed: 2024-08-08).
Butts, Mickey (2023): How One Robber Baron’s Gamble on Railroads Brought Down His Bank and Plunged the U.S. Into the First Great Depression. Smithsonian Magazine 09/18/2023. <https://www.smithsonianmag.com/history/robber-baron-gamble-railroads-brought-down-bank-plunged-us-into-first-great-depression-panic-1873-180982877/> (Accessed: 2024-08-08).
Alang, Navneet (2024): AI Is a False God. The Walrus 05/29/2024. <https://thewalrus.ca/ai-hype/> (Accessed: 2024-08-08).
Pay no attention to that man behind the curtain. GreyAllen7 YouTube channel 11/30/2006. (Accessed: 2024-08-08).
Shumailov, Ilia et al. (2024): AI models collapse when trained on recursively generated data. Nature 07/24/2024. <https://www.nature.com/articles/s41586-024-07566-y> (Accessed: 2024-08-08).