Periods of crisis and prosperity of capitalism based on the S&P 500 index, which measures the capitalization of the 500 largest companies in the US (considered the most representative index of the real situation of the world market).
2000–present (2024)

Hardware
Acceleration / Hybridization / Homogenization / Energy transition
Sedimentations: Precarity / GMOs: datafication of biology

Software
Technology: Web 2.0: platforms / Artificial intelligence
Business models: Overliquidity crisis / Startups / Big techs / Recession and regression
Institutions: Ungovernability
Hegemonies: Deglobalization
Capitalism 4.0
On the eve of the 21st century, a warning began to circulate: computer systems were not ready for the date change. Many microprocessors used only two digits to record the year, so that after December 31, 1999 they would jump to January 1, 1900. The multiplier effect of the error in operating systems and databases could be enormous: ATMs, bank accounts, communication systems, power grids, air-traffic control towers and nuclear power plants. It was called the “Y2K effect” (year 2000). Governments and companies invested around 300 billion dollars in preventing it. And even so, uncertainty prevailed: the Chinese government forced airline executives to put their own skin in the game by taking a flight on New Year’s Day.
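The defect can be sketched in a few lines of Python (a toy illustration of the two-digit convention, not any actual 1990s system):

```python
# The Y2K bug in miniature: a system that stores only the last two
# digits of the year and hard-wires the century as "19".
def store(year):
    return year % 100            # only two digits saved

def load(two_digits):
    return 1900 + two_digits     # the implicit "19" prefix

assert load(store(1999)) == 1999     # works for a whole century...
assert load(store(2000)) == 1900     # ...then jumps back 100 years
```

Two saved digits are ambiguous between centuries, which is why the fix required auditing every system that stored dates rather than a single patch.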
On January 1, 2000, bus ticket machines stopped working in two Australian districts, as did 150 slot machines in Delaware. A Pennsylvania library recorded a 100-year delay in the return of a book and charged the corresponding fine. There were some rejected credit cards in the United Kingdom, a false alarm at the Onagawa nuclear plant, and Telecom Italia sent its bills dated January 1900. And nothing more. But major changes unfold on multiple time scales. In some way, in several ways, that year the present began.
“What we perceive as the present is the vivid fringe of memory tinged with anticipation,” wrote Alfred N. Whitehead. “Past and future meet and intermingle in an ill-defined present.” Whitehead was a prestigious mathematician who, after the death of his son in the First World War, devoted the last half of his life to building a philosophical system that explained reality as a continuous, seamless process. In that process the “present” is just a precarious frontier, a frayed line between the momentum of the past and the inertia toward the future, a pause along the way to see how far we have come and think about what lies ahead, while the body pushes us forward.
This book is not immune to Whitehead’s ghost. Although it is built on a sharp periodization between versions of capitalism, in the passage from one to another the divisions dissolve, continuities and layers of time stretch the pasts out until they are tied together with portents of the future. My history of the present begins almost arbitrarily with the discreet crisis of the year 2000 and from there flows through the different layers, plunging toward the future. But first, let us review the process that brought us here.
“What would be the axioms of a cosmic sociology?” sociologist Luo Ji asks astrophysicist Ye Wenjie in The Dark Forest, the second part of Liu Cixin’s Three-Body trilogy. “The first,” Ye replies, “is that the primary need of any civilization is its survival. The second, that although civilizations grow and expand, the total amount of matter in the universe is always the same.” Any organization is a circuit of individuals, objects, energy and information. Human organizations carry a built-in defect: so far they have consumed more energy than their environment can regenerate. By the 17th century, Europe had resolved that deficit in two ways that would prove irreversible: it transitioned toward fossil energy and systematically appropriated non-European resources. Little by little a world system was configured that connected and exploited diverse ecosystems in a network centered on Europe and controlled by anti-market monopolies that accumulated private capital. Capitalism 1.0 was born. The “Industrial Revolution” merely incorporated steam engines and wage labor into the center of the circuit. And British hegemony, after the revolutions and wars of 1780–1820, organized this almost accidental software with free trade among nations.
In 1873 free trade saturated the market with supply and the system collapsed. The software was renewed with anti-market processes: capital concentration and internalization of processes. Thus capitalism 2.0 was able to grow again by systematizing innovation, massifying demand and labor, and expanding the energy regime with oil and electricity. The world became more homogeneous and more hotly contested. After the destructive cycle of 1914–1945, US hegemony perfected that software with international institutions, transnational corporations and national systems of welfare and planning.
From 1968 onward, US hegemony began to crack. Energy became more expensive, economies stagnated and transnational capital relocated its production and poured its profits into the global financial system. Part of that capital was invested in information technologies that made it possible to segment supply and displace labor. Governments accompanied these changes with fiscal adjustment and deregulation policies. From the 1990s on, this capitalism 3.0 was organized around the world’s financial markets and international organizations, and financial flows concentrated in the new digital technologies and “emerging markets”.
Everything began to change with the new century.
The Y2K effect
In March 2000 the NASDAQ 100 stock index reached its all-time high: 5,132 points. The National Association of Securities Dealers Automated Quotation had been created in 1971, in the midst of the crisis of capitalism 2.1. Unlike the New York Stock Exchange, headquartered on Wall Street, NASDAQ was a virtual stock market, faster and with fewer entry requirements. That is why it was preferred by companies linked to the new technologies of capitalism 3.0. As the web came to be perceived as a commercial space, digital ventures with the .com (commercial) domain proliferated, with almost identical business models trying to capture venture capital (even if that meant hiding or altering information, as Yahoo did) under the premise “first grow, then earn”: get visibility, financing and a high stock price; profits will come on their own.
On March 13, 2000, the dot-com bubble burst and almost 5,000 digital companies disappeared through bankruptcy or merger. On December 23, 2001, Argentina declared default on its sovereign debt, closing a cycle of emerging-market crises that had begun in 1994. Between the fall of the dot-coms and the Argentine default, other things happened: China managed to be accepted into the World Trade Organization as a “market economy”; the United States suffered a series of terrorist attacks on its own territory and embarked on a complicated war in the Middle East; and climatologist Paul Crutzen, in the heat of a discussion with geologists, brought back the word “Anthropocene”. The five events can be explained by the preceding process, while much of what came afterwards is connected to those five events: web 2.0, the 2008 crisis, the Arab Spring, the Islamic State, rising commodity prices, US–China disputes, left- and right-wing populisms, the acceleration of artificial intelligence, the pandemic, the climate crisis and this book. With the 21st century, something new definitely began.
Is this new thing a capitalism 4.0 or a 3.2? A new version of capitalism or the refinement of the previous one? It is hard to say when we are still in the middle of the process. “The owl of Minerva spreads its wings only with the falling of the dusk,” says Hegel: knowledge always arrives at the end of the day. In the meantime, everything is so close that we cannot see it: thousands of conjunctural trees hide the forest of structure. My hypothesis in this chapter will be that overcoming this process will require a new version of capitalism, some of whose features we can already see. We live in a present that is rushing toward the future. To simplify, I will focus on two vectors: digital disruption and planetary irruption. In short, I return to the software and hardware that have organized this entire book. Only now the software is becoming hardware and the hardware is becoming software.
Digital disruption
The dot-com crisis, as often happens, concentrated capital. The companies that survived were able to acquire those that went under, along with their developments and infrastructures, at knock-down prices. Google, eBay and Amazon, among others, achieved a dominant position in the market, and their scale allowed them to internalize processes and innovations without having to worry about competition. Once again, a market process consolidated anti-market forces. Those anti-market forces included the US state itself, which recruited several digital companies into the security complex deployed after the September 2001 attacks. The best-known case is Palantir, founded in 2003 by Peter Thiel to provide software and data analysis to intelligence services. Years later it was leaked that the National Security Agency had access to user data from Google, Apple, Facebook, Dropbox, Microsoft, AOL and Yahoo for the secret surveillance project PRISM.
The crisis also made connectivity cheaper: with the bankruptcies, many corporate users dropped off the network but investment in infrastructure remained, so there was excess capacity. For its part, the US government maintained its policy of encouraging investment through cheap credit and low taxes, and the excess liquidity sustained the supply of venture funds waiting for the next digital entrepreneur with a disruptive business idea in need of financing. The web was ready to be rebuilt.
Web 2.0: platforms and startups
In 1988, Donald Norman popularized the concept of “User-Centered Design” (UCD), a design methodology (which some elevate to the level of a “philosophy”) that arose at the University of California, San Diego, in the 1980s. UCD aims to optimize products by centering every step of their design on the needs of the end user, on data analysis, and on iteration, that is, continuous improvement based on user feedback. UCD was the field on which disciplines such as product management and UX (user experience) were developed. In 1993, Apple incorporated UX as an in-house position and hired Norman himself. (I thank Juan Manuel Garrido for this information.)
UCD opened up a line of work that sought to make online navigation easier, harmonize interfaces and retain the user in an increasingly mass internet. After the dot-com crisis, these UCD developments became the basis for building what later came to be called “web 2.0”: a network based on interaction, social networks, applications designed for individual use, and the subsequent collection and algorithmic organization of that user’s data.
In technological terms, this is a three-legged system. First, the web itself, which expanded the number of users, the information available, connectivity and physical devices through the “internet of things”, that is, the connection of any physical object to the network to receive and send digitized information. Then, platforms, intermediary digital infrastructures on which different users interact and from which data can be extracted. Finally, algorithms, instructions or sequences of steps that make it possible to automate responses or solutions and classify huge data sets to identify patterns and learn new responses and solutions. In this way it becomes possible to integrate people and objects into the web by means of their interaction on platforms and to mine the data from that interaction using algorithms. The project of territorializing the internet already existed: if web 1.0 laid out roads and signs so that we could navigate it comfortably, the platforms of web 2.0 built towers and gated communities that use public data to refine private algorithms.
In terms of business models, there emerged advertising platforms, such as Facebook or Google; intermediaries, such as Uber or Airbnb; service platforms, such as Netflix; infrastructure platforms, such as Oracle or Amazon Web Services, which rent out their clouds to other digital companies; and industrial platforms, such as those developed by Siemens and General Electric to connect the manufacturing process to the internet, also known as industry 4.0. In all cases, the aim is to extract data from users’ activity; in many cases, it is also to cut costs by minimizing assets and skipping intermediations. Lately, moreover, the old and sound habit of charging for things is prevailing, offering as premium what used to be “free”, such as Google storage space.
Many of these platforms are designed and managed by large digital technology corporations, the so-called big techs. But the driving agent of capitalism 4.0, the one that captures all its libido and narrative, is the startup, the nascent company formed by young entrepreneurs around a plan or “mission”: to offer a novel product or service capable of creating its own market and monopolizing it for a while. The ecosystem that makes them possible was slowly cooked during capitalism 3.1 and congealed in 4.0: lots of liquidity and “disruptive” digital technologies that make it possible to do without assets, reduce costs and bypass intermediaries. In that ecosystem, startups aspire to be funded by venture capital or some “angel investor” who puts in money out of their own pocket, before going to the stock market with an initial public offering (IPO). Those that manage to be valued at more than 1 billion dollars before the IPO, or in less than ten years, like SpaceX or Mercado Libre, are called “unicorns”, an animal as rare and beautiful as it is mythical. More realistic startups aim to be “gazelles”: maintaining growth of 20% for four years, thanks to a market niche on which they can iterate their business cycle.
Startups move capitalism 4.0 emotionally by reviving two passions of capitalism 1.0: the ancestral aspiration to own one’s own business and the romantic figure of the “entrepreneur”, the bold, innovative businessperson who operates outside the structures and prejudices of their time. Schumpeter had warned that the corporations and regulations of capitalism 2.1 would tame and choke entrepreneurs like Edison or Daimler. Capitalism 4.0 brought them back and consolidated a model based on radical innovation, accelerated growth and the audacity to spot market opportunities.
But the startup is an ideal that represents only the tip of the business pyramid. In the rest of the pyramid, most of the really existing nascent companies consist of family or friends’ ventures that start with low capitalization, 5,000 dollars or less, and grow slowly. On the other hand, a large part of innovations are produced in large, bureaucratic organizations (private corporations or public research agencies), precisely thanks to the scale and rules under which they operate. Finally, while entrepreneurship is commonly associated with highly capitalized spaces on the technological frontier, with Silicon Valley at the forefront, the incentives to undertake can come from elsewhere. Otherwise, it is hard to explain why the percentage of early-stage entrepreneurial activity is higher in Ecuador (32%) and Burkina Faso (34%) than in the United States (13%). Later we will see what those incentives are.
Post-2008 depression
The new economy did not enjoy even ten years of peace. In 2008 a new crisis erupted, this time much larger. It was sparked by a series of mortgage loans granted to US citizens who were far below the qualification required to obtain them. But it quickly escalated into banking fraud, massive bankruptcies and dubious financial practices. At bottom it was the overliquidity policy of capitalism 3.0 that was dying. Since the beginning of the century, US individuals and European states had been living on cheap credit. Thus they had been able to sustain mass demand despite the regressive redistribution of income that favored the richest. When that debt-based Keynesianism became unsustainable, governments rushed to save the banks by injecting more and more money at the cost of tightening public services and “social spending” further and further.
Once the financial crash had passed, the world economy entered a plateau. Neither low interest rates nor monetary expansion nor deficits nor tax cuts managed to revive growth. Investment and employment remained low until 2016. Afterwards, the recovery was partial and localized. Viewed in the long term, the 2008 crisis catalyzed problems and contradictions that capitalism 3.0 had been accumulating since 1980: the average annual growth rate of all countries was 0.7%, two percentage points lower than over the previous twenty years.
The forces that were stalling economic growth were deep. The technologies and business models that matured in the transition between capitalism 3.1 and 4.0 are regressive and recessionary. They are regressive because digitality cheapens capital, destroys skilled jobs and empowers managers, whose decisions weigh more, to the point of being able to set their own salaries. The 4.0 company can hire less-skilled staff and reinforce control over them, thus establishing a kind of digital Taylorism. A growing number of self-employed workers remain outside the wage system. The share of wages in GDP fell by between 40% and 50%, even in fast-growing countries such as China or India.
At the global level, digitality favors the return of value chains to developed countries, known as reshoring or nearshoring. Between 2011 and 2014, trade in intermediate goods fell by half. What is the point of seeking a worker on the other side of the world who works longer for less if a robot can do it better and more cheaply here, at headquarters? The world is shrinking and deglobalizing; the periphery is not left out but, like workers, it is assigned less-skilled and lower-paid functions, such as assembling components.
Another regressive feature of capitalism 4.0 is capital concentration. After the brief phase of economic deconcentration that characterized the young capitalism 3.0 of the 1970s, the number of firms in the market and on the stock exchange fell, and the size of companies increased, widening their profit margins by 39%. Wealth became concentrated again, and startups themselves are feeling it. The United States reached its peak number of companies in 1996, a year in which there were 700 IPOs. Since then, the number of companies has fallen by 46% and the average number of IPOs has been 100 per year. Among tech companies, IPOs were about 1 in 50,000. Only 1% of startup attempts received money from angel investors. The percentage of ventures financed by venture capital is even lower.
This concentration is in part the result of anti-market operations by corporations that seek to control intellectual property rights, data sources, infrastructure, etc. But the very characteristics of digital goods make it possible to form “natural technological monopolies”, an oxymoron explained by three factors. First, digital companies, like a railway, require scale and network effects that, once consolidated, favor concentration if not outright monopoly. Second, data gain value by volume, so their extraction and processing also tend toward concentration. Third, digital goods are ridiculously cheap —since the mid-20th century the price of computing operations has been divided by 100 billion—; those who operate with them enjoy a comparative advantage over those who operate with tangible goods, whether they compete (Walmart vs. Amazon) or are integrated in the same value chain (Apple outsourcing the manufacture of all its hardware).
In addition to being regressive, capitalism 4.0 is recessionary. In an almost unprecedented phenomenon, productivity increases from digital innovation do not statistically correlate with macroeconomic growth. For Cédric Durand this is explained by anti-market trends: “the services supplied by Google, social networks and many applications are not commodified except in a residual way through advertising. Advertising revenues are indeed counted, as intermediate consumption by advertisers, but there is no direct imputation of the services provided to consumers.” However, at this point in the book we cannot claim that anti-market forces do not make the economy grow. Quite the opposite. The problem lies elsewhere and the debate remains open. For suit-and-tie Keynesians such as Ha-Joon Chang or Larry Summers, former US Treasury secretary, the problem is that digital disruption deflates demand, investment and growth, especially in a context of regressive income distribution. Behind every intermediation skipped by a startup that does not invest in capital or hire workers, there is a 2.0 company that went bankrupt. Behind every YT Music or Tidal playlist there are distributors and record stores that close. For black-jeans libertarians like Elon Musk or the aforementioned Thiel, the problem is that after the dot-com crisis, distrust of bold projects spread and most entrepreneurs settled into conservative business models. Disruption never went beyond ICT, when there is a whole world of opportunities for technological innovation and bold investments, starting with genetics and the space race.
The problem may be the exhaustion of a business model that capitalism 4.0 inherited from the previous version. Connected to the drip of overliquidity, the big techs bet on growth rather than profit, and on playful names like “google” or “yahoo”, but they have already crossed the threshold of economic innocence: labor and legal conflicts, murky entanglements with the state and direct impact on society and politics. Mark Zuckerberg giving explanations in the US Senate was closer to David Rockefeller than he wanted to believe. Industrial platforms exacerbate overproduction and absorb part of already thin manufacturing profits. The growth model based on offering services “for free” thanks to the zero marginal cost of digital goods, advertising and cross-subsidies no longer pays off. The Amazon model of owning physical infrastructure, subscription services and low wages is prevailing. Hardware returns. But if all corporations converge on the same business model, competition will be tougher, costs more rigid and conflicts more territorial and abrasive.
Is this what capitalism 4.0 will look like? A recessionary and regressive world? Must we get used to a depressive capitalism, living with the faith of finding unknown pleasures? Neither Malthus’s pessimism nor Ricardo’s optimism makes sense until we see the steam engine of our era running at full power. At the 2012 edition of the ImageNet Large Scale Visual Recognition Challenge, the largest visual recognition software contest, computer scientist Geoffrey Hinton, along with two of his students, presented a program capable of recognizing 20,000 objects with 70% more accuracy than the others. It was the debut of deep learning, our steam engine. But that story has to be told from the beginning.
A brief history of artificial intelligence
A history of AI might begin in 1900 with mathematician David Hilbert’s challenge (to establish axiomatic foundations from which all known and yet-unknown mathematics could be deduced through mechanically checkable proofs), continue in 1931 with Kurt Gödel’s response (in any consistent axiomatic system there exist mathematical statements that can be neither proved nor disproved); then present the “decision problem” (can there exist a method or machine capable of answering any arithmetic question with a yes or no?) and end with Alan Turing’s 1936 proposal: an abstract, effective and programmable machine capable of performing mechanical calculation by following a set of instructions.
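Turing’s abstract machine, a tape, a head and a table of instructions, can be sketched in a few lines (a toy simulator; the bit-inverting program is an illustrative example, not Turing’s own):

```python
# A minimal Turing machine: rules map (state, symbol) to
# (symbol to write, head move, next state); the machine halts
# when no rule matches.
def run_turing(tape, rules, state="q0", blank="_"):
    tape, pos = list(tape), 0
    while (state, tape[pos]) in rules:
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if pos == len(tape):
            tape.append(blank)       # extend the tape on demand
        if pos < 0:
            tape.insert(0, blank)
            pos = 0
    return "".join(tape).strip(blank)

# Example program: invert every bit, halting on the blank symbol.
invert = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
}
assert run_turing("1011", invert) == "0100"
```

The point of the construction is that the instruction table is itself data: one machine, suitably programmed, can imitate any other.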
At this point, any history of AI notes the bifurcation in the paths to that abstract machine. On one side, Walter Pitts and Warren McCulloch developed in 1943 a network of nodes that switched on and off to emulate the functioning of neurons. Pitts and McCulloch were recruited by Norbert Wiener to the Macy Conferences on cybernetics from 1946 to 1953. John von Neumann was also there, a veteran of the Manhattan Project who took the binary model of Pitts and McCulloch to develop the EDVAC computer (until then he had been working with a decimal model). On the other side, a group made up of Marvin Minsky, John McCarthy and Herbert Simon, among others, set up their own camp in Dartmouth in 1955 to “carry out a study of artificial intelligence” (the term McCarthy coined for what until then had been called “computerized simulation”), working on a predictable logical-symbolic model based on input–output rules: if X, then Y.
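The McCulloch-Pitts unit is simple enough to write out (a toy sketch: weighted inputs, a threshold, a binary on/off output):

```python
# A McCulloch-Pitts unit: it fires (returns 1) when the weighted
# sum of its binary inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, such units compute logic gates,
# which is how the 1943 paper connected neurons to logical calculus:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

A single unit is already a logic gate, so networks of them can, in principle, compute anything a rule-based machine can, which is precisely what made them attractive to both camps.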
The cyberneticians were interested in life. Wiener and his people sought to emulate in machines and society the self-organizing feedback systems of biological beings, including autonomous thought. Their computational models were inspired more by automation, probability and even thermodynamics than by logic. Working with artificial neural networks enabled self-organization “from below” from a random beginning. For example, the perceptrons developed by Frank Rosenblatt in the late 1950s were capable of recognizing letters without being explicitly taught.
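Rosenblatt’s learning rule can be shown on a toy task (an illustrative sketch: the perceptron is never given the rule for OR, only labeled examples, and it adjusts its weights from its own errors):

```python
# A tiny Rosenblatt perceptron: weights start at zero and are
# nudged toward each example the unit misclassifies.
def train_perceptron(samples, epochs=10, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                     # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Labeled examples of logical OR; the rule itself is never written down.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    assert (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == target
```

This is self-organization “from below” in miniature: the behavior emerges from corrections, not from programmed instructions, which is what separated the cyberneticians from the Dartmouth logicians.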
The Dartmouth logicians worked on perfecting rules and instructions for abstract machines focused on specific problems. In symbolic, rule-based AI, logic basically does the work. They achieved successes such as the General Problem Solver, got publicity and funding, and began to disparage the cyberneticians, pointing out that it would never be possible to manufacture as many artificial neurons as there are in a human brain. Public funds for neural networks were cut off in 1969. The logicians dominated the field for the next twenty years. Their zenith was Deep Blue, the chess computer developed from 1985 by two students who joined IBM in 1989. In 1997, Deep Blue defeated world champion Garry Kasparov. But it was only good for beating Kasparov: it had been fed for fifteen years with ever-finer rules for that single purpose.
Meanwhile, computing power and data volume were growing, and the cyberneticians had their revenge. In the 1980s, network researchers distributed problems among artificial neurons to process them in parallel. They trained algorithms through backpropagation with massive amounts of specific data. Deep learning was born. It was no longer necessary to manufacture as many neurons as the human brain has; the network could detect patterns and associate them without being programmed to do so, and could even function with incomplete inputs, such as a torn photo or a badly sung melody. Because it does not seek a logical output but a point of equilibrium, the network always reaches some solution. In 1989, funding returned to neural networks.
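The core of backpropagation, the chain rule applied from the output back toward the input, can be shown on a toy two-weight network (an illustrative sketch; the particular weights and input are arbitrary, and the analytic gradient is checked against a numerical estimate):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)       # hidden activation
    y = sigmoid(w2 * h)       # output activation
    return h, y

def grads(x, w1, w2):
    # Backpropagation: chain rule from output to input.
    h, y = forward(x, w1, w2)
    dy = y * (1 - y)              # gradient at the output pre-activation
    dw2 = dy * h                  # dy/dw2
    dh = dy * w2 * h * (1 - h)    # gradient at the hidden pre-activation
    dw1 = dh * x                  # dy/dw1
    return dw1, dw2

# Sanity check against central finite differences.
x, w1, w2, eps = 0.5, 0.8, -1.2, 1e-6
dw1, dw2 = grads(x, w1, w2)
num1 = (forward(x, w1 + eps, w2)[1] - forward(x, w1 - eps, w2)[1]) / (2 * eps)
num2 = (forward(x, w1, w2 + eps)[1] - forward(x, w1, w2 - eps)[1]) / (2 * eps)
assert abs(dw1 - num1) < 1e-6 and abs(dw2 - num2) < 1e-6
```

In a real network the same bookkeeping runs over millions of weights at once, which is why the technique had to wait for the computing power and data volumes of the 1980s and after.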
The Maya predicted the end of the world for 2012. But they did not specify what world would come afterwards. That year, Hinton, one of those 1980s researchers, not only won the ImageNet contest but also systematized developments in neural networks in a paper that included groups from the University of Toronto, Microsoft, Google and IBM. Google’s X lab built its own network that same year. In 2013, Google hired Hinton and the following year acquired the startup DeepMind. In 2016, AlphaGo, a program developed by DeepMind for Google, defeated the world Go champion. Unlike Deep Blue, AlphaGo was not stuffed with rules: it was left to play against itself while its deep learning went who knows where. No one yet knows how it “thinks”. Its rivals say it is like playing against an alien.
The history of AI is the story of those who sought to replicate consciousness against those who sought to replicate life. Mind vs. body, mechanism vs. organism, rationalism vs. romanticism. The idea of “artificial intelligence” belongs to the former, but our era belongs to the latter. “Artificial intelligence” is organic, physical, it breathes feverishly around us, it is going to physically surround us with new perceptive hardware, or with entire cities, such as China’s City Brain project. But it will also collect all the human garbage we leave on the web: biases, verbal violence, fake news. Engineers try to compensate for these biases through reinforcement learning from human feedback: manually redirecting results so that deep learning does not go off the rails. But it is like trying to stop Cthulhu with a slingshot: the volume of data and depth of learning are on a non-human scale.
Meanwhile, business is already rearranging itself around an abstract and organic machine that envelops us. In January 2024, thanks to Copilot, its generative AI project, Microsoft once again surpassed Apple in value after twenty years. Andrew Ng compares AI to electricity: a technology that is revolutionary in itself and can also revolutionize other branches of the economy. As with electricity, a concentrated public-service model (the electrical grid, or the AI service offered by giants such as Google or Alibaba) confronts a customized and competitive model (the battery, or the solutions that various startups propose for companies conceived around AI).
But the history of AI is not over yet. Hinton himself, who resigned from Google out of concern over AI’s use, acknowledges to his friend and colleague Hector Levesque, a supporter of symbolic AI, that “symbols exist in the external world”. Today systems aim at “hybridity”, not only because they combine hardware with wetware (natural neurons integrated into node networks), but also because they integrate symbolic (logical) and connectionist (network-based) processing of information. Introducing predictable rules into AI may be a way to head off monstrous drifts of deep learning.
But all this speaks only of one hemisphere of global digitality. Let us look at the other.
China: the other internet
If the USSR abruptly shifted from planned communism to chaotic capitalism, China managed to transition from chaotic communism to planned capitalism. The constant mobilization and overflow of the Maoist Cultural Revolution trained a flexible and pragmatic leadership and a society specialized in survival. When it came time to move toward capitalism, Deng Xiaoping and his circle did so experimentally and gradually, without Castro’s harangues or Yeltsin’s overacting. In 1978 they dismantled the rural commune system to encourage starving peasants to make money however they could. Later, they created special zones for foreign direct investment. By 1984 China had a dual economy, half private, half state-owned, growing at 8% per year. In 1987 they privatized state-owned enterprises, in 1993 they proclaimed the “socialist market economy”, and in 1999 they legalized private property. In 2001 China was able to join the WTO. In all these phases the Communist Party maintained control of government and of much of the economy, and its capitalism 3.0 consisted in imitating technologies with the advantage of low labor costs.
From the 2008 crisis onward, Chinese growth slowed and problems came to light. Its political system encourages corruption and waste by competing local governments; corporate information is opaque or outright false; zombie firms proliferate with no one deciding to close them; and the cheap labor supplied by its 114 million rural migrants, deprived of rights by the hukou registration system, is no longer so cheap. (Hukou is a family registration system that assigns rights and access to services according to the place of registration, privileging “locals” over “non-locals”, that is, internal migrants. Although it has ancient roots, it was formalized in 1958 during Mao Zedong’s communist government; the reforms and opening of the 1980s progressively relaxed it, but it remains in force to this day.) Ah, the market.
Faced with this drift of the hardware, Chinese software changed strategy. First, it did something very much of the era: sustain economic activity by issuing bonds and encouraging private indebtedness. A risky move that incubated a real estate bubble. Second, it did something very Chinese: it reinforced political control, especially from Xi Jinping’s presidency onwards; the cult of personality and official doctrines (“Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”) returned to China, along with a discourse of austerity and a certain purging of the financial system.
Third, and most important, China changed its technological strategy: from offering itself as the world’s factory by imitating technologies to imposing itself as a technological power by leveraging the accumulated physical and human capital. The iconic assembler Foxconn was overshadowed by the champions of a new era: WeChat, the mega-app that uses the smartphone to record almost the entire life of its user, or Hikvision, the video surveillance company that supplies US security forces thanks to its low cost and the fact that Chinese facial recognition technology, tested in Africa and Xinjiang, works on non-white faces.
China’s comparative advantage is an immense volume of digital data. With the spread of cheap smartphones, the Chinese made up for their connectivity lag and set the global trend of doing everything via mobile, walking through life staring at a screen. Only they do it in a single platform: WeChat. The volume of data generated in China is far greater than in other countries and remains within its own digital ecosystem, outside global platforms, thanks to a constellation of native big techs such as Tencent, Xiaomi and Alibaba. And to the Chinese state, whose symbiosis with companies rests on technology: firms develop goods and services for social control, and the state funds these developments and leads the global race for 5G and quantum computing that those goods and services require to function.
Here it is worth pausing over the particularity of the Chinese internet. In March 2000, Bill Clinton said the Chinese government’s attempt to control online freedom of expression would be “like trying to nail Jello to the wall”. Three years later, the Chinese government effectively controlled online freedom of expression thanks to the Golden Shield project, a system of firewalls and proxy servers to block IPs that transmit certain content. But the Chinese internet is not monolithic: beneath the Golden Shield, Chinese firms, foreign firms based in China, Chinese entrepreneurs living abroad (Eric Yuan, the founder of Zoom, was caught in a jurisdictional tug-of-war between Washington and Beijing during the pandemic), local governments interested in developing their own digital districts, users, and a market of cannibal entrepreneurs all interact. As an example of the latter, consider the venerable Wang Xing, who between 2003 and 2010 cloned four US apps pixel by pixel for Chinese users: Friendster, Facebook, Twitter and Groupon. Not only did he face no legal trouble, he became a role model local entrepreneur. Inside every Chinese digital entrepreneur, that peasant desperate to survive is still beating.
This ecosystem, as closed at the top as it is wild at the bottom, developed a specific digitality, with its own search engine (Baidu), its own platforms for domestic consumption (Weibo) and global reach (TikTok), and its own data centers (such as the one Tencent built in the mountains of Guizhou). And it is much better prepared to take the next step in the AI race: moving from invention to application. It has more data, fewer legal obstacles and a great hunger for enterprise. After AlphaGo defeated the world champion in 2016, China produced more research on deep learning than the United States. At the end of that AI race, a global hegemony will likely be defined and, with luck, a battery of institutions to govern the new capitalism. We still do not know what they will be like, but we already know our machine, whose edges coincide with the world.
Surrounding capital
The novelty of capitalism 4.0 is not disruption (we have seen that any technology is disruptive when it meshes into new layers M, I and H), nor information and communication technologies (we have seen that every society had them), but scale. Again: any human organization is a network for circulating people, energy, information and natural and artificial objects. Even before capitalism, the trend was toward expanding these networks and making those artificial objects ever more complex, which at some point were called “capital”. Today that expansion has resulted in a physical and virtual infrastructure on a planetary scale, capable of capturing and processing information on a non-human scale. That is the novelty of our present and its power for the future, but also what connects it to the past.
To illustrate that infrastructure, imagine a city perched atop an iceberg. The inhabitants on the surface are us, the online community, its administrators and its “content”: culture compressed and rapidly distributed in digital formats. The city is web 2.0 as a container that houses several neighborhoods and buildings (platforms, search engines, browsers, etc.). All of them are programs, but each program is a brick that can be turned into data for another program. An abstract architecture of virtually infinite metaprograms. Machine learning is essentially that: a learning program that, from data series, builds another program capable of recognizing new data. Below this level are its different physical supports: devices, modems, routers, servers, data centers. Further down there is a planetary-scale physical infrastructure: continental and submarine cables, satellites. Technology is already an ecology. Software has turned into hardware.
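The metaprogram idea sketched above (a learning program that, from data series, builds another program capable of recognizing new data) can be shown in miniature. The following is an illustrative toy, not any real machine-learning library: a one-feature "learner" that searches for the threshold best separating two labels and returns a new program, a function, that classifies values it has never seen.

```python
# A program that, from a series of data, builds another program
# capable of recognizing new data: machine learning in miniature.
# (Hypothetical toy example for illustration, not a real library.)

def learn(examples):
    """examples: list of (value, label) pairs, with labels 0 or 1.
    Returns a NEW program (a function) that classifies unseen values."""
    # Candidate thresholds: midpoints between adjacent observed values.
    values = sorted(v for v, _ in examples)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]

    def accuracy(t):
        # Fraction of examples the threshold t classifies correctly.
        return sum((v > t) == bool(label) for v, label in examples) / len(examples)

    best = max(candidates, key=accuracy)
    return lambda x: int(x > best)  # the learned "metaprogram"

# Training data: small values labeled 0, large values labeled 1.
model = learn([(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)])
print(model(2.5))  # classifies new data it never saw during "training"
print(model(7.5))
```

The point is the architecture, not the arithmetic: `learn` is a program whose output is another program, exactly the brick-into-data relation the paragraph describes.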
This ecology is capital that has grown to envelop us, a kind of surrounding capital. Surrounding capital allows us to use less individual capital: the zero marginal cost of digital objects, the absence of assets in startups, the intermediations destroyed by platforms and the forty objects we can replace with a cheap phone are examples of this austerity paid for by our environment. Surrounding capital is a form of collective wealth, although privately owned and managed, unthinkable at any other time in history, but which makes it possible for us to have fewer things than our parents. Surrounding capital evades any existing form of government and subjects its human and non-human environment to the instability inherent to a still-emergent technoeconomic model. It is a planetary machine that brings us austerity and uncertainty. Discussions about how many jobs AI will destroy (around 30% of those currently existing), the growing desalarization of new forms of work, and what will happen to those left out must be framed within this broader context.
Surrounding capital is the historical trait of our present, capitalism 4.0. But it is an emergent structure, whose effects are not yet fully deployed and which does not come with its destiny written on its forehead. The political and social frictions it encounters will define its shape.
Submarine cables. The layout of submarine telecommunications cables largely coincides with the old telegraph network and with the maritime trade routes of past centuries.
Three political questions
a) Governability and ungovernability
The relationship between people is a political construction. And the relationship of people with (and through) their capital is even more so. What is the politics of surrounding capital? In principle, it is a continuation of what Lippmann envisioned and capitalism 3.0 initiated: shaping a new subject from an artificial ecosystem. Only in capitalism 4.0 that ecosystem is a much more encompassing and invasive technological park that makes it possible to capture data from each of us, melt them into a statistical mass and return them to an individual redefined as a “targeting profile”, ranging from potential customer to possible terrorist. We live in a digital cocoon that captures every action, channels human flow and detects anomalies.
Once again, the difference is scale. An algorithm perceives a regularity —a series of homogeneous data— and extracts a pattern that predicts actions. A Paleolithic hunter did this by following tracks; a farmer does it by selecting seeds; Spotify does it by recommending Ginastera because we listened to Bartók and Magma. The 4.0 novelty is the emergence of a global infrastructure that automates that operation and scales it to a non-human level. In 2015 the volume of digitized information available was 5 zettabytes, a 5 followed by twenty-one zeros. During 2020, 1.3 million people per day joined some platform. Today more than 54% of the world’s population uses some social network. Under these conditions, it is possible to cross and scale old biometric data with new behavioral data recorded by digitality and control information about physical complexion, social behavior and sensitivities of individuals and populations.
Every society manufactures its individuals; to govern someone requires telling them what they are. The new 4.0 ecosystem defined a flat and transparent subject, whose behavior is more important to predict than their motives to understand. But this subject is not clay in the algorithm’s hands. It reacts in a non-linear way. Web 2.0 replaced a Fordist communication logic (a few mass media producing homogeneous information for many users; the cartel between Reuters, AFP and Associated Press we saw in capitalism 2.1) with the horizontalization of the network: all users producing customized information for small groups. Feedback within this ecosystem led to connectivity that was less and less oriented toward exchange and more toward the reaffirmation of a tribal and emotional “self”, overwhelmed by controversial information it cannot digest. It has to choose, beyond any evidence. And in exercising that non-rational freedom, it breaks any predictability and collective order. The same technological ecosystem that made us transparent for an algorithm made us opaque to ourselves.
“User-centered design” soon encountered the possibility that the user would use things in another way, that they would find a shortcut across the grass beside the prim path carefully designed by Martha Schwartz, or that they would use a game-streaming platform to plan a seizure of power. That is the basis of the current political crisis. Companies at the M level were able to contain that unleashed user; government institutions at the I level were not: it is possible to customize latte art or a video-game skin, but not a school curriculum or a national law. That out-of-control user is also the input of today’s network-based AI. “We are not going to fully understand the potential and risks of generative AI without individual users actually playing with it,” says Alison Smith of the Booz Allen Hamilton consultancy. AI is UCD unchained; it assimilates and amplifies all the features of the web that feeds it: biases, fake news, hate speech and piracy (OpenAI has already warned that it is not possible to train its machines without using copyrighted material). If these used to be problems in marginal Internet backwaters such as The Pirate Bay or 4chan, today they emanate from the mirrored towers downtown: Google, Microsoft, Meta, Amazon, Alibaba, Baidu and Tencent. The algorithms that came to govern individuals have created a new collective ungovernability.
b) Bitcoin vs. AI
“Cryptocurrencies are libertarian, AI is communist,” Peter Thiel said during a public debate with Reid Hoffman, the founder of LinkedIn. In 2008, while the world’s financial capitals filled with protesters railing against the multi-billion-dollar bank bailouts, Satoshi Nakamoto uploaded a paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System” to the internet. There are many ways to fight banks. Technically, bitcoin is a currency without physical backing that circulates in a peer-to-peer exchange network and is created by mining validated transactions in that same network and recording them on a blockchain accessible to every user but protected by cryptography. A return to hard currency by other means: the backing is no longer a precious metal, but an inviolable P2P network. Aesthetically, bitcoin is an ungovernable network of private money, a new gold standard without gold, capable of sweeping away banknotes, banks, finance ministers and perhaps the world’s governments. A fantasy tailor-made for libertarian libido. With its appearance, any nerd logged into Reddit can challenge Wall Street from the computer in their bedroom.
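The mechanics described above — mining validated transactions and recording them on a cryptographically protected chain — can be sketched in a few lines. This is a deliberately simplified toy, not the real bitcoin protocol (actual blocks use double SHA-256 over a binary header, Merkle trees of transactions and a vastly higher difficulty), but it shows why the record is inviolable: each block commits to the previous one by hash, so altering any past entry invalidates everything after it.

```python
# Toy blockchain: "mining" means finding a nonce whose hash meets a
# difficulty target; each block chains to the previous block's hash.
# (Illustrative sketch only; not the real bitcoin protocol.)
import hashlib

DIFFICULTY = "0000"  # the hash must start with these hex digits

def mine(prev_hash, transactions):
    nonce = 0
    while True:
        payload = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(DIFFICULTY):
            # Proof of work found: the block is sealed.
            return {"prev": prev_hash, "tx": transactions,
                    "nonce": nonce, "hash": digest}
        nonce += 1

genesis = mine("0" * 64, "alice->bob:50")
block2 = mine(genesis["hash"], "bob->carol:10")

# Tampering with a past block would break every hash after it:
assert block2["prev"] == genesis["hash"]
```

The “backing” of the currency is exactly this: not a metal, but the computational cost of rewriting the chain faster than the rest of the peer-to-peer network extends it.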
The contrast between cryptocurrencies and AI is philosophical and political. Philosophically, the optimum of AI would be to concentrate all existing information in a single sovereign point that made all decisions. The optimum of cryptocurrencies would be a network in which every being on the planet participates and carries transactions to such a volume and speed that they become impossible to crack, coordinating untrustworthy beings in a trustworthy way (without collective deliberation or sovereign decisions). AI leads us toward a vertical and rational society; blockchain, toward a horizontal community that is not necessarily rational. “AI is communist, cryptocurrencies are libertarian.”
Politically, this confrontation is evident in China. Beyond restricting bitcoin on its territory —where 65% of global crypto is mined— the Communist government attacks the crypto horde on two fronts. One is the e-yuan, its powerful stablecoin, a cryptocurrency pegged to the yuan, ergo, to the government. The other is quantum computing. On an ordinary computer, bits encode information as 1 and 0, that is, electric current that flows or does not flow through a microprocessor. A quantum bit or qubit can register intermediate states between 1 and 0 and thus exponentially increase its computing power. Hypothetically, a 2,500-qubit computer could break all currently existing cryptography. At the time of writing, the most powerful quantum computer has 433 qubits and IBM is announcing one with 1,121. The 2,500-qubit computer will arrive sooner rather than later. And China is racing toward it: in 2019 it invested 2 billion dollars in a national quantum information lab in Hefei. Meanwhile, Ethereum aims to extend blockchain to contracts, that is, to the foundation of civil society. Eventually, blockchain could establish the truth of non-AI-generated data by recording them on that decentralized database.
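The exponential leap from bits to qubits can be made concrete with a back-of-the-envelope calculation: simulating n qubits classically requires storing 2^n complex amplitudes, one per basis state. (The figure of 16 bytes per amplitude, a double-precision complex number, is an assumption for illustration; note also that raw qubit counts such as 433 or 1,121 are not directly comparable to the roughly 2,500 qubits mentioned above, since part of the gap lies in error correction.)

```python
# Why qubit counts scale "exponentially": the classical description of
# n qubits needs 2**n complex amplitudes.
# (Back-of-the-envelope illustration; 16 bytes/amplitude is an assumption.)

def classical_memory_for_qubits(n, bytes_per_amplitude=16):
    """RAM needed to store the full state vector of n qubits, in bytes."""
    return (2 ** n) * bytes_per_amplitude

for n in (20, 30, 50):
    mib = classical_memory_for_qubits(n) / 2**20
    print(f"{n} qubits -> {mib:,.0f} MiB of amplitudes")
```

Each added qubit doubles the memory: 30 qubits fit in a laptop, 50 already exceed the largest supercomputers, which is the sense in which quantum machines threaten cryptography that is safe against classical ones.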
But we should not fall into false dichotomies. Cryptocurrencies and AI can converge. We have already seen that human irrationality has a place in AI. Authoritarianism and chaos can function together.
c) Deglobalization and planetarity
The virtual and material machine of capitalism 4.0 is almost as large as the world. It is difficult for it to remain united for long. The deglobalization that characterizes capitalism 4.0, with its reshoring, recession and disputes over hegemony, is leading many countries to adopt import-substitution policies in industry (Japan in microprocessors) and agriculture (China in soybeans). We should not be surprised if the internet also deglobalizes. If China managed to develop its own digital ecosystem, others will try as well, with Russia at the forefront, which hosts many illegal download sites and has a native search engine with global projection, Yandex. The platforms themselves undermine the unity of the single web.
It is too early to know where this process may end, but we can define four tendencies or models, two of which already exist. The first is web 2.0 as we know it: an open space for any type of business and expression. The second is the Chinese model we have already seen: more politically controlled and less regulated in terms of good corporate practices. The third is the model sought by the European Union and part of global progressivism: a regulated web that curbs hate speech and fake news and protects the right to privacy at the cost of restricting its uses. Finally, there is the big-tech project: a commercial web totally oriented toward maximizing business opportunities, whether in advertising, subscriptions or data mining, at the cost of user privacy and web neutrality.
In environmental terms, the planetary scale of surrounding capital is as invasive as it is instrumental. However, thinking about any of these possibilities requires dissolving the boundary between the natural and the artificial. The planetarity of surrounding capital is not only the promise of geoengineering, nor the six 15-meter-wide tunnels that Tencent dug into the mountains of Gui’an, in Guizhou, for its data centers. The planet has been bearing the world for far too long, and today their processes are entangled.
Planetary irruption
At the beginning of the 2020 pandemic, the microprocessor crisis occurred, a good example of planetary hybridity. The COVID-19 pandemic was a hybrid that combined natural and artificial processes that are still the object of hypotheses and debates. Microprocessors, for their part, are the main input of any electronic device, the third most traded product in the world. A sophisticated manufacture and also, in some sense, a commodity. At the beginning of the global lockdown, automakers canceled their microprocessor orders and hardware companies increased theirs because of remote work. Apple and Samsung even stockpiled them. Later, automakers resumed their orders, microprocessor manufacturers were overwhelmed, and a global shortage ensued. Microprocessor supply is not very elastic to demand variations due to its production process, and 80% of it is concentrated in South Korea and especially in Taiwan, whose company TSMC supplies Nvidia, AMD, MediaTek and Qualcomm.
The situation was worsened by the great Taiwanese drought of 2021 and 2022 which, in addition to agriculture, affected the Taichung microprocessor center. Then there was a 300% increase in the price of silicon, the main input of microprocessors. Perhaps it was retaliation by China, the world’s leading producer, after the United States blocked microprocessor supplies to Huawei. The scarcity of silicon was a reminder that China is also a natural monopolist of rare earths such as neodymium and scandium, used intensively in the electronics industry. The “nature” of China’s rare-earth monopoly is also a hybrid between effective possession of natural deposits such as Bayan Obo, source of 70% of rare earths, and the political will to bear a high environmental cost: China Rare Earth Group Co., the consortium of Chinese rare-earth companies, pours 75,000 liters of acid water into the ground each year. Finally, the blockage of the Suez Canal in 2021, caused by a grounded container ship, and the resulting tripling of logistics costs also reminded us of the planet’s materiality. In fact, global trade depends on a series of choke points or “bottlenecks”: narrow maritime passes such as Suez, Panama, Hormuz, Malacca and the Dardanelles, among others, which shorten distances but are permanently exposed to blockages, both from congestion and military conflicts, that would send transport costs soaring.
Pandemics and droughts, supply and demand, microchips and rare earths, political conflicts and contingencies that escalate. The entanglement of natural and artificial factors, such as antibiotic-resistant bacteria or Instagram capybaras, defines our new planetary condition. It is the new hardware.
Acceleration, homogenization and hybridization
The entanglement of natural and human (or artificial) factors underlies all discussions of the “Anthropocene”. Legend has it that Paul Crutzen coined the term suddenly during a debate among geologists in Cuernavaca, Mexico, in February 2000. In reality, the concept had already circulated in the USSR in the 1960s. And after that discussion, a more relaxed Crutzen pinned it down in an article in issue no. 41 of the International Geosphere-Biosphere Programme newsletter. The hypothesis spoke of a new geological epoch marked by human impact. The concept was more warmly received in the social sciences, journalism and art than in the discipline that specifically studies geological epochs. For geology, the Earth’s development is measured in eons, eras, periods and epochs, the latest of which, the Holocene, began 12,000 years ago, after the last glacial period. Events such as the First World War, the conquest of America or the fall of the Roman Empire are barely recent sparks on the surface of what geology studies. How could human action influence something so hard, deep and slow? The detonation of atomic bombs from 1945 onward may have had some impact on tectonic plates. But in that case we would be dealing with an event rather than an epoch. The total mass of all structures and objects produced by humans is roughly 30 trillion tons. But this does not constitute a geological change, and it is difficult to establish a stratigraphically verifiable starting point.
Finally, the volume of carbon dioxide in the atmosphere in the last two centuries has risen from 300 parts per million to 400. In this case, rather than a geological epoch, we would be living through an acceleration. Even an acceleration of acceleration, if we take into account that the growth rate of GDP, energy and water consumption, infrastructure, transport, fertilizers and (until 1980) population have risen exponentially since the post-war period. But that is no longer a geological problem. And this is not a geology book. The concept of “acceleration” is central to any history of capitalism and to that of this book in particular.
Along with acceleration, we see a process of homogenization and hybridization. If there is a hero of the Anthropocene, it is Gallus gallus domesticus, better known as the “chicken”. The first indications of its domestication date back 3,000 years in Southeast Asia. Since then, its skeleton has grown, the chemistry of its bones has changed and so has its genetics. Most of these changes occurred from the mid-20th-century acceleration onwards, when industrial chicken breeding began and chickens outnumbered humans. Today this hybrid is the most numerous terrestrial animal on the planet, with 23 billion individuals. The modern chicken is not only a human product; its bones, discarded by the million in landfills, will form an important part of the fossil record of a posthuman planet. Accompanying the chicken on this adventure are cows, sheep and pigs: all species massified, homogenized and hybridized by human hands.
The human body is also hybridizing at an accelerated pace. In 1999, Michael Goldblatt, director of the US Defense Advanced Research Projects Agency (DARPA, the cradle of the internet), announced that “the next frontier is within ourselves”. Surrounding capital is also advancing in that direction. The reduction of the body to data that took place in capitalism 3.0 is now turning back and can edit the body, our wetware, as if it were data. Institutions are quickly adapting to this turn: in 2018 biophysicist He Jiankui announced that he had applied the CRISPR gene-editing technique to two twin girls to immunize them against HIV and was condemned by the scientific community and arrested by the Chinese government; in 2020, Jennifer Doudna, a promoter of human gene editing, received the Nobel Prize in Chemistry for developing the CRISPR technique. The COVID-19 pandemic accelerated and homogenized this process. Humanity was subjected to a body-control experiment on an unprecedented scale: mass vaccination campaigns in all countries with barely tested vaccines, disciplinary control mechanisms (quarantines, restrictions) and biopolitical ones (tests, online geolocation, health passes). Hardware becomes software; materiality participates in the instabilities of capitalism 4.0.
Post-normality
Acceleration, homogenization and hybridization are processes that affect the planetary hardware: they reduce biodiversity and vegetative cover, increase carbon emissions and alter biogeochemical flows of phosphorus and nitrogen, etc. But planetary hardware is inherently unstable already. Long before the emergence of humanity, Earth suffered great extinctions, radioactive bursts and climate shifts. The natural instability of the planet and humankind’s planetary impact feed back into each other in non-linear ways. It is a clash between Godzilla and King Kong: hybridized natural force against the exceptional, dangerous primate. That is why, if there is something like an “Anthropocene”, rather than a planetary era marked by human action, it can be characterized as the opposite: the irruption of planetary natural and artificial forces into human systems and routines. Climate changes that displace populations, pandemics synthesized by global traffic, buried river basins and wetlands that flood cities. In the sociology of organizations, there is talk of “normal accidents”: disruptive events of great magnitude that are as predictable as they are inevitable, that couple together and interact in unexpected, non-linear ways. We do not know when or where the next toxic leak, pandemic or flood will occur, nor what scale it will have, but we know it will happen.
“Accidents” are the new normal of the Anthropocene as an irruption of altered nature. Complex systems use high-risk, planet-impacting technologies. The defenseless hominid who built the first hut to shelter from the rain did not control nature but did control their technology; modern man, from Frankenstein to Blade Runner, felt they controlled nature but no longer technology. Today it seems we do not control either. We are building hybrid, poorly amalgamated ecosystems whose mutual reactions we cannot foresee. We have already seen that digitality itself is a case of unstable ecology: by operating on our perceptions and sensations, it had non-linear effects on our cognition, thought and desire, and unleashed a new ungovernability.
Cities are another hybrid and unstable ecosystem. The abandonment of planning during capitalism 3.0 left urbanization subject to heterogeneous, uncoordinated processes and actors: real-estate developers, migrants, informal-economy agents, criminal networks, parasitic and pathogenic species. We can also add a new generation of artificial organisms: AI devices such as facial recognition. The city is now a community of interdependent organisms that interact with a complex environment governed by its own dynamics. It is not a concrete jungle; it is an artifice that has slipped out of human control to become an ecosystem. The retraction and erosion of state welfare systems have led the city, along with the web, to replace the nation as identity builder: tribes and neighbors above citizens.
The city is particularly traversed by the two vectors of capitalism 4.0: digital disruption and the irruption of altered natural forces. We have already seen that each version of capitalism added a role to cities: circulation node, production center, consumption center, commodity in itself. In capitalism 4.0, digitality has reconfigured them: many production processes shrink in scale and locate in urban centers; urban economies are transformed into platforms or new nodes of the global economy; the availability of georeferenced data makes it possible to define “digital neighborhoods” with greater commercial and real-estate appeal; an algorithm to predict gentrification has even been developed; and “collaborative work” platforms cluster a non-salaried working class in cities, destined to have a sociopolitical weight that will not be linear or predictable.
As for the irruption of natural forces such as floods and energy and health crises, the city is increasingly becoming a precarious environment we must adapt to rather than one we can adapt to ourselves. Before the 2020 pandemic there was already debate about whether the 4.0 city would become overpopulated due to the flow of deindustrialized work into services, or whether it would depopulate as a result of urban degradation and the possibility of remote virtual work. Lockdown encouraged the illusion of a “return to the countryside”, but under current conditions that would either be an option for a few or a farce: real-estate developments in green patches on the periphery, with all the environmental and infrastructural problems of urban sprawl. The pandemic is over but it is too early to know its effect on cities. In any case, it deepened prior trends toward digitality and precarity. For example, in Latin America, the world’s most urbanized and unequal region, the pandemic accentuated the process of valuing or segregating spaces based on the availability and quality of public transport.
The energy transition
One of the reasons the 2008 financial crisis turned into a longer economic plateau was rising international commodity prices. Between 2002 and 2005 the price of soybeans rose 29%; coffee, 42%; rubber, 96%; metals, 100%; oil, 114%. Their causes ranged from the oil crisis caused by US wars in Iraq to rising consumption in Asian countries, especially China and India. Their consequences ranged from fifteen years of boom and redistributive policies in Latin America (which exports commodities) to social and political crisis in Arab countries (which import food). And to the fracking revolution.
“Fracking” is a term that combines two different, preexisting technologies: hydraulic fracturing of the subsoil to extract oil —a technique known since the mid-20th century— and horizontal drilling, which consists of making a vertical well to a certain depth and then turning 90° to inject water, break up the subsoil and release hydrocarbons. The combination of both began to be applied experimentally in 2005 to extract unconventional hydrocarbons such as shale gas or shale oil, drilling faster and deeper.
After the 2008 crisis, fracking became the dominant technique and repositioned the United States as an oil and gas producer: from 2014, it doubled its production, met global demand, flattened international prices and sidelined traditional producers grouped in OPEC, whose oil revenues fell amid the global food-price surge. In addition, the use of shale gas instead of oil allowed the US to reduce its carbon emissions. From 2019 on, the fracking craze began to subside. Oil entrepreneurs settled into a more conservative business model, production stagnated and OPEC regained its position. The environmental risk and political conflicts associated with fracking began to weigh in the costs. Ultimately, extracting bituminous shales from subsoil fractures is scraping the bottom of the oil barrel. For now, the United States maintains its position as the main exporter, but shale production has yet to rebound and prices remain high. On the other hand, supply chains tend to concentrate, and China and Russia could form a global hydrocarbon-refining cartel. Anti-market forces triumph: the entire line from wellhead to gas pump is left in the few, volatile hands of geopolitics. Another sign that abundance is over and that sooner rather than later we will have to think about an energy transition.
In the world of ideas, debates on the transition pit “environmentalists”, who favor renewable energies, against “modernists”, who favor nuclear energy. But in the material world, neither is enough. All the nuclear plants in the world produce only 5% of the energy in use; we would have to build three more plants per month for 60 years and face a uranium shortage by 2030, in addition to rising costs from ever stricter regulations. If the promise of fusion energy is fulfilled, it will probably face the same constraints. Solar and wind energy, for their part, are still unable to supply even 25% of the energy needed, have lower yields than fossil fuels, are not constant, difficult to store and, with current technology, their panels and turbines occupy between 300 and 400 times more space than nuclear plants, as well as using non-renewable resources such as copper and rare metals. One option would be to store electricity from wind or solar sources in hydrogen, a gas that can be extracted from water by electrolysis and is easy to compress and distribute. But this “green technology” has not yet matured enough to scale its production and commercialization.
The new energy regime will be a hybrid, integrating renewables, nuclear, hydroelectric dams and some relatively friendly hydrocarbon such as gas. In the meantime, as with any transition, the move to a new energy regime will require heavy use of old energy. Renewables are already growing at the same rate as global consumption, and Asia’s electrification is slightly reducing emissions. But we will continue to use fossil fuels for a long time. There are corners of the world that are still waiting for the transition to hydrocarbons in order to reduce indiscriminate logging. On that path, natural gas is a cheap and safe alternative.
The other force at play in the transition is the very austerity of capitalism 4.0, which has flattened some curves. Global population growth has stabilized at around 1% per year, sustained more by low mortality than by fertility, which has tended to fall with rising living standards and public fertility policies. The population tends to concentrate: by 2050 some 70% of people are expected to live in cities occupying 3% of the Earth’s surface, although urbanization patterns on the world’s periphery are extremely precarious and inefficient. Average global meat and water consumption in the diet has also declined.
Recall that the end of capitalism 2.1 —its planning and its welfare state— was caused neither by Hayek nor Reagan, but by the simultaneous crisis of artificially cheap oil and a global monetary system based on paper dollars. It was a return of thermodynamics. Capitalism 3.0 surfed it with structural adjustment, wild commodity trading, global financial flows and outsourced production in sometimes ridiculously long value chains. In 2020 a photo circulated of a jar of Argentine pears canned in Thailand to be consumed in the United States. With capitalism 4.0, thermodynamic adjustment makes another turn and becomes unpredictable. New models of welfare and planning adapted to contingency and a monetary system more representative of energy values will be needed. Cryptocurrencies —whose electricity consumption in 2020 left the 240,000 inhabitants of the Republic of Abkhazia, on the Black Sea, without power— could be energy-based money, much to their own chagrin, if they abandon dogmas and libertarian nonsense.
Capitalism 4.0 is a planetary cyber-physical machine, AI, but it is also a thermodynamic adjustment that can make life more precarious for most of humanity.
The age of precarity
The two vectors of capitalism 4.0 converge in precarity. Digital disruption shrinks the firm into the startup, bypasses intermediaries, destroys more jobs than it creates, and strips its new workers of the wage relation. The irruption of planetary forces cracks the material support of our lives: a new flood or forest fire forces us to move, a new virus strain locks us down or makes us sick, or both, and at the end of the tunnel all we see is the dark gleam of uncertainty. Hardware has become software, incorporating into its instability the impact and intrusion of human society. Software has become hardware, enveloping us in surrounding capital with non-linear effects.
Life under capitalism 4.0 promises to be unstable, contingent and precarious. And it has already found its subjects: the cardboard picker, the redditor playing the stock market, the dealer, the bitcoiner, the immigrant delivering for an app, the tiktoker hoping to make it big. The humans of capitalism 4.0 have understood that with tomorrow you never know; better to do something with little and see what happens. In precarity, “entrepreneurship” rhymes with “survival”. If we drill through the bottom of the new software until we touch the first layer of sedimented hardware, we will find the historical basis of precarity, the most stable and dynamic of human activities: the informal economy.
The term “informal economy” was coined by anthropologist Keith Hart in 1971, after fieldwork in Accra, Ghana’s capital, in an attempt to convince economists at the International Labour Organization that the macroeconomic concept of “unemployment” made little sense amid the complex tangle of unregulated, irregular activities and exchanges the poor engaged in to survive. In the 1980s, Hernando de Soto, a Peruvian economist who worked with Fujimori and was anointed by Hayek as his successor, developed the influential thesis that the poor are not jobless proletarians but entrepreneurs without capital: their economic activities generate an amount of wealth not reflected in GDP that is hampered by state regulations. We can now see why the percentage of early-stage entrepreneurial activity is higher in Ecuador and Burkina Faso than in the United States.
With capitalism 4.0, informality has conquered the economy from below and above. Below, informality is the survival space of that ancestral institution capitalism has always tried to subdue: the market. The Special 301 Report, published annually by the Office of the United States Trade Representative to report piracy, counterfeiting and so on, includes a list of notorious markets. It lists Mexico’s Tepito market, Argentina’s La Salada and Ukraine’s Petrivka, among other mainly Indian and Chinese informal markets whose global centrality makes them a geopolitical problem. Beyond this blacklist, there are other forms of informal markets: from the universal phenomenon of street vending to extreme cases such as markets in war zones like Darfur, Lebanon or Kabul. These post-war markets arise both from the need to self-organize amid scarcity and from profit-seeking in a context of need. Eventually, they can help in economic regeneration and even coexistence, as with the Arizona mall in Brčko, Bosnia and Herzegovina. Another example of coexistence and conflict is informal markets in border zones, such as Ciudad del Este or El Paso. A border separates as much as it unites. Just as putting two bodies at different temperatures side by side creates an energy flow from one to the other, juxtaposing two or more distinct legal, monetary and social ecosystems inevitably generates a flow of goods and people from one side to the other, allowing cooperation as much as competition, including organized crime (which is still a form of cooperation).
Informal markets. The Office of the US Trade Representative (USTR) monitors informal markets. The map indicates levels of monitoring priority and “notorious” markets, including “La Salada”, for the period 2004–2015.
Informal markets are not only present in every part of the world; they function across the world. Urbanists Peter Mörtenböck and Helge Mooshammer coordinated a global network of collaborators to report and map 72 informal markets in different parts of the planet. In October 2023 they visited Argentina, invited by the urbanist collective m7red, and took the opportunity to tour La Salada. On hearing the sound of packing tape being wrapped around one of the many bundles of goods, Helge said, “That is the background noise of all informal markets, from Russia to here.” Informal markets are a local and global phenomenon. On the one hand, they operate on a territorial level, embedded in utterly local and particular geographic, cultural and political conditions. On the other, they are a global platform, present all over the world, completely integrated into the circulation of goods and people, leveraging the technologies of capitalism 4.0 and carrying its brands and consumer patterns to corners few CEOs or advertisers would want to set foot in. In times of deglobalization, they even function as a backup: wherever war, poverty or some catastrophe cuts off technofinancial flow, a market will pop up and the Nike logo will rise from the ashes, held up by the dirtiest, most calloused hand on the planet.
But informality has also spread upward, into the highest circuits of capitalism. In the previous chapter we saw that global commodity trade operates outside regulations, through almost adventurous excursions into high-risk areas. Hart himself puts it this way: “The informal economy started forty years ago as a way of talking about the poor urban people of the Third World living in the cracks of a rule system that did not reach them. Now the rule system itself is in doubt. Everyone ignores the rules, especially the people at the top —politicians and bureaucrats, corporations, banks— who avoid being held accountable for their illegal actions. The privatization of public interests is probably universal, but the novelty of neoliberalism is that, whereas the alliance of money and power used to be covert, it is now celebrated as a virtue, wrapped in liberal ideology. The informal economy seems to have taken over the world, disguised with the rhetoric of the free market.”
Capitalism 4.0 is marked by increasing capital informality: technofinancial flows are less and less constrained by any sort of norms. Capital thus emancipated commodifies objects and practices until it leaves almost nothing outside, expands the quantity and opportunity of illegal economic operations and reduces its capacity to absorb the active population in a stable way (the “legitimate employment” still promised by some politicians). Practices such as leasing, short selling and the startup model itself have generated capitalists without capital at the top of capitalism.
For De Soto, the solution to informality is to regularize it, formalizing it as it is. He even promoted campaigns to grant formal title to informal housing in Lima that were later imitated by left-wing governments in São Paulo and Calcutta. It seems that the only way to overcome precarity is to start treating it as a system in itself.
Informality made system
A system that generates more goods, less legality and more surplus people inevitably feeds more informal markets. The triangle between capitalism, states and markets is being reconfigured. The state organizes markets so that capital can accumulate. Capital channels a large part (but only a part) of goods and information through the market in order to realize their value. And the markets bring together all of us who need those goods and that information to live. Markets are fighting, inch by inch, over an increasingly scarce and tricky surface of the Earth with states (which calibrate their tolerance), large corporations (whose brands they disseminate and/or counterfeit) and international organizations (which watch them closely without being able to act on them). States and capital will not stop pressuring informal markets to condition or redirect their functioning, but they cannot suppress them —nor do they intend to.
The formality or informality of markets is a moving frontier depending on the needs and capacities of capital and the state in each era. When the US government banned alcohol it created a market on which certain capitals operated; when it lifted the ban it created another kind of market for other capitals. Another example is the cocaine market, whose legality or illegality has remapped Latin America. With its global trafficking banned from 1940 onward, Colombia and Bolivia replaced Peru as producers, and Mexico replaced pharmaceutical company Merck as distributor; Sinaloa’s poppy fields went up in flames and cocaine began to flow along the old opium and marijuana routes. Successive restrictions and “wars on drugs” since the 1970s raised profits but also risks and costs. Anti-market forces triumphed: Félix Gallardo and Jorge Luis Ochoa cartelized the Mexican and Colombian markets respectively, and para-state violence became part of the business model.
The cartelization of informality is not just a matter of narcos. It also occurs with peripheral urban settlements. In many cities in Asia and Africa, urban property is as concentrated as rural, or more so: half of Southeast Asia’s urban surface is in the hands of 5% of owners; in India, 6% of owners hold three quarters of unused urban land. Thus all residents of informal settlements are de facto tenants of an urban landowning elite that profits from informality.
The informal economy is part of capitalism 4.0 at all its levels and in every future we can imagine. If the global order collapses, it will be the backup that sustains the distribution of goods in the surviving fragments; if the global order scales up, it will be the territorial platform that gives it a foothold and conditions of possibility at every point on the planet; if the technocapitalist flow escapes human control, it will be our refuge, perhaps the last vestige of what we once called “civil society”. But it is worth looking more closely at those futures, however briefly.
Three futures
Capitalism today encompasses the entire planet, including the human body. Twenty-five years of this present are about to elapse —too long to speak of “crisis”: this is a version of capitalism, with its technologies and its problems. Version 4.1 will have to solve them in some way. The problem is no longer so much growth as managing surrounding capital and its externalities: energy crisis, hybrid processes, identity intensity. I dare to speculate on three abstract, extreme possibilities. In reality they may overlap or appear partially.
One possible scenario would be for deglobalization to advance until it fractures global capital into sovereign or imperial capitalisms. Predictably, such a process would be legitimated in the name of national sovereignties, though it is more likely that hegemonic blocs such as BRICS would form. This scenario could be highly conflictive and geopolitically unstable, but it would make it possible to reverse homogenization and rebuild technical, biological and cultural diversities.
Another possibility would be the eventual reestablishment of a world order capable of reversing deglobalization and coherently managing surrounding capital. This scenario would deepen homogenization, perfect digital governance and would have to tackle climate-crisis management. National sovereignties and democratic processes of deliberation would have an important ornamental role.
Finally —and this is the most speculative case— the acceleration of global capital could lead it to disengage from all biological processes. Technofinancial flows would slip the leash of the law, through cryptocurrencies; of nature, through extropian transhumanism or the singularity; of the planet, through space colonization; and of humanity, in all three cases. In this scenario, the whole planet would be the periphery of a ubiquitous center.
An essay on the different phases of capitalism across time: that inexorable system that began with steam, turned into industry, and is today coded into a present made of precarization and artificial intelligence. To understand how we got here and to imagine how we go on.
Cover printed in two inks (black + gold Pantone). Finished with a debossed El Gato y La Caja relief. 264 pages.
Author biography:
Alejandro Galliano (Tigre, 1978). Lecturer at the Facultad de Filosofía y Letras (UBA). Co-editor of panamarevista.com. Contributor to the magazines Crisis and Nueva Sociedad. Author of ¿Por qué el capitalismo puede soñar y nosotros no? Breve manual de las ideas de izquierda para pensar el futuro (Siglo XXI, 2020).
Epub format.