Does artificial intelligence dream of human empathy?

According to a January 2024 report by UBS, revenue from the artificial intelligence (AI) market is forecast to grow by 72 percent annually in the coming years, reaching $420 billion by 2027. This marks a significant upward revision from previous estimates, which projected AI revenues of $300 billion by 2027.

UBS’s report suggests that the global technology sector will continue to strengthen, with special emphasis on artificial intelligence as the next evolutionary stage in technology, following mainframes, personal computers, and smartphones. Analysts expect the industry’s growth to be driven by rising demand for AI and for the specialized infrastructure it requires, such as chips and graphics processors. AI applications across various markets are expected to maintain their growth trajectory, broadening the use of AI programs and models. The market for AI applications is projected to rise from $2.2 billion in 2022 to $225 billion in 2027, an annual growth rate of about 152%. For the AI market as a whole, UBS analysts have raised their estimates, increasing projected revenues from $28 billion in 2022 to $420 billion in 2027: a 72% growth rate, up from the previously forecast 61%. They note, however, that even these forecasts may prove conservative. Belief in the benefits of AI runs deep.
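Taking the report’s figures at face value, the implied compound annual growth rates can be checked in a few lines (a quick sketch, not part of the UBS report):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Overall AI market revenue: $28bn (2022) -> $420bn (2027)
print(round(cagr(28, 420, 5) * 100))   # 72 (%)

# AI applications market: $2.2bn (2022) -> $225bn (2027)
print(round(cagr(2.2, 225, 5) * 100))  # 152 (%)
```

Both results match the growth rates quoted in the text, so the figures are internally consistent.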

What are the actual costs of AI?

When purchasing deli meats or withdrawing cash from an ATM, we seldom ponder where these things come from. Many of us perceive such processes as akin to magic performed by machines. Similarly, we often ignore the social and environmental costs of our modern lifestyle.

This unawareness, whether among customers (of products) or users (of applications), is a recurring theme in the history of industry and the economy. Early experiments with electricity by pioneers such as Benjamin Franklin and Nikola Tesla were fraught with danger, often leading to life-threatening accidents for the researchers themselves. In fields such as mechanics, pharmacology, or biology, the risk usually fell on the inventor, who personally bore the consequences of their discoveries. The digital economy is no different in this respect, though we frequently remain unaware of it. A Uyghur miner, or one of the “Invisible”, will not slip a note asking for help into a shirt they have sewn.

The story of Clara Immerwahr

Zyklon B was the name of the agent used by Nazi Germany for the mass extermination of people in gas chambers during the Holocaust. It was a form of hydrogen cyanide, packaged into pellets that released poisonous gas on exposure to air. Fritz Haber, a German chemist and Nobel laureate, invented the production method that would later be adapted to create Zyklon B. Haber earned the title of “father of chemical warfare” for developing combat gases during World War I. He did not invent Zyklon B in the form known from World War II; however, his work on hydrogen cyanide applications facilitated its later use in the extermination camps.

Fritz Haber’s wife, Clara Immerwahr, also a chemist, was a staunch opponent of war and her husband’s work on chemical weapons. Their marriage was fraught with ethical and professional disagreements. Clara vehemently opposed Fritz’s involvement in the creation of chemical weapons, which led to tragedy in 1915 when she committed suicide by shooting herself with Fritz’s pistol. Her death was interpreted as a protest against her husband’s actions, though the circumstances of her decision remain a subject of historical debate.

Clara Immerwahr, a critic and victim of Haber’s scientific choices, is often remembered as a moral voice opposing the use of science for destructive purposes. After her death, Fritz Haber continued his work on combat gases; later, in the face of rising anti-Semitism and political change, he left Germany.

Where does artificial intelligence come from?

When we create a new prompt, we rarely think about the technology behind it. The same is true when we turn on the lights, start a computer, travel by plane or car. Our ignorance of these processes does not mean they result from magic.

In the 2023 VPRO documentary “The Cost of AI,” Marije Meerman describes AI technology as a “mighty computational cloud of sweat, blood, and metal.” The metaphor reminds us that AI does not arise in a vacuum.

AI does not create anything new; it merely reflects existing data. A single specialized chip costs around $10,000, and training one AI model may require as many as 25,000 such chips. The computational power needed for this process doubles every six months. While the results of AI’s work may seem magical, they are the outcome of complex but entirely material processes.
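A back-of-the-envelope calculation using the figures above (the chip price and chip count are the article’s numbers, not a verified bill of materials):

```python
chip_price = 10_000        # USD per chip, as cited in the text
chips_per_model = 25_000   # chips to train one model, as cited

hardware_cost = chip_price * chips_per_model
print(f"${hardware_cost:,}")   # $250,000,000 in chips alone

# "Doubles every six months" means two doublings per year,
# i.e. a fourfold increase in required compute annually.
growth_after_3_years = 2 ** (3 * 2)
print(growth_after_3_years)    # 64x
```

A quarter of a billion dollars in silicon for one model, with demand compounding fourfold a year: hardly magic, and hardly immaterial.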

Who are the “Invisible”?

The “Invisible” are workers who play a vital role in training artificial intelligence. In the digital economy they mark a break with the history of invention, in which risk and responsibility fell directly on the creator or their team. Today the “Invisible” are typically hired through outsourcing companies: workers in Kenya, for example, earn less than $2 an hour. In Germany, foreigners make up 80% of one thousand-strong moderation team. Their job is to watch and categorize disturbing content online, a task machines cannot perform. Do AI technology providers care whether these workers develop PTSD? The answer seems obvious, given the analogies to traditional industries, where major manufacturers of shirts or food typically ignore the dark side of their operations.

The “Invisible” also include miners of quartz sand, a key raw material in chip production. About 70% of its extraction comes from China, including Xinjiang province, where Uyghurs are subjected to forced labor.

The image you see was generated in response to my prompt. Authentic photos from such places rarely reach public awareness.

Will the machine dream of human empathy?

Joseph Weizenbaum (1923-2008), one of the godfathers of artificial intelligence, created ELIZA, one of the world’s first chatbots. The program, which simulated a conversation with a therapist through a simple mechanism of pattern matching and response, was a pioneering step in AI development. Weizenbaum, a German-American computer scientist and MIT professor, later became a critic of certain aspects of artificial intelligence, especially its use in situations that require human judgment and empathy. His works, including the 1976 book “Computer Power and Human Reason: From Judgment to Calculation,” examine the ethical and social implications of computing technology and advocate a responsible approach to innovation. Robert Oppenheimer knew this all too well.

Weizenbaum warned, “The danger of AI is not that machines will start thinking like humans, but that humans will start thinking like machines.” I would add a thought from Stephen King, who once observed that there is no problem in watching a film through the eyes of the Beast; the difficulty begins when we become fascinated by how the Beast kills.

Remember these words when creating another prompt to generate text, image, sound, or film.
