Why artificial intelligence reflects the culture of its creators

Listen to the series Science beyond the suit (in Slovene):

Nina Beguš, PhD, is the guest of Episode 5 of this podcast

Nina Beguš, PhD. Photo: personal archive

Authors: Maja Čakarić and Zarja Muršič

Editor: Klara Škrinjar

In the laboratories and offices of Silicon Valley, technologies that increasingly shape our everyday lives are being developed: from the way we communicate to how we understand the world around us. But if we look deeper, we see that artificial intelligence also hides less visible patterns of power, exclusion, and cultural bias, forcing us to rethink what it means to be human.

Nina Beguš, PhD, a researcher at the intersection of the humanities and technology at the University of California, Berkeley, and author of the book Artificial Humanities, knows these mechanisms from the inside. She concludes that technology is still a reflection of men and their thinking, and that the biases of its creators are inevitably embedded in algorithms. In this conversation, she reflects on why we are fascinated by submissive female virtual assistants such as Siri and Alexa, and why artificial intelligence is like a pharmakon: both medicine and poison, depending on who holds the pen that writes the history of the future.

Your research work is part of a long intellectual tradition. When you think about your field, on whose shoulders do you stand? 

Without a doubt, N. Katherine Hayles. In fact, we recently met at a symposium. When I gave her my book, she asked me to sign it, and I wrote: "To my intellectual mother." She really was that to me, even though we never collaborated and didn't even know each other personally for a long time. 

When I started working at the intersection of the humanities and artificial intelligence in 2012 and 2013, it was still quite early days. I needed people who had already taken this path, and of course there were such people. Her work came exceptionally early: she was already writing about posthumanism in the 1980s, had a deep knowledge of science, and transferred it to literary studies. She allowed her intellectual questions and research to take her beyond disciplinary boundaries, without being constrained by them. I found this extremely important.

I also come from a literary background, but I wanted to develop a broader, more holistic humanistic approach to artificial intelligence. Her work was like a guiding star for me in what initially seemed like an elusive and very broad question: how to deal with the technology that was unfolding before our eyes and how to respond to it with deep humanistic insight and perhaps also with ethical reflection.

Which of her works and emphases are key in this regard?

Her best-known work is the book How We Became Posthuman. This is probably her first truly groundbreaking work. Later, she also dealt with topics such as writing machines, ways of thinking, and the relationship between humans and technology. She is still a very active researcher and has recently published a new book.

Her key insights relate primarily to the blurring of boundaries between the human and the technological. She points out that the philosophical and ontological conception of modernity, in which nature, technology, and humans are strictly separated, no longer works in the world we have developed. In this sense, her work represents an important critique of the Cartesian tradition of thought. This critique was already relevant when the book was published, but today it is becoming even more important due to the development of artificial intelligence, biotechnology, and other sciences that are forcing us to rethink what it means to be human.

When we talk about literature that looks to the future, we cannot ignore the fact that American female science fiction authors in particular significantly reshaped thinking about technology and the future as early as the 1960s. What does such a gender-sensitive perspective bring?  

When we talk about science fiction, especially in the US, this genre was for a long time almost exclusively the domain of male authors and young male readers. Only gradually, mainly in the 1960s, did this begin to change. Many female writers, such as C. L. Moore and Alice Sheldon, had to write under pseudonyms or male names; Sheldon, for example, wrote under the name James Tiptree Jr. This clearly shows how limited women's access to this literary field was.

It is also telling how female writers approached the myth of Pygmalion differently. At its core, this is a male fantasy, known since Ovid. In Western literature, it is almost always presented from a male perspective: a man creates an artificial woman and falls in love with her, while she remains without a voice of her own and the ability to act independently. With the arrival of female authors, especially Mary Shelley and later the English-speaking poets of the 19th century, the focus shifts. The perspective of the artificial woman comes to the fore, her experience of creation, control, and gradual emancipation. This fundamentally transforms the myth.

Is this also evident in science?

In science, I see the excessive division into individual disciplines as a problem. Modern knowledge production has strictly separated the humanities, natural sciences, and engineering, which is also clearly reflected in the structure of universities. The result is an increasingly narrow focus within individual disciplines and a loss of the broader insights that come from combining knowledge across them. Such insights are only possible with an interdisciplinary approach that crosses the boundaries between fields.

This is precisely what my book Artificial Humanities calls for: since contemporary research questions are interdisciplinary by their very nature, we must consciously develop this way of thinking.

How does such an approach work in a specific research environment?

Interdisciplinarity has been discussed in science since the 1990s, but it is difficult to implement in practice in institutions that are organised in ways that hinder the synthesis of knowledge. I myself try to transcend these boundaries in various ways.

For example, I have been working in a natural sciences laboratory for five years with microbiologists, biophysicists, and researchers who develop machine learning models. They are engaged in studying the earth and climate change. 

The collaboration began when they invited me to the laboratory and pointed out the difficulties they were having in describing their work. They were studying soil at a site in Colorado, but they did not know how to adequately describe the influence of the people present in this environment, as the site is a ski resort. This complexity, the question of whether to exclude people from studies or include them, was too much for them, so they didn't mention it at all. In their descriptions, humans were present only as a kind of "disturbance." They were frustrated because they realised that the descriptions of their experiments did not correspond to the actual scientific work.

They said, "Let's try to find a better language to describe our field of biogeochemistry and our relationship to the earth at different levels, from microbiology upwards." We began to talk intensively and realised that this was possible; that they already had their own philosophy of how their science works. Together, we wrote scientific articles and even a book. My role was mainly to extract this implicit philosophy from their laboratory work. It all started with frustration: with the feeling that the technical language available to them to describe their research conditions did not correspond to their actual work.

Moving from the humanities to the field of technology, you also deal with the question of how to teach machines human values. Given that large language models are developed by only a handful of companies, how is it even possible to prevent the values of individual privileged groups from being written into algorithms?

Today's development of artificial intelligence is indeed concentrated in a handful of companies. This is not a necessary reality, but it is currently the prevailing model. Nevertheless, I remain optimistic and believe that we will find different approaches. It is important to emphasise that today's language models are closely linked to capitalism: they are created in Silicon Valley and their primary goal is profit. But there are alternatives. Universities are also developing different models, not just large, commercially oriented transformers.

The question of values is almost intractable, but we could approach it better than we do. Most people are excluded from the process of creating these technologies. If you want to create a commercially successful product, you make a general model in the form of an assistant, a friendly character that helps the user. Claude and ChatGPT are not neutral systems, but constructed characters that mediate between the user and the technology.

We must not forget that data used to train models always has a social dimension. Artificial intelligence is actually a very qualitative technology, less technical than it often seems. So how can we create a system that takes different values into account? This is almost impossible even within a single country, let alone on a global level.

In addition, the models developed by a few companies are used in very different cultural environments. People are dependent on a limited set of systems. For smaller languages, including Slovenian, we also face a lack of data, so we are often forced to resort to related languages.

That is why I think a lot about cultural artificial intelligence. In the past, technological ethics were introduced gradually, with social consensus. This is still the case in medicine, for example. Digital technologies have skipped this process. Today, we have chat systems that even offer virtual psychological support without proper supervision.

The key question is which values we want to carry into the future. The first step is joint reflection, and the second is the direct involvement of humanities scholars in the development of technology. Ethics cannot be an add-on at the end of the process; it must be present from the very beginning.

This is also discussed in my book Artificial Humanities. Humanities scholars must participate in the very design of technological products, not just fix the consequences. If we don't think about what we are building from the outset, technology will inevitably repeat existing patterns of power.

Let me give you a concrete example from my consulting work in Silicon Valley. A company wanted to develop a virtual being and asked us for help with the issue of emotions and memory. First, we asked them why they were even talking about a "being" and what baggage this concept already carries with it. We managed to stop them to some extent, but it was impossible to really curb the market interest in such products. Where the market is large and profits are easy, resistance is very limited.

Why are virtual robots or assistants so often designed as women? Siri, Alexa... How do you interpret this choice and what role does gender play in such technological images? 

This was precisely the reason why I started exploring this topic back in 2012, when I came to the US and encountered Siri for the first time. I was surprised that it was not just a voice search engine, but a virtual assistant with a female character, designed to establish a relationship with the user.

Her answers were not just functional, but trivial, almost intimate. If you asked her, "What is your favourite animal?", she would add a standard explanation that she is just software, but at the same time respond with something completely human, for example, that she likes dogs best, and return the question to the user. This was a clear signal that she was not just a tool, but a character.

When I connected this with broader developments in the industry, with major breakthroughs in machine learning, and at the same time with fiction—with stories and films such as Her and later Ex Machina—it became clear how powerful the role of fiction is in technological imaginaries. At that point, I knew I had found a topic that I needed to seriously engage with.

What is really going on here? Why are these virtual beings so often female? It seems to be a combination of subordination, objectification, and servility. Although companies later offered other voices, the basic pattern is still very clear. The example of OpenAI is telling: actress Scarlett Johansson went public and said that the organisation had wanted to use a voice resembling hers, the voice known as Sky, without her consent. If she hadn't done so, we might never have known about it.

When companies today openly talk about incorporating AI companions and erotica into language models, it becomes clear that these are not just random trends but a conscious effort to create a certain type of relationship. It is obvious that they are building something that the film Her had already suggested to them – and the use of Scarlett Johansson's voice would further reinforce this connection.

These technological characters are rooted in specific cultural traditions. In the book, I discuss artificial intelligence through the character of Eliza Doolittle in Shaw's play Pygmalion. Eliza's Pygmalion, Professor Higgins, does not treat her as a whole person but as someone who needs to be taught proper speech; he becomes attached to her without ever really recognising her as an equal. In this, she is reminiscent of an early form of virtual assistant. It is surprising how strongly this classic literary work resonates with today's developments in artificial intelligence, which is not entirely coincidental: Alan Turing was a great admirer of Shaw's plays. A colleague added an interesting thought: in the British cultural context, the ideal form of a virtual assistant would actually be a butler, not a female assistant but a male servant.

Someone working, for example, in the field of evolutionary anthropology might want to see whether anyone has studied social learning from the perspective of physics or linguistics. Do you think artificial intelligence can help researchers see beyond their own discipline and discover connections they would otherwise overlook?

I am much more optimistic than many others in the humanities. For me, AI has great potential because it can lead us to new insights due to its completely different way of processing information. This applies not only to discoveries in medicine, but also to the humanities, for example in deciphering old manuscripts. The important idea is that more people will have access to knowledge tailored to their level.

The problem with language models is that they are not deterministic but probabilistic, so they present a slightly different picture to each user and cannot be relied on as a source of facts. An engineer at Anthropic explained to me that you can see in the model when it is being misleading or telling untruths, because it knows that it must not reveal certain things (e.g., how to break into a safe).
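To make the deterministic/probabilistic distinction concrete, here is a minimal sketch of the decoding step where chance enters; the words and scores below are invented for illustration and do not come from any real model. A greedy decoder always picks the highest-scoring word and is repeatable, while a sampling decoder draws from the probability distribution, so repeated runs, like different users, can get different answers.

    import math
    import random

    # Toy next-token scores (logits); the words and numbers are invented
    # for illustration and do not come from any real model.
    logits = {"dog": 2.0, "cat": 1.5, "dolphin": 0.5}

    def sample_token(logits, temperature=1.0):
        # Turn logits into probabilities (softmax), then draw one token.
        # Higher temperature flattens the distribution; lower sharpens it.
        weights = {tok: math.exp(v / temperature) for tok, v in logits.items()}
        total = sum(weights.values())
        r = random.random()
        cumulative = 0.0
        for tok, w in weights.items():
            cumulative += w / total
            if r < cumulative:
                return tok
        return tok  # guard against floating-point rounding

    # Greedy decoding is deterministic: the same answer on every run.
    print(max(logits, key=logits.get))  # always "dog"

    # Sampling is probabilistic: repeated runs can disagree.
    print([sample_token(logits) for _ in range(5)])  # e.g. ['dog', 'cat', 'dog', 'dog', 'cat']

Real chat systems make essentially this kind of draw at every word of a reply, which is why the same question can produce noticeably different answers from one conversation to the next.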

There is a lot of work to be done on how models work and what their results are, as they have a huge impact on society. When so many people use a model, a cultural loop is created that also enables manipulation and misinformation. I always view artificial intelligence as a pharmakon: it is both a cure and a poison. How we use it is important. I trust in human nature—we are good at solving problems, but also at creating new ones.

When we talk about how strongly today's artificial intelligence models shape society, we cannot avoid the question of who actually creates these technologies. Ada Lovelace was one of the pioneers of computing, but today, men dominate the field of AI. How do you view this "boys' club" in the technology industry?

California's influence is undoubtedly strong. Having lived on both the east and west coasts, I have become familiar with the specific culture that is deeply ingrained in the products that are then distributed worldwide. These products are unmistakably a reflection of technologists and their thinking. It is true that these professions are dominated by men, partly because of their focus on engineering, but also because managing such companies is not easy.

Of course, there have been women in leadership positions, such as Sheryl Sandberg at Facebook or Susan Wojcicki, who took over YouTube. However, managing such a giant requires the right person. It is not so much a question of gender, but rather the ability to operate in an extremely demanding and stressful position that requires a clear vision.

There is a lot of talk in Silicon Valley about geniuses, with Steve Jobs being a perfect example. If we look at the history of science and inventions, we see that breakthroughs actually come about through the collaboration of many people. We have built science in such a way that everyone contributes their part, which ultimately leads to a breakthrough. Nevertheless, we still idolise the history of great personalities who are mostly men. A mythology is created around certain individuals, such as Jobs. Today it is Sam Altman, in a few years it will be someone else.

It seems to me that feminism in America is different from what we are used to in Europe. For many Americans, the idea of a woman in a leadership position is still almost inconceivable, which is also reflected in the presidential elections. This mythology is built primarily around men. Samuel Morse is a good example: he built an empire with the telegraph because he knew how to sell his invention. He patented it, founded a company, and raised funds because he demonstrated progress every step of the way. Charles Babbage, a colleague of Ada Lovelace, on the other hand, had brilliant ideas but never completed his machine, so his sources of funding dried up. At the time of his death, it was believed that nothing tangible had come of his work, but today he and Ada Lovelace are major figures in the history of computing.

History is therefore written with a focus on individuals whom we honour as inventors, as if they had worked in isolation, which is never the case. Thanks to feminist approaches, the importance of Ada Lovelace has only been discussed more prominently in the last 15 or 20 years. Today, we know of the Ada Lovelace Institute, the Alan Turing Institute, and similar institutions.

Are you therefore optimistic that the history of the future will be written more broadly, that it will be less focused on narrow, less diverse views and more beneficial to society and the planet as a whole?

I believe that these topics are being discussed much more intensively today. If I compare the way we approached ethics ten years ago with how we do it today, the changes are obvious. There has always been a democratic approach, a concept of a forum where people of different social classes, genders, races, and professions gathered in one space to discuss a specific problem.

Today, however, we operate on a social level that is hard to imagine, especially when we consider that there are eight billion of us in the world. In addition, we do not know how many virtual entities are collaborating with us and thinking about problems together with us. This represents a colossal social change. 

Let's look at the example of the financial industry, where each employee can have between three and ten artificial intelligence assistants. This creates a completely new relationship to work, as these people simultaneously perform their original tasks and train artificial intelligence agents.

Let's return to the starting point. Your mention of N. Katherine Hayles and her work How We Became Posthuman also brings to mind Descartes and his dualism of body and spirit. It is as if these concepts are surprisingly intertwined.

The hype surrounding artificial intelligence has been around since the very beginning, even before the term was coined in the 1950s. When the first neural networks appeared, The New York Times wrote about "the embryo of a computer that will one day talk and read." Such expectations have always been present, but so has great scepticism.

We must be aware that the people who sell these models are running a business. They have to present their product as something big that will change the world. In Silicon Valley there is a battle for innovation and breakthroughs; if you want to sell your work, you have to convince people that you are changing the world with it. So we are listening to a very narrow group of people who profit from this and tell us what artificial intelligence is capable of. Personally, I believe that this is indeed a great technology, comparable to the invention of electricity, which will change the world in the 21st century.

However, development is not progressing as quickly as proponents of the singularity (the turning point at which artificial intelligence would surpass human intelligence and begin to improve on its own, without our control) like to predict. Let's look at the people who were part of this development, such as Geoffrey Hinton. He was underestimated in scientific circles for decades because his models did not work well, simply because we did not have enough digital data and computing power. Now those conditions are in place, but he had to endure a lot in the scientific community to get to this point. In Slovenia, we were already working with neural networks in the 1980s. I don't know how Hinton survived; he needed tremendous faith in his work.

Now that he is finally celebrating his triumph, he warns us that this is a dangerous thing. Why? There are two reasons. First, he feels personally responsible for his creation. When he observes developments and projects them into the future, he sees how quickly things can go wrong. Second, I think he is gripped by the realisation that artificial intelligence has an advantage precisely because it does not have a body. It does not have the biological limitations that humans have: information is transmitted much more slowly and in a more limited way in human language than between machines. Hinton therefore sees our limitations, which artificial intelligence does not share. His explanation of the dangers of artificial intelligence makes sense once we understand the premises that led him to these conclusions.

The production of podcasts and other content was financially supported by ARIS through the 2025 Public Call for the (Co)financing of Science Popularization Activities.