Is it possible to develop technology free from bias?
What place do we want to assign to AI in our society?
This article was based on episode #21 of Axoly Tech Podcast.
We had the opportunity to meet Gabriela Arriagada Bruneau, a professor in Artificial Intelligence and Data Ethics at the Catholic University of Chile.
If you want to listen to the entire episode (in Spanish):
For non-Spanish speakers: in this episode, we delved into the biases present in different types of artificial intelligence and the role we want to give it in our world.
Here are some excerpts from the episode.
Ignacio: Through my work as a developer, I’ve learned that technology and software are never truly neutral. What I build can have a profound impact on society and the planet. So, I wanted to ask you: is it possible to create AI systems without inherent biases, or will there always be some level of built-in prejudice?
Gabriela: Yes, I appreciate your question very much because I always tell everyone: we cannot create Artificial Intelligence without biases. One of the major problems we had initially […] was that there was an explosion of many people “playing” with it; it was the new toy, this whole idea of “look what we can do,” and a series of applications took off. It started growing exponentially. However, at that point, many problems began to arise. Because, perhaps not surprisingly to those of us who work in ethics but maybe surprising to those who are more mathematically inclined, technologies are not neutral.
So we started encountering a series of problems. For example, a predictive model that worked beautifully in mathematical terms produced bad results because it harmed people: it classified them poorly, presumed certain things about them, imposed patterns. This happened because there were biases in the data, because there were structures whose effect on the results we didn’t understand. And in that sense, I think this idea is super important: in reality, we cannot get rid of the biases. […] What we do have to do is understand what types of biases we have, how they affect us, you as a developer, us as a society, and how they interact with each other […].
Ignacio: Do you have an example of a bias that affects someone on a daily basis?
Gabriela: I think there are biases that we don’t even have on our radar. […] I am working on a project where we conducted a retrospective analysis of how a team of researchers built a natural language processing model. Basically, what they were trying to do was optimize waiting lists […]. That’s when we started to find that, despite the model having a good precision metric, it was failing in many things. For example, it failed a lot in gynecology. It turns out there were structural biases in how diagnoses were being written up in that discipline. These ranged from some doctors discounting certain patients’ level of pain because of cultural biases, to many diagnoses not being written as diagnoses at all: they did not use language that actually described a diagnosis, because they were often written by midwives or nurses rather than the doctor. […] And that’s where you see a connection. A bias is replicated in how the machine is incapable of seeing something, because as humans, we are also incapable of seeing it. We don’t consider it.
The deeper you dig, the more interesting it becomes. It’s not just about what I feed into the machine; there are underlying structural biases that can affect the model’s performance.
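To make the point concrete: an aggregate metric can look healthy while one specialty quietly fails. Below is a minimal sketch, with invented data rather than anything from the project Gabriela describes, of slicing a precision score by group, which is how this kind of gap typically surfaces:

```python
# A minimal sketch with invented data (not the project's actual code):
# an aggregate precision score can look fine while one specialty fails.
from sklearn.metrics import precision_score

# 1 = diagnosis correctly flagged for prioritization, 0 = not.
y_true = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups = ["cardiology"] * 6 + ["gynecology"] * 4

# The overall number looks acceptable...
print("overall precision:", precision_score(y_true, y_pred))  # 0.80

# ...but slicing by specialty exposes where the model breaks down.
for g in sorted(set(groups)):
    idx = [i for i, s in enumerate(groups) if s == g]
    p = precision_score([y_true[i] for i in idx],
                        [y_pred[i] for i in idx],
                        zero_division=0)
    print(f"{g}: {p:.2f}")  # cardiology: 1.00, gynecology: 0.00
```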
Ignacio: In terms of ethics, for instance, when we’re working on artificial intelligence or AI-powered applications, how can we redesign our workflows to mitigate potential risks and biases that we might inadvertently perpetuate? How can a development team prioritize an ethical perspective right from the start of a new project?
Gabriela: I believe what’s happening now is that universities are prioritizing ethics in their curricula […]. When I’m teaching programming to my students, we don’t just focus on the technical aspects. We also consider the social implications of our work.
I’ve always argued that in particular areas like artificial intelligence, we have disciplinary convergences that set us apart from other fields. […] That’s why I think it’s essential to integrate ethics as a working methodology. It’s an ongoing process, not just within the lifecycle of AI, but at an ecosystemic level. This means considering everything from formulating the problem we’re trying to solve to how we’ll use data, train the model, and implement it.
But that’s not all. There’s so much more to it. And that’s why I like talking about socio-technical intelligence. […] My goal is for more people to take up this work. I want more people to start researching and developing methodologies that integrate ethics with interdisciplinarity. I believe that’s the key to integrating ethics into AI.
To achieve this, I think it’s crucial that we have a larger community of practitioners who can share their knowledge and expertise. We need to create an ecosystem where researchers, developers, and policymakers can collaborate and work together to develop more responsible and ethical AI solutions.
Ignacio: It’s not just about knowing what needs to be done, but also putting it into practice and having regulations in place, because ultimately, those who make the decisions that affect implementation aren’t always the ones who develop the program.
Gabriela: Actually, one of the “problems” we face is that, at least today in Chile, there is no legislation that effectively addresses these structural issues. In Congress, a new law on artificial intelligence has been sent for review, but I, along with various other academics, disagree with it. We hope it will be improved. But broadly speaking, I think this has to do with the layers of responsibility involved.
On one hand, regulations help because when working with companies that have significant economic interests, it’s essential to have rules that enable internal and external audits and provide incentives for responsible practices. At the same time, I believe we need to raise awareness about this. Across the board, we need to emphasize the importance of this issue, not just because we’re doing good or harm to many people, but also because there’s another dimension that wasn’t discussed much in the early waves of AI ethics and is now being talked about: environmental impacts. […] We need a regulatory framework that allows us to establish clear guidelines for how organizations audit their processes. This includes thinking through the process, justifying it effectively, and having an inclusive context of development.
But I also believe we need to start measuring the impacts of our actions. For example, when using or generating a language model like ChatGPT, we need to consider what kind of resources are being consumed, like water and electricity. So, I think a comprehensive approach is needed: regulation, legislation, incentives for investment in audits and other measures, but also social awareness and taking responsibility as Chileans for this discussion.
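For practitioners who want to act on this, one option among several is instrumenting a workload with the open-source codecarbon library, which estimates energy use and CO2-equivalent emissions from hardware and grid data. A minimal sketch, with the workload itself as a placeholder:

```python
# A hedged sketch using the open-source codecarbon library; the numbers
# it reports are estimates derived from hardware and grid data, not
# utility readings. The workload below is a placeholder.
import time

from codecarbon import EmissionsTracker

def run_inference_workload():
    time.sleep(2)  # stand-in for a real model call

tracker = EmissionsTracker(project_name="llm-inference-audit")
tracker.start()
run_inference_workload()
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

The figures it reports are estimates rather than meter readings, but they make resource consumption visible enough to compare runs and justify choices.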
Ignacio: When something is being digitized, it’s essential to consider whether it’s truly necessary. We need to think about the potential impacts and weigh them against its usefulness. But in order to make informed decisions, we need to have a basic understanding of what’s going on as users. I believe that closing those digital gaps is crucial for building trust in AI.
Gabriela: I really like many of the initiatives arising from the renewed artificial intelligence policy implemented in May of this year (2024). The original initiative started in 2021, and many of these efforts are aimed at formalizing knowledge and bringing it closer to the general population.
I like talking about “digital divides,” where we have this idea of an elite with a high level of knowledge about how digital technologies work, then a middle ground with different levels of access to understanding and benefit […]. And then you have those who are completely unconnected, who unfortunately still make up part of the population.
Sometimes it’s assumed that only older people, who may not have grown up with technology, are illiterate in this sense, but many young people are also unable to read and write digitally. Even though they’re exposed to digital technologies from a very young age, they don’t develop a critical perspective on how to use them properly. […] We’re seeing trends we need to be concerned about, because we need to understand the cognitive effects these will have on society and on children and adolescents. For me, a key aspect of building trust in our ecosystem is also considering the impact of adopting digital technologies from an early age. We lack sufficient studies and data on this topic, which makes it crucial that we start gathering evidence on how this is playing out and how it has evolved over time.
Ignacio: As you said before, technology is part of a society or organization and has effects on people. That’s why I believe we need to stay alert and watchful for changes and developments, so we can be prepared for any potential consequences and take proactive measures to mitigate them.
Gabriela: Yes, it’s like looking at everything as an interconnected system. In reality, this is about understanding how influence works, and how relationships shape things. When you can step back and see the bigger picture, “how is this affecting me?”, “how am I affecting others?”, you start to understand.
For me, this is what’s so exciting about artificial intelligence – we’re moving too fast, and we need to slow down and take a closer look at what we’re doing. We need to question our assumptions and be more thoughtful in our approach. But at the same time, I think this technology also offers us a chance to get back to basics and understand what we really want from it. What are our goals? How do we want humans to interact with machines through automation? I think we’re not always taking the time to reflect deeply enough on this, we’re not scaling up our thinking far enough.
Ignacio: For a long time, it was believed that technology and digitalization were neutral, that they were the solution to our problems, and I think there’s been a kind of “technosolutionism” where technology has been positioned as an infallible pillar, excluding any subjectivity. That’s not true. And when it comes to artificial intelligence, I agree with you that we need to rethink what we want to put at the center. Do we just let this technology grow without control, driven by those who created it? Or, as a society, as individuals who are affected by it, do we take a stand and say what we want? What kind of project do we want to create? What kind of technology do we want to develop? And what role do we want technology to play in our lives?
Gabriela: I think there are some misconceptions about artificial intelligence that need to be addressed. We don’t have a universally accepted definition of AI, and I believe the term itself is flawed. I reluctantly agree to call them “artificial intelligences,” a label that doesn’t accurately reflect what we’re dealing with.
What’s relevant here is understanding that these so-called “intelligences” are actually just various tools and techniques for processing information in different ways. But what bothers me is the definition of AI itself, which is often reduced to simply simulating human behavior or mimicking certain human knowledge or learning. I think this definition is not only poor but also fundamentally flawed.
Take, for example, a recent AI system that was designed to detect cancer from images with 100% accuracy. But when it was tested on a dataset of scans, […] it ended up misclassifying the background as the tumor, and not picking up any relevant information about the actual tumor. This shows that while the AI can process vast amounts of data, it doesn’t truly understand the context or meaning of what it’s looking at.
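This failure mode is often called shortcut learning, and a simple probe for it is an occlusion test: hide the lesion, then hide everything except the lesion, and compare the model’s scores. The sketch below uses hypothetical `model` and `lesion_mask` stand-ins to illustrate the idea, not any specific system:

```python
# An occlusion-test sketch for the shortcut-learning failure described
# above. `model` and `lesion_mask` are hypothetical stand-ins: any image
# classifier with a predict() scoring method and a boolean mask marking
# the lesion pixels would do.
import numpy as np

def occlusion_check(model, image: np.ndarray, lesion_mask: np.ndarray):
    fill = image.mean()  # neutral fill value for occluded pixels
    lesion_hidden = np.where(lesion_mask, fill, image)      # hide lesion
    background_hidden = np.where(lesion_mask, image, fill)  # hide background

    print("baseline score:     ", model.predict(image))
    print("lesion occluded:    ", model.predict(lesion_hidden))
    print("background occluded:", model.predict(background_hidden))
    # If the score survives losing the lesion but collapses without the
    # background, the model learned the background, not the tumor.
```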
What worries me is that we’re trying to replace the complexity and beauty of human understanding with something more simplistic. Are we just trying to make our lives easier by simulating intelligence? But in reality, true intelligence involves understanding language as a dynamic system, not just processing numbers and patterns. Human beings don’t simply understand language through algorithms; we learn it through our senses, our relationships, and our experiences.
We need to be more careful about how we define and pursue AI, rather than settling for a watered-down version that doesn’t capture the essence of what makes us human.
So I think that, in many cases, the perception created around artificial intelligence, starting with the very language of calling it “artificial intelligence,” has led people to a very mistaken idea of what it really is and what it is capable of doing.
Ignacio: The example you gave makes me think it’s even more necessary to be able to audit models, in terms of transparency and accountability. And from everything we’ve discussed, there’s a lot to learn, not just for those who create and develop artificial intelligence, but also for those of us who are affected by it, whether because our data trains it or because we’re impacted by its use. This is a fairly transversal and global change at the societal level, and I wonder how this process can be harnessed to give these technologies the place they deserve, rather than being victims of an uncontrolled, organic evolution.
Gabriela: Recently, one of the leading voices on artificial intelligence published a note arguing for a halt to progress on any advanced AI, on any model superior to GPT-4, right now. Considering this could have consequences… We should perhaps strive for agreements, set limits, and establish international accords […]. For instance, as part of the National Center for Artificial Intelligence, I’ve seen non-profit organizations, governments, research institutes, and technology centers all working toward a common goal: reaching these agreements. How far do we want to go? What boundaries do we want to set? How do we perceive this impact on our population? Moreover, there is an inherent vulnerability that affects the Global South specifically.
We’re aware that many of these technologies are developed globally, under laws and interests from the Northern hemisphere, and that they affect our own population in some way. From here, we’re discussing how this affects us and empowering ourselves in that conversation. I believe it’s necessary, and could be enough for a period of time, to have international agreements on this issue. This is extremely difficult, but I think it can be done; for example, within Latin America we can establish our own agreements, which the European Union or the United States can then join. There are powers like China that may not want to enter certain international agreements, or might not agree to them fully, but we’ve seen some good-faith actions so far, for instance, when they developed their own legislation and discussed it with the US and the EU. This is often compared to nuclear arms, because it’s a high-risk technology that transcends political boundaries and geography. Artificial intelligence has the same potential power. In that sense, I believe we need to respect its impact, so that we can measure the possible effects and reach the agreements that will be necessary. We need to take this seriously.
Ignacio: In the short term, what do you think is the most important challenge of artificial intelligence?
Gabriela: We’re finishing writing a book with colleagues, and in the final chapter we wonder how this will evolve over the next 10 years, or maybe 5. We might be completely off the mark; perhaps something will come out that nobody saw coming, or it might all stop. It’s uncertain precisely because there are many things happening at once. But I believe one of the most important issues to address is taking responsibility and reflecting on what we’ve discussed: before we can achieve robust global regulation or legislation, we need to establish standards for transparency in processes, standards for how we share data with each other and for what purposes. We need something like an exploratory agreement. If we know these companies are going to keep advancing, if we know governments will keep investing in scientific research related to artificial intelligence… then at least let’s have some common ground. That’s the most important thing right now, because it will require human and technological infrastructure that aligns with these dimensions of ethical development we’ve been discussing. If we have a regulation requiring that, for an artificial intelligence model or data coming from Europe or the United States to enter Chile, it must meet the criteria we consider necessary […], I think that will push those requirements to become part of the everyday reality of developing and adopting AI in our country, and a reality that is also taken into account from outside.
Ignacio: Returning to the day-to-day: when we use one of these artificial intelligences, how can we become aware of our own biases and of the biases in the information it presents to us, and how can we avoid continuing to replicate them?
Gabriela: It’s super difficult, and it has been a challenge in philosophy and meta-ethics too, when discussing inherent biases that one can’t process or recognize. We often have biases and rarely realize it. I think the practical way to recognize them is through variability and exposure to different types of information and different ways of processing it. In simple terms, I’m referring to social interaction. In everyday life, given the overuse of social media platforms and the excessive personalization of recommendation algorithms, how many times do I actively seek out parallel information? How many times do I question what I’m seeing? […]
We’re in an era where we receive so much information, even at academic and professional levels, that we lose touch with being able to contextualize the issues we’re tackling. In that sense, when people tell me “I’ll ask ChatGPT”… First of all, you won’t be asking ChatGPT anything, because it doesn’t respond; it generates simulated grammatical predictions. But why not ask it which topics are related to this instead, and then look them up yourself? That’s how I position myself in relation to this technology.
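Her description matches how these models actually work: at each step, the model scores every token in its vocabulary, turns those scores into a probability distribution, and the “response” is just a chain of likely continuations. A toy illustration with invented scores:

```python
# A toy illustration of next-token prediction; the vocabulary and scores
# are invented. Real models repeat this step thousands of times.
import math

vocab  = ["lists", "are", "optimized", "waiting", "."]
logits = [0.3, 1.7, 0.9, 2.1, -0.5]  # raw scores a model might emit

# Softmax turns raw scores into a probability distribution.
exps  = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for token, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{token:>10}: {p:.2f}")
# The "response" is just the highest-probability continuation, chosen
# (or sampled) one token at a time.
```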
I don’t use Google to search for information; I use Reddit. It’s a platform many people don’t know about, basically a set of discussion groups on various topics. Often, when you’re searching for specific information to solve a problem or learn about a subject, what you find there are human recommendations. That’s what I think is important when interacting with excessive information: to simulate human connection, to reach out to someone who has experienced the opposite or something similar. Because that’s the experience that truly enriches understanding, whether I’m interpreting this correctly or questioning what I need to question. Technology doesn’t do that for me.
Ignacio: For those who work in the digital sector, whether creating software, technology, or artificial intelligence systems, what do you think should be the first change, the first action to implement in order to be more aware of everything we’ve talked about?
Gabriela: There are two things: one is knowing that you don’t know what you don’t know, and the other is recognizing that you have a professional deformation. I know it sounds harsh, but it’s a technical term. Professional deformation basically stems from the increasing hyper-specialization in how science is practiced. […] We have different ways of understanding problems. […] And those who aren’t in an interdisciplinary field should expand into one. This ranges from reading popular science books that show how digital transformations are taking place in other areas, to listening to podcasts: something accessible, simple to do and not time-consuming, but which will start opening up a different cognitive perception.
That’s what generates long-term change, because it makes no difference telling someone to take a course on ethical reasoning, algorithm evaluation, or auditing… if they haven’t yet reached the state of self-awareness of “I’m professionally deformed.” It’s a beautiful process in which you identify what you don’t know. So my recommendation would be to start there.
Ignacio: Very good advice. I think it’s quite simple to do, which doesn’t mean it won’t be difficult at first. You have to accept that you don’t know what you don’t know, and be willing to learn and explore things that might sometimes go against what you’re used to thinking or believing. Thank you very much for the conversation, it was very interesting. Would you like to add anything else, or share a final message?
Gabriela: I’d like to invite everyone listening to keep the conversation going and not just consume content. I liked that this was a conversation where we could flow with the topics, because it’s not just about processing information; we’re not machines, we don’t simply take in input and produce output. I think what’s important here is going back to reviving the contexts in which we live: the importance of someone having had a childhood different from mine, of having studied different things, and how that puts us at certain advantages and disadvantages when interacting with new technologies.
I believe we’re in a situation where ironically, these technologies allow us to get closer but it’s up to us also to transform that digital closeness into human connection. So I invite you all to join in and start those reflections. I’m always available to chat. You can also find me on my social media channels.