Critical Future Tech

A security camera pointing left on a white background.
Photo by Siarhei Horbach on Unsplash

Issue #3

January 2021




Welcome to 2021.

We all want to put last year behind us and forget about it. We've been through a lot globally. Unfortunately we'll need to endure some more before it gets any better.

But it will get better.

At least 2020 brought the start of world governments' coordinated stance against Big Tech's influence and monopolies. Whether or not this will lead to meaningful change remains to be seen. We'll be here for that.

For this third edition we're joined by Dr. Gemma Galdon-Clavell, Founder and CEO of Eticas Consulting, for a deep-dive discussion on algorithmic fairness and accountability.

- Lawrence

What Happened in December


Governments Vs. Big Tech


The U.S. government and 48 states and districts are suing Facebook, accusing it of abusing its market power in social networking to crush smaller competitors. The suit argues that the company should be broken up. Simultaneously, at least 12 states have filed an antitrust lawsuit against Google alleging an illegal monopoly over the online search market. The suit also revealed the two companies had agreed to "cooperate and assist one another" against antitrust investigations.

The E.U. unveiled landmark legislation — the Digital Services Act and the Digital Markets Act — to curb the power of tech giants. Meanwhile, France fined Google and Amazon $120m and $42m respectively for dropping tracking cookies without consent.

Australia is keeping busy: it is moving forward with plans to force Facebook and Google to pay for news, it is suing Facebook over a misleading VPN app, and it may fine Google $400m over its Fitbit takeover if the company doesn't wait for the competition watchdog's approval.

Beijing authorities have ordered Ant Group, Alibaba's fintech affiliate, to restructure its business. This follows the government-imposed halt to the company's IPO after Jack Ma's speech criticizing China's financial regulators in October.

Fairness & Accountability


Only 7 of Stanford's first 5,000 Covid-19 vaccine doses were allocated to medical residents due to its distribution algorithm. See the video of staff confronting the administration.

Amnesty International has released a new report condemning Facebook and Google as complicit in censorship in Vietnam, with "state-sponsored harassment rampant" on YouTube and Facebook.

Also, apparently Amazon employees abused their power to watch Ring footage for fun.

Worth Checking


To get these delivered to your inbox, subscribe to CFT's monthly newsletter at the end of the page and join the conversation on Telegram.

And now, onto the interview.

Conversation with
Gemma Galdon-Clavell, PhD

Founder and CEO of Eticas Consulting

This conversation, which took place December 8th 2020, has been edited for length and clarity.

Lawrence — Can you tell us how you became interested in this theme of algorithmic accountability and how you ended up founding Eticas Consulting?

Dr. Galdon-Clavell — I think it's been a combination of different things. There's the fact that I've always been involved in and cared about social justice. There's the personal part that draws me to issues of inequality and fairness. Then, when I was doing my PhD I was heavily influenced by the surveillance studies community. I was very lucky to join a European collaboration that worked on technology and the social impact of technology in urban space.

My PhD, which initially was very much sociological and drew from criminology and the social sciences, ended up being a lot more technological than I initially anticipated. At the time I published my PhD that was quite novel: there were not a lot of sociologists or people from the social sciences working on technology. I started working more and more in technology, looking at how technology impacts society, and getting a lot of interest and funding to do research.

The key thing that propelled me to work differently — from a consultancy and foundation and not from a university which would be the normal thing to do — is that at one point my university asked me to leave because I had too much money and I was too junior to manage that money. They said "You need to start something on your own and we'll transfer all the funds but you cannot stay here".

I'm hoping it doesn't happen anymore. That was 10 years ago, but back then it was very unusual to be a young scholar with funding and that was not welcome at my university, the University of Barcelona.

At the time I started a consultancy because I had no alternative, but in the end what seemed like a really bad solution at the moment became the space that has allowed me to shape my career and my research interests in ways that I feel have a lot more impact than they would've had if I had stayed in academia.

So Eticas, which initially was just a space where I would finish my projects, started getting new projects to work on, new things around technology. Initially I worked very much on security technology. Since then I've managed to work in lots of different fields, from education to migration to smart cities, the world of work, the future of work and robotics, so I've been able to use my perspective in lots of different fields. What I found throughout my work is that there are some very concerning things about the way we do technology.

I have found consistently in all the areas where I've worked that the way we currently do technology always benefits and adds privilege to those that already have privilege.

We are always developing technologies to reinforce the power of those that already have power. We don't have enough technologies that work to mitigate inequality or bring about fairness. Our way of doing technology very much reflects the way that society works, and that is a way that is fundamentally unequal or that produces inequality. That's why I decided to focus on algorithmic accountability, because that's probably the heart of where these decisions always impact positively on the most powerful and negatively on the most vulnerable. I think that's where I'm most useful.

A dark stairway with red and bluish tones with two centered arrows pointing up and down.

"I have found consistently in all the areas where I've worked that the way we currently do technology always benefits and adds privilege to those that already have privilege."

Photo by Cheng Feng on Unsplash

L So what is algorithmic transparency and accountability? What are we talking about at the end of the day?

G Well basically there are more and more things around us that are being decided by algorithms. For instance when you take your phone and you ask your maps application how to go to work the way that your phone decides the best and most efficient route to go to work is decided by an algorithm. You don't have a little person making that decision and checking how the road is this morning. It's an algorithm that uses past information on how people have moved in that specific space. It also uses updated information on what's happening at the moment and it tries to give you the best recommendation and that's awesome you know? The fact that we can foresee the best way to get from A to B thanks to algorithms is fantastic.
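
Purely as an illustration of the idea above (and not how any real maps product works), that kind of recommendation can be sketched as a shortest-path search over roads whose travel times blend historical averages with live readings. Every road name and number below is invented:

import heapq

# Toy road network. Each segment's expected travel time blends a historical
# average with a live reading; all values are made up for illustration.
historical = {("home", "A"): 5, ("A", "work"): 7, ("home", "B"): 4, ("B", "work"): 10}
live = {("home", "A"): 9, ("A", "work"): 7, ("home", "B"): 4, ("B", "work"): 8}
travel_time = {road: 0.5 * historical[road] + 0.5 * live[road] for road in historical}

def best_route(start, goal):
    # Dijkstra's shortest-path search over the blended travel times.
    queue, visited = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for (origin, destination), minutes in travel_time.items():
            if origin == node and destination not in visited:
                heapq.heappush(queue, (cost + minutes, destination, path + [destination]))
    return None

print(best_route("home", "work"))  # (13.0, ['home', 'B', 'work'])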

But then when you realize that the same algorithms that use past data are being used to decide whether you get a job or if you get a mortgage or if you can go to university or if you go to jail then that becomes more problematic.

If the algorithm that decides the best way to get to work makes a mistake, the worst thing that can happen is that you are five minutes late. But if the algorithm makes a mistake by not giving you a job or by sending you to prison, we're talking about an impact on fundamental rights and vital opportunities for people. That's what is happening.

We have all these algorithms around us deciding more and more things about our life chances. It's not only about what ads we see on social media and not only about routes on our phones. It's also about whether we get unemployment benefits, whether we're labeled as a population at risk that deserves increased social care. There are all these things that are getting decided by algorithms.

We have no visibility of that. If you ask anyone on the street "Have you been impacted by an algorithm lately?" they won't even know what you're talking about. That's problematic because there's all these things making decisions for us that we can't even locate or think about because we have not been informed. So algorithmic transparency is about requesting that these technical systems have the same level of accountability that we have for non-technical systems.

If a civil servant decides that you should not get unemployment, that civil servant needs to justify that decision. There's a procedure that you can follow if you feel that you should be getting unemployment and it's being denied to you by your state. But if it's an algorithm that decides that, it is unclear how you can proceed in the face of that decision, how you can defend yourself, because one tends to assume that technology is neutral and that it's better than humans at making decisions. We don't even have a path to fight against or to question those algorithmic decisions.

At the end of the day it's not about creating something new, it's about making sure that the guarantees that we have in the offline world are translated into technical decision making processes.

L In a way it seems that algorithms are used almost as a shield that excuses those deploying them from any outcome, you know? "It's the algorithm that has decided. It's scientific, it's empirical, it's based on data", in an attempt to cut short any argument you could have against whomever is using the algorithm.

G Yeah but you know what I found?

When you say this it sounds like someone really evil is behind it, hoping to use technical systems to hide their real intentions. What I've found in my work is that there's a lot more lack of information than bad will. Oftentimes the people that are buying those algorithms believe that they work because the people that are selling these algorithms are selling promises that they can't really fulfill. They're telling their clients that these algorithms are perfect, that they're better than a human, and that's a lie.

But the person buying it has no way of checking this because often they don't have a technical background, they don't know how algorithms work, so they just trust the seller on what it does. They believe that they're making better decisions, and it's only when you have an algorithmic audit or someone campaigning against something and exposing how badly and how non-neutrally these algorithms work that they become aware that they have a problem.

In a way there's hope because it's not about bad faith. It is about a lack of information. Thus the need to talk about algorithmic fairness and transparency, and also to make sure that algorithms cannot be implemented without the necessary safeguards that ensure accountability, transparency and auditability.

L It's great that you mention that there is a need for bringing more accountability into this decision making process.

The E.U. is currently working on the Digital Services Act which is all about bringing more transparency and accountability on algorithms. In the many years you've been working in this field, what have you observed in terms of how governments have reacted to the pervasiveness of technology, how it affects society and how are they responding?

G Well I think governments are responding in very similar ways to how the private sector and even society is responding, and that is: slowly. One of the frustrations we have with the GDPR that Europe passed in 2018 (it was actually published in 2016, so we've had it for four years) is that even though we have a law, enforcement continues to be a challenge.

So we have all these principles that in theory are captured by laws but that are falling short in building the specific practices that will ensure that those legal precautions are translated into technical specifications, and I fear that the Digital Services Act will suffer from the same problem. I think it's great that we have more pieces of legislation that specifically address issues related to technical systems, but we need to go from the principles to the practices. It's great to have the GDPR talk about the explainability of algorithms, but we need to say what explainability means: that you have to audit every algorithm being used that impacts people.

Algorithmic explainability means that every country needs to have a registry so that citizens can know which algorithms are impacting their lives. Unless you start defining the practices that go together with those laws, we'll have the same problem we currently have with the GDPR: a law that basically no one is complying with, a law that most organizations, public and private, are choosing to ignore.

L It's true that the GDPR and the Digital Services Act are in principle positive but then their application is another matter. How do you apply it, how do you ensure that it's enforced and how do you audit it?

G Exactly, the biggest challenge is implementation. We have the principles. We know the principles of responsible technology, responsible A.I., responsible robotics. We know that it's about transparency, it's about fairness and equality. We have the principles. We need the practices. I think that's the next challenge and the next step in furthering the field of studying how technology impacts society. We need to stop being abstract and start talking about how we solve this. We've been very good at saying what doesn't work. Now I think we need to define what it means to do it well.

A silhouette of a man leaning against a wall looking at his phone.

"Algorithmic explainability means that every country needs to have a registry so that citizens can know which algorithms are impacting their lives."

Photo by Maksim Istomin on Unsplash

L Do you have any advice for the people working on the Digital Services Act, such as concrete steps to ensure actual fairness and accountability in how algorithms are employed?

G Just ensuring that you have actual practices. Don't stop at the level of the law but also develop the regulations, the examples and standards. I think that's the most important thing. GDPR is great but it's too high-level and because it's high-level people just act as if it didn't exist. Once you go down from the high-level and start talking about practices then it's a lot clearer for everyone what it is that they should be doing.

Right now I have companies or governments that come to me and are like "We understand the need for our algorithms to be explainable but what does it mean? What does explainability mean?" And then we tell them "Well, if you audit your algorithms that's one way of making them explainable." But the regulation doesn't say this so they're lost. I think the Digital Services Act would be a lot quicker in becoming a set of practices if the regulation itself already included references to specific practices and standards that need to be used in order to comply with the principles that are being defended by this act.

L This goes in line with my next question, specifically regarding Eticas Consulting. What sort of entity approaches your company and says "I feel the need to be audited. I want someone to look at my inner workings"? What is the reason? Who approaches you proactively, not necessarily forced by external factors?

G They can be quite diverse. I always say we work with pioneers. Usually we are approached by a person, not so much an organization. It's one person inside an organization who realizes that they have a problem. And they realize they may have a problem for different reasons. It may be that they're personally concerned about issues of fairness and equality and how technology works or maybe sometimes they are realizing that unless they audit their systems some people may start asking questions. Some people are already realizing that unless you audit your algorithm you may have an issue with whether citizens or clients trust you and your system.

But in any case it's definitely people that are ahead of the curve in realizing that this is an issue that needs to be addressed and also that addressing this issue brings good things to them in terms of building trust with clients or citizens and building better technical solutions. At the end of the day when you audit an algorithm you make it better because you are realizing how the algorithm may be discriminating against women or against other minorities or against older people or against people from a specific region. The kinds of discrimination that we find in algorithms are very diverse.

Some of these discriminations are discriminations that go against the purpose of those who are implementing the algorithm.

We had the case of the Apple Card, when a very rich person went on Twitter last year saying "Both me and my wife applied for the Apple Card and my wife was rejected. My wife has better credit than me. She makes more money and her history of paying is better than mine." And yet the algorithm at the bank that provides the Apple Card decided that she should not be a client. So that kind of discrimination is bad because it discriminates against women, but it's also bad for business, because you want that woman to be your client: she's obviously a good client.

So I think there's more and more people realizing that an algorithm that has not been audited may be making decisions that not only are unfair to society but also inefficient for their business model.

L So there is a combination of being aware and actually wanting to do the right thing but it's also wise in terms of business to anticipate these issues that may arise from unfairness let's say.

G We often say that people contact us for the right reasons: people call us because they care, but they find a budget when they realize the impact of not caring.

L I like that you said that it's actually individuals within companies that reach out, individuals who in a way have some sort of sensitivity towards these issues of fairness or inclusive technology. This is about individuals, and it's precisely the purpose of having conversations with people like you.

The idea for Critical Future Tech is to broadcast the importance of being aware, especially for technologists, because they are the ones that end up implementing those tools and algorithms.

If those individuals are unaware of the consequences of what they are building it can be dangerous, because they are, in my view, the "last line" for understanding what is being built and raising the alarm on unintended consequences. Do you think we can bring this sort of awareness to technologists and other professionals in order to avoid unfair or biased algorithms, and therefore negative effects from what they create?

G If we acknowledge that algorithms are making decisions about social processes you need to understand those social processes in order to create code about them. I often say that engineers cannot code a world that they don't understand because they haven't been trained to understand it. That's not their job, it's not in the specifications of what they do.

But if they want to work in the field of socially impacting algorithms they need to start understanding how society works or at least work with those who understand it. As an algorithmic auditor I cannot work on my own. I always work with a team because auditing an algorithm requires that you mobilize knowledge from lots of different fields and areas and I feel more confident if I bring in people that are from a specific field that is needed to audit that algorithm. That is always people from more social backgrounds than technical backgrounds.

So I don't think we can hope to have engineers that know about everything in the next few years, I don't think that's gonna happen. Maybe later on but right now I think that we need to demand multidisciplinary teams in the spaces where the algorithms are being implemented. So make sure that you not only have an engineer but also someone who understands the law, someone who understands society, someone who understands social impact. And then we can start having a conversation. And I would say one more: someone who understands the environmental impact. When we start taking into account the amount of energy that is being used in A.I. we may decide not to use some of the algorithms that we are developing.

But we're not having this conversation yet so I think that's another field that we'll need on the table. If you're auditing an algorithm that uses natural language processing we need people that are trained in linguistics. If we are working with large sets of historical data we probably need an archivist, someone who knows about how to categorize data from the distant past. There's all these already existing professions that can bring so much to the world of algorithms and we need to start mobilizing all this knowledge to make better algorithms.

A man programming in the dark in front of multiple computer screens.

"If we acknowledge that algorithms are making decisions about social processes you need to understand those social processes in order to create code about them. Engineers cannot code a world that they don't understand."

Photo by Jefferson Santos on Unsplash

L Startups, especially tech companies, want to "move fast and break things" and experiment, and don't want to be held back by legislation or best practices. Do you think that the private sector is willing to be put under the lens on how they create their products and tweak their algorithms — for their own advantage of course — in order to make them more fair?

Do you think the private sector, Silicon Valley, is willing to be subject to auditors asking questions and looking into how their products work?

G It's interesting that Silicon Valley complains about this because I don't think any of us can imagine the CEO of Pfizer — for the Covid vaccine — complaining that they have to go through clinical trials before selling the vaccine. In every area of our life, in every space of innovation, we accept that there are laws and regulations that protect people. If you want to sell toys you need to prove that the plastics you are using are safe to be used with children. You need to follow a procedure of advertising what the toy can be used for, where it can be sold. There's all these procedures that guarantee that when a kid takes a toy from a shop the toy has gone through the necessary procedures to prove that that kid can play with that toy.

The same with food. You go to a supermarket and things may be more healthy or less healthy but everything in a supermarket is fit for consumption because someone's validating a procedure that makes sure that whatever ends up in a supermarket has been audited before, it has been checked. What I don't understand is why Silicon Valley is asking for an exception that we have never given any other sector of innovation or industry. Why do they think they deserve that space of lawlessness and why do they think that that's good for people? I don't think anyone would defend that not controlling toys, not controlling food, not controlling medicine would be a good thing for society.

Cars don't need seatbelts. Cars don't need speed limits. We built seatbelts and speed limits because that's the way we mediate between the potential of the technology and its social impact. It's the way we compromise on the safest way to use the technology. The Pfizer vaccine or any vaccine against Covid needs to go through clinical trials because we want to ensure we test that innovation before it gets to people. And when Pfizer goes to clinical trials they would never dream of saying "We don't want to go to clinical trials", but also they understand that unless they go through clinical trials people will not want to be vaccinated. In the end having all these controls is a guarantee for trust building, and you need trust if you want people to use your technology. I find it really hard to understand how we keep on allowing the Silicon Valley guys to demand this space that we don't give anyone else. You would never buy a car without a speed limit or pollution controls. We would never allow a supermarket to buy food that comes from unsupervised sources. We have procedures to protect society.

How come Silicon Valley keeps insisting that people don't need to be protected from Silicon Valley? We've discovered that Silicon Valley is impacting the democratic process, how people form political preferences, how children interact with new gadgets. The impact from these technologies is so massive that for them to defend that they should not be controlled is just unthinkable and incomprehensible to me really.

L Well it's understandable that they want to protect their businesses. I would argue that there are maybe two reasons why they were able to pull it off as extensively as they have.

First because it was done in the abstract: you just access the website. It's non-material. What's the consequence of keeping up with your friends when you go on a social network? I think a lot of people didn't see it as being something serious or to be taken seriously.

Secondly, the Internet was a sort of wild west where you could kind of do whatever you wanted. There was a lack of regulation. Governments and legislators didn't understand why or how the web would need to be regulated in order to protect against the consequences of biases, or the attack on democracy as you said. So it was a very fast moving space that governments didn't keep up with.

This seems to be changing now, first in the physical space: antitrust, anticompetitive behavior. These are the first real things with which you can push back. Maybe after that we can start moving to more abstract issues, like algorithmic accountability, transparency and other areas. At least that's how I see that they've been able to get so far without being questioned.

G Yeah. I just hope it ends and it changes because it's just unsustainable. It's just not safe for society. We are harming a lot of people with algorithms that make wrong decisions. We are eliminating democratic safeguards that we have in the offline world but have not been translated into the online world and that is hugely problematic so I think it's high time these issues get addressed.

L For my last question I'm going back to Eticas Consulting. Without giving out your trade secrets, what does an algorithmic audit consist of? What do you look at? Do you look at data? Do you look into the algorithm itself? How does it work?

G It's a whole process and it's not only technical. The first thing for us is to audit how a social problem has been translated into data inputs.

Imagine you have a problem: you want to distribute unemployment payments in a better way, you want to automate the way that you give people the right to claim unemployment. That's the social problem you're trying to address. You want to be more efficient in distributing those funds.

Then we see what data you are using to make the decision on whether someone can claim unemployment. We often find a lot of issues here already. Oftentimes people or engineers are using the data that they have and not the data that they need. In the end they're making bad decisions because they didn't frame the problem correctly at the beginning. This first phase of how you translate a complex social process into data inputs is very important.

We also look at the biases or inefficiencies that your input data may have. Not only in terms of bias or how it could discriminate against people, but also inefficiencies or problems with quality. The databases that you are using: how often do you clean them? Can you guarantee that old data is updated? Usually we find that algorithms are relying on databases that are old, not cleaned and not maintained, and so again that's another source of problems.
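
As a purely illustrative sketch, and not Eticas' actual methodology, the data-quality questions above are the kind of thing a first pass in code can surface. The dataset, column names and thresholds here are invented:

import pandas as pd

# Hypothetical unemployment-claims dataset; the file and columns are invented.
df = pd.read_csv("claims.csv", parse_dates=["last_updated"])

# Share of missing values per column.
print(df.isna().mean().sort_values(ascending=False))

# Staleness: records untouched for over two years are a red flag.
age_days = (pd.Timestamp.now() - df["last_updated"]).dt.days
print(f"Records older than two years: {(age_days > 730).mean():.1%}")

# Representation: are some groups barely present in the data at all?
print(df["gender"].value_counts(normalize=True))
print(df["age_band"].value_counts(normalize=True))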

Then we look at the social problem and ask: who is going to be impacted by this algorithm? Out of the people who are going to be impacted by this algorithm, who are the vulnerable groups? Who are the groups that need special protection? Who are the groups that have suffered from discrimination in the past? If the algorithm makes a mistake the impact on them will be way greater. We want to ensure that the discrimination stops with the algorithms. That's a process of inquiry where we interview people and use the literature to see who can be considered a vulnerable population in the context of this algorithm.

We look at the technical aspects of the algorithm. We see how it works. If there's machine learning we see how it learns. And we build the technical remedies to ensure that the vulnerable populations are protected through statistical means and that biases are identified and not built back into the way of making the decision.
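
A minimal sketch of the kind of statistical check this implies, assuming a binary outcome column and a protected attribute (both names invented here), is to compare positive-outcome rates across groups, the disparate-impact style ratio used in many fairness audits:

import pandas as pd

def outcome_rate_ratio(df, outcome, group):
    # Positive-outcome rate per group, divided by the best-served group's rate.
    rates = df.groupby(group)[outcome].mean()
    return rates / rates.max()

# Hypothetical audit sample: 1 = benefit granted, 0 = denied.
decisions = pd.DataFrame({
    "granted": [1, 0, 1, 1, 1, 0, 1, 0],
    "gender":  ["m", "f", "m", "m", "f", "f", "m", "f"],
})

print(outcome_rate_ratio(decisions, "granted", "gender"))
# A ratio well below 1.0 for a group (a common rule of thumb flags anything
# under 0.8) points to a disparity the audit has to explain or correct.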

The last phase of the audit is to look at how the algorithmic decision is being incorporated in the organizational setting. Usually an algorithm will come up with a recommendation or an outcome that a human being has to implement, and oftentimes that's an area that auditors don't pay attention to. We've found that sometimes this coming together of the algorithmic decision with a human decision is like "the worst of algorithmic bias meets the worst of human bias". So it's really important to also audit how the algorithmic decision is integrated into the decision making process, to ensure that all the safeguards that we have built inside the algorithm are sustained up to the point of that decision having a consequence.

So that's, very quickly, what an audit is about. It's a very broad process that is not only technical. It involves engagement with the teams, with experts and sometimes, if we can, with the affected community, to see how they perceive the algorithm and what problems they have identified with the existing non-algorithmic processes, to make sure those existing problems are not replicated in the algorithmic system.

L Sounds like a really meticulous process. How long does it take on average?

G We can do it in very little time like 1 or 2 months but it usually takes a lot longer because it takes the client a long time to get the data. It takes a long time to be able to answer our questions because basically they've never organized their data. In that process alone we see many existing problems because if you don't even know where your data is and what your data standards are then it's impossible that these algorithms will work well. In the end it becomes a process of several months, usually because it takes a long time to organize all the technical information.

L All right! Thank you for your time Dr. Galdon-Clavell, it was lovely talking to you. Hopefully we'll get to talk once again in the future. Take care and goodbye.

G Likewise. Hopefully, yes. Goodbye.

You can connect with Dr. Gemma Galdon-Clavell via the email info@eticasconsulting.com and the websites eticasconsulting.com and eticasfoundation.org

If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.