Critical Future Tech turns 1 year old!
It's been a year since CFT started: 12 editions, 12 people with very different backgrounds who took the time to discuss technology's impact and its many ramifications on our lives and futures.
Now, onto the next 12 editions.
For this edition, we're joined by Frederike Kaltheuner, Director of the European AI Fund, to talk about shaping technology through policy and regulation, walled gardens, and the role organizations such as the European AI Fund play.
- Lawrence
Lawrence - Hi. Today we have the pleasure of talking with Frederike Kaltheuner.
Frederike is a tech policy analyst, researcher, and advocate for justice in a world made of data. She currently manages the European AI Fund, a philanthropic initiative to shape the direction of AI in Europe. Before that, she was a public interest tech policy fellow at the Mozilla Foundation.
Frederike, welcome to Critical Future Tech.
Frederike - Thanks for having me, hi!
L I'm very curious to discuss what the European AI Fund is doing. But before that, I want us to get into the mood on policy and those sorts of topics, which can be a bit annoying to many technologists, I would say. And that's why I also really want to talk with you and hear your perspective on that.
So before that, can you tell us a little bit about how you ended up working in tech policy?
F So I've worked at the intersection of society and technology for the past 10 years. I have a background in philosophy and politics but realized, around the time I had to choose a master's program, that a lot of the societal, political and philosophical questions of our time are closely intertwined with questions about technology. After my degree, I worked briefly as a data analyst, then for a research institute and a think tank, but I really got into tech policy at an organization I worked at for a few years, Privacy International.
They had a job opening for a policy officer, so I applied. To be honest, I don't think I really knew what policy was, but they gave me the job nonetheless, and it opened up this entire world and field to me. I've been working on tech policy in a narrow sense ever since, for the past five years I would say.
So tech policy, what does policy mean? Policy means the rules that shape how technology can be used, developed and researched. The reason I find this area super interesting is that technology is constantly changing, and as a result, existing rules sometimes no longer really make sense, or you need courts to reinterpret how they apply in a changed context. And then technology poses lots of questions that are genuinely new in quality. Not everything is new, but sometimes there are genuinely new questions, and the field of tech policy tries to find answers to them.
It covers all of the issues that are in the news every single day: disinformation, hate speech online, free and fair elections, the rules that govern data, the market dominance of big companies, the infrastructure on which our society runs. It sounds like a lot, but it's a really fascinating field.
Think about 20 years back: our world has really radically changed in the past 20 years, and we have a really short window of opportunity to define the ground rules that our technological societies should follow. That's why I find it really interesting.
You have powerful companies working in this area that have spent a lot of money on lobbying. The European AI Fund is a philanthropic initiative born out of the idea that there are not enough resources for organizations that work in the public interest and look critically at technology.
And it's a pooled fund, which means that 10 different foundations came together to pool their resources and focus on this issue. So it's not narrowly about AI. I always say that AI is the entry point for a lot of foundations, especially European ones, to support critical work around technology.
L Yeah, I mean, I also agree that there are pressing issues: climate change, inequality. And sometimes I've been faced with the question: why would tackling, say, the way we access information be more urgent? It's not necessarily more important, but it's maybe more urgent because, for instance, with the filter bubble and algorithmic editorialization, if we don't have a way of talking about the same thing, then how can we solve those bigger issues?
I don't know if you share that sort of approach to the problem as well?
F I don't like to rank. I mean, of course you have to rank problems; I think climate change is a really existential problem. If the planet no longer exists, we don't need data protection, to put it in this very extreme way. But you're right, I think we need different people working on different things.
And what I think is really important to understand is: if you work on discrimination, if you work on anti-racism, if you're a trade union, if you're an environmental activist, whatever you're currently working on, there's now a tech element to many of these fields.
So, for example, of course we still face old discrimination, very old discrimination, from sexism to racism, but what is new is that there's now a tech dimension to it. We now have automated discrimination. We now have discrimination that's embedded in infrastructure.
That has been the case before, but when it's digital infrastructure, it changes some of the discussions, logics and dynamics around it.
And I think that links really clearly to a key mission of the European AI Fund. We fund organizations that already work on tech policy or digital rights, but we also fund organizations that have very specific domain expertise. These are organizations you don't necessarily think of when you think about an AI fund, and we're not supporting them because we think everyone should be working on AI; we're supporting them because they came to us and said, we want to build our capacity when it comes to speaking about tech issues. The internet hasn't been a separate space for a long time now. It's an aspect, a dimension, of everything. And that's why I don't ask "is this more or less important than climate change?"
Public spaces are where activism forms and where elections are shaped. So this has an almost infrastructural quality: we have to make sure that we have healthy, thriving public spaces where we can tackle the really important challenges of our time. I always find it difficult to rank these issues, but I think this is one of the important issues that we need to tackle.
L Sometimes you speak with people who will tell you: "Well, there are bigger issues than that, such as climate change", right? But I still need a way to organize, you know, and I need to do so in a safe way, a way that doesn't compromise the people who are organizing, for instance.
F Exactly. Young climate activists especially face tremendous harassment online. So that's a core tech policy issue: how can we make sure that people aren't harassed? We've also seen some very invasive surveillance technology being used against environmental activists. I think these issues are closely, closely connected, but I see it more as: climate change is a crisis that we need to avert, whereas with tech, the question is really what kind of world we want to live in.
And now is the time to set the rules. Do we want this world to be democratic? Do we want this society to be open? Do we want it to be a world where people aren't discriminated against? So these are sort of the reasons why at least I care about these discussions.
L The fact is that we need to have those conversations. And when I say "we", I'm using "we" in a very general way: "we, the people" who have some ability to enact change, right?
Whether you're a policy researcher, AI researcher, technologist, company owner, whatever. But it seems that there is a big fragmentation in how those efforts should be made and in the ability to actually do something. And where I want to go with this is that when you look at AI, for instance, and the way it is deployed in society, it is mainly dictated by a handful of large corporations, right? The research that goes into it, how it is deployed, how it is sold.
So how can policy and governments tackle the fact that it's mainly private companies or private actors that deploy technology nowadays, without much questioning or much ability to look into it? Because, you know, you can't really look into the inner workings of some of those companies and how they do things. How can policy and governments tackle that occasional lack of cooperation from the private sector?
F I want to pick up on something that you said. I think the word "we" is very interesting in this context, because risks and benefits are very unequally distributed.
So, if you're relatively privileged, if you live in a country that has, let's say, universal health insurance and certain kinds of safety net, or if you're very wealthy, certain problems affect you less than others.
And I think that's one of the reasons why there are also different views on this issue: just because something does not affect you or me doesn't mean that it doesn't affect others. On your question: it's not just private companies that deploy technology. The public sector deploys technology too, and if you think about AI technologies, the government or the public sector is a huge investor in technology as well.
But you're right that the enforcement of rules is, to a certain extent, for a company to decide, and you're right that this poses a very new and difficult challenge for governments, or for democratic governments I should say, because social media companies, for example, have a huge influence on how public discourse works.
So when they make decisions around content moderation, and they do this globally, they are making these decisions for 2 billion users, but they're not democratically elected and they have very few accountability mechanisms.
I think there has been a lot of development in the past few years. Initially, the U.S. tradition of freedom of expression meant that there was very little interest in interfering in the content moderation decisions that companies make. Europe has always taken a slightly different approach, and there are currently laws being discussed, the Digital Services Act for example, whose main purpose is to acknowledge the fact that there are companies that are very big, that are platform intermediaries, and that therefore have particular obligations.
And the idea behind regulating them in this particular way is that it has to be democratic governments that regulate these things; we cannot leave it to platforms to make these decisions. We see a bit less of this in the U.S., but there is a tendency to move in this direction there too.
What I always find important when we talk about governments and tech companies is that most governments around the world aren't democratic. Governments are also consumers of, let's say, surveillance technology, and they also use technology to abuse human rights and to censor journalists. There's a lot of attention on companies at the moment, but governments aren't necessarily the better actor. It gets really dangerous when companies and governments are pulling in the same direction, because then there's very little opposition.
L It is a double-edged sword: depending on how you use it, you may do good or harm. Let me ask you, there's this logic of: let's deploy this technology because it will benefit us, for instance surveillance on the streets to recognize individuals, because it's going to safeguard us against terrorists. But the next government may use that against its own people, right? Do you think that part of the answer could come through the technology itself?
Do you vouch for a sort of technocratic approach? Not instead of a policy approach, but as a complement to it?
F I think it depends a bit on what we're talking about. Conceptually, there's a difference between a technology and intermediary platforms; these are different kinds of entities.
And the responses in governing them, in policy, have to be different. A platform, an intermediary, can use face recognition technology, and they use a lot of personal data.
But sometimes the term "tech policy" is a bit confusing, because what is tech, right? Tech is a very ill-defined word. When we talk about tech policy, we can mean anything from infrastructure to very specific techniques, to tools, to the entire industry, to platforms. So I'm always a bit nervous about the ambiguity around this.
You're right, and I love the fact that you brought this up. I don't want to say double-edged sword; I prefer to say there's often a time-shifted risk with technologies. There are technologies that make a lot of sense, that you can use very responsibly today, or you may like the way a company currently runs things, but once infrastructures are put in place, the purpose for which they are used can always change later.
And that is a huge problem. Of course we need to prevent crime, our societies need to prevent crime, and of course you can argue about the extent of policing, but you do need some form of police. The question is: the moment you deploy powerful technologies like face recognition, you need very strong mechanisms to make sure that these powerful tools cannot be abused. And there are contexts in which rules alone don't make sense.
There are contexts where the power dynamic is so unequal that enforcing rules is very difficult, and I think that's the reason why there are special rules in some settings. In the workplace, for example, it's not enough to ask people for their permission. If you're desperate for a job, you have to say yes; if you're worried about getting fired, you cannot say no. So is your consent really freely given? I think the same applies in the context of dominant platforms: when platforms are so dominant, it's very difficult to enforce existing rules.
L Yeah, so you would say that with a technology-centered solution, in that case, the system is still limited at the root: there are constraints on what it collects, and it's simply prevented from collecting anything beyond that.
F It's true. You can point to the lack of enforcement of rules like data protection, but in theory, principles like privacy by design and by default, purpose limitation and data minimization are all essentially design principles that encourage you to design services and products in ways that minimize even the potential for abuse at a later point in time.
L That needs to come consciously from the company that develops it, right?
F In principle, even if you forget everything you know about data protection, these principles actually make sense. They're responsible design principles. Think about: do you really need to be collecting this data? Another one is to design things from the get-go so that they are secure by design and by default. These are good principles.
And then it gets more tricky. I think it also gets tricky in contexts where it's not individuals that interact with a product or a service, but companies, while the tools are then used on people.
Take B2B software: one example would be hiring software. The client is the company that wants to hire people. The people it's being used on are not the client; they are just subject to it.
The same goes for credit scoring tools. I find there's a lot of opacity in B2B, where those who are actually affected by harm sometimes don't even know that something is being used. That makes it very difficult for them to even realize that, for example, they have been discriminated against, because all you know is "well, I haven't been hired, but I don't know why".
You can't find out. Maybe it was a glitch in my data, or maybe a mistake happened somewhere, but even making that claim requires a lot of information. Which, again, is one of the reasons why we say it's important to have accountability and transparency requirements.
L That's exactly the thing, right? It's the ability to explain the decisions a system is making about someone in important matters, whether that's hiring, loans or criminal charges, as we've seen happening in the US.
F But it can also mean that if you cannot explain the decision, then maybe you should not be relying on this tool in certain very sensitive contexts, right?
L That's where the government, in my opinion, needs to come in, right? Because otherwise it is simply deployed on the market, made available, and then it's there to be used by customers.
And those customers may want it for analyzing crops to see if they're healthy or not, or they may want to use it to screen applicants and filter them out. The outcomes are very different in those two applications. Let me tell you this: you know Facebook just released their Ray-Ban AR glasses.
I'm sure you've read something about it. That's, for me, an example of technology being deployed in the wild. Has anyone weighed in on the impact it may have on society and on people themselves, from a perspective that wants to protect individuals? You know, not a think tank or a CNBC special on how it will boost Facebook's valuation. I don't think so.
There may be people talking about its ramifications, but there was no group or think tank that chipped in when Facebook was deciding whether or not to build and release that product, right? So that's just an example of something that will inevitably mess with our concepts of privacy, our concept of being able to be anonymous in a crowd and not be recognized by a system owned by a private company.
And so again, what can governments do when those things happen? What is your opinion on that?
F So basically, this is not a new debate. I personally was too tired to comment on it, because we already discussed this, I think 10 years ago, with Google Glass.
You also have to wonder: why are they releasing this product? Do they actually want to release it? Is this a genuine product or just a sort of publicity stunt, right?
I don't feel panicked about this product, and the reason is that there are existing laws; I can already see people filing complaints against this product if it were to be released. And secondly, as important as privacy is, it's actually not my main concern with this product.
What I find most interesting is that you spend a lot of money on a product, and you can record video, but you can only share it on Facebook or Facebook Live. They control a very tightly closed garden. That's what I actually find way more concerning than the "oh, everybody has a phone with them at all times".
I think it's not good if we are surrounded by these walled gardens, where you have to buy into a certain tech ecosystem and then you can't take a video you produced and share it in a different setting.
So interoperability is something that's quite important and is one of the many tools in the toolkit against the dominance of tech companies or the power of individual companies.
L I agree with you. And thank you, we just took this detour from the topic. It's not about the product or whether or not it will work; it's more about the trajectory that is being put out there. And that can be worrying for me.
F But what is the trajectory? I mean, there's also a trajectory in a different direction, where more companies now use privacy to market their products.
But I think we have to be really careful. Things that are good for privacy are good for privacy, but they might not necessarily be good for other goals that are also important. For example, when Google banned third-party cookies in its browser, that was really good for privacy, but it's actually quite bad for competition, because it also harms Google's competitors.
So we have to be a bit careful. I don't think the trajectory is as clear as it was a few years ago.
L For me, the trajectory that this strategy represents is more data acquisition, and that's the end goal for companies that have a heavily AI-centric approach, right?
They need the data in order to extract value from it and enhance their products and services. I can of course record someone in public with my phone, but it's very different to have something on my face that is, you know, ubiquitous, that I can simply start using on top of that.
And it's going back to Facebook. It's not just going into a folder with your video for your convenience, you know? So that's the trajectory: more data acquisition in order to reinforce and worsen the asymmetric power dynamic between the user and those companies. That, for me, is the trajectory. That's my interpretation of it; I don't know if it's the correct one.
I want to go back a little bit to the European AI Fund, because this was just a detour I created for us. There are many, many areas that can be tackled within this broad "tech" term that we were using.
So what are some of the categories that the European AI Fund is looking to work with or to bolster?
F So we are a fund, so we do not work on policy directly.
L Not policy directly, then. The areas that you want to affect, in a way.
F We want to build the policy capacity of organizations that work on digital rights and AI, and at the same time we want to build the tech competencies of organizations that are doing amazing work in their own domains, so that they can also be loud, vocal voices in the debates around technology.
This is what we're currently supporting, alongside a separate research grant around Europe's tech response to the pandemic. These are our two current activities, and the overall vision is for there to be a healthy ecosystem of public interest voices.
L How does that materialize? I mean, besides funding, what else do you provide as support?
F Until now it's primarily been funding. We also want to be a convener, bringing different experts and organizations together, but we paused that a little because we felt there was a collective Zoom fatigue.
We didn't want to overwhelm people, but this is something we're starting now. It can be strategizing together on how to work on certain policy proposals, or bringing the organizations we support together with experts.
L Any specific challenges or opportunities that you see in using this approach in the European Union, given the differences between countries, the different perspectives and so on?
F It's important to know that the fund has just launched; it launched exactly a year ago. We should actually celebrate our birthday, I just realized. And that's the strategy so far.
Because of the way we framed our open call, we currently do not support organizations from all over Europe, and the funders that fund the fund do not come from all over Europe either.
So we have some work to do there, given that we really want to support policymaking. We currently support a lot of European network organizations, which as a result are based in Brussels, London or Berlin. That's a direct consequence of how we framed the focus. But we are in the process of developing our next strategy, and we have worked with external consultants on a study, which we are about to publish, that really looks into what we think the fund can do next. And it will be public, which I'm very excited about, because foundations often do these sorts of scoping studies but don't publish them. We really want to share this publicly and hear people's thoughts and comments.
L All right, I look forward to it. I'm already a subscriber to your newsletter, for sure, and I look forward to when you publish it.
Frederike, it was a very interesting conversation, for sure. I mean, I'm far from well versed in policy and how all of those intricacies work, but I do know that it's one of the lanes we need to look into when it comes to shaping the way technology is to be used and developed.
That's one of the purposes of having this conversation, and I appreciate that you came in and shared your time and your perspectives on those matters.
F You're very welcome. Thank you so much. And I'm also very interested to hear more from you, about what you're doing and the work you're doing in Portugal.
L Well, I think what we're doing is pretty similar, in a way, to some of the tenets of the European AI Fund, which is basically bringing people together. We don't have funds for that, but at least we bring certain mentalities together and foster certain discussions around these topics, in order to connect the right people who can eventually, at some point in time, make positive change in the areas of technology.
Tell everyone where they can follow your work and the European AI Fund's.
F Sure. You can follow my own work on Twitter. I'm about to publish the outcome of my Mozilla fellowship from last year, which is a book about AI pseudoscience, snake oil and hype. I'm incredibly excited about it. It's a collection of writing with amazing contributors, and I learned so much writing this book.
You can follow the European AI Fund on our website, on Twitter and on LinkedIn, and we also have a newsletter in which we share tech policy news and the work that the organizations we partner with are engaging in. There are a lot of job openings in this field at the moment, so we always have a job section where we share who's hiring, et cetera. I highly recommend the newsletter.
L Thank you for sharing. Well, again, thank you, and I hope we meet again and have the opportunity to talk once more in the future.
F Sure. Thank you so much. Bye.
You can connect with Frederike Kaltheuner on her website and Twitter.
"The People's Declaration". A European call for an end to Big Techβs destructive business model.
"Chinese AI gets ethical guidelines for the first time" by Xinmei Shen.
"We are all data farms". The Age of Surveillance Capitalism, with Shoshana Zuboff and Rosamund Urwin (audio).
If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.