The weather is getting better. For some it's a preamble to summer. For others it's business as usual, and the past month's torrent of lawsuits, antitrust probes and hearings is proof of that. Luckily, we're here to help you distil all of that info.
For this issue we're joined by Cade Diehm, designer and founder of The New Design Congress. We deep dive into the concept of infrastructure as an expression of power, question ethical design, discuss the weaponization of blockchain and explore what's behind digital identities.
Finally, we're happy to introduce new illustrations for each of CFT's issues, kindly created by our friend Bruno Prezado.
In the U.S., Florida's governor DeSantis has signed a bill barring social media companies from blocking political candidates, but the state is already getting sued as the bill is most likely unconstitutional.
The Biden administration has revoked Trump-era orders that sought to limit social media companies' protections.
India is continuing to crack down on tech companies. The government has ordered social media companies to censor references to the "Indian variant" of COVID, having sent police to Twitter's offices after officials' tweets were labeled misleading.
Meanwhile, WhatsApp has filed a lawsuit against the Indian government seeking to block regulations that experts say would compel the messaging app to break privacy protections.
Back in the E.U., Facebook's Marketplace may face an antitrust probe. The Union also accused Apple of App Store antitrust violations following Spotify's complaints.
Italy has fined Google €102m for excluding the Enel e-car app from Android Auto.
In France, Google is nearing a settlement on an antitrust case over its online ad auction system.
Germany's Federal Cartel Office launched two proceedings against Google and Alphabet over anti-competitive practices, while a court ordered Facebook to stop collecting German WhatsApp data.
Facial recognition firm Clearview was hit with complaints in France, Austria, Italy, Greece and the U.K.
The D.C. attorney general has sued Amazon on antitrust grounds, alleging the world's largest online marketplace is engaging in unfair practices with third-party sellers.
Amazon has extended the moratorium on police use of its facial recognition software, although Ring is now the largest civilian surveillance network in the U.S., with 1 in 10 police departments accessing videos from millions of privately owned home security cameras without a warrant.
Amazon had sales income of $53m in Europe in 2020 but paid no corporation tax. But if employees are sad, they can always shut themselves in "Despair Closets".
Apple's CEO defended the App Store's dominant position in the court case brought by Epic Games. In the U.K., the company is facing a class-action lawsuit for having overcharged 20m App Store users.
Apple has hired ex-Google A.I. scientist Samy Bengio, who oversaw the A.I. ethics team. Bengio resigned in support of colleagues who were fired during the Timnit Gebru controversy.
Facebook has threatened to make iOS users pay after reports that only 4% of iOS users are opting in to ad tracking.
More than 150,000 activists and parents have signed a series of petitions urging Facebook to drop its plans to create an Instagram for kids platform, all the while calling research linking Instagram to depression inconclusive.
A San Francisco judge has approved a class action for more than 10,000 women accusing Google of paying certain female employees less than men.
Senator Ron Wyden revealed that the Pentagon may be surveilling American citizens without a warrant.
In the Netherlands, local councils are snooping on citizens via social media.
Apple is being criticized after having confirmed that it is storing Chinese customers' data inside data centres based in China.
Google reportedly made it difficult for smartphone users to find privacy settings.
"'Technically' Responsible: The essential, precarious workforce that powers A.I." by Cade Diehm & Caroline Sinders
To get these delivered to your inbox, subscribe to CFT's monthly newsletter at the end of the page.
Lawrence — Today, we have the pleasure of talking with Cade Diehm. Thanks for being here Cade.
Cade — Thank you. It's really good to be here.
L All right, awesome. Cade is a weaponized designer, anti-design ethicist, and tech pessimist. Right? Heavy, heavy words, but we'll get into it.
He's also the founder of The New Design Congress, a nonprofit organization developing a nuanced understanding of technology's role as a social, political and environmental accelerant.
He's worked with augmented reality and cryptocurrency, and was an early contributor to Signal. He has also worked with the Berlin-based NGO Tactical Tech, Berlin being where you currently live.
C Yes, exactly.
L That is an interesting background, and I'm very eager to talk about some of the weaponized design and being a design ethicist but also a tech pessimist. That's a very interesting mix, from what I've read in your essays and also from The New Design Congress.
But first I wanted to ask you, if you could give us a background on your path. You're from Australia, is that correct?
C Originally from Australia, yes.
L Originally from Australia. And you ended up being in Berlin, living in Europe. Can you tell us a bit how that came to be?
C My career has always been beyond the shores of Australia. Australia is quite a wonderful place, but has ambitions to replicate the same kind of technology scenes that you might find in the United States. When I graduated from my university studies, I spent a lot of time commuting, if you like, between the United States and Australia, working as what I would maybe call a Silicon Valley outsider. What I mean by that is that alongside working with agencies in Australia, until I got disillusioned by that, which was very quickly, I was also doing remote consultancy for tech companies during the initial app gold rush when the iPhone first came out, and that's where I cut my teeth as a junior designer.
And that work, being kind of both outside and inside at the same time, sort of touching it without necessarily being immersed in the culture full time, was quite formative, both in my understanding of geographies and also in how I understood technology to actually interact with people's lives and societies.
And so the move to Germany really came about as a result of seeing the trajectory of Australian technology as a collective set of organizations and people and belief structures, and then also the opportunity with Tactical Tech. Having worked on Signal, the opportunity to apply those skills in a civil society organization was really, really appealing to me.
L So you have this sort of natural appeal toward a social approach to the way that technology and design are to be used, right? There should be some sort of societal impact, a positive one, hopefully.
Do you know why you have this will to work from that angle? Why not just go the regular Silicon Valley route, make it big regardless of consequences? Why do you factor social value and impact into your work?
C So I have a physical disability, a form of osteoporosis, and I'm also queer, and I have lived internationally and found myself in lots of different situations in lots of different environments. That makes it very difficult to ignore that the rhetoric of digital infrastructure at scale is actually quite a dangerous one.
I mean, obviously I raise my own personal circumstances as an example of the personal interest that I have, the selfish interest if you like, in critiquing these technologies, but it extends beyond that. I don't believe in the myth of progress. So I don't believe in this idea that the society we have today represents the best possible outcome, even with all of its flaws. I'm also not what you would call a neo-Luddite. I'm not somebody who considers technologies to be inherently evil, but that doesn't mean I'm apolitical either.
What's made it very clear to me that we really need to think seriously about traveling in a different direction is, of course, the rise of authoritarianism across the world, and also climate change. Those two are intrinsically linked, and also intrinsically linked to our current structure of digital technologies and digital infrastructure. And so for me, a big part of my motivation, alongside the personal interest, is the realization that if we don't do something about this, the current trajectory we're on is completely unacceptable.
We have to rethink this from the foundations of our assumptions. We have to really, really critique ourselves, the work around us and the world around us in order to transcend or break free from this trajectory.
L I was going to connect this to something that I wanted to bring up maybe later, but I think it makes sense right now, which is The New Design Congress, right? And that's pretty much aligned with what you just said.
So The New Design Congress, which you founded at the beginning of 2020, is "a research organization that recognizes all infrastructure as expressions of power and sees interfaces and technologies as social, economic, political, and ecological accelerants".
From what I understood, essentially you're providing some sort of consulting to companies, product teams and even politicians on how to understand these relationships between society and technology. Am I correct? Can you tell us a bit more?
C On the surface, that claim, I think, is one that lots of people would agree with: infrastructure is an expression of power. Anybody who builds some form of infrastructure is trying to either express or implement some form of power. It can be collective power: a credit union is an example of financial infrastructure that consolidates power. A highway is a city expressing its desire for economic power through infrastructure. Whether it's a road, a telephone, a credit union, a bank: whatever technologies we develop and deploy into our human world, they're expressions, manifestations of a desire for a particular kind of future, which in itself is an expression of power or a desire to consolidate power.
However, when we start talking about how to critique real-world examples of this, that initial provocation gets muddied. A really simple example, although I very much don't want to frame all of this discussion from the perspective of social media companies, is the way in which disinformation is critiqued as being caused by social media companies.
Whilst they're complicit in that situation of polarization and disinformation campaigns, they are themselves not the root cause of it, but they're evaluated as though they are. And so two years ago, Mark Zuckerberg was grilled in front of Congress about his role in creating the Trump years, if you like, and the disinformation associated with that, which then accelerated, of course, with the COVID pandemic and the raid on the Capitol in the United States in January of 2021.
The mistake here is that the critique is based around Facebook being a core contributor to this rather than a profiteer off of existing structural systems. Evaluating digital technologies within a cause-and-effect relationship helps us avoid the underlying power structures associated with them. In this case, that power is manifesting through the digital technologies of social media companies rather than necessarily being caused by them. And that's really helpful for a certain kind of leadership, because it helps to prevent us from evaluating or critiquing the underlying structures of societies that have led to this point.
When we talk about the second part of The New Design Congress' understanding, which is that we see digital technologies as social, political, economic and ecological accelerants, what we're saying is that there are two parts to the understanding. One is that people develop and deploy infrastructure to express power, and the other is that these collections of infrastructure combine with their surroundings to accelerate certain kinds of configurations of societies. And so there are two framings which are symbiotic and necessary for each other to exist. One is dependent upon the other and vice versa.
Between those two, what you have is the beginning of a framework that, if you apply it rigorously, lets you start to really delve deep into both the popular discourse and the underlying issues that one might be trying to address. And that's where our consultancy work, although that's only part of what we do, kind of lies.
It's quite a unique proposition because it's drawn not just from the political or social sciences, but also from threat modeling and digital security. The background of working in digital security for the last half decade, and the connections and the expertise that I've been lucky enough to meet and collaborate with, all of that is reflected in The New Design Congress, in our broader understanding and framework and theory of change.
L It's interesting that you apply a sort of threat model to something that exists but wouldn't be perceived as a threat by most people.
Can you help me understand, though: when you say, for instance, that Facebook is more of a profiteer, for sure, but also a conveyor of this fake news, they don't have a factory producing fake news, right? But we know those exist, for instance with Russian disinformation and even other countries.
So what, in that case, is the infrastructure? What is this infrastructure if it's not Facebook?
C Well, it depends on how far along you would like to intervene in the ecosystem, if you like.
So one could say that this all starts with neoliberalism, the post-2007/2008 global financial crash, the collapse of institutional trust in specific countries and widening wealth inequality. You could point to that and say that that is the underlying generator of disinformation in a generational sense. But once again, if we leave it at "well, Facebook is the cause", we can avoid having that discussion about the deeper issues that we have in our societies.
You know, take vaccine disinformation. On one side you have the distrust in the Chinese and Russian vaccines, and on the other side we have AstraZeneca and the history of distrust in the EU, in Australia and in other countries around its effectiveness or its safety.
And as we're recording now, a new report has come out suggesting that European influencers were offered money to spread disinformation about Pfizer and Moderna. The reason why these campaigns work is the collapse of institutional trust in medical systems around the world.
The point is that disinformation itself, as we understand and define it in this conversation and in the popular discourse, is but a symptom of a larger set of infrastructures colliding together. The problem is that none of the problems we have, and maybe this is a little bit ambitious of me to say here, are actually insurmountable, but we have made them somewhat insurmountable by our continued focus on the surface layer, where talking about Facebook disinformation and holding those companies to account represents a very neutered or impotent response to an unfolding crisis.
You know, one of the underlying theories of this is that there has been a demotion of the humanities, a downplaying of the importance of systems thinking in a broader collective sense. And so for people looking to implement policy, there's this really great opportunity to sharpen our skills, to develop a deeper understanding of first-, second- and third-order effects, and to be able to project our interventions deeper into societies.
I'm getting a little bit off topic now, but essentially what I'm trying to demonstrate here is that there are different ways to intervene in these systems and what The New Design Congress is trying to do is position ourselves as a provocation to start that process in different contexts.
L I understand, because if you don't even start by having those conversations, you're definitely not going to enact any change, at whichever level you're looking to enact it.
C That's right. But to introduce this concept a little more concretely: there's a lot of discussion, for example, around the complicity of a company like Facebook in radicalizing people online with certain kinds of information.
Part of the response to that is a campaign to deplatform certain kinds of perspectives. Now, I am not a sympathizer of particularly irrational or reactionary discourse, but we're talking about revoking certain sections of U.S. law that shield companies from responsibility for material that's hosted on their sites. The thing is that in the case of Facebook and social media companies, it isn't actually what's on their site that is the problem, I think. I think the real problem here, which has been demonstrated and explored very, very concretely, is the role of algorithmic editorialization: this idea of taking relatively mundane content and then automatically feeding extremely reactionary material into it. That is in itself a step further into the Facebook stack, a step further into the system that Facebook operates.
As an example, there's a subtlety between demanding that Facebook polices all of its platform content, which has devastating effects across the internet and really only further entrenches the large tech companies with the resources to do it, and making them responsible for the editorialization of their sites, which has much more interesting consequences and is a much deeper exploration of these kinds of systems.
So if a tech company has to justify why certain things were inappropriately editorialized into people's feeds, then you're having a different conversation from "what about free speech and deplatforming", because the role of disinformation isn't just the material itself or the reactionary opinions associated with it, but also how it's presented to people. And it's the "how it's presented to people" that is more important than the "what", I believe.
L It's not only the messages, it's how the message is conveyed and how it's packaged as well.
C Marshall McLuhan in the late sixties wrote about the medium being the message, and right now we're having a discussion about the message and not the medium. And that ties back to this: if we have a discussion about the medium, we're having a discussion about infrastructure as power.
L I want to relate this a little bit to your essay questioning the word and usage of ethics, "Design ethics? No thanks", which I think is related. My understanding is that there's a lot of these conversations in the industry, and more and more you hear about design ethics and an ethical mindset when building tech products, right?
And you do see corporations like Google; Facebook I'm not sure, honestly, but you do see more and more of that conversation. And so my feeling is that you're worried this conversation is going to be framed and packaged in a way that essentially serves the stance that companies want to take.
My understanding is: let's rephrase or change the meaning of what an ethical framework is so that we don't have to do the things that would probably affect our bottom line.
Is that the critique that you are aiming for?
C So the underlying premise of the argument that I present in "Design ethics? No thanks" is that the push towards ethics within a technological practice, be it from a design perspective or any other discipline related to the development of infrastructure, engineering, project management, cannot succeed through practice alone and through intent alone.
A really good example of this, which isn't in the essay but which I've written about in other contexts, is the Spotify design team, which is considered to be a rather distinguished place and has a design ethics component to its work. They write about it, they publish it, they've done talks pre-pandemic with design publications. It's nice, but the deeper question is: can one practice ethical design whilst working for a company that basically exploits the labor of the music industry and the creative class?
You can proclaim your desire to rid the world of dark UX patterns, or behave ethically and have a team that's diverse and inclusive. Those are all noble goals, but what difference does it make if the project that you're working on is basically creating an impoverished underclass of talented individuals in the music communities? That's my concern with design ethics: in its current form it represents a way of further protecting technology disciplines, particularly the people who actually develop the infrastructure, from the outcomes of their work.
L So what you're saying is: let's apply an ethical mindset to very specific things, like the UI, notifications and attention hijacking. Those are the things that ethics applies to. It doesn't apply to, as you said, the layer of drivers or creatives or whichever population is going to be exploited for their labor or their creativity; that is outside the scope of analysis.
And because it's outside the scope of analysis, someone comes and says "you're doing unethical stuff" and they will say "well, actually, our interface, you know, is pretty ethical in that sense". That's where you're going, if I understand correctly.
C Six years ago now, I was a key team member of a web conference in Australia, and we would invite people like distinguished programmers who worked at places like Airbnb to come and talk about the latest and greatest techniques they were using in their work. Airbnb in particular has a very strong development team that has pushed the web forward. Facebook too, obviously, because they produced React, which is used almost ubiquitously amongst internet projects, and so forth.
The parallel here is: did we platform people who ultimately played a major role in creating a wave of systemic inequality across the world, through platforms like Airbnb and the dismantling of city livability and regulation that Airbnb has accomplished?
The answer to that is yes, we did platform people who played a major role in that, and it's why so much lip service is given to the importance of design. Without good design, without a skilled team, the systems are useless. The interface layer is an ideological end point, but it is also where the battleground is, because the interface layer is eventually where people will or will not accept what they're using, and also where it's decided how visible the underlying system is.
L But to what extent can we say to a designer or a senior product manager, "hey, you did this", right? Because they're most likely unaware that there should be other angles of analysis on the impact of their work.
Because the impact is measured on KPIs that affect the company. It's not measured in other KPIs, like the livability of a city, like gentrification or the cost of living because a city is flooded with Airbnbs and you can't rent a place in Lisbon.
So what is the role of the technologist and the designer who is putting their hands on a keyboard and moving ahead with this vision? Are they responsible? How can they navigate that?
C Well, in my critique of design ethics, what I'm not saying is that one should care about this. My critique is specifically that if you do care about this and you care about your work, then design ethics is not the answer, because design ethics understands your role only through the lens of your intent, without answering the question: how does a profit-based service overcome the interest of capital, which is essentially to grow exponentially until there is nothing left to profit from or exploit?
If you're in service of that, and that bothers you, then design ethics doesn't offer an answer in its current form.
If you focus these systems entirely inwards and not outwards or on a systemic level, then your desire to be ethical, however that manifests politically and in your practice, ultimately means nothing. Because if you don't have an answer for how your desire to be ethical, whilst working within a profit-based structure, overcomes capital, then you're not going to be evaluated on your intent. You're going to be evaluated on what you contributed to.
So even the interface layer and the ethics of Big Tech, for example, isn't a fully realized evaluation of what's really going on. Like, is it meaningful to have discussions about ethics when lithium is mined by small children in Bolivia? Not Bolivia, but you know what I mean. It's a very bourgeois thing to be able to care about.
The point that I'm trying to make here very, very clearly is: if you're okay with that, you don't have to engage with it. If you are a designer who loves to design, then by all means go and design. But if you're someone who cares about this and you want to become political, then these structures that are being developed, such as design ethics, such as the speculative design and human-centered design that came before it, none of it answers the question. None of it will provide a framework for you to make the meaningful change that you will need to make as a designer in order to actually help move the needle towards social or environmental justice.
Berlin scene, blockchain and mistakes of the past
L Thank you for that explanation, which I think is very on point. You're truly correct when you say that this is only the surface of the issue, when we're talking about this very high-level, ivory-tower sort of issue. Let me ask you something.
Do you find that there is more of this vision in Berlin, for instance, with the ecosystem that you are in? Do you find that you have more people to talk to about this in Berlin?
C I get very frustrated with Berlin, because on the one hand there's a big degrowth movement here, which I think is really important, and there's a really big right-to-repair movement. The CCC, for example: that entire community has a large number of people who are very involved in degrowth and the right-to-repair, the idea that you should be able to repair whatever it is that you own. I think the right-to-repair is one of the most important struggles of our current generation of technology politics.
On the other hand, there's a massive cryptocurrency and blockchain scene here, which goes out of its way to avoid history. I've written about this in another essay called "This is fine: optimism and emergency in the peer to peer network". Take the peer-to-peer networks that came before it, such as BitTorrent: as an idea, you could download files off of other people participating in a peer-to-peer network with you.
That started an entire revolution around copyright in the early 2000s, after the early file-sharing program Napster was basically sued into oblivion by a conglomerate of music and movie industry distribution companies. At one point, as it started to gain mainstream traction, BitTorrent was so powerful that people were literally predicting the end of record labels, that companies like Disney, Warner and Sony would actually be irrelevant within five years.
There was a lot of writing about what was truly going to become a radical reform of copyright law and intellectual property law through this protocol. The reality, of course, was that BitTorrent, which was written and implemented by a programmer named Bram Cohen, didn't account for user privacy, and so the design of the protocol was weaponized. That's another term I've talked a lot about, weaponized design: a system that harms an individual or a community whilst performing exactly how it was intended to. Nothing breaks, nothing changes in the system, and through its use it causes some form of harm.
In the case of BitTorrent, because there's no privacy on the network, the legislators who were collaborating with the copyright holders and these large distribution companies basically made uploading via BitTorrent very, very illegal, or at least subject to very stiff civil and financial penalties. Once that was done, the copyright holders could go after anybody participating in file sharing, literally by making a legal claim that every time someone uploaded a part of a file, a part of a song, it was worth some incalculable loss.
And so they sued people all over the world for tens or hundreds of thousands of dollars. In many cases they went after people who really didn't have that kind of money, and basically ruined a whole bunch of lives.
They used this network and this protocol, and our inability as designers to see how it would be used in the broader world, to destroy an entire popular movement that would have redefined copyright towards more collective ownership.
L So relating that to the blockchain scene: have the lessons been learned? Are they avoiding that?
C Well, they haven't. They haven't, right? And so what we're seeing is the same kind of push towards decentralization through blockchains, but without understanding how a centralized power, when it is challenged, will evaluate these technologies. In the case of blockchains, we have an extra component to that.
If you take the NFT craze that happened at the beginning of 2021, we have a situation where you could have a very popular NFT artwork that allows people to buy the original of a song and own the proof of ownership of the original. If that NFT company goes out of business or falls on financial hardship, it might be bought by a company like Spotify. Then what do we have? We have a company like Spotify, which already has experience and a system in place for ruthlessly exploiting the creative labor of musicians, and which now has an extra avenue for exploitation: a cryptographically secure tool.
When you talk about me being a tech pessimist, which is in my sort of tongue-in-cheek biography, this is my problem. My problem is that that whole scene, which is very prevalent in Berlin but exists all around the world, sees these things as exciting futures and will downplay many of the threats that its work enables.
And I find that particularly frustrating, because the outcome of not embracing the threat, of ignoring or downplaying it, is that this technology will never reach its full potential. I mean, we got really close: the copyright war was nearly won by the people who sought to decentralize intellectual property.
L What you're saying is something that you can observe throughout history in technology because it's a mix of the Silicon valley culture of technological determinism and solutionism where technology by itself will solve the issue. So this utopianism that we will find an algorithm or a system or a solution that by itself fixes other deeper underlying issues. There is hope that "well actually blockchain, because of the way that it's built and it's decentralized, no one can control it" and so on.
Well, wasn't the same said about the internet, right? That it was decentralized? That anyone could hop on and publish and have a voice? And nowadays we see the walled gardens that are Facebook and Apple and Google. So in theory the technology has this potential, which gives this optimism to the communities behind it, but at some point it's hijacked by commercial entities.
How do you think that happens? What is the thing that is going to make blockchain become weaponized and used against its original purpose?
C So that's a core research project at The New Design Congress. We are doing a lot of work around what we call alternative forking, which is this idea of going back into history, looking at those exact examples that you're talking about, and then exploring the alternatives that were not followed: the forks in the road that were not taken.
A really good example of that, which we can talk about next, is digital identity. What are some of the things that we could've done to make these systems more resilient? I think looking backwards, theorizing, and then in certain cases turning those theories into workable examples is a really good way of provoking different kinds of thinking. Part of the reason why we have this problem right now is the political pull of Silicon Valley and the Californian Ideology - which is not the only reason why we have this situation, but I'm going to use it as a shorthand for a number of different belief systems - where that optimism prevailed over almost everything else.
The separation of the digital space from the real world is, even today, still seen as a real thing. What you see when you look especially towards people who are left out of these conversations is the ways in which these technologies or these movements fail, but also where one can find the resilience that we're looking for. It can be found in anything from interesting subcultures that have sprouted up over the last 20 or 30 years. Some of our research, for example, is in the economic resilience of the Furry fandom - people who identify with anthropomorphized animals, and there's a global subculture around that.
But, you know, other places too: trans identity is a really interesting way of framing the role of digital systems. The way in which an individual comes to understand their gender, and how that might change over time, is actually a really powerful way of exploring the failings of our current structures and our current systems.
And I think those are the tools that we need and that's kind of the research work that The New Design Congress is doing alongside our consultancy. Our research is literally based around exploring or surfacing the ones that work and then helping to platform those people when they want to be platformed.
L Talking about digital identity you said something that I also think touches a little bit on the responsibility that people perceive, going back to the social media example, that the online and the offline are sort of separate things, which we know, obviously that that's less and less true.
But we do come from an age where, 20 years ago, my father would say "well, that's happening online and online it's not real". It's like saying online bullying isn't real. Well, it is real to the person. What do you mean it's not real? Because it's not happening in the physical world? But psychologically it's very real.
So what is this notion of being online versus offline and who you are online? Are you the same person? Is it an extension of your identity? Is it something totally different?
And talking a little bit about that, I want to ask you, what is it that you find interesting about this concept of digital identity and why is it something that appeals to you?
C Yeah. So one of the core research projects right now - and we're actually collaborating with a few different organizations around this - is this notion of digital identity. And this is a longer project that I don't think we'll come to an answer on in the next couple of months, or maybe even by the end of the year. I don't think it will be there yet.
But what I can tell you is I know what the problem space is. There's this really amazing paper by a fellow named Steven Lubar and it's from I would say the early nineties? It's called "Do Not Fold, Spindle Or Mutilate: A Cultural History Of The Punch Card" and it describes digital identity through the lens of the early punch card system.
When you think about it, all forms of digital identity can essentially be boiled down to this simple technology: a mathematical representation of a person in some way. And when you look at the history of digital identity, it actually starts with the U.S. Census. If you look at statistics in the United States in the late 1800s, they started taking population surveys, and they used punch cards and identity in that context.
Of course, the most famous, terrible example of early digital identity is IBM's collaboration with the Nazis in the 1930s, which directly contributed to the Holocaust in terms of how it was made more efficient through the use of computers and the deployment of that infrastructure. But even as late as the seventies and early eighties, towards the tail end of the civil rights movement, you see University of California students protesting, as part of the civil rights movement, against their academic records being digitized.
Let me start with a provocation: digital identity is actually a relatively new concept. It's only two and a half generations old, and in terms of its actual implementation it's only really one generation old. I would say the first popular form of digital identity comes as the username and password in early Web 1.0 technology. The email, username and password - that's our understanding of digital identity, and it's not really changed since then. And even that, at the time it was being rolled out, wasn't fully normalized at all. It was actually seen as a huge transgression by a lot of people, even people within the spaces of computer science who you would otherwise consider to be embracing computers.
And so in all of those cases, the central argument - which they considered at the time to be a civil rights argument - was that digital identity is an incomplete, concrete, unmalleable representation of a person. It's incomplete because it doesn't take into account the complexities of the person, and it's permanent, and thus doesn't fit within the model of how people see each other.
And that digital identity concept, which has remained largely unchanged since, has led to all sorts of things which in any other circumstance would be grounds to completely overhaul the entire system or rethink digital identity altogether. We have catfishing: the act of impersonating someone else, using a digital interface's representation of identity to trick people into anything from going on a date with someone you're pretending to be, all the way through to convincing the CEO of a company to transfer funds by catfishing them as their chief financial officer. Digital identity plays a major role in that.
It also plays a major role in policing, as I said. Digital identity is used extensively in courts and in dragnet surveillance to build cases against people, and digital identity is often interpreted within larger contexts, even though it is itself very separated and isolated from those spaces. You can use different data points to build any kind of narrative you want, and digital identity plays a major role in that.
But digital identity even has a complexity issue locally. It's only through digital identity that services like Gmail are possible, because digital identity is a security tool. Digital security and digital identity help to maintain the structural integrity of systems that we consider to be vastly exploitative.
Without digital identity - without the username and password or biometrics or anything like that - these systems would not be possible. And so when you understand that digital identity plays a major role in policing and is an incomplete and unjust representation of a person, and you also realize it is one of the foundational pillars of the centralized tech structure that we have... the more that I talk through it, I'm hoping that the listener, and you, Lawrence, are beginning to form doubts about its legitimacy. Because ultimately, if we were to dismantle digital identity, we would dismantle one of the core components of the central power structure.
What's left, of course, out of that is very scary and it's a huge unknown, but I'm excited by the unknown because I think from that, with the learnings that we have comes some kind of alternate, perhaps more nuanced and sophisticated alternative.
L I understand. So the exercise here is to ask: if this weren't in place - because it is a pillar of a system that exploits us - then what could replace it? People also want some guarantee that you are who you say you are, in a way, right?
L There needs to be trust. There need to be, I guess, other parameters, but mainly trust. Right? There have been mediators which are part of this structure that you've mentioned - mediators that guarantee that you are who you say you are, with very rudimentary, even privacy-invasive methods like "give me a photo of your driver's license".
Who is asking for that? Who is going to look into that? Where is that going to be stored? And how secure is it? Because that's, for better or worse, part of my identity. My citizen card, my driver's license, my passport - those are things that other humans can look into and say "I trust this document", about these artifacts.
So you're looking for an alternative that removes this mediator in identity providing. For instance, you know that Google is testing out the Knowledge Panel for your identity. Have you heard about that? In the sense that you look for Cade on Google and then there's a small card, as if it were taken from Wikipedia, but it's something that Google kind of validates - like it's your certified Google identity. And so someone is going to look at that and say, "well, if Google says that Cade shows up here, then that's valid, and therefore I trust that Cade is the person he says he is - or at least Google says that he's that person".
So you need to find an alternative. I guess that's the exploration that you're doing.
C A lot of this is built from this idea of the global commons.
Another really great example is GitHub, and I'm going to use GitHub specifically because GitHub is a supply chain tool. For someone like you or myself, or people who might be listening to this podcast, GitHub might just be a regular tool that you use, or a social platform where you can collaborate with people - and it is a social network, true - but to the rest of the world it's also a supply chain tool.
So it sits beneath something like Facebook because GitHub is a supporting structure that helps something like Facebook get built or Google or something like that. And these tools were built at a time when one could reasonably expect that access would be available worldwide for these kinds of systems.
And of course, what we're seeing now is a fracturing of the internet. It starts with the discussion about Russia, China and the United States. The European Horizon 2020 and subsequent funding initiatives are, to some extent, basically about exiting the American-led infrastructure. But then, as late as last year, you have Myanmar, for example, putting up a firewall, and different countries in the Middle East disconnecting from that network.
Having said that, it's important to realize that we're entering a period where that kind of ubiquity of internet structures and the global commons is actually at its end, and what's coming in its place is much more nuanced and much more fraught with politics. I think the time is right to have this conversation now, and not only because of the social engineering attacks that digital identity is weak against.
I also think there's an additional layer here, where the politics of digital identity are about to become extremely complicated - they already are, of course, but they're about to get even more complicated. And so I think we have a mandate to rethink these, because as much as Google would like to be considered an identity provider, I actually don't think that's going to work on a global scale. They might end up being an identity provider for the United States, Australia and the European Union, and we would be worse off for that. But I don't think it's a ubiquitous global commons, and that's a lot of what our current understanding of digital identity is drawn from: this idea of a ubiquitous set of systems that you can trust just because it behaves in certain ways.
When you divorce yourself, when you deprogram yourself from the ideas of what digital identity is today and the kinds of shortcuts that we take, or the cultural norms around designing for digital identity - when you pull away from those, and you also understand that the era of global connectivity is coming to an end - then I think that sets the stage for a completely different set of computing first principles: around digital ownership, around interaction, around connectivity, which we haven't really deeply explored in any meaningful sense.
I mean, obviously there are always people doing interesting work, but there's no critical mass yet, or no consensus around even the fact that this is happening when of course the reality is it's happening in real time, all around us. I think even if we are resistant to my argument, that digital identity needs to be completely dismantled and replaced, it's coming whether we like it or not. And the other thing I just want to quickly mention on that too, which is something that you mentioned about like ownership of digital identity.
My personal stance is pretty radical, which is that I think the best identity is no identity. I think that identity should be something that you cultivate outside of a digital system somehow. That's my current thesis. I don't know what that looks like yet. We're doing some initial testing and that's why we're researching it, but that's my current hypothesis. It's not enough to just say the custodian of data should be the person who owns the data.
Because once again, in the same way as we just talked about all infrastructure being an expression of power: even if you encrypt the data, even if you give the keys to the person whose data the digital identity is based upon, it still exists in structures beyond your control, in a broader network of situations which the average person will have a very hard time understanding or threat modeling.
You know, I might have a terms-of-service consent with something like Twitter around how they may or may not use my personal data, but how does that then extend beyond my relationship with Twitter directly, or my relationship with some kind of data provider? Because it's not just me and the other provider. It's thousands, potentially hundreds of thousands, of individual actors in a global network. And so it's not enough to just say "well, we'll just give the ownership and the custodianship to the person whose data it's drawn from", because the systems are too complex for that.
It's not enough to just have this ownership reconfigured, because that only works in the moment. It doesn't work once that consent is given and that the data or the identity leaves the possession of the person.
We have to go deeper than that. There's this really interesting collection of systems where if you just pull the pillar of digital identity out of it, then suddenly you're free to think about things in a completely new way.
L Well, there is definitely a lot to unpack and I appreciate your lengthy answer because I always appreciate an answer that not only goes towards what the question was, but also gives a lot of material for the listener and myself to think about.
I wanted to say, though, that we do need to start somewhere, right? And I think having better ownership and control over your data - which is what we have right now, in the systems that we have - is a first step. All of that combined with the higher-level thinking that people like you and initiatives like The New Design Congress are trying to introduce into the conversation, right? So there are some first steps that we can take, but we need to think further about how this will evolve, how it should evolve, and have those people closer to where the decisions will be made. Which is another point from a previous podcast: if you don't have the right mental tools to think in those terms - like, for instance, what you're saying - then you'll never tackle those issues when you are thinking of a solution.
You need to advocate, you need to have those people involved in your team, in your environment, in our ecosystem - this discussion needs to be happening. And that's also one of the reasons I'm doing this project: to bring in people like you and disseminate new ideas, new ways of thinking.
Unfortunately we are a bit over our time. But I really enjoyed listening to your thoughts and your whole process. I really appreciate that you came on Critical Future Tech.
C I appreciate your time too and for the invite. Can I just quickly mention the Para-Real series?
L So that's one of the things I was going to ask you is if you could talk about it and then also tell people where they can keep engaged with the work that you're doing so that people can get access to it once it comes out.
C Absolutely. Thank you. So one of the outputs from our research is a livestream series called "The Para-Real: Finding The Future In Unexpected Places", which is a series of 12 interviews with individuals or communities who are using the tools that they have today to build their own livelihoods and build collective power, in spite of platform exploitation and the flaws in the systems that we have today.
So specifically this is looking at people - and the pandemic really helped with this - to sort of set up a baseline of the resilience of different groups of people in different economic communities. It is really going deep beneath the surface of the popular web, if you like, and looking at different ways in which groups have mobilized around interests, or around creativity, or other commonalities, to then build this collective livelihood.
We streamed the first episode last Sunday, which was the 23rd of May. It runs for 12 weeks - so 11 weeks from now, or I guess 10 weeks from when this podcast goes live - all the way through to the end of August.
And it's a fascinating series because it's essentially case studies that we're going to use to inform the work around what comes after design ethics - what comes in a truly descaled or degrowth world in which people have greater control over their systems. What are some of the ways in which those kinds of people have looked at the systems that we have today and repurposed them in a way that helps them more than not having those systems would?
It's an optimistic series, but it's not us proposing that technology will save us - rather that the ingenuity of communities is what we should be listening to. The first episode we did was with a virtual reality documentarian named Joe Hunting. It's an absolutely gorgeous episode around aesthetics and being a filmmaker in virtual reality.
The one that's coming up this weekend, which will be the 30th, is with a Black trans artist named Danielle Brathwaite-Shirley, and this is around her critique of basically the class structures of video game engines, and how the people who were in the room to make video game engines and video games narrowed the possibilities of what games could create.
L Is it live? Like, can you watch it retroactively?
C Of course. So the website where you can view it is stream.undersco.re. You can also find out more by following me on Twitter, @helveticade, or going to our website, newdesigncongress.org. It's being produced in collaboration with another tech organization called Reclaim Futures - they're kind of like an anti-capitalist tech conference - and this year we're running a series of five streams that they've been producing with us.
L Another group that I'm very keen on having a discussion with. I will link everything, don't worry, in the transcript of our conversation, which was lovely and I really appreciate it.
Cade, anytime you want to visit Portugal, you're welcome. I'll show you around. It would be lovely to have you here and show you a little bit of the tech scene and the vibes of Lisbon, or Porto if you prefer the north, or the Algarve, which is where I'm at right now in the south. It's a tiny country, so it's easy to visit everything.
C I'll take you up on that I think. I think I will take you up on that.
L Take the time now that we are almost able to travel freely and visit. I would like to go to Berlin at some point because I would say in Europe, it's probably the place where you can get the most counter-culture approach to many things.
Thanks again Cade. And I will keep you posted once the episode goes live.
C Lawrence it's been a pleasure. Thank you so much.
L See you around Cade. Thank you.
C Thanks. Bye.
If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.