Critical Future Tech

March 2021

Issue #5

As we proceed into the year, we hope that 2021 will treat us better than last year. And indeed there are hints of hope, as governments across the (mainly Western) world roll out vaccines.

Meanwhile, let us look at how things evolve between tech companies, government and society at large.

Following February's headlines, we'll be talking with the creator of Faces of the Riot.

- Lawrence

What Happened in February

Big Tech, Governments & Regulation

A new U.S. bill has been proposed that would make it harder for tech firms and other companies to become troublingly large through acquisitions. Still in the U.S., tech CEOs Mark Zuckerberg, Sundar Pichai and Jack Dorsey will testify on March 25 before the House Energy and Commerce Committee on the rise of misinformation on their platforms and how they plan to address it.

The U.K.'s top court ruled that Uber drivers should be classed as "workers" rather than self-employed, entitling them to benefits like paid holidays and a minimum wage. In the U.S., meanwhile, California's vote to classify Uber and Lyft drivers as contractors has emboldened other employers to eliminate salaried positions. The company has taken an aggressive stance, hiring a prominent critic to focus on the treatment of drivers and getting UberCheats, an app that told drivers if Uber underpaid them, removed from Google.

Twitter has been fending off orders from the Indian government to block more than 1,100 accounts that the government says are spreading misinformation about the months-long farmers' protests against new agricultural laws. Weeks later, the government announced new rules to regulate content on social media, obliging companies to remove content within 36 hours of receiving a legal order.

Following pressure from the Pakistani government, Google and Apple have taken down apps made by the persecuted religious minority Ahmadiyya.

In a win for Apple, the North Dakota state senate voted down a bill that would have regulated app stores.

Microsoft says it won't give money to Congress members who voted against the U.S. Electoral College confirmation.

France has set up a team of computer scientists to ponder how to regulate digital platforms, and Apple is now forced to add repairability scores to comply with a new French law.

Tech Vs. Workers

In Alabama, Amazon workers won another round in their effort to gain a union voice, even though the company had intensified its aggressive efforts to discourage a U.S. warehouse union. Amazon has even tried to offer $2,000 "resignation bonuses" to bust the union drive. Despite the threats, there is now another ongoing unionization effort in Iowa.

Alphabet Workers Union members say they were blindsided by the unexpected decision to form an alliance with Communications Workers of America. Members of the union have also alleged that contract workers were silenced about pay.

Google fired another top researcher on its ethical AI team, a day after Google announced a reorganization of its AI teams working on ethics and fairness. Days before, the escalating internal issues over the firing of AI ethics researcher Timnit Gebru led to the resignation of two engineers. The company is now vowing to make changes to how it reviews its scientists' work in an attempt to rebuild trust.

A senior Facebook engineer who collected evidence of the company providing preferential treatment to right-wing pages was reportedly fired by the company.


Amazon's sidewalk surveillance company Ring now reportedly partners with more than 2,000 U.S. police and fire departments. For example, the LAPD used Ring footage to investigate police brutality protests. The company also revealed plans to install AI-powered cameras in delivery vehicles to 'improve safety'.

Minneapolis police obtained a search warrant ordering Google to turn over account data on vandals accused of sparking violence in the wake of the police killing of George Floyd last year.

TikTok owner ByteDance will pay $92m in a U.S. privacy settlement over allegations that the app failed to get user consent to collect data, in violation of a strict Illinois privacy law.

A California judge approved a $650m settlement that Facebook will have to pay in a 2015 class-action lawsuit over its use of photo face-tagging. The company is also getting sued in England and Wales following the Cambridge Analytica scandal for having "lost control" of the data of about a million users.

Italy has fined Facebook $7m for misleading local users over how their data would be collected and shared with third-party services.

Fairness & Accountability

Google will pay $3.8m to settle discrimination allegations and a €1.1m fine from France over hotel ranking practices.

Amazon will pay $61.7m to settle claims it withheld tips from delivery workers.

Reports found that Facebook knew about violent extremists before the insurrection and did nothing.

Australia Vs. Tech

Google signed deals with major Australian media outlets to pay for news content, while Facebook, on the other hand, momentarily blocked news viewing and sharing in Australia after failing to reach a payment agreement. The company later reversed the decision after the Australian government agreed to tweak its News Media Bargaining Code.

Following Australia's lead, Canada is now also looking into making tech giants pay for news.

Meanwhile Microsoft has backed Australia's proposed media laws, even going further by calling for similar media rules aimed at Google and Facebook in the U.S. and Europe.

Facebook Vs. Apple

The skirmish between Facebook and Apple continues. After Tim Cook said that "it's time to fight the Data-Industrial Complex", Facebook struck back against Apple's privacy change, and has launched multiple anti-Apple campaigns following claims that Mark Zuckerberg told staff the company needed to 'inflict pain' on Apple.

To get these delivered to your inbox, subscribe to CFT's monthly newsletter at the end of the page and join the conversation on Telegram.

And now, onto the interview.

Conversation with the creator of
Faces of the Riot

This conversation, which took place February 6th 2021, has been edited for length and clarity.

Lawrence — Without getting into any specific details can you tell us a bit about yourself?

Faces Of The Riot — I'm a college student from the greater D.C. area. I've had a huge interest in computer science since the beginning of high school. During high school I got a pretty big interest in using computer vision with machine learning for pretty much every application, and as I went through college my interest narrowed into computer vision for detection and tracking and stuff like that.

A woman's hand drawing layout mockups on a paper sheet.

"In a lot of corporate structures, your boss tells you to build something and you're used to not knowing what that's going to be used for. It's important that technologists think about it, even if it's your boss that tells you to build it."

Photo by Alex Kotliarskyi on Unsplash

L So you created this website almost as soon as the leaked Parler videos came out. Why did you do it? What were you thinking, and what led you to want to do this?

F I've always been interested in machine learning and was looking forward to using a large dataset. I never had the opportunity until now. So it started with "this would be a really cool personal project" and then in about 10 minutes I realized that this would actually be super useful for helping people identify and generate FBI reports about the people who were at the riot and who were on the FBI wanted list. I transitioned very quickly from personal project to "this could actually make a difference". The thought process behind making it public was that these videos were uploaded to a public platform; they are publicly posted already. The only people at that point who were being held accountable for showing up to the riot were a representative from West Virginia, because he was famous, or people who made good news headlines, like the Berserker or the guy who stole the podium. But they are not the only people who were there. They're not the only people who were driving this. Everyone who was there had a part in this. Everyone shares some blame in this and they should be held accountable for what they did.

L Including the ones that were just listening to Trump and then went home? The videos also capture those folks right?

F Yes, and that's why clicking on a face immediately brings up the video associated with the face: we want the user to have no option but to get instant context. If you want to generate a report about a face, you're gonna have the video at the right timestamp. You don't want people who weren't there to be reported. The FBI does its own investigation and it's a lot easier for them if they get relevant reports, and it's only fair to the people who were at the riot that we provide users with context and not sensationalize everybody's actions. Getting the videos on the website was a top priority and was one of the first things that the new UI did. The first version of the website was terrible, I created it in like 5 minutes, then I got a friend to make it a lot better.

L I've been following the website as you know. We spoke a couple of weeks ago when all the faces were displayed at once.

F Yes that was the first version. I didn't like it but I thought it was important to get it out there. And getting that context included was really important because it gives the people the ability to see what happened.

L Yeah you made it way more complete in the sense of the info that you provide.

F And that's important because it's not right to assume every single person was beating up a police officer, and this context is important. I got 6000 pictures of separate people. There were more than 6000 people at this riot, but I've already gotten a few messages on Twitter from people saying "You hear on the news that there were thousands of people at the riot but I go to your website and I see page after page of different faces, it's a lot harder". That's a huge part of this. It's a lot more impactful when you can go through a gallery and just see how many people went there. Hopefully the next time something like this happens in America, at least the people who'll have seen this website will say "We probably shouldn't do this. Look at what happened last time. Look at how much impact it had on our country. Lest we forget, let's not have another insurrection".

L I reached out to you because a colleague of mine sent me the link with the leaked Parler videos hosted by Tommy Carstensen, the same place you went to get the videos too. And I was like "Ooh the videos are here!" and I told him "Dude I was thinking if we would put this on a map and have some sort of understanding of the scope".

F Our original goal was to include a map of where the faces in the videos were, but that ended up falling through because there was already a map, and also the faces can be hundreds of meters from where the video was taken, so it wouldn't really be accurate, and I don't want to implicate people who were far outside, not even near the Capitol.

L My reaction was kind of similar to yours in the sense of "We need to document this really freak event and be able to transmit how insane it was". For me and my colleague it was "You have hundreds of videos and you could just cluster them and dive into that and almost experience what happened in and around those clusters".

F I have 90% of the Parler videos with associated geolocation metadata. There are other videos of the riots that have been reviewed by people but have no metadata, so I didn't include them. After deleting the duplicates, there are around 650 videos. Around 11h30m of footage that people uploaded to Parler, which is insane.
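As an aside for curious readers: GPS coordinates in MP4 files are commonly stored as an ISO 6709 location string in the file's metadata. The project's actual extraction pipeline isn't described here, but as a hedged sketch, once a tool such as exiftool has pulled out that string, turning it into usable coordinates takes only a few lines:

```python
import re

def parse_iso6709(loc):
    """Parse an ISO 6709 location string such as '+38.8977-077.0365/'
    into a (latitude, longitude) tuple of floats.
    Returns None if the string is malformed."""
    m = re.match(r'^([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)', loc)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))
```

For example, "+38.8977-077.0365/" parses to latitude 38.8977, longitude -77.0365.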

L I want to know your opinion on the involvement and responsibility of Parler in regards to January 6th and what led to that day.

F I think Parler played a big part. It has existed since well before the riots, but leading up to the riot a lot of platforms started banning people who were talking about storming the Capitol. They were shutting down a lot of the forums discussing that. Parler not only kept those discussions open but actually advertised that you could discuss anything you want on their platform. I believe they weren't necessarily instrumental in creating the riots, but they definitely did not do their due diligence to stop that discussion on their platform. I don't think they were the driving force behind the riots; it just happens that a lot of the people who were at the riot were Parler users, which is why so many of those videos were uploaded to Parler during the riots.

L Everyone is bashing on Parler but what about the rest of social media companies? I'm sure many of those people also have an account on Twitter or Facebook right?

F And Reddit as well. Reddit had a pretty big issue handling the amount of discussion related to storming the Capitol. Parler I think just didn't handle it as well as other social media companies.

L Well, moderation wasn't part of their core business. They are all for freedom of speech, meaning they don't do any moderation, so anything goes.

F Which is an issue, and this actually shows why a website with complete freedom of speech doesn't really work. A case study for this is Voat. It turned into a disaster and they recently shut down. A big part of their stance was "we don't moderate anything", and then they had to start moderating because, I read, people were posting sexually explicit images of children.

A woman's silhouette in front of a wall of white and red fluorescent light bulbs.

"IEEE has a code of ethics and when I created this project I kept that in mind. It's up to people who make a website or create something like this to think about the ethical implications."

Photo by ThisisEngineering RAEng on Unsplash

L I don't remember reading about that, but I remember finding out about it 2 years or so ago and thinking "this isn't for me".

F There was a lot of questionable content and it proves you do need some moderation. Parler claimed to have a moderation team but that team was just not removing posts that were potentially inciting violence.

L I think Parler played their part but their role was more that of a scapegoat for the other companies that were able to point a finger and say "Here is where the issue is coming from, so let's remove them" which makes them look like the wise ones when in fact we all know that all those people are also on those social media platforms. And now there are reports coming out that actually a lot of organizing was happening for instance on Facebook.

F This stuff didn't necessarily begin on Parler, like the "Stop the steal" movement and news that was verifiably false about a rigged election. When Twitter started flagging all of our former president's tweets saying they contained false information, stuff like that was happening way before Parler was put into the spotlight. This is where these movements started and coalesced, on those social media platforms, before they started flagging and removing them. So these companies like Facebook are also responsible for the creation of these movements. Those people only moved to Parler after that.

L Yes, and the splintering of these communities, where they go to the platform that will allow them to discuss and perceive reality the way they want to perceive it, I think is even worse. With them in the public eye at least you can discuss and debate, but when they move to isolated communities it's a problem. For instance when Reddit banned r/the_donald, that community migrated to its own website, which then migrated to some other domain. When you enter these platforms you see they are extremely strong echo-chambers.

F It's weird it's like you're looking into a smaller society that attached itself to a movement. Even the r/conservatives subreddit — they aren't extremist to the point of storming the Capitol — but they have very strong conservative views. I don't think this is very relevant to the article but if you try to debate them about their policies you usually just get banned. They don't care.

L Yeah you see people complaining they were banned for voicing another opinion. But that's not just for r/conservatives. But what you say is relevant because we're talking about moderation and in some ways censorship, for instance Trump's deplatforming.

F But he did violate Twitter's terms of service so in that way he absolutely should have been banned from Twitter. The reason that a lot of people were shocked is that this was the first time that such a public figure was banned and held accountable for violating a social media platform's terms of service. They said "he did violate our terms of service" and that's why they banned him. Usually that doesn't happen to famous people, which it should.

L I think there was a lot of internal pressure from engineers within Twitter. This brought back the question of "what should be the standard to do this?" People say that should have happened before, if you look at Twitter's ToS they say you cannot bully or harass. And there's similar or worse behavior from other world leaders but nothing is done so what's the deal? Do you think at some point these companies should be regulated?

F I'm not saying Trump shouldn't have been banned just because other world leaders weren't banned, I'm saying those other world leaders should also be banned.

L At least apply the same standards to everybody.

F Yes, and it's not saying "Lower the standards to allow more bullying", it's saying "apply the same standards equally to all users who violate your ToS". Not picking and choosing who you apply your ToS to. It's supposed to apply to everyone.

L It's a very dicey situation. I think it's going to need a strong resolution from, in my opinion, governments. A lot of people understand why they need to ban him but many also said that it was wrong that private companies get to decide that.

F Twitter is such a huge platform, it's almost to the point that it is like a public utility because so many millions of people use it.

L It's a public square for people to talk.

F Yeah. Personally I wish the government didn't have to regulate it but I think it needs to. And that's also why small government is never gonna work because of issues like this.

L The thing is these companies have such an impact on society but they are private companies. They work for profit, for stakeholders. They don't work for the public.

F Because of how much they impact the public then that's when the government is required to make sure they maintain their ToS and policies, that's actually necessary but I don't have a lot of political knowledge so I'm not sure I'm the right person to say this.

L Well even if you don't have a deep background it's still good to discuss these themes. It's important to observe that something is weird and needs to be checked and fixed. I don't know how though [laughter]. But getting back to the website, what sort of feedback did you get since you created it?

F I've actually had pretty good feedback. I can count on one hand the people who messaged me saying things like it was unacceptable or that it was dangerous, that we were gonna get sued into the ground. It was like 4 people. I'm very much not concerned about posting information that was made freely available. We haven't received any takedown request from anyone, any cease and desist. It seems the very small pushback comes partly from a conservative base and the rest from people worried about privacy concerns. The way that the website was created, I don't think there are any privacy issues. However, this technology could easily be used by someone else to create a website with a ton of privacy issues. I didn't invent facial detection and clustering. I just decided to apply this existing technology to this dataset. This is one of those times that people can see what this technology can do. People read that the government has this facial recognition database, or that the government has billions of faces in their database. If you read any article about Clearview AI: over 3 billion images of faces, scraped from social media, YouTube, NSFW websites, etc. That database is private; the government's databases are private. People don't really understand this has been around for years and years. A lot of people looking at the website realized this.

L It's becoming way more accessible for regular people to do this, although what you did does entail technical skills. This isn't a no-code solution.

F Yeah, I didn't have any formal schooling on how to run a facial detection program. I don't want to say it's not hard at all, but if you have some basic Python and coding background it's not the most difficult thing to do. I think the people who realize this are starting to be concerned: "What if someone decides to do that on every video on YouTube? Or every picture ever uploaded to Facebook?". It becomes concerning.
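To give a sense of what "not the most difficult thing" can look like, here is a minimal, hypothetical sketch of the grouping step only. It assumes face embeddings have already been computed by an off-the-shelf detection library; the project's actual pipeline and thresholds are not public, and the 0.6 distance cutoff is a common rule of thumb for face embeddings, not a known setting of the site.

```python
import numpy as np

def cluster_faces(encodings, threshold=0.6):
    """Greedily group face embeddings: assign each face to the first
    cluster whose representative lies within `threshold` (Euclidean
    distance), otherwise start a new cluster.
    Returns one cluster label per input encoding."""
    reps, labels = [], []
    for enc in encodings:
        for i, rep in enumerate(reps):
            if np.linalg.norm(enc - rep) < threshold:
                labels.append(i)
                break
        else:
            # No existing cluster is close enough: open a new one.
            reps.append(enc)
            labels.append(len(reps) - 1)
    return labels
```

In a setup like this, each resulting cluster would correspond to one distinct face surfaced from the footage.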

L That's a cool secondary positive effect from the website: people feeling that it's doable and it's not just the NSA that can do it.

F I think a lot of people didn't even believe the government was doing this to the extent that they were. They just thought "It's just the government, this technology isn't available to the people yet". I did my best to publicize the website — and as a side-note we don't make any money off of this. We are lucky that someone decided to host our website for free. We're not gonna run ads or get donations or profit in any way.

L It would also defeat the purpose or the stance you're trying to have.

F Yeah, and it's also just wrong to make money off of an event like this. So the reason for me to try to publicize it was that I thought "Hey, this is going to be useful to people, I want as many people who can use this to know about it". With that came a lot of people realizing that not only does this technology truly exist, but it is available to anyone and anyone can do this. That was a wake-up call, I think, for certain people.

L Besides that positive side effect do you think that the website has helped in any way any sort of investigation or identifying people?

F I got messages from people saying "I recognize this person, I submitted an FBI report, is there anything else I can do?". This is someone they recognized, and they linked this person's Facebook to the FBI report and they're like "Oh, and also their Facebook has other photos of them at the riot". Those are people who wouldn't have gotten noticed otherwise. These are people posting videos of themselves in the Capitol or at the stairs whose Facebook just slipped by the FBI.

L I think so far there have been over 200 arrests from that day.

F There are still people on the FBI's wanted posters who haven't been identified. Still, even if someone doesn't recognize a person personally, they might recognize a person from an FBI poster and submit to the FBI that "this person was seen in this video on Parler". The FBI can look back at who posted that video and ask if they know that person. The more information they have about where a person was seen at the riot, the better their investigation goes. It's not just "do you know this person", it's "can you match this person to an FBI wanted list".

L It's like crowdsourcing the effort to identify people which is cool. When you realize "hold on this is not only a cool project where I can apply my knowledge, this can actually be something worth doing", did you feel that you had a sort of responsibility to do it? How did you feel in regards to that?

F Kind of, yeah. After realizing that I thought I can't just abandon the project. It would be wrong because I knew it could end up being useful, which it turned out to be. I can't just not do it, I'm already half-way done with it, I kind of have to finish it. This is a viable, useful project, I can't ignore that now.

L In a way you took your skills and technical knowledge and applied them to something beneficial, in a situation where you thought "this could actually help out". In that regard, what is your take on how technology can be applied both for positive and for negative things? What is the responsibility of the technologist in that sense?

F IEEE has a code of ethics, and when I created this project I kept that in mind, which is why there's no facial recognition, none of that. It's up to the people who make a website or create something like this to think about the ethical implications. There are some cases where you think "wow, this website is going to be amazing", but you also have to think that this website could hurt people, which is why we spent a ton of time removing faces of children, police officers and journalists who were there, before even publishing it. I knew there were gonna be people in the database who were just, you know, a police officer behind a riot shield, or children who were dragged into this by their parents. We were encouraging people to message us on Twitter with any results that weren't rioters, and we were removing those immediately. With creating a technology like this also comes a responsibility to maintain it to a standard that sticks to your mission and is ethical. You can't just abandon a project like this and walk away saying "this is too much".

L It would seem that, from the products we're used to, ethics often isn't something that carries a lot of weight in the decisions and outcomes you see from technology companies. Why did you have this sensibility? Why did you think in those terms?

F A lot of the companies that abandon any sense of ethics think "well, if we stop being ethical, look at how much more money we can make", and a lot of it is driven by profit. I'm fortunate to not have to worry about that at all for this [project], which makes it a lot easier for me to not even consider making something unethical like that. A lot of those unethical decisions can come from someone higher up in the company who just tells you to do something. You know, you're a programmer; half the time you don't even know what what you produce is going to be used for. So you have no clue if it would be used for something unethical. Then the company applies your tool to some project that's really unethical. There are companies that base themselves around creating technologies that inherently have tons of ethical implications, and it's really up to how the company regulates its user base, who they allow to use their technology, and how fair and thorough they are with those checks and balances. That has a huge impact. How careful are they with their data storage, and with pruning their data to avoid false positives and false results? Really, a lot is driven by the fact that unethical practices do have a greater potential for profit.

A man holding a smartphone in his hands overlayed on a dark background.

"I think people get very caught up thinking "My god we're going to be making so much money!" and then they just forget about what they're actually building. A lot of times I don't even think it's on purpose, it's just that they get caught up in their project and forget to think about this."

Photo by Eddie Kopp on Unsplash

L When we see people choosing environmentally conscious brands and companies, do you think that at some point people will switch to an alternative social media platform with that same essence, or is that a long shot?

F I think people will realize that there are a lot of companies that have been engaging in unethical practices, and they'll search for alternatives to these companies, and they'll just give up because a lot of the time there isn't a viable alternative anywhere near as useful or widespread. The companies with a lot of these ethical issues only introduce those technologies after they've become critical to your every-day life. Maybe it's on purpose, but maybe not. They realize "now that we have the technology and user base, why not collect all that personal data?" I think people get very caught up thinking "My god, we're going to be making so much money!" and then they just forget about what they're actually building. A lot of times I don't even think it's on purpose, it's just that they get caught up in their project and forget to think about this.

That's something we've learned a lot about in college. Part of every project is examining the ethical impact and a lot of people don't do that when they start a project.

L I'd say that's uncommon though.

F I'm lucky to be in a college that values ethics in computer science. A lot of people don't have the background or knowledge that you should have in order to consider ethics. The software that you put on the internet isn't going to be on the internet for five minutes. You don't get an undo button when you make a website or release a tool on GitHub. Sure, you can undo it, but someone has already archived or copied your software. Before you publish something, you need to make sure that you're OK with anything that could happen as a result of your production, and weigh the benefits and risks of publishing it.

L It makes me happy to hear you say this. That's precisely one of the reasons for having created Critical Future Tech: to raise awareness that technology has consequences, good and bad, and that when you create something you need to ponder the outcomes and mitigate the bad ones.

F Yeah, for instance I don't like facial recognition in its present state. Facial detection is fine, but facial recognition is very inaccurate. It requires an extreme amount of human oversight to rule out false positives. If you search a database for a face, you're gonna get hundreds of false positives. You're supposed to have someone who's really good at matching faces looking over the results. Because of that, I don't think facial recognition is ready to be used in major applications yet. That raises the question: are the people who created the first facial recognition algorithm unethical? No, because they didn't create an algorithm to specifically match faces, they created an algorithm that was able to match data points to other data points. Then someone else created an algorithm that was able to pull out feature points from faces, and then someone else combined the two into something that could recognize faces. Facial recognition is way more pervasive than we think. The iPhone's face unlock? That relies on facial recognition. Anything that uses your face to authenticate. That's a good example of how that technology can be used safely, but there are also a lot of cases where it's not.

L There are way too many examples of the technology not being ready and still being employed by for instance police forces.

F I think eventually facial recognition is going to be useful for police forces, but just like it took a very long time for DNA evidence to actually be useful (there were tons of false positives in DNA investigations), eventually we're going to see facial recognition algorithms that are impeccable, but that's a long way off. People say "Let's ban facial recognition algorithms", but that's not at all possible. It's completely impossible to ban the creation of a facial recognition algorithm. It's based on machine learning models that are used for completely different things. At what point do you draw the line? Do we ban anything that can identify an object in an image? Ten years ago no one thought this would be possible, but now people are concerned.

L Usually what happens is that, because you're now in too deep, the only option becomes regulating it and creating rules on how to apply and use it. For instance, deep-fake capabilities are going to be so good that there'll be the question of "can video still be admitted as evidence?".

F Deepfakes are already there. There are already a couple of algorithms and processes where, if done correctly, a human is not able to identify the difference. However machine learning pops up and people have created algorithms that are able to detect whether or not something is likely to be a deepfake. Maybe instead of only applying regulation in hindsight we should think about what impacts this technology could have and how we should regulate it before it becomes widespread. That's really hard when, for example, a deepfake isn't created by a body that has the power to regulate. It's created by an independent user, just some random person or team of people who wanted to do that. It's really hard because they can't regulate that. As soon as you start having the government regulating the work of an independent user, that's a complete disaster.

It's important that technologists think about it, even if it's your boss that tells you to build something.

In a lot of corporate structures, your boss tells you to build something and you're used to not knowing what that's going to be used for. It's standard compartmentalization of information, especially in the government. For good reason, they don't tell everybody what their work is going to be used for. There needs to be a higher focus on the people who design and architect those projects because they're the only ones who know the full scope of the project and they're the only ones who can evaluate whether or not this project is going to be ethical.

L What are the plans for Faces of the Riot, what comes next?

F We're adding a search function so that you can search by the title of a video or by the ID of a video and that's pretty much it. We're going to continue to maintain it, look through the database and remove false-positives, and possibly process more videos of the riot as they become public. But we don't have plans to expand this technology into some huge facial recognition database or really make this some unethical monster technology. We definitely don't want to do that. If there is another event like this, another huge organized violent rally in the U.S. or anywhere else in the world that becomes big enough and there is an availability of videos like this, we would replicate this. It's not based on whether you're a Democrat or a Republican. We don't care "what side you're on". There's a better way to solve something than storming the Capitol and calling for the death of the vice president.

I'm curious here because I'm talking to someone face-to-face: what do you personally think about this website?

L I totally loved it, instantly. My colleague who worked with me on our own app sent me a link saying "Check this out, someone just created a website with a huge list of faces from the Parler videos" and I instantly said that we should try to cross-reference that data. In my mind I was always thinking "let's build something that may help whoever needs help in any way to figure out what happened, how it happened and who was there".

F As we were talking, I just got a message from someone saying "I just found someone that works with me and I checked their social media. I found out through their posts that they were in the riots and I've reported them to the FBI".

L If that's the only result from this website, I'd say that's already worth the effort.

F On a personal level I'm really happy something I did ended up bringing some justice. Not just because I did it but because there's a tool out there that can help people.

L Going back to your question, when I saw the website I thought "this is someone who has the technical skills, there are no ads, there's no tipping jar", meaning that whoever built this fundamentally believes it's the right thing to do and just did it. As a technologist, if you have the skills to do this then why not do it?

F I was curious about how much we could have made off this, and with Google ads it would have been a few thousand dollars already, but then I'd be selling myself out and abandoning my principles for a couple thousand dollars. I would have felt disgusted with it.

L Well wait until you go work in Silicon Valley and see how you feel then [laughter].

F I've gotten a couple of people who contacted me about this and said "Hey I work for so and so company, let's have a chat, send us your resumé". It has already paid off in that sense and that's the most I could hope for from something like this.

L For sure, take those opportunities. I joke when I say that, I love all the Big Tech companies, I use all of them. I'm just questioning what they do. I'm a firm believer that you can bring change from within. That's also why I'm putting this content out there.

I think we've covered most of the topics I wanted to bring up. It was very interesting talking with you and about the project. I hope you enjoyed this conversation too.

F Yes thank you, it's been a great talk.

L All right. Nice meeting you. Bye.

F Have a good one.

You can visit the project online and connect with Faces of the Riot on Twitter.

If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.