Almost two years since the last publication comes an issue I've been wanting to release ever since the project's early inception.
In this discussion we dive into the details of architecting a nationwide project sustained by European funds, the approach for bringing research and the private sector together, and the reasoning behind focusing on Responsible AI as the project's defining ambition.
A delightful peek into how technological ecosystems get shaped through intricate and nuanced relationships between corporate, research and governmental interests and influence.
- Lawrence
Lawrence - Hi everyone, today we are joined by Paulo Dimas, VP of Product Innovation at Unbabel. Hi Paulo.
Paulo - Hi there.
Lawrence - A quick disclaimer: Paulo and I worked together for many years at Unbabel, so I have some affinity with him.
Nonetheless, this conversation is about the Center for Responsible AI, of which Paulo is one of the pioneers. I'm very happy to have you because I've known about it for a couple of years, but it was too soon to have this conversation. Now there are some results and some interesting stories to be shared. So thank you for being here.
Paulo - It's a pleasure.
L We've been talking about this for a couple of years already and I'm happy to see the project coming to fruition. So can you give us a quick intro on yourself and how did you end up in this situation of creating the Consortium?
P Yeah, so about myself. I've always been very interested in Artificial Intelligence since I was very, very young.
I started doing research when I was 16 and I remember at that time I started translating a book on AI from the library of INESC, a research center here in Lisbon.
I felt a bit frustrated with the research environment because lots of exciting work was being done, for example on speech recognition and speech synthesis, but that research never reached the market. It was never productized. And so I moved on to university and studied Artificial Intelligence at Técnico, but I also founded a student association whose mission was to create the next startups.
At the time we didn't call them startups. They were just companies. But today they would be called startups. And in fact this association still exists, it's Junitec. Really proud to be one of the founders. All my life after that was very much into creating new products.
I joined Unbabel 8 years ago as the first head of product. We were 12 at the time. Then I moved into the domain where I'm happiest and most excited: inventing the future with the next game-changing products.
I'm leading the Unbabel Labs team here. We bring together people from AI, product, design, and engineering, and from this team comes the next game-changing product for Unbabel. Inventing the future is the team's mission.
A year and a half ago we had the opportunity to be involved in this big initiative known as the Center for Responsible AI.
This opportunity came from the Portuguese Resilience and Recovery Program, aligned with the Next Generation EU Program. That is, the funding assigned by the European Union to recover from the pandemic. It is a big fund at the European level, and in Portugal the government decided to allocate a chunk of around €2B to drive innovation, which is a really big scale.
Typically this kind of funding is allocated to big industry players but this time we had the opportunity together with other Portuguese startups like Feedzai (Feedzai is one of the founders of the Consortium) to, in a consortium model, apply for this funding with the idea of creating a critical mass of talent in Portugal around Responsible AI.
Why Responsible AI?
Because first of all, Feedzai already has a long track record on Responsible AI. Pedro Bizarro, Pedro Saleiro, they've been working on Responsible AI for quite a while. And more than that, they have not only been doing research on Responsible AI, but also applying that research on their products, making Feedzai more competitive because of the advances they have been doing, for example, on fairness.
Starting from that point, with a vision that the next generation of AI products will be driven by the principles and technologies of responsible AI, like fairness, explainability, energy efficiency, and privacy, just to name a few, we decided to aggregate around these pillars a group of ten AI startups, one law firm (VDA) and seven research centers, together with five industry leading companies.
And this ecosystem has been very virtuous in the sense that these industry leaders - and we are talking about literally the Portuguese industry leaders, from pharma to retail and hospitality, plus two hospitals, one in the private sector and one in the public sector - bring concrete problems into the consortium, into the center.
This facilitates a conversation both with the startups to create products that address these problems, but also with the research centers. They inspire everyone to address the problems that society and businesses like these ones are facing.
So we have created this virtuous model where we have problems coming from the industry in each of these areas. These problems inspire the consortium to build the next generation of AI products, which are then based on advances in research, from fundamental to more applied, coming from the research centers.
L But why not just a Center for AI and why the term responsible?
P So first, because we believe that the future of AI needs to be responsible.
I remember at the time when we started we were using GPT-3, and it would generate content that was not acceptable along many dimensions. It would generate toxic content. So we understood at the time - more than two years ago - that we would need to solve these challenges of bias in AI models. It was a big issue at the time. And so in order for AI to really move forward we would need to address these challenges.
We also need to address the challenge of trust. How can we trust generative AI, for example? How can we trust these large language models?
This was one of the motivations for us to address these challenges, to create a critical mass in Portugal that would also be used as a worldwide beacon to attract the best AI talent in the world.
For example, by having a research track on privacy-preserving AI, we have been contacted by people from Harvard University interested in working in this domain. We felt that not only is this domain the future, but it is also very much aligned with human values, with human-centric AI. And this attracts a group of people that we want to have working on these challenges.
This was a strategic decision to create this critical mass on this domain for these reasons.
L That's not an easy battle because you're competing with very high salaries from companies that are outside of the EU most of the time, and a lot of professionals don't care much about the ethical or human part of the problem, right? They just want to be working on the cutting edge, no matter what.
But you do have some examples, and we've seen a younger generation that is maybe a bit more preoccupied with the environment and the impact of technology. So do you hope that this center kind of leads the way in luring talent that is more geared towards these principles?
P Yeah.
L Is that what you hope?
P Yeah, exactly. So we believe that, regarding the use of technology, this younger generation will be much more aware of the risks AI poses to society.
But in a sense there's a bigger picture. For these AI products to reach domains like, for example, healthcare, they really need to address these challenges. We really need to have trustworthy AI if we want to solve challenges like treating patients.
And so that's something essential to unlock those opportunities. So we combine these two factors: society being more aware of, for example, discrimination between people of different races, genders, or social status, together with the value that can be created by applying AI to new domains.
For example, I can give the case of Unbabel, where one of the products we are developing in the context of the center will allow AI to be used in high-risk domains, domains where we need to translate content related to clinical trials.
In those domains you typically have to rely on subject matter experts - people who really understand the domain. But those people also make mistakes, and so we believe that with an AI that you can trust and that explains itself we can unlock this domain.
And so this will drive value from a business perspective for Unbabel.
So not only do we have the risks for society of an AI that discriminates, an AI that has biases and also consumes a lot of energy (that is also one of the pillars), but we also have the business value that is created by solving these challenges.
And there's still a long way to go on solving some of these challenges.
L So in a way it's trying to resolve those issues, which fall under this responsible umbrella, with a business case, in a sustainable business way, right? But oriented towards responsible technology, responsible AI.
And you're still trying to figure out whether or not that's viable?
P Exactly. The idea is that we not only develop and do the research in these domains, we not only discuss all the principles and even the regulation, but we are also looking into the challenge of creating products that have an impact on people's lives.
And this is something we have learned by interacting with other centers and universities, namely in the U.S.: it really makes this initiative being developed in Portugal quite unique, because we go from fundamental research to concrete applications that have an impact.
As another example of a product, one of the problems we are solving is allowing a patient who suffers from a neurodegenerative disease that impairs their speaking, typing, or any kind of movement, to restore communication with their family.
And so we are combining generative AI with non-invasive neural interfaces to restart this communication. This is something that is going to transform families, because these people will be able to reconnect with them.
And this is much more than what a typical center discussing responsible AI is doing.
Of course this is a much bigger challenge but we have the critical mass to do that. And this has been amazing. One of the things that has been surprising for many people is how research and research centers in particular and industry are collaborating.
L That's one of the things that I wanted to ask you because I know a bit of the story and the effort that took place to unite some of these entities that historically, one might say, do not really interact much among themselves. I'm talking about the research bodies among themselves, but also with the private sector.
I remember some interesting stories that you were telling me about when you were connecting all of those individuals pitching the idea to them. There was some skepticism and it wasn't easy to have all of those people basically working together.
Can you tell us a bit about how that effort went? You know, starting from just having the pitch to the moment that people were listening and kind of receptive to the idea, which wasn't the case from the start, right?
P Yeah, that's one of the biggest challenges: how can you create an environment where research centers and industry collaborate - in this case, with the startups and the industry-leading companies, the bigger companies.
We have been learning a lot and I think we have found a model that is working in a sense that research centers and startups get united around the problems that need to be solved, that need to be addressed.
The problem of restoring communication between someone who can't speak or type and his or her family; the problem of democratizing physical therapy; or the problem of interacting with people in a way that is aware of their cultural differences, not only their language.
By bringing these challenges into what we call product pods - a collaboration model that joins people with different roles from the research centers, startups, and industry-leading companies - we have been able to inspire the researchers to address the challenges that we are facing. It's very interesting, and I have some anecdotes about this.
So for example, at the first kickoff event we literally organized the participants at tables. It looked like a wedding.
We had people sitting around tables, and for lunch we sat each startup next to an industry leader and a research center. Then they started sharing their products, their problems, the research agendas of the centers. People started meeting each other, learning more about each other, and getting united around the product and the problem they are facing.
For example, at an event we did a few weeks ago, November 25th, I remember having a conversation with a startup that was looking at a research poster in the event space, and they said: "I think that we can collaborate with this research center from what I'm seeing here, because there's this area, a high-risk area for us. We don't know whether this is going to work or not, and we don't have the time and the energy to do research in this domain. And so we're going to collaborate with this research center so that they can de-risk this domain".
That's something that naturally emerged and that is now part of this product pod model, that is: using the research center to de-risk the feasibility of some of the ideas that the startups have.
At the beginning of the year we had five product pods (and they were not really product pods, they just had the startup), and by the end of the year, with all these collaboration events, I think we have around fifteen product pods that have the three roles represented: the industry leader, the startup, and at least one research center.
And so it has been very rewarding to see all this collaboration emerging, even collaboration between research centers, which sometimes is very hard because the groups can be a bit closed around their research agendas. But we are observing this kind of collaboration, which has been very exciting.
L And it seems to be pretty much restricted to the members of the Consortium.
So an external startup: can they come in and participate? Or is it closed off to the public and others that are not part of the Consortium?
P No, it's an open consortium. One of the principles of the consortium was to be open to as many partners as we could have at the time we applied for the funding.
And then we created this kind of core partnership, but the consortium is open to more partners. And we are still designing the model that will allow more partners to benefit from the consortium ecosystem. Of course the public funding has finished, so there's no public funding at this time, but that doesn't mean we will not apply for funds in the future.
But before that, we have high profile companies in Portugal like Critical Software, like Brisa and other companies that are also interested in joining.
And we are designing the way for them to join and benefit from this ecosystem.
L So it's one thing to convince the private sector and also researchers to join this novel idea.
How was convincing the government that this was the right sort of bet? First of all AI, and then responsible AI. You had to present this to the government to get the funding. So how was that received, how knowledgeable was the government, and how open was it to the idea?
P There are two perspectives on that. You have the perspective of the science and technology ministry, and you have the perspective of the economy ministry. And sometimes there are some conflicting perspectives on these types of initiatives.
On one side, what counts is the economy: GDP growth, job creation, and so on. On the other side, we need to do research, advance our research ecosystem, and so on. There's always a kind of tension between these two perspectives.
But this program was designed in a way to facilitate this kind of combination of perspectives because this program - the PRR program - forces the existence of research centers and companies in the consortium.
So it was quite easy for us to create these partnerships, which would in the end facilitate moving forward with the creation of the consortium. In the end it was not very challenging because, as always, it's about the people involved.
We have the privilege of working with many people from the research domain: leading researchers like Mario Figueiredo and Arlindo Oliveira, and, with Feedzai, Bernadete Ribeiro and others.
We had all these personal relations already so, for us, it was quite easy, quite organic to form these partnerships and then create this critical mass.
L Right. Because I guess there was a lot of competition to get access to the funds?
P Yes. There was a lot of competition.
When we are talking about an investment of, in our case, €77m... it's maybe the biggest investment in responsible AI in the world. At least as far as we know, in this model - an open and cooperative one.
Reaching this scale can really be transformative. We want to create a critical mass so that we can attract the best research talent, the best PhD students, instead of them going to Paris. Right now it's impossible to retain this brilliant generation of young people in Portugal because the scholarship value is very, very low. It's not competitive by any means with other countries. This needs to be a collaboration between the private sector and the public sector.
And this is a model that we believe needs to be extended to more companies, so that the students can have a reasonable salary while working on their research program and doing their PhD here in Portugal.
I think that's really important.
L Well, this leads into the question (which I had for the final part of this dialogue) of the legacy. You mentioned 2025 as the end year - of what? Of the runway? What does this date mean for the consortium?
P The consortium exists to execute a set of twenty-one products that use Responsible AI technologies and follow Responsible AI principles.
The execution time frame for these products is three years. So we started in January this year and we should complete the products by 2025.
What's going to happen after that? That's something we have been thinking about since day one: how to maintain this momentum, how to continue attracting talent to these areas, how to bring in more talent, even internationally, and so on.
What's important for us is to create a legacy. Something that we all hope will continue after 2025. And what's that legacy?
First of all, we are creating economic value for Portugal. We are creating highly qualified jobs, and that's something tangible, happening as we speak - already more than 90 qualified jobs have been created in these domains.
The second is about creating critical mass in these research domains so that we can attract more researchers. That's something that will endure: we're going to be producing a lot of scientific outcomes in this area, and we hope this will attract more people to continue working in these domains.
Then there's the domain of regulation. As we speak, we have regulation in Europe that is going to be transposed into legislation at the member state level. And the consortium is positioned to work on and facilitate the certification of AI products under the European Union AI Act. That is something we're going to create the structure to do.
L So you think Portugal will be able to tip the scales in that sense?
P I think so.
L Even with our reduced scale and position?
P Yes, if we make the right strategic decisions. If we place the right bets.
Going back to your first question about why we picked responsible AI and not AI in general: if we spread ourselves too thin, we'll never make a difference in any domain. We need to go deep in these domains to really make the difference.
We have great universities that are recognized all over the world. We are generating a brilliant generation of highly skilled young people that can have a job at any company in the world.
That was the first motivation for us to start this initiative: how to keep this talent in Portugal. Because if we keep this talent here, we're going to create a virtuous cycle where this talent will attract more talent.
This is something that is challenging.
We need people who are international references in these domains, because they need to be like beacons, magnets to keep this talent here. But we also need to have the challenges, so that this new generation gets attracted to them.
And then we have to have the salaries, because they need to be reasonably well paid; if not, they will just...
L They'll go work for Meta.
P Yeah. Like most of my daughter's friends, they've all gone abroad because of the salaries.
So we want to contribute to keeping them here, and with this talent I bet that we can make the right strategic decisions. I think we can really create this critical mass and position Portugal at the forefront of AI, and responsible AI in particular.
L Those are great words to finish on.
I'm a big fan, as you know. I've been following it since, well, the start, and I do believe it can make a difference. I look forward to seeing what the next chapters are next year.
Just to finish, where can people follow the work of the center? Any website or page, something people can go to?
P So we have the shortest possible domain that contains the words Center For Responsible AI. So it's centerforresponsible.ai.
L All right. centerforresponsible.ai will be linked in the show notes.
I appreciate your time, Paulo. Thanks for sharing this experience. A lot of countries are trying; I think it's great that you can share how we are doing it here.
P And can I just say something? I know that your podcast is listened to everywhere in the world, and so I want to invite everyone to participate in this journey.
So feel free to reach out to us; we also have our LinkedIn page. If you are excited about the future of AI, of responsible AI, of fairness, explainability, and trustworthy AI, just reach out to us. We have internship programs, we have a lot of initiatives, and of course Portugal is an amazing place to be.
L Indeed. Thanks for inviting the listeners and I do hope that people will engage.
Thanks a lot, Paulo. See you around.
P See you.
You can connect with Paulo via LinkedIn.
"Can Democracy Survive Artificial General Intelligence?" by Seth Lazar and Alex Pascal.
"How an algorithm denied food to thousands of poor in India’s Telangana" by Tapasya, Kumar Sambhav and Divij Joshi.
"Adobe Is Selling AI-Generated Images of Violence in Gaza and Israel" by Matthew Gault
If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.