A man sitting in the lotus position on top of an Ionic-style Greek pillar.
Illustration by Bruno Prezado

Critical Future Tech

Issue #16

On this sweet sixteenth edition we get to talk with the very accomplished Portuguese academic, researcher and writer Arlindo Oliveira.

We discuss, among other things, AI's unwanted side effects when it is developed and deployed without much ethical and societal consideration, tech regulation approaches across countries, as well as the challenges Portugal faces in attracting and retaining talent.

- Lawrence

Conversation with Arlindo Oliveira

Portuguese academic, researcher and writer.

This conversation was recorded on Mar. 10th 2022 and has been edited for length and clarity.

Lawrence - Hi everyone and welcome to another edition of Critical Future Tech. Today we have Arlindo Oliveira, a Portuguese academic, researcher and writer, author of more than 150 scientific articles and conference papers.

Arlindo specializes in bioinformatics, machine learning and computer architecture. Among other highlights, he is a senior member of the IEEE and has been Dean of Instituto Superior Técnico (IST), the largest public engineering school in Portugal, part of the Universidade de Lisboa.

One of the reasons that I wanted to have you here is for you to share your extensive experience in artificial intelligence, machine learning, all that you have seen evolving over the years. I would like to make this discussion about that and then other questions.

But first of all, I was curious to know if you can tell us how you ended up getting into those fields? What attracted you and what still attracts you about AI and all of those nowadays?

Arlindo - It's ancient history, because when I finished my degree here at Técnico, which is the engineering school of the University of Lisbon, I somehow felt that the idea that computers could learn was very interesting.

So I managed to find an advisor at the University of California, Berkeley, who was interested in that area, and therefore I got my PhD there working in machine learning. The group was also expert in other areas like optimization, VLSI design and so on, so I had the opportunity to put together machine learning, algorithms, complexity and so on. That was in the early nineties, and since then I've been working in machine learning and its applications in bioinformatics and in computer design.

Of course in between I also had a number of management duties. Mostly technical, also in INESC, which is a research institute connected with Técnico.

So it's a big difference from 30 years ago when I started working on this. Back then, not many people knew what machine learning was; now not only do millions of people know what machine learning is, but there are literally hundreds of thousands of people creating tools, platforms and software for machine learning, in a way that would have been unthinkable 30 years ago.

You had to handcraft everything. You wanted a neural network? You had to code it in C, then train it, then develop optimization methods and so on. So it's a long way from when I started working in this area in 1989.

"We sometimes worry too much about things like explainability. The human brain is not explainable. So if you just want to use explainable things, we should not use human brains because we don't understand how they work."

L It's almost a commodity. You can just train a model, deploy it as a service integrated in your product and ship it to your users – almost as a no-code solution, and soon enough it will be. Which raises an interesting question: now that it has become such an easy thing to deploy a model, to deploy an algorithm that is in one way or another making decisions affecting the user – sometimes on a scale of millions.

If almost anyone can do it, is that why these matters are suddenly getting attention – why there's so much more conversation about responsible AI, accountable AI, transparent models, explainable models?

Is it because it's been made so much more available, and because almost anyone can deploy something that is good enough? This wasn't as mainstream a topic as it is nowadays, right? What is your opinion?

A Artificial intelligence is already an old field, right? People have been working on this for 70 years now and there were a number of times in the past when there was a lot of enthusiasm about artificial intelligence. It was sort of in vain because AI turned out to be more difficult.

I mean real AI, real intelligent behavior in computers, turned out to be more difficult. In the sixties and then in the eighties, again, people were discussing AI – though of course not as much as today. We are still very far from real artificial intelligence, but we are closer than we were. So this last bout of machine learning in AI, with deep learning and with this convergence of data, algorithms and architectures, really raised the bar, because now these systems can do a lot of stuff that previous systems could not do, right?

And now of course, people are worried that these systems can actually start to replace people in some positions. And they also have a lot of impact on our everyday lives, because when you use analytics, in the social networks, on the internet, in the search engines and so on, you do use AI a lot, right? Although we may not realize it, we use AI when we go to a search engine or when we are in a social network and so on.

So there are these two big things: analytics, which provides value by analyzing data, and automation, which replaces humans in certain tasks. These two aspects of AI applications today are really impacting everyone, and this of course has raised the bar. Still, I think in some respects we are very far from what's called strong AI or artificial general intelligence. And some of the worries that exist today are a bit misplaced, but others are not, right?

I mean, certainly the influence of large platforms on our way of life, trying to limit how those platforms can control society, making sure that people know that they are making [their] data available and that it can be exploited against them, and so on. Those are important things.

At some point, we sometimes worry too much about things like explainability, right? I mean the human brain is not explainable. So if you just want to use explainable things, we should not use human brains because we don't understand how they work, right?

And in some ways, I think we go too far. I'm actually reviewing a number of ethics issues in European projects and I think there's sometimes too much focus on things that I understand are desirable, right? I mean, we would like to have guarantees that the system works as expected, but a 100% guarantee does not exist. It does not exist in AI systems, it does not exist in software systems, it doesn't even exist in mechanical systems.

So you have to live with some level of guarantee, and I think that's an important issue. There are other matters, especially related to privacy and to the exploitation of data, that I think are important.

So I think Europe is at the forefront of this. I think both China and the US will learn a little bit from us. But because they have more lenient systems they're also ahead of us in terms of many applications because they can work in an environment that is not so bureaucratic and heavy.

So this is a good/bad thing, right? We are ahead in regards to human rights, privacy safeguards and so on but I think we are falling behind in regards to technology development and so on. So let's hope that we'll find our niche in this larger ecosystem but of course, China and the US are playing a very important role in the development of AI.

"Both China and the US will learn a little from us but because they have more lenient systems they're also ahead of us – because they can work in an environment that is not so bureaucratic and heavy."

L They are. And even specific companies, right? Such as Facebook and Google contribute a lot in terms of open source and new ways...

A Yes. They contribute and develop new technologies, and they are also somewhat more open in regards to the technology they developed in the past. Although they are also a bit on the safe side, because it's not enough to know the technology.

You [must] have the computational resources to do a few of the things that they did, like creating a large language model like GPT-3 and so on. So they also know that even if you know the algorithms and the methods and you have the code, it's not trivial to compete against them, because they have resources that are at the moment incomparably larger than any university or research institute.

L I believe that in the book "The Age of Surveillance Capitalism", Shoshana Zuboff mentions that there is no problem in telling you how the algorithm works if you don't also have the data required to train it and make it work as intended.

A Yes it's the data and not only the data, also the computational resources.

L Yeah. That's a good point. It's interesting. So, since you mentioned where Europe stands in regards to the US and China: I understand that you feel we are a bit behind the curve, technologically speaking. Not necessarily from a regulation perspective.

Europe seems to be a bit more proactive in trying to anticipate some of the issues. Would you say that you agree?

A Yes. Well first of all these three large blocks are very different from each other, right? The United States has lots of large companies, they have lots of private initiatives. China has a very large, very powerful centralized state that can do things that no other block can do in that way.

And Europe is somewhere in between those two blocks, with an added difficulty: the market in Europe is more fragmented, mostly because of language, but also because of cultures.

So when you think about large companies like Google or Tencent, it is easier to develop them in a market where you have hundreds of millions, or even billions, of people with the same culture and the same language than it is in Europe, right?

In Europe, we have to go country after country. For this reason the larger companies in Europe tend to be a lot more business to business, because it's easier to internationalize that way, while some of these large companies in the US and China are business to consumer – B2C. So Europe, because of the structure of the European Union, faces a more difficult time developing large companies in the B2C category.

We have almost no very, very large companies in the B2C category in Europe – they are all American and Chinese – but we have a fairly solid record on B2B and on innovation on processes and so on.

And of course we're also in the forefront of the concerns with human rights, with privacy, with safety and security of these types of systems. So, as I said, this on one hand hinders us because it raises some difficulties, but also creates opportunities as companies can explore the challenges and the new areas that are created by these legislations and these concerns.

L Exactly. I also think that the fact that we care about the impact on our society in Europe can be an opportunity, even though it slows us down in growth or in experimenting.

I was reading last week about these new zones (Zonas Livres Tecnológicas – "Regulatory Sandbox Zones") to be established in Portugal, which I had no idea about. It's sort of a mechanism that would allow for greater flexibility in terms of regulation to experiment on new technology in the health, aerospace also and other areas. So there are ways to innovate, even though you can reduce some of the regulation to speed up development.

Because the feeling from external companies sometimes, especially if you speak to a US company, is that in Europe you may need to comply with things that you wouldn't need to elsewhere. GDPR is a good example: it was established in Europe and then California, for instance, implemented its own version.

So in that sense I think it's good for the end consumer and that's sort of like why we're having this discussion because there is an enticing problem. There is a complex challenge to solve. Which is great. That's why we also love technology and we're problem solvers.

But why are you solving that challenge? Right? How is image recognition going to be used and deployed? On which populations? And who is it gonna benefit or suffer from that? So that's the sort of questions that, within companies and within governments, we can ask.

So regarding Portugal: what is your feeling on Portugal? Where does it stand in Europe, in the European technological panorama, and worldwide? But let's start with Europe.

A We are a medium sized country in this block, right? In the European block. We don't have very large companies in this area but I think we have very good human resources.

Our universities are good, our graduates are excellent. And I also think we have the right frame of mind to try new technologies and new approaches. And the fact that we have a number of unicorns – well above what could be expected from a country like Portugal – bodes well, I think, for our ability to develop these sorts of technologies.

We face the difficulties of small markets, right? Portugal, in Europe, is a small market. So most of the companies, the unicorns that we created, are actually in this B2B area, as I said previously.

But I think the technologies – and the visionaries and leaders that have been creating them – are very interesting. I think we are, and will be, more limited by human resources.

We have a serious demographic challenge in Portugal. As you are certainly aware, the most difficult issue for companies in this area today is to find the right human resources, and we have to go find them all over the world. Remote working is only going to help a little bit, in the sense that some people can work for us even if they don't come to Portugal (but it would be nice if they came to Portugal anyway).

And I think this challenge with qualified human resources is a serious challenge.

We are going to have a lot of financing in the years to come with the recovery and resilience program and also the next framework. But I don't think we will be able to use that money wisely and well unless we significantly increase the human resources we have working in this area.

And I think that there are no easy solutions for this. I think that we as a society – the government as the government, and the companies as private entities – have to address this problem. We have to educate people in this area, both young and old, but we also have to find people abroad.

So we have to renew ourselves a little bit as a country: be open to qualified immigration, to qualified people that want to come here, because otherwise we'll have all sorts of problems.

We won't have people to work, we won't have people to pay the pensions and I think this is probably the most important challenge for Portugal in the next decades. And I don't see any clear evolution of this but maybe we'll be able to find something in the next few years.

"We need to make Portugal as attractive to foreigners as foreign countries are attractive to Portuguese, balance that flow and hopefully reverse it."

L I appreciate you saying that because it truly is a daunting challenge. We are a small country in terms of population and I speak with a fair amount of recruiters that contact me or contact colleagues of mine or friends of mine that are in the engineering world.

And some of these companies – many of them are foreign, some of them American, others French – they are all saying: "we want to double our engineering staff. By the end of this year we want to hire 30% more; by the end of this year we want 100 engineers". And I'm thinking: "and you're looking in Lisbon? 'Cause I mean, I don't know where you're going to find all of those people..." you know?

And it's great, but I don't think we have the throughput to do that. It would be a shame for those companies to realize: "we love Portugal, but actually either they import talent [or we don't come]", as you were saying... which is a short-term solution for sure, and it's happening regardless. But not at the right pace for us to leverage this momentum of companies looking to open an operations center, an engineering team here in Portugal.

So it is truly complex. How fast can we get these people up and running so that we don't lose this momentum? We do have a lot of advantages – great engineers, as you were saying. Unfortunately, the best ones many times leave for the US and other places in Europe.

So how do we also retain the right minds? You know, the ones that are entrepreneurial, not too risk averse. It's also a question that I don't have an answer for.

How do we not also lose those people that we are going to work hard to train and educate and lose them to a Google just because we don't have a Google to retain them here?

A Yes. I mean, people will always be leaving Portugal for other countries, because they want to know the world and they want to explore other opportunities. What we need to do is make Portugal as attractive to foreigners as foreign countries are attractive to the Portuguese, and at least balance that flow and hopefully reverse it.

Not by not letting people go out because that is also good, but by having more people wanting to come in, both from Europe, but also possibly from the United States. It's not easy to bring people from the United States. The cultures are very different and most Americans don't think of coming to Portugal, but some of them might.

Then, of course, the Middle East and Asia are a big source of highly qualified people that may want to live in Portugal. And I think we still have the ability to integrate many more people than we are doing now. Portugal is still a very homogeneous country in terms of population, much, much more than other countries like the United States or Northern Europe and so on.

But I think hopefully we'll have the ability to integrate more people, young people, who will balance a little bit the demography in Portugal which, as I mentioned a couple of times before, I think is one of our largest challenges.

"Most young people that work in AI just worry about the specific, technical problems they are working on."

L Yeah, and that's a good tangent. Human resources are required in order to develop technology and great technology requires great people.

Getting back a little bit to the technical side of things: you can never decouple humans from the purely technical part. So, initially you said a lot of the fears that you may read about AI are very distant. When you hear Elon Musk saying it's the biggest threat... is it now, or is it in 15 years, on a hypothetical, truly general intelligence agent?

Because there are some concerns right now, and they're not necessarily robots taking over in the streets, but practical issues. So if it's not the killer robots that we should be concerned about, because that's a bit too far down the road, what are some of these practical concerns that I, as an engineer, an architect or a machine learning engineer, should factor in when I'm actually developing a new model?

You know, what are the sorts of things that I should at least consider, in your opinion, that are realistic, right?

A Well actually I think most young people that work in AI don't worry about these things a lot. They just worry about the specific problems, technical problems they are working on.

But when you get to a certain age like mine – and of course there are many AI researchers of my age – you start to have a somewhat larger view. And so I think we worry about things like privacy infringement, because for the first time you have systems that can basically surveil individual citizens in a country.

So privacy issues, and of course abuse of power by states, are an issue. This is not so much an AI-related problem; it's a problem related to the people that are misusing AI. But still, it's something that AI enables states and organizations to do. So this is one issue.

Then there is the issue of misalignment, right? There's the issue of when you create and develop powerful systems – for whatever task – and the specs are not good enough, right?

I usually use the example: "suppose you develop a supercomputer and an intelligent system to stop global warming". Right? And you tell it: "do whatever it takes to stop global warming". And the system decides to exterminate humanity because that will certainly stop global warming.

And of course, this was a problem with the specs, right? We did not specify exactly what we wanted to do. This is of course an extreme case, but you can have less extreme cases that are more real.

You may want to optimize the electrical network or some distribution process or the economy, for instance, and you develop specific AI systems for this. But the specs are not well aligned with what you really wanted and the systems run amok.

So this is called the alignment problem. The difference between a normal software system and an artificial intelligence system is that, if the AI system is reasonably smart, it can find ways around the specification that we did not anticipate.

Usually we anticipate what the software system can do. Not always, but in most cases, right? Cell phones make calls and send messages, but do not kill people. But for cars? It is a bit more tricky, right? I mean AI based cars can do a lot more stuff and they can harm people. I think it's one set of issues that people are worried about, and it's not so distant, right? We are not talking about the next century, we are talking about the next decade.
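The global warming example above can be made concrete with a tiny, purely illustrative sketch (the actions and their scores are invented for illustration, not taken from any real system): an optimizer judged only by the stated objective happily selects the outcome the spec forgot to rule out.

```python
# Each action: (name, warming_reduced, humans_harmed), on a 0-1 scale.
actions = [
    ("plant forests",        0.4, 0.0),
    ("carbon tax",           0.6, 0.0),
    ("exterminate humanity", 1.0, 1.0),
]

def naive_score(action):
    name, warming_reduced, humans_harmed = action
    # The spec as given: "do whatever it takes to stop global warming"
    return warming_reduced

def aligned_score(action):
    name, warming_reduced, humans_harmed = action
    # The spec we actually meant: stop warming without harming anyone
    return warming_reduced if humans_harmed == 0.0 else float("-inf")

best_naive = max(actions, key=naive_score)[0]      # "exterminate humanity"
best_aligned = max(actions, key=aligned_score)[0]  # "carbon tax"
```

The point of the sketch is that nothing in `naive_score` is buggy: the optimizer does exactly what the specification asks, which is the problem.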

Then there is the problem of biases, right? Which is a very serious problem because humans have biases and the systems copy and duplicate these biases. I think this is a serious issue. If you are starting to have a system like this, making decisions about hiring or selection for fellowships or admissions or whatever, you have to be very sure that the biases are under control.
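One concrete way to read "biases under control" for a hiring system is to compare selection rates across groups before deployment. This is a purely illustrative sketch (the decision data is invented) of a demographic parity check; the 0.8 threshold in the comment refers to the "four-fifths rule" used in US employment-selection guidelines.

```python
# Invented model decisions: (group, hired) pairs for two applicant groups.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group):
    # Fraction of applicants in this group that the model selected
    hired = [h for g, h in decisions if g == group]
    return sum(hired) / len(hired)

ratio = selection_rate("B") / selection_rate("A")
# The "four-fifths rule" flags ratios below 0.8 as potential adverse
# impact; here 0.25 / 0.75 ≈ 0.33, which would clearly fail the check.
```

Parity of selection rates is only one of several fairness criteria, and they can conflict with each other; the check above is a first screen, not a certification.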

And then you have what I think is probably the largest challenge of all: the concentration of economic power that AI enables, right? Because if you think about the largest companies today and the largest companies of 30 years ago, they are very different, both in the number of people they hire and in the value they have.

So there is much more concentration of economic power, right? I mean, something like Google, Amazon, Facebook, or Microsoft or Apple, they have power comparable with many states. This is for the first time in history because they have the data, they have the money, they have the resources and so on.

So this leads to concentration of power – mostly economic power, but economic power brings with it other types of power. And more people are left out of this, right? In the end, it doesn't matter who does the voting in an election if you can control the media, the messages, and the way content and information is presented to people.

This concentration of economic and decision power – in an ever decreasing number of individuals and organizations – is a risk to society, because more and more people will be left out of the decision process for all practical purposes.

And this of course not only creates economic inequality, which is a problem in itself, but also inequality in the ability to decide, right? I mean, people that are outside the loop, that don't work in the right fields, that don't have the right information – they will basically have very little influence, because all the decisions will be made in very limited environments, by companies and by people inside the government.

So there is this concentration of economic power because AI enables you to grow without hiring more people. Before AI – before the digital era, to be fair – if you wanted to grow a company, you had to hire more people, right? But not anymore. With AI systems, with the digital, with the internet, you can grow a company arbitrarily large without hiring millions of people.

"Probably the largest challenge of all, I think, is the concentration of economic power that AI enables. This concentration of economic and decision power – in an ever decreasing number of individuals and organizations – is a risk to society because more and more people will be left out of the decision process for all practical purposes."

L Which shifts the balance of power. But having said that, how do you feel about the trend over the next [decade]? Because again, we are talking so much more about this, and governments are starting to pay more attention and to educate themselves as to what this actually is. There's also a lot of, in a way, technical gatekeeping, which confers power to those companies, because in some instances you had Google or Facebook saying: "well, we can't really explain why this happened because our algorithms are way too complex". So they're sort of shielding themselves from responsibility.

And maybe that's why we're asking of them: "okay, so make it observable, make it explainable, so that when there is a mistake we can kind of understand". So there are no excuses like "the thing just evolved and now it suits us, it's great for us" when things go well, while when there are mistakes we just get "we don't really know how it works" or "it's too complex".

So having said that we are talking a lot about it with this conversation being an example, just one of thousands.

So is it going to make a difference? How are we going to actually tackle this? As you said, these companies are starting to have so much power – more than actual governments – and economic power to almost blackmail in some circumstances.

So how is this going to end up? Are governments going to do something? Can engineers inside those companies do anything? What is your opinion on that?

A Things like the Artificial Intelligence Act and the Digital Markets Act are Europe's attempt to rein in, to limit, the power of these large platforms.

The platforms are not the only challenge, but they are certainly among them. So: limit the power of the platforms, and impose rules on their behavior that make them more controllable and what they do more observable.

Now, having said this, the fact is that they are right in a way. The behavior of these complex systems... don't forget that a social network, for instance, is not just the software, right? It's a complex system that involves the software, the algorithms that choose the posts, and then all the people in the social network – millions of people.

And sometimes this complex system behaves unexpectedly. You cannot expect to explain it because you cannot explain the behavior of a hundred million people that are being fed news by an algorithm. It's simply unpredictable. And this is like a chaotic system, right? It may be just one single post that leads to a war for instance, or at least leads to a serious problem.

So I don't think you can explain these systems. The best thing that you can do probably is to make sure that the algorithms themselves are not the crucial part of the problem. For instance, if an algorithm only reinforces opinions that someone already has, it is relatively easy to understand that it will contribute to radicalization, right?

People will believe that their beliefs are the only ones that are true.
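The reinforcement dynamic just described can be sketched with a toy simulation (the numbers and update rule are invented for illustration, not a model of any real platform): a feed that always serves content slightly more extreme than the user's current view drives that view toward the extreme, while a feed constrained to show an unbiased mix does not.

```python
def run_feed(serve, steps=200, belief=0.1):
    # belief in [0, 1]: 0 is neutral, 1 is fully radicalized
    for _ in range(steps):
        post = serve(belief)                # what the algorithm chooses to show
        belief = 0.9 * belief + 0.1 * post  # the user drifts toward what they see
    return belief

def reinforcing(belief):
    # Engagement-maximizing feed: slightly more extreme than the user's view
    return min(1.0, 1.5 * belief)

def balanced(belief):
    # Rule-constrained feed: a representative sample, centered regardless of user
    return 0.5

radical = run_feed(reinforcing)  # drifts toward 1.0
neutral = run_feed(balanced)     # settles near 0.5
```

The two runs differ only in the serving rule, which is the kind of lever the regulation discussed here targets: the user model is identical in both cases.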

So this is the sort of thing Europe is addressing, right? The only criterion for designing the outcomes should not be to maximize the time we spend in front of the platform. There should be other criteria, like providing good information, providing real, true information.

I think states can impose some rules on these algorithms. But hoping to have a fully controllable system – one where we understand what's going on and can control it – I think that's hopeless.

I mean, it never happened. But if we impose some rules – that the news isn't biased, that everybody gets exposed to a relatively unbiased sample of posts and views – I think those things may contribute.

The Digital Markets [Act] and the Digital Services Act, I think, are good attempts by the European Union to address these issues, posed mostly by platforms.

And then there are other matters, like autonomous weapons, right? Those are also very difficult issues.

Again, AI makes it possible to create autonomous weapons, which are serious not only if they are owned by states, but even more so if they are owned by terrorists or extremist groups. And this is the sort of thing that is hard to control, because artificial intelligence technology is not like nuclear technology, which is accessible only to a few large states.

AI is basically accessible to almost everyone and is becoming more accessible every day.

"States can impose some rules on these algorithms. But hoping to have a fully controllable system; that we understand what is going on and that we can control, I think that's hopeless."

L We are talking about this but it's such a complex and vast topic. And I appreciate that it is hard to explain and maybe sometimes impossible to explain some of the behaviors of the systems that are being mentioned here.

But again, we should still ask: what is the impact of an error? And if an error can be traced back to a possible "small genocide", then maybe you shouldn't deploy it until you know how to handle those situations. But that's, you know... just me going off.

Just to finalize, I wanted to ask you... For the engineers that are listening to us – some of them still finishing university or on their first jobs.

And so for the engineers of the next generation, what is some of the advice that you would give them in a sense of how to reason about all of these complex matters and how to approach them?

A Well, I don't think I have much advice to give. I work with students every day and I learn more from them than they learn from me – even though they are just students, it's true.

My advice is: if they want to work in related areas, get a solid basis in math, especially in statistics and related topics.

That they don't try to use these systems as black boxes that you download and run without knowing what's going on, even if it sort of works. I think it's good that you have a solid basis to understand this.

That you try to understand what is under the hood in these systems, because when they misbehave or when they don't work, it's useful to understand a little bit of what's going on – and not just use them as black boxes, as I said.

And then there is so much knowledge available these days on the internet that you can learn basically anything by yourself, right? If you are smart, you can learn anything by yourself and you can work wherever you want. You can work at Google, you can work at Facebook, or you can work at a university, because there is so much need for good people that understand these things. Of course you can also work with companies or for the government.

Don't be afraid to learn, try to learn. There are so many resources available.

And then decide what you want to do and go for it because the sky's the limit today for anyone that understands these technologies and wants to work on that.

It's a bit different from when I went to the United States, a bit more than 30 years ago. You had to go to a good university to... I mean, you did not have to go, but you had better chances of knowing the top technology of the day if you went to a good university. They had good libraries, good labs, good professors.

But these days it is much more democratic. You can discover the next big thing sitting at your desk, at home or in your office.

So try to aim high if you want to work in this area. It's a very flat world these days.

Anyone can get anywhere as long as they are competent, they want to learn and they work hard, which is something that is always the most troublesome part.

"Try to understand what is behind the hood in these systems because when they misbehave or when they don't work, it's useful to understand a little bit of what's going on. Don't just use them as black boxes."

L Thank you for these words, for your time and for providing us with your take on all of these matters. I want to thank you again.

I hope you enjoyed this conversation and that you find value in sharing this sort of perspective with others – our own Portuguese way of looking at things.

A Ok. Thank you very much for the invitation. It was a pleasure.

You can connect with Arlindo on his website or on LinkedIn.

Worth Checking

Responsible AI Forum 2022 – Bringing together top Portuguese startups, research centers and industry-leading companies to discuss the next generation of AI products.

22nd March 2022

If you've enjoyed this publication, consider subscribing to CFT's monthly newsletter to get this content delivered to your inbox.