
It's been said that all things come to those who wait. But waiting
does not mean inaction.
Patience and perseverance separate those who can from those who won't, though it probably all depends on what you're waiting for...
This edition brings two people from Feedzai - Director of Research Pedro Saleiro and Director of Product AI Anusha Parisutham - to discuss the dynamics between AI research and product, the challenges of productizing responsible AI and its possible future in our lives.
- Lawrence
Lawrence - Hi everyone and thanks for joining us for another conversation on Critical Future Tech.
Today, I have two people joining me from a company named
Feedzai, a
Portuguese company specialized in minimizing risk in all things
financial. We'll know more about that in a moment.
They are Pedro Saleiro and Anusha Parisutham.
Pedro is the Director of Research at Feedzai, specifically of the
group focused on FATE, standing for Fairness, Accountability,
Transparency, and Ethics in AI. A lot of big terms that I hope we
can better explore.
Anusha is the Director of Product in AI at Feedzai. With well over a
decade of experience as a product leader in global companies, she's
currently leading Feedzai's mission to prevent financial fraud, all
of that powered by AI.
The discussion that I hope we can have today is about the dynamic between researching something that is more and more important — that is, responsible and transparent AI — and how you then market that, and how the market actually responds to that value creation.
So, first, thanks for being here and taking the time.
Maybe I will start with Pedro. Can you tell us a little bit about
FATE? It is a research group within Feedzai, geared towards
responsibility, transparency and ethical AI. And I'm very curious to know what that consists of, what you guys do and what your day-to-day looks like. I'm super curious, can you expand a little bit on that?
Pedro - Sure! First of all, thank you for this opportunity to
share a bit of our work in this space.
I'd say that it's very interesting how this group started. Pedro
Bizarro,
whom you already interviewed
in the past, and I think in that interview you kind of set the stage for how it started...
But basically at Feedzai, because it's operating in a highly
regulated domain, which is financial services, since the early
stages of the company (around 2013), they started researching and
including some sort of explainability in their machine learning
models to detect fraud and other financial risk.
And explainability is a little bit like smartphones: you always need
to keep up and keep innovating because it's a very complex problem.
How can you make AI more transparent and explainable to humans? So
that humans can make better decisions, they can audit and understand
these systems.
So they were constantly innovating on that, but around 2016, with the ProPublica stories and Cathy O'Neil's book "Weapons of Math Destruction", there was all of a sudden a growing awareness of the potential negative consequences of AI, especially for the people directly affected by it.
And in a financial services domain you are often making decisions
that directly affect people's lives: from blocking credit cards or
denying access to a bank account, or decisions about credit and
lending.
This is really impacting people's lives. So as AI gets widely
adopted and there is this AI revolution, you also become more mature
about the impact of AI. And back in 2017, I was a postdoc at the
University of Chicago and we developed the first open source tool
for auditing bias and fairness in machine learning models [the
Aequitas
library].
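For readers who want a concrete sense of what such an audit looks like, here is a minimal, illustrative sketch in the spirit of the open-source Aequitas library Pedro mentions. The dataframe, column values and groups are invented for the example, and the exact API details may vary between Aequitas versions.

```python
# Illustrative only: a toy fairness audit in the spirit of Aequitas.
import pandas as pd
from aequitas.group import Group

# Aequitas expects a "score" (model decision) and "label_value" (ground truth)
# column, plus one column per attribute you want to audit.
df = pd.DataFrame({
    "score":       [1, 0, 1, 0, 1, 1, 0, 0],
    "label_value": [1, 0, 0, 0, 1, 0, 0, 1],
    "age_group":   ["<30", "<30", "<30", "<30", "30+", "30+", "30+", "30+"],
})

g = Group()
crosstab, _ = g.get_crosstabs(df)
# Per-group confusion-matrix metrics; large gaps in fpr/fnr between groups are
# the kind of disparity a fairness audit is meant to surface.
print(crosstab[["attribute_name", "attribute_value", "fpr", "fnr"]])
```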
And I was very surprised when I got a message on my LinkedIn from
Pedro Bizarro, someone that I knew because he was a co-founder of
Feedzai, but I had never ever talked with him. And he was asking
questions about Aequitas and that was really unexpected. I had
recently moved to the US and I was not expecting that a startup from
Portugal would be interested in what I was doing about AI bias and
fairness and audits.
So we had an initial chat and we kept in contact and later, Pedro
challenged me saying that he had the budget to start a research
group on fairness and explainability and other responsible AI
topics.
This was a very unique opportunity. You think that you can only work on these topics in academia or in the government and public sector. And all of a sudden having a private startup caring about embedding this type of functionality in their products, in their practice, was really unique. And it was also a chance to come back to Portugal and make an impact.
So that's how it started. But I think I've been talking for a long
time now, I think I can ask Anusha — who is leading product AI — how
this is evolving across the company.
Anusha - Thank you for setting the stage, Saleiro. And Lawrence, thank you for having me here.
So one thing that is very important, not just for embedding AI in products but for having responsible AI, is to have a shared vision across the organization. And at Feedzai we've been
fortunate that our co-founder and Chief Data Scientist was the one
who started this.
So you have that executive function, sponsorship and messaging,
top-down. And I think that makes a big difference to align teams
across the organization on the importance of responsible AI and the
importance of having that responsible AI thinking as part of your
design. So just to add to Saleiro, to the question you asked, I
think that having the top down executive sponsorship and alignment
has been very critical and crucial for all the work that has been
done in that area at Feedzai.
Lawrence - And just to
complement, it goes in line with what Pedro Bizarro had told me. I
was like "of course they have a culture that is more oriented
towards that" when one of the co-founders is himself engaged and
responsive and understands the need to approach these new
technologies with some caution and some respect.
When you hear about ethics, how do you bring ethics into companies?
It's many times from the ground up: "How do I sell it to my leaders?
How do I tell them that this is important?"
In your case — and that's the lucky part I guess — it was more of a
top-down sort of approach and it's just a great marriage with the
work that some of you already wanted to do, right?
So of course you guys are in an industry that has a high impact on
people's finances and possibly everyday life. It's interesting. You
really need to be able to explain, maybe to a regulator or someone,
why your product works in a certain way and that's why it's also
important to be able to have that capability.
Anusha - Maybe I can start
that discussion and Saleiro can add to it.
We talked about how selling AI internally, within Feedzai, was a
given because the sponsorship was top-down. But you still need to
sell responsible AI outside of the organization. As part of the
product to sell that value proposition. And I think selling that
outside, whether it's to regulators, financial institutions, risk
leaders within organizations... Basically, it starts with how you
sell anything, right? It's articulating "what is in it for me" or
the value to the person who's going to use it, the buyer, the
evaluator, or anyone who's involved in putting that product in place
in an organization. And what makes selling responsible AI harder is
demonstrating that value to other organizations. You have to start
with defining what we mean by that, because it can be interpreted
differently by different organizations.
You have fairness, that's one dimension of it. You have
accountability, you have transparency, you have the explainability
part. And in addition to that, something which we don't talk about
that often: it still has to be performant and it has to be cost
effective to put the solution in place.
So these are the things which the person buying the solution or
trying to use the solution cares about and you sell responsible AI
by talking to that and explaining how it clearly brings an impact to
what they're trying to do.
Pedro - There is not just one flavor of explainability. So one thing that we
are doing almost daily, working together with product — myself and
Anusha and others involved — is defining different user journeys and
where explainability can play a key role in the way these different
personas interact with a complex and very sophisticated risk
management platform that is powered by AI but also uses other components, not just machine learning models.
But how can different users grasp what is going on? What's the
behavior of the model? Why is the model making a decision for a
specific case, but also in a global way? How is it behaving? How can
we make the data science process even more performant and efficient
and build better models by providing explanations that allow the
data scientists to be better and more efficient at their job? But also the analyst, that is, our human in the loop who is making the final decision (about fraud), or who is often reaching out to the end customer of a financial institution.
How can these analysts get good insights and context on how to
approach the end customer? How to really make the final screening.
Is this actually a crime? If it's not, is it really a legit kind of
behavior or not? All of these nuances, we are working on making
explainability part of this thought process for different personas.
So when we often see online very good intentions in describing explainability, and people talk about specific methods like SHAP and LIME and others, it seems like there is just one explainability. But in fact, what we realized in the early stages is that it's much more specific and needs to be embedded in these different journeys, with people from UX, people from engineering and so on. Different requirements and different tasks. Different goals and different ways of evaluating explanations.
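As a rough illustration of the kind of per-case ("local") versus overall ("global") explanations Pedro is describing, here is a minimal sketch using SHAP on a synthetic, fraud-flavored dataset. The features, labels and model are made up for the example and are not Feedzai's actual pipeline.

```python
# Minimal, illustrative sketch of local vs. global explanations with SHAP.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "amount":      rng.lognormal(3, 1, 5000),   # hypothetical transaction amount
    "hour_of_day": rng.integers(0, 24, 5000),
    "tx_last_24h": rng.poisson(3, 5000),
})
# Synthetic "fraud" label just so the model has something to learn.
y = ((X["amount"] > 60) & (X["tx_last_24h"] > 4)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
case = X.iloc[[0]]                       # one transaction under review
local = explainer.shap_values(case)      # per-feature contributions for this one case
global_view = explainer.shap_values(X)   # aggregated over many cases: overall model behaviour
```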
Lawrence - So how does it work? From what I understand there's more under the responsible AI umbrella, right? There is accountability, transparency, fairness; that's the way you have structured your acronym, and I guess maybe to some extent your team. I'm curious about that.
But how does it start? You start with a problem. Who defines that?
Is it the researcher? Is it the product owner? How do you define:
"this is the challenge we're going to tackle. This is the
initiative and this has stemmed from..." Where, right? Where did they come from? Because Pedro Bizarro
told you so? Or because it's being discussed in research circles?
Can you tell us a bit how that happens? How do you define the
priorities on what you're going to work on?
Anusha - Absolutely. Maybe I
can start by outlining the process we follow and then how we take it
from there into actually productizing it.
We get inputs from multiple sources. Product and research work very
closely together. We interact on a daily basis, we review ideas on a
regular basis. So it's a collaborative effort and you're looking at
the market. You're looking even outside the industry to see where
people are making strides in responsible AI, that's one aspect of
it. Product is also interacting with the markets and so we get ideas
on where we can make this experience better and bring more
responsibility into the embedded AI capabilities.
So ideas flow in from different sources, that's one thing. And then, research is really looking far ahead. Research is actually future-proofing this area because they are looking two, three years down the road.
They are trying to tackle problems, which are not yet top of mind.
Because if you're tackling a problem which is top of mind, you're
probably too late. You want to think ahead. And that's what research
is doing. They are really looking ahead.
But what we do with this close collaboration is we work closely with
research to see the results of experimentation. And Saleiro can
maybe give a particular example of one initiative, which they are
working on. We still touch base on a regular basis, internally. We
bring in other stakeholders internal to the organization, people in services, people in presales, who also have insights into the
customer experience part. Even during the experimentation stage,
right?
Since we are so closely aligned, we know when it's ready to be
productized. And so where experimentation meets productization is
when you bring the human into the picture.
So you experimented, you've seen great results, you're building the
tools, but when you bring it into the product, you have to bring the
human into the picture and like Saleiro pointed out, you need to
think about the different personas who are going to be interacting
with these capabilities. Their different journeys and how you
influence that experience. It's like product management 101. You put
your customers and users, right, front and center. And so
responsible product management, which means putting responsible AI capabilities in place, embodies the best practices of product management as well.
But what makes responsible AI more challenging is: these different
personas can come with a spectrum of skill sets. You're talking with
really top-notch data scientists at Feedzai. You're talking about
highly skilled data scientists at customers. You're talking about
citizen data scientists. You're talking about business users who
have some data science knowledge or data literacy. You're talking
about analysts who are completely on the business side and you have
like the auditors, the regulators.
So when you put a responsible AI capability — and explainability is
a good example — it's not enough if your data scientist is able to
explain the outcomes of the model. Your fraud analysts should also
be able to explain. And not just that. It can impact your consumer
experience because if your transaction gets blocked and you call the
bank, the bank should be able to explain why, what happened.
You're not just impacting a data scientist journey and experience or
a fraud analyst journey and experience. You're in fact impacting
your end consumer experience. And it's about how you can translate this complexity into very simple, intuitive layman's terms. And that's a
long-winded answer, but I think you get the message here.
And Saleiro please add to it.
Pedro - Actually, I think it was very complete, Anusha. So I'd say that I'm totally aligned with
what Anusha said.
To complement it, I'd just add one thing, which is: we may think about this as something separate, but it's really something that is embedded in all these journeys. It's not that we [researchers] come
and say: "we are going to change the product and this will be very
different", no. People were already consciously developing products
with good intentions, with specific concerns in mind and even the
user experience and all these things about explainability because we
are in a highly regulated domain.
So it's more about: how can we add that extra differentiator factor?
How to embed these different dimensions or principles of responsible
AI in an already ever evolving product? Because we are constantly
improving the product as part of the journey of building great
products.
It's really about these different teams coming together with
different perspectives, different requirements. Someone brings in a
specific aspect that we may need to prioritize or study or
experiment as Anusha was saying. And we in research are asking, you know: what are the problems that we need to tackle far ahead? And what are the research opportunities that we can bring into the product?
Lawrence - It seems like a
very interesting feedback loop between the research and the product
realms. And of course your answers have generated more questions, at
least for me.
One of them is a sort of very practical question, which is about experimentation. You mentioned experimenting on the research that you developed, and I'm curious: how do you experiment in an environment or in a product where, let's say, a mistake could potentially be damaging to the customer or to the customer's customers, right?
I'm kind of curious if you can share how you approach experimenting,
right? Do you have a couple of customers that trust you and they're
willing to just try it out? How does that work?
Pedro - Maybe I will start from a research perspective and then Anusha can complement with specifics about clients and so on.
So first there's something that is more kind of a clarification.
Often there is this perception that because we are using AI, we
should have 100% correct decisions. And that's not the case. Neither
for human decision-making nor for AI powered decision-making.
That's the first thing we should realize. That we are not going to
make 100% correct decisions all the time. Neither the systems nor
just the humans. So what we want to build is really a great product
and an AI that is as accurate as possible in predicting financial
crimes.
And in the process of building these highly accurate systems that are cooperative, you have a component that is the AI making a prediction, but also a human that complements, reviews and interacts with this AI.
Our goal is: how can we leverage the expertise and strengths of the AI and the expertise and strengths of the human, and create a combined system that, working together as one, has better performance, better fairness and increased efficiency in terms of operations for different financial teams? Because when we are
talking about really large financial institutions — and we have some
of the largest banks in the world as our clients — we're often
talking about hundreds of analysts. So it's really big operations
teams.
So that being said, what we want to work on when we talk about fairness is how we distribute these errors in a way that we are not damaging particular population groups that are often already under strain because of their socioeconomic background or specific contexts.
We are talking about location. We're talking about age, we're
talking about gender, we're talking about ethnicity and so on. It's
really about how we can make sure that it's not just about having low error rates overall. No, you want to get a really good user experience and a good consumer experience.
You want to minimize errors across all groups. Not just for the
majority. Not just for the sake of just blind performance. We want
to be really high performance, so ideally almost near perfect
decisions. But because it's not possible to be perfect all the time,
we want to balance these errors across different groups so we don't
have specific minorities that get more affected, and so we don't end up creating feedback loops that will exacerbate inequalities that are already out there.
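To make the idea of balancing errors across groups concrete, here is a tiny, hypothetical illustration of the kind of per-group comparison Pedro describes: given model decisions and outcomes, compare false positive rates across a sensitive attribute. All column names and values are invented for the example.

```python
import pandas as pd

# Toy data: model decisions, ground truth, and a sensitive attribute.
df = pd.DataFrame({
    "blocked":   [1, 0, 1, 0, 1, 0, 0, 0],   # model decision (1 = transaction blocked)
    "was_fraud": [1, 0, 0, 0, 1, 0, 0, 1],   # ground truth
    "age_group": ["<30", "<30", "<30", "<30", "30+", "30+", "30+", "30+"],
})

# False positive rate per group: among legitimate transactions, how often was
# each group wrongly blocked? Large gaps mean one group bears more friction.
legit = df[df["was_fraud"] == 0]
fpr_by_group = legit.groupby("age_group")["blocked"].mean()
print(fpr_by_group)
```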
We really want to be sure that risk management can be done in a way
that does not promote inequality and promotes great consumer
experience, reduces friction and attrition and then end consumers
are happy and feel safe and banks are very efficient and highly
performant.
That's something that is more of a thesis or a mission that we have at the company in this perspective of responsible AI.
In terms of experimentation, we should talk about that. Often we go directly to clients. Often we team up with internal teams. But I would say that now is probably the time to pass it to Anusha to give her perspective on this kind of collaboration.
Anusha - Absolutely. But you set the stage for me because you said you have to get expectations aligned and straight right off the bat, and I think that's very important.
Now, to your point Lawrence, what we try to do is identify customers
who are willing to be our design partners in this experimentation
stage because we have a shared vision. They want to be responsible.
They want to address fairness, accountability, explainability.
So we have a shared vision. But then comes the expectations part
because, you know, there's uncertainty around some of the AI
capabilities so you need to do a POC. You need to do more work with
real data and for that, you need to collaborate with the customer.
So finding the customers who can be those design partners or early
adopters who can help you with that experimentation and then setting
expectations on what we are trying to achieve, what KPIs they are
going to hit, but also what risks are involved with that. And then,
along the way, we make sure it's a very transparent, observable
process. I think that's one thing which is key.
When you see hesitancy from customers it's because they cannot see the risk. And because you can't see it, you can't mitigate it or put controls around it. So we are responsible by design, not just in the products we build, but even in the way we operate.
As part of this experimentation, even with an internal team, we
share results of the outcomes of experiments. We share the good, the
bad. We share where we can improve. And I think having that open,
transparent conversation, not just internally, even with customers,
helps build that trust and so they are more open to partnering with
us to bring some of these new capabilities.
Lawrence - Thank you both for sharing. And from what I can tell, it seems that you guys have a pretty healthy product culture. That is super interesting.
What you mentioned at the start of your answer Anusha is something
that I actually wanted to ask you guys. You look for customers or
partners that already have a sort of inclination, like they want to
do things right, they want to be more transparent themselves. So
they want to use systems that are also transparent and responsible.
This conversation about fairness and ethical AI is quite recent if
you look at the history of AI as a whole, right? It's maybe 15, 20
years old, and then more and more over the last 10 years. And so has that made your job easier in selling these sorts of products, or do the industry and your customers still need to be educated as to why this is important and why they should care? Or is it easy for you guys?
What is your opinion on that?
Anusha - Maybe I can start with that, Saleiro. Education is absolutely needed, and I'll tell you where you need it more and how you can get there. So again, when you talk about the responsible AI umbrella, fairness is one part of it, which is relatively new. But accountability, transparency and explainability, especially for organizations in regulated industries, have always been there.
I'll give you an example from my past experience in financial
services. I've been responsible for front office, back office
applications... and these are not AI but you still have
accountability there. An accountant will not sign off on the balance
sheet if they are not able to explain the numbers which go into it.
I was in capital markets, so when we have a bond issued and we calculate the coupon payment, there is a fiscal agent who's validating that, and there are counterparties who are validating that.
So there's accountability there, there's explainability there and
there's transparency there. This is something a lot of organizations
are used to, some from the get go.
Now, when you embed AI capabilities, that brings complexity to it. Some of this explainability and transparency took a back seat because of that complex nature. And fairness was not part of it because they didn't have that problem to worry about.
But when you approach this education, when you talk to customers,
you kind of have to approach it from areas that... they've been
doing this forever. They are responsible for their balance sheets.
They're responsible for things coming out of their financial
applications.
So when you bring AI into the big picture, you kind of have to show
them the way to continue to do that and challenge vendors who put AI
models in place but cannot do explainability. So I think that's the
education part.
Then when you come to the fairness part, I think that's where you need to go that additional step of educating on how fairness can actually have an impact. Fairness could impact not just the bottom line, but
also the top line. Because if there is a group which is disparately
impacted then it's not just from a cost point of view but that's a
group you're not servicing yet. So there's top line impact, bottom
line impact, as well as a brand impact — a reputation impact. I think it starts from areas that have been doing a great job already, and then bringing them to the newer areas to understand how that impacts their overall accountability.
Pedro - Yeah, that's really on target, Anusha. I'll just add a little bit on the education part that
has to do with functionality maturity when talking about journeys
and also about efficiency in operations and so on.
So it's not enough to just say that: "you may have a risk, you are
not aware of the risk, we will just create awareness". That is a
good first step. But what we are trying to do is to close the loop
by not just creating awareness; we are already working on "how can we make sure that the journeys for different users are ready and this capability is embedded in a way that you can already mitigate that?".
It's not just about measuring or auditing. We are also embedding
functionality that allows these data scientists to not have an
additional kind of "cost" for embedding fairness in the processes or
in building models. Because often really large organizations might
be wary of: "how much would this cost to fix? What would be the
impact?" And that's where we start making a decisive role. Because
it's not just about creating processes. Processes are very
important. We need to bring awareness and measure, from the angle that Anusha mentioned, something that they are used to. And we need to
educate them on: "this is a new dimension or a new risk that you
also need to assess". But also how can we show you that you can even
differentiate from your competitors in a way that, by design, you
can deploy models that are equally performant and fair. And we are
showing that in the research that this is possible.
And there is something that is common sense in data science, or common in the kind of coffee shop conversations of someone in a data-related field.
There is this notion of there being a huge trade-off: that you have to sacrifice a lot of model performance to get fairness. And what we are showing by doing experimentation and building advanced AI tools is that it's not enough to just use off-the-shelf things out there. We are developing these specific skills, analyses and capabilities in-house. We developed something called Fairband, which is an example of how we can automatically find performant models and mitigate unfairness, or mitigate biases, at the same time, automatically embedded in the data science loop.
So this is a differentiating factor that is also appealing for these types of clients.
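Fairband itself is Feedzai's own fairness-aware hyperparameter optimization research; the sketch below is not their algorithm, just a bare-bones illustration of the underlying idea of selecting models on performance and a fairness metric together, rather than on performance alone. All names and numbers are hypothetical.

```python
# Illustrative only: fairness-aware model selection among candidate configurations.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    recall: float          # how much fraud the model catches
    fpr_disparity: float   # worst-group FPR relative to reference group (1.0 = parity)

candidates = [
    Candidate("model_a", recall=0.91, fpr_disparity=2.4),
    Candidate("model_b", recall=0.90, fpr_disparity=1.1),
    Candidate("model_c", recall=0.82, fpr_disparity=1.0),
]

MAX_DISPARITY = 1.25
# Keep only models within the disparity budget, then pick the best performer.
fair_enough = [c for c in candidates if c.fpr_disparity <= MAX_DISPARITY]
best = max(fair_enough, key=lambda c: c.recall)
print(best)   # model_b: nearly the same recall as model_a, far less disparity
```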
Lawrence - Now as you answered
and as you went further, I was thinking it is interesting, maybe
even funny in a way that it's the regulated industry that may
innovate and lead transparency and fairness in AI because they
actually need it fundamentally, right? Because they are regulated
and because they have higher scrutiny, let's say.
So what I want to ask is: could it be from FinTech AI that some advancements go forward? And I think you guys are a perfect example
of pushing the boundaries with Fairband, right? Could that sort of
research then be adopted by other industries or other companies
deploying AI, for instance?
Pedro - Yeah, absolutely. So if you look at the EU bill on AI risk assessment that was presented for discussion earlier this year, the EU approached the regulation of AI by defining high-risk applications.
If you traditionally think about highly regulated areas or domains,
they are domains that are high risk by nature.
Think about healthcare, think about insurance, about hiring, law
enforcement and financial services. That's how the EU Commission is
looking at it. We first need to start with high risk and then
naturally what gets adopted in high risk will then be adopted in
other, not so high risk domains.
And I think that's what's probably going to happen but I think
Anusha should also complement this.
Anusha - Well, absolutely. I agree, I think the regulated industry will pave the way. And that's because for some of these topics, there might not be urgency around it. People could question it, saying: "why do we have to do it now? Others are not doing it."
So definitely the regulated industry, and not just AI-specific regulations.
GDPR is driving that from the data privacy point of view. It will
impact, again, how we use AI — how we use data for AI. And similarly
in the US if you think about it, if you have to do business with a
financial organization, or even if you have to do business with the
government, there are some regulations outside of AI regulations
that you have to adhere to.
And if you have to do business with organizations in regulated
industries, you're kind of bringing that responsible AI into the
fold automatically. And so I think the regulated industries will
pave the way and will then be followed by other domains.
Lawrence - Thank you. I think it makes total sense and, personally, I'm just happy to see Feedzai being one of the leading companies in that sense. Having said that... yeah, go ahead.
Anusha - I just wanted to add
one thing — and Saleiro said this — I really want to emphasize that.
And to go back to one of your earlier questions of like, you know,
education and awareness.
I'm happy to say that Feedzai is actually walking the talk. We're
actually showing and not just saying "you have to do this and create
awareness". We're actually coming up with solutions, which will help
organizations solve this in a cost-effective way and in an efficient
way. I just wanted to emphasize that.
Lawrence - I totally agree. And that's why I've been so annoying about sitting down with you and digging a little bit further into the mindset. Thank you for sharing about the product-research relationship; I find it very interesting, and the whole culture thing as well.
So I know we've been talking for around 45 minutes now. I have one
final question before we part ways. And it's a bit of an open
question.
You guys are lucky to be in a company that values, well, these sorts
of values, right? And actually wants to deliver a product that will
have a positive impact on society and customers and so on.
What would be your advice for technologists, product managers and
whomever you want that are within companies — technological ones —
where this conversation is not happening for one reason or another?
What would be your advice to those people that want to bring that to
the table, but they don't know how to do it?
Anusha - I can go first. It's
a great question. I'd like to say at Feedzai we've been fortunate
but that's not the case for a lot of other organizations and even
for Big Tech, because we are hearing this in the news. So I would
say from a product management point of view, AI product management
is responsible product management. It has to be that way.
So for product managers who embed AI capabilities into their products, it is important to make sure that those products can be trusted, are responsible, fair, and explainable. And it is also their responsibility to educate bottom-up when it's not top-down. So, to educate their leadership team and their management on the need for that.
And again, how do you sell it? You sell it based on what's in it for
them. You know, it impacts the top line, bottom line. It impacts
reputation. What is the opposite of that? Being irresponsible. You don't want to be that. So I think, as an AI product manager, responsibility comes with AI product management.
And if it is not top-down in the organization, you should take it upon yourself to actually educate others in the organization and do that bottom-up evangelization of not just the product, but of the need to be responsible.
Lawrence - Fantastic! Pedro, do you want to add something more on the technical side of things?
Pedro - Without overselling
Feedzai, first I want to really just highlight again what you said
in the beginning, that we are fortunate, absolutely. And I mentioned
Pedro Bizarro in the beginning, but I had only been a month on the job when I had a meeting with the CEO, Nuno Sebastiao, and he said
something like: "this is a pillar and we want to stand out in this
space as the ones that, when you think about all these responsible
AI principles, we want customers to think of Feedzai".
But of course, if you work in an organization that doesn't look at these principles in that way, I don't think it's a situation in which you should feel that your hands are tied. There are things happening outside as well.
I believe that in a few years, we won't be talking about responsible
AI. AI will be responsible and all these principles will be embedded
in the way you build AI, in the way you build tools.
So I just want to end with an optimistic tone in the sense that I
think it's a question of maturity of the AI.
I think we have already passed the awareness stage and we are really going towards the solution and adoption space. That's where we are moving. So I think there are lots of tools out there; even Microsoft has a responsible AI toolkit. So if you are a data scientist, there are already tools that you can start using, so that you can start playing your own role in your job and start looking at these analyses.
Also, it's not enough to think about these as problems; it's more about making them an objective. So when you're doing your models, when you are building your products, don't think of these from a negative perspective of "we don't care about that", but more of "let's make these an objective".
So let's make sure that in our requirement analysis, when you are
building a model, let's not just evaluate performance, let's also
report fairness. If there is an issue, let me try to use the tools.
And I'm a believer that when we go to leadership and come with, you know, problems but also solutions — and I'm not saying that you need to have solutions for everything, but if you start measuring, you start making these a KPI. So I'm definitely a believer that if you start measuring things and show this to leadership, they will start optimizing for it as well. And I've seen this in different areas: organizations really want to strive for perfection. So they
really want to build great products. And I believe people have good
intentions in general. So if you start with it as an objective, I'm
very sure that people will optimize for this as well.
Lawrence - That's a great
ending.
So essentially the trend is there. We are moving towards a world
where it's not even a matter of whether or not it's responsible. It
is embedded in the industry and everyone's work. And in order to get
there, it's all about the small actions that we can take
individually in order to bring that into our daily work.
I totally agree. It's a great way to end this segment. So thanks a
lot for sharing, really. And I think it's also with those small
moments that other technologists, product managers, designers,
whomever can be inspired and realize that it's possible. I mean, not
everyone is lucky to have your culture, but it's possible.
And there are many examples of people pushing things forward in that
sense.
I want to thank you again Pedro and Anusha for being part of this.
And I think we'll probably talk [again] in the near future.
Pedro - Thank you, Lawrence.
Anusha - It was a pleasure.
You can connect with Pedro and Anusha on their LinkedIn here and here respectively.
"Lawmakers want humans to check runaway AI. Research shows they’re not up to the job." by Issie Lapowsky.
"This Program Can Give AI a Sense of Ethics—Sometimes" by Will Knight.
"Why Are We Failing at the Ethics of AI?" by Anja Kaspersen and Wendell Wallach
If you've enjoyed this publication, consider subscribing to CFT's newsletter to get this content delivered to your inbox.