An Interview With Charles Radclyffe, Founder and Chief Executive of EthicsGrade


TechRound recently caught up with Charles Radclyffe, founder and chief executive of ESG ratings agency EthicsGrade.

EthicsGrade specialises in technology governance, most notably AI ethics, scoring and grading companies on their governance of technology. The start-up now evaluates more than 280 companies from a range of different sectors, including energy, pharmaceuticals, transport and FMCG.

With the COVID-19 pandemic driving an increased reliance on tech, here Charles discusses the increasing importance of exploring our relationship with technology, how we get this balance right and how to optimise its benefits while managing its risks…


Tell Us About EthicsGrade

EthicsGrade is an ESG ratings agency. There are lots of people who do what we do, but two things mark us out as slightly different.

Firstly, we focus on a real niche in the industry: technology governance. This is unique, as most people focus on big-ticket items like environmental sustainability or social justice issues; our focus on technology governance, however, also touches on a lot of social and environmental issues. A lot of companies are making big claims about the investments they’re making in tech, and we feel they may not necessarily be thinking through all of the consequences of what they’re doing, particularly with regards to their wider ESG.

The second thing that makes what we do different is that we’re trying to help clients get a personalised ESG rating, built on a solid understanding of their values. The problem with the ESG industry is that there’s a fair bit of inconsistency between the different ratings providers in how they assess certain companies. This is really different from the credit ratings world, for example, where your credit rating stays broadly the same whether you go to one firm or another. The assessment of whether an organisation is credit-worthy has always been broadly consistent (allowing for differences in methodology), and so perhaps the intuitive response is that we need to standardise how we think about ESG, so that we have greater consistency between the different ratings providers.

That’s the really big difference between EthicsGrade and other ESG firms; we’re trying to help clients get personalised ESG ratings, working to understand what they care about, the values they care about, and then trying to help them see the investment universe through that lens.


What Inspired You to Start Up EthicsGrade?  

Firstly, I had this curiosity for tech governance and ethics that had been burning within me for a long time, going back over 10 years now. I used to run a data analytics company, and I realised that there was a big disconnect between the commercial excitement our clients felt towards investing in data, tech and analytics etc., and the very nerdy excitement that my engineers had around solving the problem. They got really excited about the details, and in some cases I felt the things we were doing were potentially crossing what I would call a creepy line.

Particularly as we were working with large financial brands, I was worried about the reputational risks that could stem from clients understanding what was going on behind the scenes: how we were gathering data and what we were doing with it. This was in 2011, the same year that Mark Zuckerberg became Time’s Person of the Year, and at a time when it was deeply unfashionable to speak out against Facebook or Google. However, I still banged that drum, saying we need to think through the consequences of this, that there may be social justice consequences and ethical concerns in what we’re doing, and slowly and steadily over the last 10 years that has become increasingly recognised.

I never really thought this would be something that pays the bills; I always saw it as a hobby. I’d been doing public speaking and writing on this subject for a long time, but things started coming together when I was running AI at Fidelity International, a big investment firm, and I found that pushing the question of governance and ethics within the firm was like pushing an open door. The firm was focused on its reputation and really trying to make sure it remained positive over the long term. Through this I was doing a lot of interesting work, collaborating with people from all over the business and making a name for myself internally.

I started to get questions from our investment teams about how we saw the governance at some of the companies we were investing in, and that’s when the penny dropped for me: these questions about governance and best practice, and how companies should communicate to the market about their digital governance, are all in themselves an ESG issue – just a very early-stage one. What makes it an ESG issue is the reporting: if you’re reporting to the market and to external stakeholders what you’re doing and how you’re doing it, that’s what unites it with climate change or net zero carbon targets. I’m not saying it’s on an equal footing, but in terms of category it’s very similar. And so, rather than reinvent the wheel and figure things out from first principles, we started thinking through how people have done this before in other areas of ESG, and then how we might launch the business.


How Did You Find Running a Business During the Pandemic?

We started in February 2020, which was an interesting month. I definitely asked myself, “is this the right time to be doing this?”. With the pandemic, people suffering and dying, and the economy in turmoil, all of these major issues made me wonder if I was doing the right thing at the right time.

During this time, a friend said to me that one of the things the pandemic was really going to drive was a much greater reliance on technology, and so all these questions I was asking were suddenly going to become much more important. I think we’ve been very lucky, because we saw things like contact tracing and the “pingdemic” raise all these questions about what our relationship with technology should be, how we get that balance right, and how we use these tools in a way that’s beneficial to society.

I also think, from an economics perspective, the investment community has channelled a huge amount of capital towards tech companies, or non-tech companies that have a really big story to tell about how they might automate more and use data analytics and AI more. One of the other factors here is that the bets people are placing on our post-pandemic economic recovery are really driven by which companies investors feel are going to digitalise successfully over the next decade. And a function of successful digitalisation isn’t just building great tech and doing cool things with it; it’s also being responsible with it and having the appropriate governance in place. The pandemic has really boosted the need for us in many ways.

The business need point has really helped accelerate us, but I think the other thing worth mentioning is that it’s forced me personally to reassess my own attitude towards leadership and management.

Pre-Covid I was quite old-fashioned, and face-to-face time was really important to me. We had an office in the centre of the city, and if people weren’t in the office and were working from home I was deeply suspicious. When I started EthicsGrade the default would have been more of the same: another city-centre location with everyone working together. However, I think one of the most wonderful opportunities we’ve had with EthicsGrade is to have built a global team (half of whom I haven’t even met!), doing everything via Zoom. Technology has helped us be a really productive organisation, so I’ve become a born-again believer in flexible working environments.


What Are Some of the Main AI/Tech Ethical Issues Facing Businesses Today? 

I think so much of this is about translating what’s relevant to a Facebook or an Amazon, because they’re the organisations that get the headlines when something goes wrong, to the vast majority of businesses, which are small and have a very different risk profile.

I think the fear most small and medium business owners must feel is that there probably is a great opportunity AI or automation can offer them, but they’re put off, feeling they can’t get a grasp of the risks and don’t know where to start. On top of this, when these businesses see headlines like “Amazon screws up here” or “Google screws up there”, it probably makes them feel the wise course is to do nothing.

I think the reality is that getting a grip on governance is actually a lot easier than you’d think. In the way we’ve organised our scorecards on companies, we categorise things into broad areas: governance (do you have the right governance structures in place, and how do you manage ethical risk?), the technical questions, data privacy, sustainability and so on. We try to break things down into those categories to make it easier for people who are thinking about these things to understand the nature of each of those risks.

I think one really critical aspect of AI specifically is that it touches across all of those things, and you have to think quite holistically. If you don’t have, for example, the cybersecurity policies in place, then you can build the best machine learning system on the best data, but it’s going to expose your organisation to significant risk if it goes wrong. I really think the biggest thing most organisations lack, large and small, is the right structure to manage the governance around this.

Overall, it depends on the industry an organisation is in as well as its size, but the one thing that’s common to all organisations is that you need to have a structure and you need to have clear responsibility. Like any other ESG issue, there needs to be broader accountability for these questions, as well as external reporting on what you’re doing about them, and that’s why we see it as an ESG issue.


How Do You Think the European Commission’s Proposal for a Regulation on Artificial Intelligence Could Impact the Sector?

GDPR is what the European Union is famous for in terms of tech regulation. Tim Berners-Lee invented the World Wide Web in 1989, the internet cookie came along in the mid-1990s, and the cookie ultimately became the cause of many of the problems that led to GDPR in 2016. You can see there’s a massive gap between the start of the risk and the action taken to address it. My greatest fear around AI regulation was that it would follow the GDPR pattern: big controversies and lots of things going wrong before anyone finally got around to regulating. However, I think we’re working on a much more compressed timeline, and it’s good to see the European Commission starting to think about these things and actually putting us on a path to regulation.

I also think it’s great that the nature of this regulation means organisations are going to have to report and disclose the internal governance they have in place, as well as some of the performance characteristics of their technology. I think we’re probably among the first people to really join the dots between this and ESG, because that’s the nature of ESG regulation: it’s about reporting and disclosing the operational governance you put in place and how things perform.

I think what AI regulation is really going to drive is companies starting to disclose how they’re making sure their AI systems are fit for purpose. How they do quality management and how they do risk management are two specific things these AI regulations are calling for, and then hopefully we’ll see some performance metrics start to be agreed upon as best practice. From there, organisations can be compared against each other on those metrics.

The other parallel with GDPR, I think, is that the AI Act is going to be widely adopted and widely copied around the world – at least that’s our hope! We’ll start to see the large multinational tech companies look at what’s necessary to be compliant within the Union and apply the same governance in places like the UK or the US. I think a lot of good will come from this.


What Does the Future Hold for EthicsGrade?

The focus for us has really been around tech governance and digital governance, and that’s something that we don’t plan to change in the short term. We intend to stay focused on what we’re doing.

In 2020, we did something quite unusual and published all of our headline data in the public domain. Most companies in our position don’t do that, but we felt it was really important so that consumers, journalists and academics could start to see how, for example, Toyota compares to Tesla, or Twitter to TikTok, and that’s why we provide that high-level data.

The next thing we want to do this year is come back to this alignment of values. Many of the companies we’ve researched have published their values, and what we’re going to do is publish these companies’ ratings alongside those values. This will give users the ability to assess companies not by our criteria but by the companies’ own criteria, so we can really show how organisations live up to their own values. We think that’s going to be game-changing, and I’m sure it’ll create a few stories along the way!