Defining ‘ethical AI’ starts with defining ‘ethics’.
Without getting into a drawn-out philosophical debate, it’s worth mentioning that what’s being discussed could easily take different forms.
To be clear: the term ‘ethical AI’ refers to a very specific type of AI technology – one with well-defined ethical guidelines and/or codes of conduct, built on a set of fundamental values and developed, in essence, in support of a greater good. It isn’t tech for tech’s sake. It’s tech for a purpose, with the core aim of ‘being better’.
If that all sounds a bit reductive, it might be because it is. Ethical AI can mean – and does mean – many different things to different organisations. It depends not only on developers understanding the breadth of what it means to be ethical and have ethics, but on their ability to translate that into a piece of machinery; on the machinery’s ability to assess and analyse data subsets to produce a conclusion; and on the entire lifecycle and evolution of the technology from there.
This is where we slip into the trap of contextual and subjective ‘right’ in AI, and indeed in data regulation. We’ve all grown more and more aware of the problem with algorithms and their intrinsic link to human bias, so how can we label any technology ethical if we aren’t sure whether its internal clock (its algorithms) is ticking right (morally)?
Leading by example
The argument that ethical AI can exist is really an argument for an alternative perspective. We are not so much determining AI’s ability to be ethical as we are determining how organisations can use it ethically. We are assuming that innovators, developers, teams and businesses will be in a position to root the technology in the public interest – whether that’s privacy and security measures, environmental efforts, animal rights; the list goes on.
This is the chicken-and-egg scenario that too often runs alongside any innovation. Take facial recognition technology (FRT) as an example. FRT is broadly used for safety and security, from airports to supermarkets, across law enforcement (including body cameras), in smartphone UX and in enhanced personal security measures; even children are regularly subjected to it.
But there is, as always, another side to it. Law enforcement’s use of FRT – identifying and reprimanding suspects, for example – has faced particular scrutiny in recent years. Many US cities are effectively banning it (though recent news suggests this is an ongoing debate, and will be for some time), while here in the UK its use is increasingly widespread. This is despite campaigners’ efforts, and despite many findings suggesting that FRT is not only nowhere near as effective as it could be for safety and security, but deeply biased and in need of strict regulation.
We aren’t here to get into the weeds of the ‘right’ or ‘wrong’ use of AI, but FRT is a demonstrative, real-life example of how the technology can embody both sides of the ethical coin – and disproportionately impact more vulnerable people.
The pendulum effect
If positive innovations can lead to negative outputs, logic says that negative outputs can lead to positive change. Or that’s the theory.
Regulation in technology often comes after the fact, rather than alongside or ahead of it. Over recent years, GDPR has shown that we don’t always see the need for enhanced data protection until significant privacy and security failures make it obvious. You only need to look at Microsoft’s Tay chatbot or, dare I say it, Facebook’s now infamous entanglement with Cambridge Analytica.
It might seem like an unnecessary risk, even ethically wrong, but these types of catastrophes are sometimes the greatest catalyst we have for change and regulation. When the European Union began its discussions on AI regulation – the first major body to examine implementing legislation for the tech – Microsoft acted as an advisor, bringing its experience of Tay to the discussions.
We can also learn from the likes of Uber Eats, which was accused of unfair dismissal after erroneous FRT findings – this particular case has helped push the Trades Union Congress into monitoring the use of AI technology in the workplace, particularly in light of potentially unethical practices.
Similarly, Clearview AI has been accused of numerous unethical and unlawful practices across EU countries, as well as in the UK, Canada and the US, over its ‘face scraping’ methodology. And while no formal legislation or third-party involvement has yet resulted, that’s not to say it won’t.
Turning AI around
AI disasters aren’t only making way for greater regulation – they’re teaching other organisations the dos and don’ts in real time, in real ways, demonstrating how and when the use of AI is applicable and ethical, and how and when it isn’t.
Comparatively, they’re also steering many businesses towards ethical AI innovation in the first instance – rather than after a scandal. Pimloc’s product, Secure Redact, is a prime example. The software uses AI to anonymise personal data, not only helping businesses maintain GDPR compliance – and their own corporate due diligence – but keeping the visual data of any ordinary person walking down a street secure.
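Secure Redact’s internals are proprietary, but the general idea behind visual redaction tools is straightforward: detect sensitive regions (faces, number plates), then irreversibly obscure them. As a purely illustrative sketch – assuming the detection step has already produced a bounding box – the obscuring step can be as simple as pixelation:

```python
# Illustrative sketch only; not Pimloc's actual method. Visual redaction
# tools typically (1) detect sensitive regions, then (2) irreversibly
# obscure them. Here, step 2 is pixelation: every small block of pixels
# inside the region is replaced with its average value.

def pixelate_region(image, top, left, height, width, block=4):
    """Pixelate a rectangular region of a 2D grayscale image in place."""
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            # Clamp each block so it never spills outside the region
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    image[y][x] = avg
    return image

# A tiny 8x8 "image" with a hypothetical 4x4 detected face to redact
img = [[(x + y * 8) % 256 for x in range(8)] for y in range(8)]
pixelate_region(img, top=2, left=2, height=4, width=4, block=2)
```

Real products pair this with machine-learning detection and run it frame by frame across video; the point is that the redaction destroys the identifying detail rather than merely hiding it behind a reversible layer.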
Yes, this solution has been designed from the ground up to support adherence to data legislation and everything that goes along with it. But it also supports wider education and awareness of people’s rights when it comes to their data – especially visual data, which all too frequently falls through the cracks.
This is particularly needed when we consider the complexities of data laws and data privacy – and, more so, how few businesses understand the full extent of those laws. You could argue that this solution, looked at in isolation, is less about making sure all AI is used ethically and more about developing the type of technology that can combat the dangers of AI used unethically.
True, this might be effective, but it is – arguably – only a short-term solution. It treats the symptoms of a wider issue rather than curing it. So the long-term solution? Understand the driving factors behind ‘ethical AI’ and let those determine more stringent data laws – ones that aren’t open to interpretation or debate.
Realistically, we’re sitting right in the middle of the perfect age to create such laws.