When AI discriminates, are you legally prepared?

Razia Begum, an employment lawyer at Taylor Vinters, explores the growing risk of AI discrimination and the legal exposure for those developing and using machine learning.

Artificial intelligence is having a growing impact on the way we work and live. However, outsourcing decision-making to machines risks unintended discriminatory outcomes. As technology firms increasingly exploit the possibilities of machine learning, they are becoming vulnerable to legal pitfalls, a vulnerability that will only grow.

Who is responsible?

Machine learning is already powering a host of new systems that help consumers and businesses make smarter, quicker decisions. But as we continue to explore the benefits, it is easy to forget the powerful autonomy this sophisticated technology has been granted to make automated decisions on our behalf.

Although artificial intelligence is a human creation, it is not always possible to control the consequences. Many forms of the technology learn as they go along, refining how they analyse data sets and adjusting their automated decisions accordingly, so they have the potential to affect humans both negatively and positively.

AI bias under the spotlight

Recent reports have underlined this issue. A Princeton University study published in the US journal Science revealed that machine learning systems and algorithms can become biased when they learn from human-generated data, adopting stereotypes similar to our own.
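To make that finding concrete, here is a minimal sketch of the kind of association test the Princeton researchers describe. The tiny hand-crafted vectors are hypothetical stand-ins for real pretrained word embeddings; the words and numbers are invented for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d embeddings; real systems learn vectors with hundreds
# of dimensions from large text corpora.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

# If the training text pairs an occupation with one gender more often,
# the learned vector drifts towards that gender's pronoun -- a stereotype
# absorbed from us, not one anybody programmed in.
for word in ("engineer", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she association = {bias:+.3f}")
```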

This follows several other examples, also in the US, including Microsoft’s AI-powered chatbot ‘Tay’, which was shut down because it couldn’t recognise when it was making offensive or racist statements.

Last year, an investigation published by New York-based ProPublica found that widely used software assessing the risk of criminals re-offending was twice as likely to mistakenly flag black defendants as higher risk.
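The finding rests on a simple audit: compare false positive rates (defendants flagged as high risk who did not go on to re-offend) across groups. Below is a minimal sketch of that comparison, using invented illustrative records rather than the investigation’s real data.

```python
from collections import defaultdict

# (group, flagged_high_risk, re_offended) -- invented illustrative records.
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", True,  True),
]

false_pos = defaultdict(int)  # flagged high risk but did not re-offend
did_not = defaultdict(int)    # everyone who did not re-offend

for group, flagged, re_offended in records:
    if not re_offended:
        did_not[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(did_not):
    rate = false_pos[group] / did_not[group]
    print(f"group {group}: false positive rate = {rate:.0%}")
# A disparity like 67% vs 33% is the 'twice as likely' pattern described above.
```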

Legal dangers

For tech firms that develop or use machine learning systems, this emerging discrimination risk must be considered from the outset. It can take many forms and affect both individuals and groups.

As highlighted by the Princeton University research, one major risk is if an automated decision results in unfavourable treatment of individuals or groups because of a ‘protected characteristic’ such as gender or race. Legally, this would be considered a form of direct discrimination in the UK.

However, discrimination can also be indirect. This could happen if the technology were applied equally to everyone but people with a particular protected characteristic were still put at a disadvantage. For example, AI software may make a decision without any deliberate reference to a person’s race that nonetheless has an adverse effect on people belonging to an ethnic minority.

Unless this approach could be justified as a means of achieving a legitimate aim, this would be discriminatory.
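One common statistical screen for this kind of indirect effect is to compare outcome rates between groups when the same rule is applied to everyone. The sketch below uses the US ‘four-fifths’ rule of thumb as its threshold; UK law sets no fixed number, and the group names and data are invented for illustration.

```python
# (group, approved) -- invented illustrative decisions from an automated system.
decisions = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

# Approval rate per group.
rates = {}
for group in {g for g, _ in decisions}:
    outcomes = [ok for g, ok in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# Flag any group whose rate falls below 80% of the best-treated group's rate.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    status = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio to best {ratio:.2f} ({status})")
```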

Similarly, if a machine learning system didn’t take into account an individual’s disability when applying a decision to everyone in a given data set, this could also be unlawful.

If you are developing or using the software and are aware that an individual is disabled, you have a legal obligation to consider whether you need to make any reasonable adjustments to ensure they are treated on a level playing field with non-disabled individuals.

What happens if you break the law?

If an individual was discriminated against and decided to take legal action, then both the business that supplied the AI software and the customer or end-user of that technology could be affected.

In the first instance, the end-user could be ordered to compensate the individual for any financial loss, which is uncapped, or injury to their feelings. However, the end-user could also attempt to recover their costs from the business that supplied the software.

Such discrimination claims would be costly, complex and time-consuming and, more importantly, would cause short- and long-term damage to the reputations of both businesses.

Protecting your business

This is an example of where current law has not caught up with the rapid pace of technological change. Eliminating the risk of discrimination from decisions made by business software and algorithms is almost impossible.

But tech businesses can, and should, take steps to protect themselves from potentially thorny legal issues:

• If you are a software firm developing machine learning systems and algorithms, work with, and not against, your clients to prevent discrimination.

• If you believe there is a risk, work together to discuss the business reasons for using the software and identify possible solutions, including modifying the project brief if required.

• Raise awareness of potential discrimination issues with the relevant members of your workforce, including its impact on individuals and groups.

• If providing a data set, for example, avoid stereotyping or wording that may single out certain groups (one practical check is sketched after this list).

• Put in place bespoke policies for your clients and internal guidance for your workforce to reduce exposure to legal and associated commercial risks.
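For the data-set point above, here is a minimal sketch of one practical pre-training check: look for innocuous-seeming features that act as proxies for a protected characteristic (a postcode standing in for ethnicity, say). Field names and numbers are invented for illustration.

```python
import numpy as np

# 1 = member of the protected group, 0 = not; invented illustrative data.
protected = np.array([0, 0, 0, 1, 1, 1, 0, 1])

features = {
    "years_experience": np.array([2, 5, 3, 4, 6, 1, 7, 3]),
    "postcode_area":    np.array([0, 0, 1, 1, 1, 1, 0, 1]),  # closely mirrors the group
}

# A feature strongly correlated with the protected attribute can reintroduce
# discrimination even if the attribute itself is never used.
for name, values in features.items():
    corr = abs(np.corrcoef(values, protected)[0, 1])
    note = "possible proxy, review before use" if corr > 0.5 else "ok"
    print(f"{name}: |correlation with protected attribute| = {corr:.2f} ({note})")
```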

The use of AI is bringing with it a raft of new opportunities that many expect will transform our lives over the next decade. As with any new technology, there is an element of the unknown, and for machine learning that includes legal risks.

Until we can find a way to eliminate discrimination from the outset, tech firms must ensure they’re aware of the issues and put procedures in place to prepare and protect themselves. After all, prevention, particularly in this case, is better than cure.


Editor's picks

petya

Petya ransomware attack: What you need to know
posted 11 hours ago

Here’s how you can deliver a killer presentation
posted 12 hours ago

Hometree gets £1.9m from investors including LocalGlobe
posted 15 hours ago

Smart home battery firm Moixa gets £2.5m to grow in the UK and abroad
posted 17 hours ago

UK tech reacts to Google’s record-breaking £2.14bn fine
posted on June 27, 2017

Beyond the to-do list: 6 tips to help tech entrepreneurs increase their productivity
posted on June 27, 2017