29-03-2021 09:00

AI: With great power comes great responsibility

The use of artificial intelligence (AI) is becoming more varied and more widespread. Many of us interact with it every single day: we talk to digital assistants like Siri and Alexa, we let sites like Netflix, Amazon and ASOS suggest what to watch or buy next, and we let AI sort our emails and even suggest replies. But AI isn’t without its problems, and even companies with large teams of researchers are struggling with some of the challenges.

The possibilities

The business potential of AI is huge. It’s able to analyse more data more quickly and more accurately than its human counterparts. And it can find patterns that humans would almost certainly miss. That’s why many companies are using it to aid decision making: HR teams are using it to filter job applications, operations teams to predict maintenance needs and finance teams to identify and assess risk.

AI is also being used to give customers more personalised experiences. Many stores, banks and other businesses are using chatbots, a form of conversational AI. These can handle many customer requests, speeding up response times, enabling 24×7 service and freeing up human staff for other tasks.

“The potential of AI is incredibly exciting,” explains Mattias Fras, Head of AI Hub at Nordea. “For example, Nordea has implemented AI to help with the handling of disability claims. The AI handles two-thirds of cases, giving human staff the chance to give extra time and attention to those that need it.”

The dangers

AI is created by humans and, as Conway’s Law suggests, systems tend to mirror the people and organisations that build them, bad habits included. Tay, the AI-driven chatbot created by Microsoft a few years back, is an excellent example. It was designed to engage 18- to 24-year-olds on Twitter, but within hours it had started to post racist and inflammatory messages. Perhaps predictably, some users posted offensive messages to Tay to see how it would react. Tay responded in kind, and a downward spiral began.

“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.”

Melvin E. Conway

Microsoft was quick to take down Tay and to present the whole experience as a teachable moment. But some companies are embedding AI into important business processes that could affect people’s lives. There are many examples of AI-based systems being accused of bias, including racial discrimination in assessing job applicants.

AIs are also prone to picking up other human weaknesses. An experiment in India found that a human assessor was more likely to approve a loan application if the previous three had been rejected; an AI trained on those decisions would learn the same skewed pattern. This is the gambler’s fallacy, the same tendency that makes people think that after heads has come up many times in a row, the next coin toss is more likely to be tails. It isn’t.
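
The maths is easy to check for yourself. Here is a minimal simulation (a sketch for illustration, not from the article or the Indian study) that estimates the probability of tails on the flip immediately after a run of three heads; it comes out at roughly 0.5, just like any other flip.

```python
import random

def prob_tails_after_streak(streak_len: int = 3, trials: int = 1_000_000) -> float:
    """Estimate P(tails) for the flip that immediately follows `streak_len` heads."""
    random.seed(42)           # fixed seed so the estimate is reproducible
    run = 0                   # current run of consecutive heads
    after_streak = 0          # flips observed right after a qualifying streak
    tails_after_streak = 0    # how many of those flips came up tails
    for _ in range(trials):
        heads = random.random() < 0.5
        if run >= streak_len:             # this flip follows >= streak_len heads
            after_streak += 1
            if not heads:
                tails_after_streak += 1
        run = run + 1 if heads else 0
    return tails_after_streak / after_streak

print(f"P(tails | 3 heads in a row) ~ {prob_tails_after_streak():.3f}")  # about 0.500
```

The same logic applies to the loan example: the previous three decisions carry no information about the merits of the fourth application.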

“This is still an emerging technology in terms of applications, and there’s lots that we still need to learn. Nordea is embracing AI where we can be confident that the risks are manageable and that we have controls in place to ensure ethical, compliant and value-adding usage. One example where we are applying AI is in customer interaction. AI can help our customers get the help that they need more quickly, but we can also provide alternatives for those that don’t want to use the AI,” Fras says.

“Something like making decisions on loan applications with AI, on the other hand, is more complex and requires fit-for-purpose control mechanisms. Some new entrants are using AI-based decision-making for business loans, but as a systemic bank, with millions of business and consumer customers, we need to be careful and make sure we put trust first. We will not risk Nordea’s reputation and 200 years of putting the customer first. However, that doesn’t mean we shouldn’t start exploring and learning.”

Improving AI

AI has enormous potential to improve businesses and society. It could transform education, healthcare and just about every aspect of our lives. But there are pitfalls to avoid. We asked Mattias what advice he’d give to companies looking at AI.

1. Set clear principles

AI is still in its infancy in terms of applications, but it is being adopted by an increasing number of businesses. In many cases new entrants are making use of AI in new and exciting ways, showing incumbents what’s possible. But there’s a danger that businesses become so preoccupied with whether they can do something with AI that they never stop to ask whether they should. When introducing any technology, you always need to ask: how can we create more value for our customers with this new tool, and are we ready to manage it?

“I believe too many businesses are trying to implement AI simply because they can,” says Fras. “They’re not necessarily thinking about the experience of the customer. AI provides for some incredible use cases, but it’s not right for every situation. We believe that the opportunity for Nordea is not to be first with AI, but to be best at applying AI where it makes sense. And that’s what we’re working on. We’re using AI where it is feasible and can provide value for our customers, and we’re proceeding at a steady pace to allow for continuous learning.”

2. Be transparent

All companies should be clear about the data they are gathering and ensure they have customer consent. This isn’t just good practice; it’s a requirement under the EU’s General Data Protection Regulation (GDPR). The regulation also gives consumers the right to opt out and to have their information deleted, and it requires companies to tell data subjects what is being done with their data. To stay within the law and build trust with customers, companies should be open about their use of AI.
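
As a concrete illustration of what those obligations mean for engineers, here is a minimal, hypothetical sketch of a consent store that distinguishes opting out (stopping a specific kind of processing) from erasure (deleting the data itself). All class and purpose names are invented for this example; real GDPR compliance involves far more, including audit trails and lawful bases for processing.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    """Hypothetical record of what a customer has consented to."""
    customer_id: str
    consented_purposes: set[str] = field(default_factory=set)  # e.g. {"ai_profiling"}

class ConsentStore:
    """Toy in-memory store illustrating consent, opt-out and erasure."""
    def __init__(self) -> None:
        self._records: dict[str, CustomerRecord] = {}

    def record_consent(self, customer_id: str, purpose: str) -> None:
        rec = self._records.setdefault(customer_id, CustomerRecord(customer_id))
        rec.consented_purposes.add(purpose)

    def withdraw_consent(self, customer_id: str, purpose: str) -> None:
        # Opt-out: stop one specific kind of processing, keep the record.
        if customer_id in self._records:
            self._records[customer_id].consented_purposes.discard(purpose)

    def erase(self, customer_id: str) -> None:
        # Right to erasure: delete the personal data itself.
        self._records.pop(customer_id, None)

    def may_process(self, customer_id: str, purpose: str) -> bool:
        rec = self._records.get(customer_id)
        return rec is not None and purpose in rec.consented_purposes

store = ConsentStore()
store.record_consent("cust-001", "ai_profiling")
assert store.may_process("cust-001", "ai_profiling")
store.withdraw_consent("cust-001", "ai_profiling")   # customer opts out
assert not store.may_process("cust-001", "ai_profiling")
store.erase("cust-001")                              # customer invokes erasure
```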

Fras says, “We need to understand the underlying logic of these new systems and also to explain why decisions have been made in the context of their applications, and that’s not always easy with AI. Unlike traditional software you can’t just look at the code and follow the rules. That’s why it’s really important that companies are rigorous about checks and balances as well as thinking about problem solving in new ways. We have developed new protocols to handle this as part of an end-to-end lifecycle for AI in Nordea.”
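
Fras’s point that you “can’t just look at the code” is why model-agnostic checks exist. One widely used technique, shown below as an illustration rather than a description of Nordea’s own protocols, is permutation importance: shuffle one input at a time and measure how much the model’s accuracy drops, which reveals which features a trained model actually relies on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: the label depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(2000, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)   # accuracy with the data intact

# Permutation importance: shuffle one feature at a time and measure the
# accuracy drop. A large drop means the model relies on that feature.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"x{j}: accuracy drop {baseline - model.score(X_perm, y):.3f}")
```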

There are several industry initiatives in this area, including a collaboration between the Financial Conduct Authority and The Alan Turing Institute, aimed at encouraging companies to improve transparency and at helping them to do so.

3. Identify and reduce sources of bias

There’s an old IT saying: “garbage in, garbage out.” And that’s very true of AI. The quality of the data directly affects the quality of the decisions the AI makes. Even so, it’s very hard to eliminate bias completely. For example, perhaps you decide to delete age information from your data to prevent discrimination. But your AI may still end up being ageist, because factors like “time at current address” or income can act as proxies for age.
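
The proxy problem is easy to demonstrate with synthetic data. In the hypothetical sketch below, a model is trained without any age column, yet because “time at current address” is strongly correlated with age, its approval decisions still split sharply along age lines. All data and feature names here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Invented data: "time at current address" is a strong proxy for age.
age = rng.uniform(18, 80, n)
time_at_address = 0.4 * age + rng.normal(0, 3, n)
income_k = rng.normal(40, 10, n)          # income in thousands, unrelated to age

# Historical decisions that (unfairly) disadvantaged older applicants.
approved = (age < 50).astype(int)

# Train WITHOUT the age column...
X = np.column_stack([time_at_address, income_k])
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

# ...yet the model's decisions still split along age lines via the proxy.
print("approval rate, under 50:", round(preds[age < 50].mean(), 2))    # near 1.0
print("approval rate, 50 plus: ", round(preds[age >= 50].mean(), 2))   # near 0.0
print("corr(proxy, age):", round(float(np.corrcoef(time_at_address, age)[0, 1]), 2))
```

An audit step as simple as checking the correlation between each input feature and the protected attribute can flag this kind of leakage before a model ships.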

Fras says, “Prioritising and investing in diversity will be key to reducing bias. Having a diverse workforce will help companies to spot potential bias and correct for it. Some IT companies are also building tools to identify bias in AI models, such as a tool from Accenture.”

4. Educate your employees

There’s a lot of misinformation out there. A great example is the well-known story about Target, a leading US retailer, and a teenage girl. The story goes that Target’s customer profiling was so good that it identified that she was pregnant and sent her offers in the post. Her father saw these and was irate; he later had to apologise when the girl admitted that she was indeed expecting. It’s a great story, and one which many commentators, including IT experts, leapt on as an example of all that’s wrong with AI. But in reality the story probably isn’t true, and even if it is, it may not have involved AI at all.

Fras says, “It’s important to educate employees about AI. This will enable them to evaluate where AI could work for their company and contribute to adding value for its customers. It will also help them to identify misinformation and correct misperceptions. That’s why Nordea is supporting the ‘Elements of AI’ online course and encouraging employees to improve their knowledge. The introduction of AI into workforces means we have truly entered an age of continuous learning, at all levels.”