Ethical and Governance Challenges of AI

Dr Jennifer Cobbe, Coordinator of the Trust & Technology Initiative at the University of Cambridge, joined the Digital Leadership Forum at our first AI for Good conference in July, ‘Leading your organisation to responsible AI’. Cobbe delivered a thought-provoking presentation, encouraging us to question how we perceive AI technology and its regulation. Here’s what we learnt:

1. It’s AI, Not Magic

While there is a tendency to make exaggerated claims about what artificial intelligence can actually do, we’re not quite at Skynet capabilities yet. Most current AI uses Machine Learning: essentially, statistical models that are trained to spot patterns and correlations in datasets and then make predictions based on these. A Machine Learning model is only ever trained to operate within what its trainers think is an acceptable margin of error. “It’s only ever going to be an approximation of the best result,” Cobbe said, arguing that AI is best suited to prediction and classification tasks; anything more complex may be too much for it at the moment.
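To make Cobbe’s point concrete, here is a minimal sketch (ours, not hers) of the kind of Machine Learning she describes: a statistical model trained on a dataset and accepted only if its predictions fall within a margin of error its trainers deem tolerable. The synthetic data, the choice of scikit-learn, and the 10% error threshold are all illustrative assumptions.

```python
# Minimal sketch: a statistical model trained to spot patterns and
# make predictions, judged against an accepted margin of error.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real historical records.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model is only ever an approximation of the best result: we accept
# it if its error rate sits within a threshold chosen by its trainers.
error_rate = 1 - accuracy_score(y_test, model.predict(X_test))
ACCEPTABLE_ERROR = 0.10  # hypothetical threshold set by the trainers
print(f"Error rate: {error_rate:.2%} (acceptable: {error_rate <= ACCEPTABLE_ERROR})")
```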

2. New Technology Is Not the Wild West

We often think of technology as a largely unregulated new frontier, with the law lagging far behind its bold strides, but this assumption is incorrect. Cobbe explained that existing laws apply straightforwardly to AI, including data protection laws, non-discrimination laws, employment laws, and sector-specific laws.

3. Our AI Is Only As Ethical As We Are

“Technology isn’t neutral,” Cobbe reminded us. “If your business model isn’t ethical, if your practices aren’t ethical, if what you’re doing with AI isn’t ethical, then your AI cannot be ethical.” Fortunately, the process of introducing AI to your organisation gives you an opportunity to actively confront and address any existing issues.

4. Regulation Can Make Us More Creative

“We should also acknowledge that advances in the law lead to advances in technology,” Cobbe said, highlighting the example of the GDPR, which encouraged the development of new Privacy Enhancing Technologies. We should welcome new regulations because the need to work within them inspires creative solutions. “The need for AI systems to be legally compliant means that designers and engineers are often tasked with finding novel ways to do what the law needs,” Cobbe said.
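To illustrate what such a technology can look like, here is one example of a privacy-enhancing technique of the kind Cobbe alludes to; the specific method (differential privacy) and the code are our illustration, not part of her talk. An aggregate query is answered with calibrated random noise so that no individual record can be singled out:

```python
import numpy as np

def noisy_count(records, predicate, epsilon=0.5):
    """Count matching records, plus Laplace noise of scale 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # Adding or removing one person changes the true count by at most 1,
    # so Laplace noise of scale 1/epsilon gives epsilon-differential
    # privacy for this single query.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data: ages held by an organisation.
ages = [34, 29, 41, 52, 38, 27, 45]
print(noisy_count(ages, lambda age: age > 40))  # true answer is 3; output is noisy
```

The smaller epsilon is, the noisier (and more private) the published answer becomes; compliance pressure is exactly what pushes engineers towards designs like this.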

5. Beware of Bias

Bias manifests in many forms in artificial intelligence. Sometimes designers encode their own biases and assumptions simply by choosing which data to include (and which to exclude). Machine Learning is also dependent on historical datasets, which reflect society’s existing biases and discriminatory practices. “By using historical data we do run the risk of essentially encoding the past into the future,” Cobbe said, encouraging organisations to actively guard against this.

In particular, when AI is used for classification there is a risk that it will discriminate against protected groups, as in the example of Amazon’s AI recruiting tool. As we’ve already learned, non-discrimination laws apply straightforwardly to AI, so companies face serious legal consequences for any discriminatory decisions made by AI.
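As a concrete illustration of guarding against this, here is a minimal sketch of the kind of screening check an organisation might run before trusting a classifier’s decisions. The data, the group labels and the four-fifths threshold (a common rule of thumb borrowed from US employment guidance) are our assumptions, not from the talk.

```python
# Compare a model's selection rates across a protected attribute to
# spot decisions that may be "encoding the past into the future".
from collections import defaultdict

# Hypothetical (group, was_selected) pairs produced by a model.
model_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in model_decisions:
    totals[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rate by group:", rates)

# Crude screening heuristic: flag the model if one group's selection
# rate falls below 80% of the highest group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: possible disparate impact; investigate before deployment.")
```

A check like this doesn’t prove a model is fair, but it is a cheap first test before decisions with legal consequences are automated.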

6. Humans Might Actually Be Better

AI might not always be the most appropriate solution for your organisation. “If you’re using AI within your organisation then you really should be asking yourself whether you’re comfortable relying on a system which probably can’t tell you why it made a decision or why it reached a particular outcome,” Cobbe said. Technical solutions are often framed as the best answers to socioeconomic and other non-technical problems, but this isn’t always the case. If a task involves qualitative data then a human will probably be a more efficient and ethical evaluator.

“While the real world is a messy, complicated thing, AI will inevitably flatten nuances and gloss over complexities,” Cobbe warned. “It relies on data that attempts to quantify a world that is often qualitative in nature, and provides outputs that are sometimes overly simplistic, or even just downright misleading.”

7. Hire More Social Scientists

We tend to assume that only people who studied STEM subjects need to be involved in artificial intelligence development, but Cobbe warned that this is a mistake. “We really need social scientists,” she said, as they are much more aware of the existing power dynamics and biases in society and can help organisations to address these.

8. Good Regulation Should Stifle Bad Ideas

Not all new ideas are good ideas, Cobbe argued, and we should welcome the regulation of AI, since relying on ethics and self-regulation alone has proven insufficient. We now need regulation as a baseline to protect society and to prevent unethical projects from prospering at the expense of ethical businesses. “Without legal intervention there’s a real danger that irresponsible AI becomes the defining feature of AI in the public’s imagination,” she warned.

9. The Buck Stops With You

Ultimately, it is your obligation as an organisation to ensure that you are using AI responsibly, both legally and morally. Organisations should also stay informed of emerging ethical issues. Cobbe highlighted the research work being done by Doteveryone, a London-based think tank, as a useful resource for organisations.

So what if your technology falls short of the legal and ethical requirements? Well, Dr Cobbe has an easy solution: “If a technology can’t do what the law requires, perhaps even if a technology can’t do what ethics requires, then the answer is simple: don’t use that technology.”

AI for Good

AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events which are designed to enable members of the Digital Leadership Forum to innovate with new AI technologies in a responsible way.

Build a 5-Star Customer Experience with AI

Harnessing AI: where to start? Led by Katie King, Keynote Speaker on AI

We were looking at where to start with AI. I think the best takeaway from this was not necessarily just thinking “let’s go for AI because it’s cool, or it’s the in thing, or it’s the buzzword of the moment”, but starting within the business. What do you need? What’s the requirement? What’s the objective?

Whether it’s revenue-based, or brand-based, or cost-cutting, or getting data, whatever it may be. The internal business objective is your starting point, and then where can AI help facilitate that? Not the other way around, not thinking “where can we put AI into our business?”

Feedback presented by Nathan Brown, Senior Digital Project Executive, AXA PPP Healthcare

How can you leverage chatbots in the B2B sector? Led by Diogo Coutinho, Product Lead, Shell International Ltd

We looked at starting points for chatbots in the B2B sector: go away and have a look at the big players and who can provide the service, play around with the tools, and see what they offer.

Then going away and asking: why is it needed? What’s the business case? Would it make the process quicker? Does the user actually want it? Then speak to the audience and see how easy it currently is for them to extract content, and whether there’s a better way of doing it through a chatbot service.

We also discussed the importance of putting a minimum viable product together so you’ve got a scope for what’s required, as well as interviewing the users, conducting research and speaking with the internal help desk to find out what is really needed internally and what information could be needed on the chatbot.

Feedback presented by Toni Fitch, Digital Marketing Manager, Octopus Investments

How can you utilise AI to create a compelling and intelligent CX? Led by Darren Ford, VP Global Customer Services, Artificial Solutions

We were talking about how you can utilise AI to create compelling and intelligent customer experiences. We started off by asking what we think one of those would look like, and we got very quickly into the topics of personalisation and customisation: being able to understand the question however it’s posed and to answer it. Then we got into the importance of data for delivering that answer, and we had quite a long discussion around the availability of data and how people have tried to create solutions. One of the things that came out is that very few people have actually got beyond one or two POCs, and nothing has really made it to a production level.

So, a key thing that came out of this was the need to have the right data in the right place to be able to answer the questions when posed. One of the things that Darren said was that it’s far better to have a very narrow solution with great depth than a broad solution that does next to nothing.

Feedback presented by Chris Bushnell, CFO, Artificial Solutions

How do you decide the best CX areas to automate with AI?

We started with the use case: we said that no matter which area you focus on, you have to start with the use case, and then only as a secondary step think about the technology, although we debated that as well. The driver needs to be the business case. In terms of what the benefits could be, we were looking at how to service customers’ needs, how to understand the customer and how to predict customer behaviour.

There was a conversation about how to cut costs, and about customer lifecycle management. We also looked at some successful use cases, such as complaints management. We had a very good use case around advisory services in general and how they can improve customer service, and also around including external data, like LinkedIn, in your customer service, which you usually don’t do when you work manually.

Feedback presented by Gaby Glasener-Cipollone, Managing Director, Cirrus

How can you measure your AI-powered customer experience?

We discussed how AI is just an enabler. Everyone sees it as some bright shiny thing, but actually it’s just a tool. So I think it’s important to look at your business and consider the different functions as you normally would. Is AI helping with sales? Is it decreasing cost? Is it improving customer experience? Is it improving employee experience and the effectiveness of HR?

Feedback presented by Graham Combe, Business Development, DataArt

How can you create a culture of AI innovation?

Feedback presented by Jon Downing, Business Development Director, business mix and Jane Ruddock, Manager, PwC

Jon Downing, business mix

We really focused on the importance of creating an AI innovation culture and how it’s about putting the customer first and trying to create a series of quick wins. One of the things we discussed was to look at anything that isn’t working and try to get rid of it quickly, and anything that is working to explore and deploy more effectively. We spoke about the importance of building a very clear and compelling narrative around the work. This is particularly important when discussing AI because there’s a lot of confusion around the language and a lot of uncertainty around what it really means to people, and it’s about creating clarity. We also talked about looking externally at the competition, and not just the traditional competitor organisations but also challenger organisations and companies such as Amazon, Facebook and Apple, which may be trying to enter different markets.

Jane Ruddock, PwC

From PwC, one of the things that we identified around creating a culture of innovation is that you have to truly mean it, and that it has to go directly to the organisation’s core values, which enables people at all levels to be involved in AI and innovation changes. That could be anything from making sure that training is available to making sure there are champions involved in all of the different areas where you’re trying to apply AI, and embedding it into people’s roles. It’s also really important to make sure that people feel empowered, and that they have the right training and support to carry out their roles around artificial intelligence and innovation.
