Ethical challenges of AI with Arash Ghazanfari

We caught up with Arash Ghazanfari, Field CTO at Dell Technologies, at our AI for Good session, The ethics of artificial intelligence, to discuss current innovations and challenges in AI technology.

AI and Cyber Security

“One of the biggest challenges that we are seeing around artificial intelligence is emerging in the space of cyber security,” Ghazanfari said, highlighting the development of deep fake technology “that’s becoming a major concern for us.”

“AI in the wrong hands can have a severe impact on a digitally transformed society,” Ghazanfari said. AI opens up some exciting new possibilities, particularly in manufacturing, where Ghazanfari noted that both the time needed to bring new products to market and the production costs are decreasing.

Automation and Employee Retention

However, Ghazanfari cautioned that organisations should be careful when introducing automation, and be particularly mindful of how any transition to automation is presented to employees.

“What tends to happen is that we end up losing the best people first, and people don’t really react to it very well,” he explained. “If the intention is to free us up from those mundane tasks and move us to more valuable activities, I think it can be really beneficial to the employees, as well as introducing productivity gains for the business.”

Technology for Education

Ghazanfari is particularly passionate about using technology to improve education and make it more accessible.

“Different people consume and learn content in different ways. With AI, we are seeing an emergence of platforms that are delivering educational content in new innovative ways.”

Making Life Easier with AI

Ghazanfari is also hopeful that technology can be used to improve our lives in other areas too. “Technologies are enhancing our lives, making life easier, and democratising access to resources and access to skills. I think we are on the right path, but we shouldn’t lose touch with our humanity.”

AI for Good

AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events which are designed to enable members of the Digital Leadership Forum to innovate with new AI technologies in a responsible way.

5 Ways to Transform Your Digital CX

Consumers expect increasingly high standards, and as technology continues to improve there are now more ways than ever to deliver an excellent customer experience. Read on for 5 ways to transform your digital customer experience.

1. Invest in a virtual assistant

Chatbots have come a long way since ELIZA, and while they no longer need to pass the Turing Test in order to impress us, they can be extremely useful for providing immediate assistance to busy consumers. Whether your business is B2B or B2C, your customers will value a knowledgeable virtual assistant that can guide them through their purchases and queries.
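
As a rough, illustrative sketch only, the snippet below shows the simplest possible version of the idea: a keyword-based assistant that maps common customer questions to canned answers. The intents, keywords, and replies are invented for this example; a real assistant would sit on a trained language model or an NLU platform rather than string matching.

```python
# Minimal illustrative sketch of a keyword-based virtual assistant.
# The intents, keywords, and replies below are invented for this example;
# a real deployment would use an NLU service or a trained intent classifier.

RESPONSES = {
    "order_status": "You can track your order from the 'My Orders' page.",
    "returns": "Returns are free within 30 days - shall I email you a label?",
    "opening_hours": "Our support team is available 8am-8pm, Monday to Saturday.",
}

KEYWORDS = {
    "order_status": ["where is my order", "track", "delivery"],
    "returns": ["return", "refund", "send back"],
    "opening_hours": ["open", "hours", "contact"],
}

def reply(message: str) -> str:
    """Return a canned answer for the first intent whose keywords match."""
    text = message.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return RESPONSES[intent]
    return "I'm not sure - let me connect you with a human agent."

if __name__ == "__main__":
    print(reply("Where is my order? It hasn't arrived yet."))
    print(reply("How do I send back a jacket that doesn't fit?"))
```

Even this toy version illustrates the key design choice: decide up front which queries the assistant should answer and when it should hand over to a human.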

2. Put a CX specialist on your digital team

Not all digital innovation needs to involve technology. We spoke with Vinay Parmar, UK Customer and Digital Experience Director at National Express, who told us how putting CX specialists from their contact centres onto their digital teams helped them to keep customer perspectives and experiences central when designing new digital products. Parmar explained that having someone from the contact centre saying ‘I take calls all day and this is what customers say…’ or ‘That’s how customers really use it and what we should be thinking about is…’ gave the team invaluable insight.

3. Personalise your customer experience

Broad segmentation is no longer sufficient: just 8% of respondents to a recent survey said that they would be encouraged to engage with a retail brand if it addressed them by their first name. Customers now expect hyper-personalised experiences and are much more likely to buy from brands that provide individualised offers suited to their lifestyles. By combining increasingly detailed datasets with machine learning you can create a scalable process that detects purchase intent and promotes a frictionless customer journey.
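
As a loose sketch of what “detecting intent” can look like in practice, the toy example below scores a visitor’s likelihood to purchase from a few behavioural signals. The feature names, training data, and 0.5 threshold are all invented for illustration; a production system would be trained on far larger, richer datasets and validated carefully.

```python
# Toy sketch of intent scoring with machine learning (not a production pipeline).
# Feature names and training data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Behavioural features per visitor: [pages_viewed, minutes_on_site, items_in_basket]
X_train = np.array([
    [2, 1.5, 0], [12, 9.0, 2], [5, 3.0, 1], [1, 0.5, 0],
    [8, 6.0, 3], [3, 2.0, 0], [15, 12.0, 4], [4, 2.5, 1],
])
# 1 = the visitor went on to buy, 0 = they did not
y_train = np.array([0, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new visitor and decide whether to surface a personalised offer.
new_visitor = np.array([[10, 7.0, 2]])
purchase_probability = model.predict_proba(new_visitor)[0, 1]
if purchase_probability > 0.5:
    print(f"High intent ({purchase_probability:.0%}) - show a tailored offer")
else:
    print(f"Low intent ({purchase_probability:.0%}) - keep the journey light")
```

The important design choice is what the score triggers: a tailored offer, a lighter-touch journey, or a handover to a human advisor.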

4. Improve your employee experience

According to PwC’s recent Consumer Insights Survey, employee experience has been shown to correlate directly with customer experience, particularly in customer service roles. Investing in an employee experience platform, which combines access to HR, Learning & Development opportunities, and other employee resources, can improve your employees’ experience and help them to deliver excellent customer service.

5. Be transparent about data

93% of online shoppers say that having companies respect their anonymity online is as high a priority for them as it was last year, or higher. Consumers want companies to be open and transparent in their handling of data – not just GDPR compliant, but also clear about how any data is stored and used.

Ethical and Governance Challenges of AI

Dr Jennifer Cobbe, Coordinator of the Trust & Technology Initiative at the University of Cambridge, joined the Digital Leadership Forum at our first AI for Good conference in July, Leading your organisation to responsible AI. Cobbe delivered a thought-provoking presentation, encouraging us to question how we perceive AI technology and its regulation. Here’s what we learnt:

1. It’s AI, Not Magic

While there is a tendency to make exaggerated claims about what artificial intelligence can actually do, we’re not quite at Skynet capabilities yet. Most current AI uses Machine Learning: essentially, statistical models that are trained to spot patterns and correlations in datasets and then make predictions based on these. Machine Learning is only trained to operate within what its trainers think is an acceptable margin of error. “It’s only ever going to be an approximation of the best result,” Cobbe said, arguing that AI is best suited to prediction and classification tasks, but anything more complex may be too much for it at the moment.
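
To make that concrete, here is a minimal sketch of the kind of system Cobbe is describing: a statistical model trained to spot patterns, then judged against an error tolerance that its trainers, not the model, decide is acceptable. The dataset and the 5% tolerance are illustrative choices, not anything from her talk.

```python
# Illustrative sketch: a statistical model trained to spot patterns, then
# evaluated against an "acceptable" margin of error chosen by its trainers.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
error_rate = 1 - model.score(X_test, y_test)

ACCEPTABLE_ERROR = 0.05  # a tolerance set by the people training the system
print(f"Held-out error rate: {error_rate:.1%}")
print("Within tolerance" if error_rate <= ACCEPTABLE_ERROR else "Needs more work")
```

Whatever the tolerance, the output is still a prediction within a margin of error: an approximation, not an answer.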

2. New Technology Is Not the Wild West

We often think of technology as a largely unregulated new frontier, with the law lagging far behind its bold strides, but this assumption is incorrect. Cobbe explained that existing laws apply straightforwardly to AI, including data protection laws, non-discrimination laws, employment laws, and other sector-specific laws.

3. Our AI Is Only As Ethical As We Are

“Technology isn’t neutral,” Cobbe reminded us. “If your business model isn’t ethical, if your practices aren’t ethical, if what you’re doing with AI isn’t ethical, then your AI cannot be ethical.” Fortunately, the process of introducing AI to your organisation gives you an opportunity to actively confront and address any existing issues.


4. Regulation Can Make Us More Creative

“We should also acknowledge that advances in the law lead to advances in technology,” Cobbe said, highlighting the example of the GDPR, which encouraged the development of new Privacy Enhancing Technologies. We should welcome new regulations because the need to work within them inspires creative solutions. “The need for AI systems to be legally compliant means that designers and engineers are often tasked with finding novel ways to do what the law needs,” Cobbe said.

5. Beware of Bias

Bias manifests in many forms in artificial intelligence. Sometimes designers encode their own biases and assumptions simply by choosing which data to include (and to exclude). Machine Learning is also dependent on historical datasets, which reflect society’s existing biases and discriminatory practices. “By using historical data we do run the risk of essentially encoding the past into the future,” Cobbe said, encouraging organisations to actively guard against this.

In particular, when AI is used for classification there is a risk that it will discriminate against protected groups, as in the example of Amazon’s AI recruiting tool. As we’ve already learned, non-discrimination laws apply straightforwardly to AI, so companies can face serious legal consequences for any discriminatory decisions their AI makes.
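
A simple first step towards guarding against this, sketched below with invented data, is to compare how often a model returns a favourable outcome for each group in a protected attribute, a crude “disparate impact” check. Real audits go much further, looking at error rates per group, proxy variables, and the provenance of the training data.

```python
# Minimal sketch of a bias check: compare how often a model (or any decision
# process) returns a positive outcome for each group in a protected attribute.
# The data below is invented purely to illustrate the calculation.
from collections import defaultdict

decisions = [  # (protected group, model decision: 1 = shortlisted, 0 = rejected)
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
print("Positive-decision rate per group:", rates)

# The "four-fifths rule" is one common heuristic: flag the model for review if
# any group's rate falls below 80% of the highest group's rate.
highest = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * highest]
print("Groups flagged for review:", flagged or "none")
```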

6. Humans Might Actually Be Better

AI might not always be the most appropriate solution for your organisation. “If you’re using AI within your organisation then you really should be asking yourself whether you’re comfortable relying on a system which probably can’t tell you why it made a decision or why it reached a particular outcome.” Technical solutions are often framed as the best answers to socioeconomic and other non-technical problems, but this isn’t always the case. If a task involves qualitative data then a human will probably be a more efficient and ethical evaluator.

“While the real world is a messy, complicated thing, AI will inevitably flatten nuances and gloss over complexities,” Cobbe warned. “It relies on data that attempts to quantify a world that is often qualitative in nature and provides outputs that are overly simplistic sometimes, or even just downright misleading.”

7. Hire More Social Scientists

We tend to assume that only people who studied STEM subjects need to be involved in artificial intelligence development, but Cobbe warned that this is a mistake. “We really need social scientists,” she said, as they are much more aware of the existing power dynamics and biases in society and can help organisations to address these.

8. Good Regulation Should Stifle Bad Ideas

Not all new ideas are good ideas, Cobbe argued, and we should welcome the regulation of AI, since relying on ethics and self-regulation alone has proven insufficient. We now need regulation as a baseline to protect society and to prevent unethical projects from prospering at the cost of ethical businesses. “Without legal intervention there’s a real danger that irresponsible AI becomes the defining feature of AI in the public’s imagination.”

9. The Buck Stops At You

Ultimately, it is your obligation as an organisation to ensure that you are using AI responsibly, both legally and morally. Organisations should also stay informed of emerging ethical issues. Cobbe highlighted the research work being done by Doteveryone, a London-based think tank, as a useful resource for organisations.

So what if your technology falls short of the legal and ethical requirements? Well, Dr Cobbe has an easy solution: “If a technology can’t do what the law requires, perhaps even if a technology can’t do what ethics requires, then the answer is simple: don’t use that technology.”
