Applying AI for Social Good – Session Report

In March 2020, members of the Digital Leadership Forum held the third in our series of quarterly AI for Good events, supported by our technology partner, Dell Technologies.

The aim of the AI for Good programme is to encourage cross-industry collaboration on key ethical issues surrounding artificial intelligence and its implementation within organisations.

Representatives from leading organisations met at CMS in London to discuss applying AI for social good, learn from academic and field experts, and work collectively towards developing professional best practices in a rapidly evolving technical and regulatory environment. 

Attendees heard presentations from Panakeia Technologies, Darktrace and DEFRA. Additionally, a panel discussion was held with industry experts from CMS, Exscientia and Panakeia Technologies.

Attendees also discussed how to address the key challenges, risks and ethical questions that come with AI, and how we can reassure both businesses and the public that AI can be used for social good.

The Digital Workplace – Session Report

In November 2019, members of the Digital Leadership Forum met at Baker McKenzie in London to discuss how new digital workplace technologies and working styles can be successfully implemented within their organisations.

Representatives from leading organisations including GSK, BT, Dell Technologies, Slack, EDF Energy, Schroders, Octopus Investments, BDO, Zoom and many more discussed the varied challenges that they are facing, whether as legacy companies transitioning to a digital workplace or as digital-first workplaces.

Attendees heard presentations from Neil Usher, Chief Partnerships Officer at GoSpace, who highlighted the importance of the team-centric workplace, and from Amy Dicketts, Product Lead at Monzo Bank, who presented a case study which gave insight into what a digital workplace looks like in practice.

We were also joined by a panel of experts from Zoom, Slack, Artificial Solutions, Baker McKenzie, and Immerse, who discussed best practices and common challenges when introducing new digital technologies into the workplace.

Attendees then broke into smaller groups to discuss how to demonstrate the value of changes, how to identify which new technology is appropriate for your organisation, the evolving role of leaders in the digital workplace, new skills requirements and training, and how to create and design a digital workplace strategy.

Using AI To Extend Human Cognitive Capabilities with Dr Karina Vold

Dr Karina Vold, Research Fellow at the University of Cambridge, joined the Digital Leadership Forum at our second AI for Good conference in October, The Ethics of Artificial Intelligence. Vold challenged attendees to consider whether AI systems could be used to complement and extend our cognitive capabilities in more advanced and sophisticated ways than they are currently.

1. We’ve always been suspicious of new technology

Vold explained that while shifts in technology are generally positive, they have historically been met with suspicion. The Greek philosopher Socrates resisted the shift from the oral to the written tradition as he thought that by writing things down we would become more forgetful and less social. “Those are exactly the same arguments that you hear against technology today,” Vold said. “You hear that Google is making us more forgetful and Facebook is making us asocial. It’s a story that’s been happening for a very long time in philosophy.”

2. New technology is redesigning tasks

When information is easily accessible, we are less likely to remember the information itself and more likely to remember how to access it. For example, we no longer need to remember phone numbers, just the passcode to the phones where those numbers are stored.

3. It’s time to expand our definitions of AI

Most AI definitions used today include a clause about autonomous agency. Vold challenged this definition, suggesting that we should include non-autonomous systems in our definition of AI. These systems are built to interact with humans and become intimately coupled with us as we engage in an ongoing dialogue with them. Vold argued that these systems could know us better and have a more complete record of us than any human.

4. AI can help us generate new ideas and approaches

Vold told the story of AlphaGo and Move 37. In 2016, during a Go match in Seoul between world champion Lee Sedol and AlphaGo, a computer program developed by Google DeepMind, the program played an unexpected and successful move that no human player would have played. This became known as Move 37. “One of the reasons that people think that the system came up with that move was that it wasn’t being burdened by some of our own social norms, our own game-playing norms and our own human wisdom about what’s good and what’s not good,” Vold said. “It’s really interesting when you think about situations where the stakes are higher: scientific discoveries, drug discoveries, or healthcare.”

5. Offload our weaknesses so we can focus on our strengths

“Obvious weaknesses for us are easy tasks for some systems,” Vold said, suggesting that memory processes, psychometrics, and quantitative and logical reasoning were all areas that could be offloaded. This frees up our time and cognitive capacity for more creative tasks.

6. We may actually be more biased than AI systems

Vold also argued that we should offload decision-making to systems in order to avoid bias. “We don’t really make decisions in the way we think we do,” Vold said. “A lot of times even though we think we’re making judgments in a particular way, we’re being informed by all sorts of built-in systematic biases.”

7. Beware the potential risks

While AI offers exciting opportunities to extend human cognitive capacities, Vold identified three key risks and implications to be aware of:

  • Cognitive atrophy – if we become too reliant on technology we may lose our ability to perform tasks independently;
  • Responsibility – we may become too removed from the decision-making process but are still held responsible for negative consequences, without the ability to understand and rectify the problem; and
  • Privacy – as we put more information onto our devices we need measures to protect that data.

AI for Good

AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events which are designed to enable members of the Digital Leadership Forum to innovate with new AI technologies in a responsible way.

Build a 5-Star Customer Experience with AI

Harnessing AI: where to start? Led by Katie King, keynote speaker on AI

We were looking at where to start with AI. I think the best takeaway from this was not necessarily thinking ‘let’s go for AI because it is cool, or it’s the in thing, or it’s the buzzword of the moment’, but starting within the business. What do you need? What’s the requirement? What’s the objective?

Whether it’s revenue based, or brand based, or cost cutting, or getting data, whatever it may be, the internal business objective is your starting point. Then ask where AI can help facilitate that, not the other way around; not thinking ‘where can we put AI into our business?’

Feedback presented by Nathan Brown, Senior Digital Project Executive, AXA PPP Healthcare

How can you leverage chatbots in the B2B sector? Led by Diogo Coutinho, Product Lead, Shell International Ltd

We looked at starting points for chatbots in the B2B sector. The first step is to have a look at the big players who can provide the service, play around with the tools, and see what they offer.

Then ask why it is needed. What’s the business case? Would it make the process quicker? Does the user actually want it? Then speak to the audience and see how easy it currently is for them to extract content, and whether there is a better way of doing it through a chatbot service.

We also discussed the importance of putting a minimum viable product together so you’ve got a scope for what’s required, as well as interviewing the users, conducting research and speaking with the internal help desk to find out what is really needed internally and what information could be needed on the chatbot.

Feedback presented by Toni Fitch, Digital Marketing Manager, Octopus Investments

How can you utilise AI to create a compelling and intelligent CX? Led by Darren Ford, VP Global Customer Services, Artificial Solutions

We were talking about how you can utilise AI to create compelling and intelligent customer experiences. We started off by asking what we think one of those would look like, and we got very quickly into the topics of personalisation and customisation: being able to understand the question however it’s posed and to answer it. Then we got into the importance of data for delivering that answer, and we had quite a long discussion around the availability of data and how people have tried to create solutions. One of the things that came out is that very few people have got beyond one or two proofs of concept, and nothing has really got to a production level.

So, a key thing that came out of this was the need to have the right data in the right place to be able to answer the questions when they are posed. I think one of the things that Darren said was that it’s far better to have a very narrow solution with great depth than a broad solution that does next to nothing.

Feedback presented by Chris Bushnell, CFO, Artificial Solutions

How do you decide the best CX areas to automate with AI?

We started with the use case: we said that no matter which area you focus on, you have to start with the use case, and only as a secondary step think about the technology, although we debated that as well. The driver needs to be the business case. In terms of what the benefits could be, we were looking at how to serve customers’ needs, how to understand the customer and how to predict customer behaviour.

There was a conversation about cutting costs and about customer lifecycle management. We also looked at some successful use cases such as complaints management. We had a very good use case around advisory services in general and how they can improve customer service, and also how to include external data such as LinkedIn in your customer service, which you usually don’t do when you work manually.

Feedback presented by Gaby Glasener-Cipollone, Managing Director, Cirrus

How can you measure your AI-powered customer experience?

We discussed how AI is just an enabler. Everyone sees it as some bright shiny thing, but actually it’s just a tool. So I think it’s important to look at your business and consider the different functions as you normally would. Is AI helping with sales? Is it decreasing cost? Is it improving customer experience? Is it improving employee experience and the effectiveness of HR?

Feedback presented by Graham Combe, Business Development, DataArt

How can you create a culture of AI innovation?

Feedback presented by Jon Downing, Business Development Director, business mix and Jane Ruddock, Manager, PwC

Jon Downing, business mix

We really focused on the importance of creating an AI innovation culture and how it’s about putting the customer first and trying to create a series of quick wins. One of the things we discussed was to look at anything that isn’t working and try to get rid of it quickly, and to take anything that is working and explore and deploy it more effectively. We spoke about the importance of building a very clear and compelling narrative around the work. This is particularly important when discussing AI because there’s a lot of confusion around the language and a lot of uncertainty around what this really means to people, and it’s about creating clarity. We also talked about looking externally at the competition, and not just the traditional competitor organisations but also challenger organisations and companies such as Amazon, Facebook and Apple, which may be trying to enter different markets.

Jane Ruddock, PwC

From PwC, one of the things that we identified around creating a culture of innovation is that you have to truly mean it, and that it has to go directly to the organisation’s core values, which enables people at all levels to be involved in the AI and innovation changes. That could be anything from making sure that training is available, to making sure there are champions involved in all of the different areas where you’re trying to apply AI, to embedding it into people’s roles. It’s also really important to make sure that people feel empowered, and that they have the right training and support to carry out their roles around artificial intelligence and innovation.

Leading your Organisation to Responsible AI – Session Report

In July 2019 the Digital Leadership Forum held the first in our series of quarterly AI for Good events, supported by our Technology Partner Dell Technologies.

As machines become better and smarter at making decisions, the question of how we ensure their ethical behaviour arises. This was one of the topics debated at the Digital Leadership Forum’s “Leading your organisation to responsible AI” event, hosted by Lloyds Banking Group in London on 19th July.

The session was the first instalment of a series of events, part of DLF’s newly created “AI for Good” initiative. The project, supported by Dell Technologies, aims to help organisations deploy ethical artificial intelligence (AI) in their products and operations. The event kicked off with a discussion of the “black box” problem in traditional AI models, which are built on the idea that the more data-heavy and complex the system, the more accurate the model. This does not always hold in practice, and it makes it more difficult to explain how an outcome was reached. For example, a bank might decline a mortgage application based on the AI model’s recommendation and then fail to explain to the consumer why this occurred.
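As a rough illustration of why this matters, the sketch below uses synthetic data and invented feature names (it is not any bank’s actual scoring model) to show how a simple, interpretable scorer can surface which factors pushed an individual application towards a decline, something an opaque “black box” cannot easily do.

```python
# Illustrative sketch only: synthetic data and made-up feature names, not a
# real credit model. A logistic regression is simple enough that each
# feature's contribution to a single decision can be read off directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "loan_to_value", "existing_debt", "years_employed"]

# Synthetic "historical applications": approval loosely depends on the features.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] - X[:, 2] + 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# One (synthetic) applicant: low income, high loan-to-value and existing debt.
applicant = np.array([[-0.2, 1.5, 1.0, 0.1]])
decision = model.predict(applicant)[0]
contributions = model.coef_[0] * applicant[0]  # per-feature push on the score

print("approved" if decision else "declined")
for name, value in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>15}: {value:+.2f}")
```

In this toy setting the most negative contributions point at the loan-to-value ratio and existing debt as the main reasons for the decline, which is exactly the kind of explanation the bank in the example above could not provide.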

Building the Workforce of the Future – Session Report

As we continue to adopt artificial intelligence (AI) in the workplace, how do we ensure these technologies bring the most value for businesses, while protecting employees? This was the key concern for speakers at the Digital Leadership Forum event “Building the Workforce of the Future”, hosted by PwC in London on 17 May.

The workplace of the future

In addition to conventional macroeconomic factors and shifts in demographics and urbanisation, technological innovations continue to revolutionise the workplace. According to Jeremy Waite, Chief Customer Officer at IBM, 90% of the world’s data was created in the last 12 months. As this growth continues, organisations will face challenges in handling such vast amounts of data and will lack a proper understanding of AI to leverage the technology. Indeed, four out of five CTOs feel overwhelmed and unprepared for AI adoption, according to a recent IBM survey. For Waite, AI will facilitate decision-making for both businesses and customers, and C-suite executives must act now to harness the power of AI.

Alastair Woods, Partner at PwC, recommended that in order to survive in the new digital age, organisations must take the leap and embrace these changes. For example, leveraging cloud computing and automation can replace cost-heavy infrastructure and operational processes that previously placed heavy burdens on new companies. This not only promotes new technologies but also enables faster and cheaper market entry for start-ups competing with tech giants. In addition, employers should create a tech-savvy culture across all departments, invest in human skills, and create flexible work environments that appeal to a new generation of professionals.

The workforce of the future

Inevitably, this organisational transformation gives rise to new employee behaviours and expectations. Nine-to-five jobs continue to diminish in favour of part-time, freelance and job-share positions. Employees now travel more, work multiple jobs and demand “borderless” contracts. For Callum Adamson, CEO and Founder of Distributed, today’s workforce is the product of a shift in the balance of power from employers to employees. In addition to traditional expectations of fair treatment and professional development, today’s workforce also demands pay transparency, diversity, equality, flexibility, wellbeing incentives and social responsibility. As a result, companies are appealing to their staff by promoting emotionally aware cultures, where gratitude for quality work is expressed in the form of gifts, payments or promotions relative to the standard produced.

These new dynamics will benefit from AI as it transforms human resources practices, employee engagement and internal communications. IBM’s Silvia Cambié presented IBM tools, such as tone analysers, bots and e-learning apps, as prime examples of technologies enhancing internal processes. The tone analyser, which examines the sentiment of written communications, allows employees and customer service departments to understand, revise and tailor the tone of their responses. AI-powered bots enable employees to ask management anonymous questions they might otherwise be reluctant to raise, such as queries about performance evaluation, complaints or even health. E-learning platforms offer “learning on demand”, tailoring courses to individual objectives and previous selections. However, AI outcomes depend on the quality of the underlying data sets and on removing bias and the associated ethical risks. To tackle these challenges, Silvia Cambié advocated for diversity in the AI industry and recommended that companies gather user feedback to fine-tune systems and mitigate bias.
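As a rough sketch of the tone-analysis idea described above (a generic, open-source sentiment check on a draft reply, not IBM’s Tone Analyzer or any IBM API), something along these lines shows the basic workflow, assuming the transformers library is installed:

```python
# Illustrative only: a generic sentiment check on a draft customer reply,
# using the open-source `transformers` library rather than IBM's tooling.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a small default English model

draft = "We cannot help you with this request and will not be responding further."
result = classifier(draft)[0]
print(result["label"], round(result["score"], 2))

# A customer service team might flag drafts that score strongly NEGATIVE
# and suggest softening the wording before the reply is sent.
```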

The role of government

As with any technology, industry best practice and government policy must adapt to innovation. The final discussion of the day, on the role of government in AI, was moderated by Access Partnership’s Chief BD Officer Matthew McDermott. Increased understanding of and trust in AI technologies can be fostered by adhering to the principles of fairness, accuracy, responsibility and explainability, so that AI is trusted by and understandable to consumers. Sound data innovation policy, cybersecurity and privacy protection, and investment in research and development and skills training will be crucial to developing an AI-equipped workplace. These objectives can be achieved through collaboration between governments and industry, and between multinationals and start-ups; combining resources across the market will generate a greater pool of talent.

But, to ensure the sustainable evolution of the workplace of the future, policy-makers also need to implement policies protecting employees (healthcare, insurance, retirement benefits), as well as incentives for companies to invest in people and innovative technologies.

Ultimately, this forum confirmed the profound impact of AI on lives and jobs, as well as the responsibilities of both public and private actors in enabling its adoption and leveraging it to respond to future challenges in the workforce.

Written by Ivan Ivanov, Marketing Manager at Access Partnership
