
Member Spotlight: Matthew McDermott, Director, EMEA Public Policy at Access Partnership

March 29, 2019

Matthew McDermott on the Future of Tech Regulation


Could you give us a quick introduction to Access Partnership: what it is you do, and what drove you to work for the organization?

Access Partnership is a global public policy consultancy that opens markets for technology. We want people to have access to IT and technology, and for regulation to help rather than get in the way, and we have delivered practical outcomes and shaped conversations in regional, national and international contexts. Personally, I've always been fascinated by how technology can improve people's lives, by the interplay between business and government, and by how regulation affects our day-to-day lives. It has also been a fascinating journey with Access Partnership, as it began in what was a niche area: Google was still a new company when I joined, and technology focused largely on mobile phones, whereas now everything, be it a retailer like M&S, a bank or a platform like Facebook, is digital.

"What we’re going to see over the next few years is the rise of AI, services like driverless cars and radical change in the technology that’s currently available, in ways we can’t even imagine now."

It seems like you've had quite the journey when it comes to being involved with technology. You've witnessed it from its contemporary starting point, when Google was just beginning, to where we are now. Do you see this rate of technological progress as something that will continue to grow exponentially, or do you believe that graph will begin to flatline soon?

Let me answer your question in two ways, starting with the technology side. I think the last couple of years have been quieter than previous decades, but that's partly because we've been used to such drastic and dramatic change. What we're going to see over the next few years is the rise of AI, services like driverless cars and radical change in the technology that's currently available, in ways we can't even imagine now. The problem is that it takes time to commercialize these technologies, as radical ideas take a while to bed in, whether that means consumers coming to understand how they will improve their lives or companies figuring out exactly how to extract commercial value from them. From a regulation point of view, it's a darker story. There is a 'techlash', where people feel that technology companies (often code for Google, Facebook and Amazon) have overreached themselves and no longer respect their customers and the rights of citizens. Whether this is true or not, politicians are responding to it, and over the next 18-24 months we are going to see significant changes in the regulation of how data is used by technology companies, and by businesses more generally. It is hard to predict whether that will have a constricting effect on these companies, or whether those companies that pride themselves as digital disruptors will be able to adapt to the new rules and thrive.

With regard to the growing conversation around legislation, what are the key challenges when it comes to implementing regulation around Big Data and AI?

The balance is always going to be about putting the citizen and the consumer first. Legislators have a tough task: fast-moving technology is hard to explain even to experts like myself, let alone the layperson. Technology and AI companies must take on the challenge of better explaining what it is they are doing, and what services their products provide, to allow regulators to understand where existing regulation is already appropriate and where the genuine gaps exist. Unless they take that job seriously, the real risk is ill-informed regulation, which would make it impossible for consumers to benefit from the advantages of AI and automation. Regulation in these sectors is inevitable, and I don't see it as a bad thing, but it's about ensuring there is joined-up thinking across government and regulators, so that we don't take a decision in one sector that makes it impossible to deliver benefits elsewhere.

"Liability is a big question in the world of AI because it adds an extra layer to decision making that doesn’t exist today."

I understand that liability is a key question in the world of regulation right now, and will loom even larger going into 2019. Could you explain why this is, and what some of the complications around the issue are?

Liability is a big question in the world of AI because it adds an extra layer to decision making that doesn't exist today. If a driver crashes a car, the responsibility lies with them; with an autonomous car, where does it lie? With the software developer, the programmer or the car manufacturer? Could it lie with another road user who was acting in a way the car couldn't comprehend (even if not in a particularly unsafe manner) and thereby caused a crash? Would they then be liable? This is only one strand of a broader liability point, and what we're seeing is a realization that technology companies should be held liable for some of the actions undertaken on their platforms. Historically, under EU law, companies like Facebook have claimed that they aren't liable for the content delivered on their networks, that they are mere conduits and are able to push all liability onto their clients. That notion is also under scrutiny. The fact that technology is more pervasive in our lives has resulted in calls for some responsibility to be passed onto the tech companies. While I don't have the answers (and don't think anyone does), what I can say is that it will be a massive point of discussion in 2019, and that change is coming.


Why do you believe it is important for large organisations to make the right ethical decisions when it comes to AI and Big Data?

Post-Snowden and post-Cambridge Analytica, in a world where technology is no longer trusted the way it once was, the biggest issue for tech companies is trust. The ability not just to take the right ethical decisions, but to be seen to take them, is necessary to build that trust. If large companies can be open about their use of AI and Big Data, that takes a massive step toward building trust in the organisation, which will in itself build opportunities in the UK and around the globe. The future of AI is bright, and there are many opportunities in how technology will improve people's lives across the country. But to make that a reality, consumers have to trust the technology, and only by businesses engaging in ethical discussions, not just among themselves but with government and society at large, will we be able to fully take advantage of these opportunities.

Written by Ajay Gnanam, Events & Customer Success Executive, Digital Leadership Forum



