Coffee table conversations with people thinking about foundational issues. Multiverses explores the limits of knowledge and technology. Does quantum mechanics tell us that our world is one of many? Will AI make us intellectually lazy, or expand our cognitive range? Is time a thing in itself or a measure of change? Join James Robinson as he tries to find out.
AI is already changing the world. It’s tempting to assume that AI will be so transformative that we’ll inevitably fail to harness it correctly, succumbing to its Promethean flames.
While caution is due, it’s instructive to note that in many respects AI does not create entirely new challenges but rather exacerbates or uncovers existing ones. This is one of the key themes that emerge in this discussion with John Zerilli. John is a philosopher specializing in AI, Data, and the Rule of Law at the University of Edinburgh, and he also holds positions at the Oxford Institute for Ethics in AI and the Centre for the Future of Intelligence in Cambridge.
For instance, John points out that some of the demands we make of AI with respect to fairness are simply impossible to fulfill: not because of some technological or moral failing on the part of AI, but because our demands are in mathematical conflict with one another. No procedure, whether executed by a human or a machine, can consistently meet all of these requirements at once. We have AI research to thank for illuminating this.
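As an aside for anyone who wants to see the arithmetic, here is a minimal sketch of one such conflict. It assumes the tension in question is the well-known one between predictive parity (equal positive predictive value) and error-rate parity (equal false positive and false negative rates) when two groups have different base rates, the kind of result engaged with in Hedden's article linked below; the groups, numbers, and function names are illustrative only and not from the episode.

```python
# Minimal numerical sketch of a fairness impossibility result, assuming the
# conflict is between predictive parity (equal PPV) and error-rate parity
# (equal FPR and FNR) when base rates differ between groups.
# Illustrative numbers only.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a given prevalence, PPV and FNR.

    Follows from Bayes' rule:
        FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR)
    """
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two hypothetical groups with different base rates of the predicted outcome.
groups = {"group A": 0.3, "group B": 0.1}

# Demand the same PPV and the same FNR for both groups.
ppv, fnr = 0.8, 0.2

for name, p in groups.items():
    print(f"{name}: prevalence={p:.2f} -> FPR forced to {implied_fpr(p, ppv, fnr):.3f}")

# The forced FPRs differ (about 0.086 vs 0.022), so equal PPV and equal FNR
# cannot coexist with equal FPR unless base rates match or the classifier
# is perfect.
```

The inequality falls out of Bayes' rule alone, which is why it binds human decision-makers just as much as algorithms.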
In contrast, concerns over a ‘responsibility gap’ in AI seem to overlook the legal and social progress of the past few centuries, which has, for example, allowed us to detach culpability from individuals and assign it to corporations instead.
John also notes that some of the dangers of AI may be more commonplace than we imagine — such as the use of deep fakes to supercharge hacking, or our psychological tendency to become complacent with processes that mostly work, leading us to an unwarranted reliance on AI.
Notes:
- A Citizen’s Guide to Artificial Intelligence
- John’s Edinburgh research page
- Twitter @JohnZerilli
- Multiverses.xyz website
- Brian Hedden’s article on fairness
(00:00) Intro
(3:25) Discussion starts: risk
(12:36) Robots are scary, embedded AI is anodyne
(15:00) But robots failing is cute
(16:50) Should we build errors into AI? — catch trials
(26:62) Responsibility
(29:11) There is no responsibility gap
(42:40) Should we move faster to introduce self-driving cars?
(45:22) Fairness
(1:05:00) AI as a cognitive prosthetic
(1:18:14) Will we lose ourselves among all our cognitive prosthetics?
We get pretty speculative toward the end, and John throws in a fascinating thought which I’ve been chewing over: is the intelligence we have as humans linked to our metabolic dependence on the external world? It’s not uncommon to hear that AI may need to be embodied to keep progressing. Intuitively, the ability to manipulate the world and explore its responses does seem to be an important feature of learning, one that would move us beyond the somewhat Pavlovian training regime currently applied to AI. But could being incarnate, not just embodied, also be important?
There are features of our bodies, of our processing of food, that add extra dimensions to our experience. Small children, much to the horror of their parents, love to put objects in their mouths. That’s got to be a great way of appreciating stuff and understanding its size, texture, and solidity. Our mouths are rooms stuffed with sensors. Flavour, taste, the feel of my own heartbeat, weariness, moodiness: there are physical and physiological aspects of experience that relate to being incarnate.
I’m not convinced that these are necessary features for a general intelligence, certainly not conceptually, but perhaps they contribute significantly to our self-awareness and our awareness of the porous boundaries between ourselves and the world. And it may be hard to imbue a machine with this knowledge without making it incarnate.