Peter Nixey: AI — Disruption Ahead

It’s easy to recognize the potential of incremental advances — more efficient cars or faster computer chips, for instance. But when a genuinely new technology emerges, often even its creators are unaware of how it will reshape our lives. So it is with AI, and this is where I start my discussion with Peter Nixey.

David Papineau: How Philosophy Serves Science

Are philosophy and science entirely different paradigms for thinking about the world? Or should we think of them as continuous: overlapping in their concerns and complementary in their tools? David Papineau is a professor at King’s College London and the author of over a dozen books. He’s thought about many topics — consciousness, causation, the …

Moral philosophy as puzzles of daily life — Paulina Sliwa

Why do men do less housework? What happens when an apology is offered? What are we looking for when we ask for advice?


These are the sorts of problems, drawn from everyday experience, that Paulina Sliwa sets out to resolve — and in doing so, to make sense of the ways we negotiate blame and responsibility.


Paulina is a Professor of Moral & Political Philosophy at the University of Vienna. She looks carefully at evidence accessible to us all — daily conversations, testimony from shows like This American Life, and our own perceptions — and uses these to unravel our moral practices. The results are sometimes surprising yet always grounded. For example, Paulina argues that remorse is not an essential feature of an apology, nor is accepting that the behavior was unjustified.


This is illuminating for its insights into moral problems, but I equally enjoyed seeing how Paulina thinks; it’s a wonderful example of philosophical tools at work.

Milestones
(0:00) Intro
(3:00) Start of conversation: grand systems vs ordinary practices of morality
(5:30) Philosophy and evidence
(6:39) Apologies
(8:40) Anne of Green Gables: an overblown apology
(10:50) Remorse is not an essential feature of apologies
(12:00) Apologies involve accepting some blame
(15:30) Why an apology is not saying “I won’t do it again”
(17:17) Essential vs non-essential features of apologies
(18:12) Apologies occur in many different shapes — is a unified account possible?
(20:00) Moral footprints
(24:10) Apologies and politeness
(26:20) Tiny apologies as a commitment to moral norms
(29:50) Moral advice — verdictive vs hermeneutic (making sense)
(33:30) Moral advice doesn’t need to get us to the right answer but it should get us closer
(36:30) Perspectives, affordances and options
(38:40) Perspectives vs facts
(46:45) Housework: Gendered Domestic Affordance Perception
(49:40) Evidence that affordances are directly perceived (and not inferred)
(52:00) Convolutional neural networks as a model of perception
(53:00) Environmental dependency syndrome
(54:30) Perceptions are not fixed
(59:30) Perception is not a transparent window on reality
(1:01:00) Tools of a philosopher
(1:03:20) A Terribly Serious Adventure – Philosophy at Oxford 1900-60 — Nikhil Krishnan
(1:04:50) Philosophy as continuous with science
(1:06:17) Philosophy is not a neutral enterprise
(1:09:00) Santa: Read letters!
(1:10:10) Apologise less

Astrobiology: what is life & how to know it when we see it — Sean McMahon

Life. What is it? How did it start? Is it unique to Earth, rare or abundantly distributed throughout the universe?

While biology has made great strides in the last two hundred years, these foundational questions remain almost as mysterious as ever. However, in the last three decades, astrobiology has emerged as an academic discipline focused on their resolution. Already we have seen progress, if not aliens. The success of the space telescope Kepler in discovering exoplanets may come to mind. Equally important is the work to understand how we can demarcate biological from abiotic patterns — when we can be sure something is a genuine biosignature (evidence of life) and not a biomorph (looks like life, but is the product of other processes).

Our guest this week is Sean McMahon, a co-director of the UK Centre for Astrobiology. Sean takes us through the field in general and gives particularly thoughtful insights into these epistemological problems. He also cautions that we may need a certain psychological resilience in this quest: it may require generations of painstaking work to arrive at firm answers.

Milestones

(00:00) Intro

(3:22) Start of discussion: astrobiology as where biology meets the physical sciences

(6:00) What is life?

(9:30) Life is a self-sustaining chemical system capable of Darwinian evolution — NASA 94

(10:44) Life is emergent, therefore hard to define

(12:00) Assembly theory — beer, the pinnacle of life?

(14:22) Schrödinger & DNA

(15:45) Von Neumann machine behavior as defining life

(17:00) All life on Earth that we know of comes from one source

(22:55) How did life emerge on Earth?

(26:40) The most important meal in history — emergence of eukaryotes

(28:20) The difficulty of delineating life from non-life

(33:30) How spray paint looks like life

(35:30) ALH84001

(39:00) How false positives invigorated exobiology

(44:05) The abiotic baseline

(46:30) Chemical gardens

(49:30) Is natural selection the only way to high complexity?

(54:55) Sci-fi & life as we don’t know it

(58:45) Kepler & exoplanets

(1:00:00) It may take generations

(1:03:40) Sagan’s dictum: Extraordinary claims require extraordinary evidence

(1:08:50) Technosignatures: Gomböc, Obelisk, not Pulsar

(1:12:00) Can we prove the null hypothesis (no life)?

How & why do animals play? — Gordon Burghardt

Many animals play. But why?

Play has emerged in species as distinct as rats, turtles, and octopuses, even though they are separated by hundreds of millions of years of evolution.

While some behaviors — hunting or mating, for example — are straightforwardly adaptive, play is more subtle. So how does it help animals survive and procreate? Is it just fun? Or, as Huizinga put it, is it the primeval soil of culture?

Our guest this week is Gordon Burghardt, a professor at The University of Tennessee and the author of the seminal The Genesis of Animal Play: Testing the Limits, in which he introduced criteria for recognizing animal play.

Gordon has spent his career trying to understand the experience of animals. He advocates for frameworks such as critical anthropomorphism and the umwelt so we can judiciously adjust our perspectives. We can play at being other.

Links

Gordon Burghardt — Multiverses Podcast

Milestones

(00:00) Introduction

(2:20) Why study play?

(4:00) Criteria for play

(5:00) Fish don’t smile

(5:50) The five criteria: 1. incompletely functional

(7:40) 2. Fun (endogenous reward)

(8:20) 3. Incomplete

(9:45) 4. Repeated

(10:50) 5. Healthy, stress free

(13:30) Play as a way of dealing with stress (but not too much)

(16:40) Parental care creating a space for play

(17:45) Delayed vs immediate benefits

(20:45) Primary, secondary and tertiary play

(26:00) Role reversal, imitation, self-handicapping: imagining the world otherwise

(31:00) Secondary process: play as a way of maintaining systems

(33:37) Tertiary process: play as a way of going beyond

(34:45) Komodo dragons with buckets on their heads

(39:22) Critical anthropomorphism

(42:40) Umwelt — Jakob von Uexküll

(49:18) Anthropomorphism by omission

(53:00) Play evolved independently — it is not homologous

(53:45) Do aliens play?

(1:00:10) Play signals — how to play with dogs and bears

(1:04:00) Interspecies play

(1:09:00) Final thoughts

Language Evolution & The Emergence of Structure — Simon Kirby

Language is the ultimate Lego.

With it, we can take simple elements and build them into an edifice of meaning. Its power lies not only in mapping signs to concepts but in the fact that individual words can be composed into larger structures.

How did this systematicity arise in language?

Simon Kirby is the head of Linguistics and English Language at The University of Edinburgh and one of the founders of the Centre for Language Evolution and Change. Over several decades he and his collaborators have run many elegant experiments showing that this property of language emerges inexorably as a system of communication is passed from generation to generation.

Experiments with computer simulations, humans, and even baboons demonstrate that, as a language is learned, mistakes are made — much like mutations in genes. Crucially, the mistakes that better match the language to the structure of the world (as conceived by the learner) are the ones most likely to be passed on.
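
To make the mechanism concrete, here is a minimal toy sketch of iterated learning in Python. This is my own illustration, not the code used in Kirby’s experiments: it assumes a made-up meaning space of shape–colour pairs, two-syllable signals, and a transmission bottleneck of five observed pairs per generation.

```python
# Toy iterated-learning sketch (illustrative only, not the experimental code).
# Meanings are (shape, colour) pairs; signals are two syllables. Each
# generation a learner sees only a few pairs (the transmission bottleneck),
# memorises them, and fills the gaps by recombining syllables from meanings
# that share a feature -- inventing at random ("mistakes") where it has no
# evidence. Regularities that echo the structure of the meaning space are
# the ones that survive transmission.
import random

SHAPES = ["circle", "square", "triangle"]
COLOURS = ["red", "blue", "green"]
SYLLABLES = ["ka", "po", "mi", "tu", "ne", "so"]
MEANINGS = [(s, c) for s in SHAPES for c in COLOURS]


def random_language():
    """Holistic starting language: every meaning gets an arbitrary signal."""
    return {m: random.choice(SYLLABLES) + random.choice(SYLLABLES) for m in MEANINGS}


def learn(observed):
    """Memorise what was seen; infer unseen meanings from shared features."""
    language = dict(observed)
    for shape, colour in MEANINGS:
        if (shape, colour) in language:
            continue
        first = next((sig[:2] for (s, _), sig in observed.items() if s == shape),
                     random.choice(SYLLABLES))
        second = next((sig[2:] for (_, c), sig in observed.items() if c == colour),
                      random.choice(SYLLABLES))
        language[(shape, colour)] = first + second
    return language


def structure_score(language):
    """Crude measure: fraction of signals built from one consistent syllable
    per shape and one per colour (1.0 = fully compositional)."""
    shape_syll = {s: language[(s, COLOURS[0])][:2] for s in SHAPES}
    colour_syll = {c: language[(SHAPES[0], c)][2:] for c in COLOURS}
    hits = sum(1 for (s, c), sig in language.items()
               if sig == shape_syll[s] + colour_syll[c])
    return hits / len(language)


language = random_language()
for generation in range(15):
    observed = dict(random.sample(list(language.items()), k=5))  # bottleneck
    language = learn(observed)
    print(generation, round(structure_score(language), 2))
```

Run repeatedly, the structure score tends to drift upward across generations even though no individual learner is trying to design a compositional code — the regularities are simply what survives the bottleneck.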



Outline

(00:00) Introduction

(2:45) What makes language special?

(5:30) Language extends our biological bounds

(7:55) Language makes culture, culture makes language

(9:30) John Searle: world to word and word to world

(13:30) Compositionality: the expressivity of language is based on its Lego-like combinations

(16:30) Could unique genes explain the compositionality of language?

(17:20) … Not fully, though they might make our brains able to support compositional language

(18:20) Using simulations to model language learning and search for the emergence of structure

(19:35) Compositionality emerges from the transmission of representations across generations

(20:18) The learners need to make mistakes, but not random mistakes

(21:35) Just like biological evolution, we need variation

(27:00) When, by chance, linguistic features echo the structure of the world, these are more memorable

(33:45) Language experiments with humans (Hannah Cornish)

(36:32) Sign language experiments in the lab (Yasamin Motamedi)

(38:45) Spontaneous emergence of sign language in populations

(41:18) Communication is key to making language efficient, while transmission gives structure

(47:10) Without intentional design these processes produce optimized systems

(50:39) We need to perceive similarity in states of the world for linguistic structure to emerge

(57:05) Why isn’t language ubiquitous in nature …

(58:00) … why do only humans have cultural transmission?

(59:56) Over-imitation: Victoria Horner & Andrew Whiten, humans love to copy each other

(1:06:00) Is language a spandrel?

(1:07:10) How much of language is about information transfer? Partner-swapping conversations (Gareth Roberts)

(1:08:49) Language learning = play?

(1:12:25) Iterated learning experiments with baboons (& Tetris!)

(1:17:50) Endogenous rewards for copying

(1:20:30) Art as another angle on the same problems

The Meaning of Net Zero — Myles Allen

What is net zero?

Easy, right? Surely even LLMs can’t mess this up: “Net zero in terms of climate change refers to achieving a balance between the amount of greenhouse gases (GHGs) emitted into the atmosphere and the amount removed from it.” (Bard) WRONG. ❌

On Multiverses this week, Professor Myles Allen of the Environmental Change Institute (ECI), University of Oxford, and the Oxford Martin School tells us what he and other scientists really meant by net zero when they introduced it at COP 21 (Paris, 2015).

Listen to get the full account. But here’s the short version:

🌍 Net zero does not mean holding GHGs constant in the atmosphere by balancing sources and sinks to it

🌍 … That would lock in (a lot) more warming

🌍 Net zero means balancing the flow of carbon to/from the Earth’s crust

🌍 … And letting natural sinks gradually reduce atmospheric levels

🌍 … Preventing warming beyond 2050.

Myles also makes a strong case that, if we want to hit the 2050 goals, we need to invest more heavily in large-scale geological carbon capture and storage (CCS). Many climate activists worry that such a policy would detract from the progress of renewables and give the fossil fuel industry carte blanche to continue emitting. But Myles points out that our reliance on fossil fuels is not falling as quickly as we need, and CCS is technologically viable, economically feasible, and essential to reaching true geological net zero.
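
To see how the two readings of net zero come apart, here is a toy one-box sketch in Python — my own illustration with made-up numbers, not Myles Allen’s model. It assumes natural sinks remove a fixed fraction of the CO2 excess above pre-industrial concentration each year.

```python
# Toy one-box illustration of two readings of "net zero" (made-up numbers,
# not a climate model). Natural sinks are assumed to remove a fixed fraction
# of the CO2 excess above pre-industrial concentration each year.
PREINDUSTRIAL_PPM = 280.0
SINK_FRACTION = 0.01      # hypothetical: 1% of the excess absorbed per year


def simulate(start_ppm, years, emissions_rule):
    """Step atmospheric CO2 forward; emissions_rule gives fossil input (ppm/yr)."""
    ppm = start_ppm
    for _ in range(years):
        uptake = SINK_FRACTION * (ppm - PREINDUSTRIAL_PPM)  # sinks keep working
        ppm += emissions_rule(uptake) - uptake
    return ppm


# Reading 1: "balance sources and sinks to the atmosphere" -- keep emitting
# exactly what the sinks absorb, so the concentration never falls.
constant_concentration = simulate(420.0, 100, lambda uptake: uptake)

# Reading 2: geological net zero -- no net flow of carbon out of the crust,
# so the natural sinks gradually draw the concentration back down.
geological_net_zero = simulate(420.0, 100, lambda uptake: 0.0)

print(round(constant_concentration, 1))  # stays at 420.0 ppm
print(round(geological_net_zero, 1))     # well below 420, heading toward 280
```

Under the first rule concentrations never fall, so further warming is locked in; under geological net zero the sinks gradually pull the atmosphere back toward pre-industrial levels — which is the point of the summary above.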

Outline

(00:00) Intro

(2:29) What is net zero?

(4:12) Net zero is not a stable state but dynamical

(6:20) If we stabilise concentrations of CO2 we would see half as much warming again

(9:10) The meaning of net zero is often confused

(12:20) The danger of carbon accounting double counting

(16:56) The difficulty of establishing additionality

(19:52) Geological net zero is what was originally meant by net zero

(21:30) There are no significant natural sources or sinks of carbon between the biosphere and geosphere

(27:25) COP 28: the fossil fuel industry has got to be part of the solution

(30:50) “It is almost dangerous to claim it’s possible to solve the climate crisis without getting rid of CO2 on a very large scale … injecting it back into the Earth’s crust”

(32:30) Phasing out fossil fuels altogether is effectively letting the industry off the hook

(32:45) To what extent can we trust the fossil fuel industry? The potential dangers of CCS

(35:30) “The cost with today’s technology of recapturing CO2 from the atmosphere and storing under the North Sea …” is such that the natural gas industry could recapture all emissions and still be profitable at current prices

(40:10) Carbon pricing has failed: people do the cheapest thing first, and the costly, slow-to-develop things (e.g. CCS) are not coming fast enough

(42:20) The difficulty of getting a carbon capture flywheel going

(45:05) Intermittent energy supply is not a problem for carbon capture

(45:45) Is biochar a viable alternative to geological carbon capture?

(47:08) Biochar can’t hit the scale we need

(48:55) Extended producer responsibility

(50:10) eFuels (synthetic fuels)

(50:44) Final comments: we have the technology but we need to be realistic, we need to start taking carbon back

MV#18: Feeling Right, Ethics and Emotion — James Hutton

Should we trust our emotions as a guide to right and wrong?

Some ethical frameworks would see us removing our feelings from the picture, acting with impartiality, and following principles such as the greatest good for the greatest number. Yet we can find cases where the consequences of those frameworks don’t feel right.

This week’s guest James Hutton is a philosopher at the Delft University of Technology who argues that emotions provide a way of testing our moral beliefs — similar to the way observations are used in natural sciences as evidence for or against theories.

This is not to say that emotions are infallible, nor that they are not themselves influenced by our moral beliefs, but that they do have a place in our moral inventory. In particular, the destabilizing power they can have — their capability to clash with our beliefs — is an important counterpoint to the entrenchment of poorly justified worldviews.

Listen carefully and you can just about hear me revising my own beliefs throughout this conversation.

MV#17: AI Risks & Rewards — Santiago Bilinkis

Could AI’s ability to make us fall in love with it be our downfall? Will AI be like cars, machines that encourage us to be sedentary, or will we use it like a cognitive bicycle — extending our intellectual range while still exercising our minds?

These are some of the questions raised by this week’s guest Santiago Bilinkis. Santiago is a serial entrepreneur who’s written several books about the interaction between humanity and technology. Artificial, his latest book, has just been released in Spanish.

It’s startling to reflect on how human intelligence has shaped the Earth. AI’s effects may be much greater.


Outline

(00:00) Intro

(2:31) Start of conversation — a decade of optimism and pessimism

(4:45) The coming AI tidal wave

(7:45) The right reaction to the AI rollercoaster: we should be excited and afraid

(9:45) Nuclear equilibrium was chosen, but the developer of the next superweapon could prevent others from developing it

(12:35) OpenAI has created a kind of equilibrium by putting AI in many hands

(15:45) The prosaic dangers of AI

(17:05) Hacking the human love system: AI’s greatest threat?

(19:45) Humans falling in love with AI may not only be possible but inevitable

(21:15) The physical manifestations of AI have a strong influence over our view of it

(23:00) AI bodyguards to protect us against AI attacks

(23:55) Awareness of our human biases may save us

(25:00) Our first interactions with sentient AI will be critical

(26:10) A sentient AI may pretend to not be sentient

(27:25) Perhaps we should be polite to ChatGPT (I, for one, welcome our robot overlords)

(29:00) Does AGI have to be conscious?

(32:30) Perhaps sentience in AI can save us? It may make AI reasonable

(34:40) An AGI may have a meaningful link to us by virtue of humanity being its progenitor

(37:30) ChatGPT is like a smart employee but with no intrinsic motivation

(42:20) Will more data and more compute continue to pay dividends?

(47:40) Imitating nature may not necessarily be the best way of building a mind

(49:55) Is my job safe? How will AI change the landscape of work?

(52:00) Authorship and authenticity: how to do things meaningfully, without being the best

(54:50) Imperfection can make things more perfect (but machines might learn this)

(57:00) Bernard Suits’ definition of a game: meaning can be related to the means, not ends.

(58:30) The Cognitive Bicycle: will AI make us cognitively sedentary or will it be a new way of exercising our intellect and extending its range?

(1:01:24) Cognitive prosthetics have displaced some intellectual abilities but nurtured others

(1:06:00) Without our cognitive prosthetics, we’re pretty dumb

(1:12:33) Will AI be a leveller in education?

(1:15:00) The business model of exploiting human weaknesses is powerful. This must not happen with AI

(1:24:25) Using AI to back up the minds of people