Hypercomputation: Why Machines May Never Think Like Humans — Selmer Bringsjord

AI can do many things as well as humans, such as writing plausible prose or answering exam questions. In certain domains, playing chess for instance, it goes far beyond human capabilities. We might expect that nothing prevents machines from one day besting humans at every task. Indeed, it is often asserted that, in principle, … Read more

AI Moonshot — Nell Watson On The Near & Not So Near Future Of Intelligence

The launch of ChatGPT was a “Sputnik moment”. By making tangible decades of progress, it shot AI to the fore of public consciousness. This attention is accelerating AI development as dollars are poured into scaling models. What is the next stage in this journey? And where is the destination? My guest this week, Nell Watson, … Read more

Peter Nixey: AI — Disruption Ahead

It’s easy to recognize the potential of incremental advances — more efficient cars or faster computer chips for instance. But when a genuinely new technology emerges, often even its creators are unaware of how it will reshape our lives. So it is with AI, and this is where I start my discussion with Peter Nixey. … Read more

Language Evolution & The Emergence of Structure — Simon Kirby

Language is the ultimate Lego.

With it, we can take simple elements and construct them into an edifice of meaning. Its power lies not only in mapping signs to concepts but in the fact that individual words can be composed into larger structures.

How did this systematicity arise in language?

Simon Kirby is the head of Linguistics and English Language at The University of Edinburgh and one of the founders of the Centre for Language Evolution and Change. Over several decades, he and his collaborators have run many elegant experiments showing that this property of language emerges inexorably as a system of communication is passed from generation to generation.

Experiments with computer simulations, humans, and even baboons demonstrate that, as a language is learned, mistakes are made, much like mutations in genes. Crucially, the mistakes that better match the language to the structure of the world (as conceived by the learner) are the ones most likely to be passed on.


Links

Outline

(00:00) Introduction

(2:45) What makes language special?

(5:30) Language extends our biological bounds

(7:55) Language makes culture, culture makes language

(9:30) John Searle: world to word and word to world

(13:30) Compositionality: the expressivity of language is based on its Lego-like combinations

(16:30) Could unique genes explain the fact of language compositionality?

(17:20) … Not fully, though they might make our brains able to support compositional language

(18:20) Using simulations to model language learning and search for the emergence of structure

(19:35) Compositionality emerges from the transmission of representations across generations

(20:18) The learners need to make mistakes, but not random mistakes

(21:35) Just like biological evolution, we need variation

(27:00) When, by chance, linguistic features echo the structure of the world, these are more memorable

(33:45) Language experiments with humans (Hannah Cornish)

(36:32) Sign language experiments in the lab (Yasamin Motamedi)

(38:45) Spontaneous emergence of sign language in populations

(41:18) Communication is key to making language efficient, while transmission gives structure

(47:10) Without intentional design these processes produce optimized systems

(50:39) We need to perceive similarity in states of the world for linguistic structure to emerge

(57:05) Why isn’t language ubiquitous in nature …

(58:00) … why do only humans have cultural transmission

(59:56) Over-imitation: Victoria Horner & Andrew Whiten, humans love to copy each other

(1:06:00) Is language a spandrel?

(1:07:10) How much of language is about information transfer? Partner-swapping conversations (Gareth Roberts)

(1:08:49) Language learning = play?

(1:12:25) Iterated learning experiments with baboons (& Tetris!)

(1:17:50) Endogenous rewards for copying

(1:20:30) Art as another angle on the same problems

AI Risks & Rewards — Santiago Bilinkis

Could AI’s ability to make us fall in love with it be our downfall? Will AI be like cars, machines that encourage us to be sedentary, or will we use it like a cognitive bicycle, extending our intellectual range while still exercising our minds?

These are some of the questions raised by this week’s guest Santiago Bilinkis. Santiago is a serial entrepreneur who’s written several books about the interaction between humanity and technology. Artificial, his latest book, has just been released in Spanish.

It’s startling to reflect on how human intelligence has shaped the Earth. AI’s effects may be much greater.

Links

Outline

(00:00) Intro

(2:31) Start of conversation — a decade of optimism and pessimism

(4:45) The coming AI tidal wave

(7:45) The right reaction to the AI rollercoaster: we should be excited and afraid

(9:45) Nuclear equilibrium was chosen, but the developer of the next superweapon could prevent others from developing it

(12:35) OpenAI has created a kind of equilibrium by putting AI in many hands

(15:45) The prosaic dangers of AI

(17:05) Hacking the human love system: AI’s greatest threat?

(19:45) Humans falling in love may not only be possible but inevitable

(21:15) The physical manifestations of AI have a strong influence over our view of it

(23:00) AI bodyguards to protect us against AI attacks

(23:55) Awareness of our human biases may save us

(25:00) Our first interactions with sentient AI will be critical

(26:10) A sentient AI may pretend to not be sentient

(27:25) Perhaps we should be polite to ChatGPT (I, for one, welcome our robot overlords)

(29:00) Does AGI have to be conscious?

(32:30) Perhaps sentience in AI can save us? It may make AI reasonable

(34:40) An AGI may have a meaningful link to us in virtue of humanity being its progenitor

(37:30) ChatGPT is like a smart employee but with no intrinsic motivation

(42:20) Will more data and more compute continue to pay dividends?

(47:40) Imitating nature is not necessarily the best way of building a mind

(49:55) Is my job safe? How will AI change the landscape of work?

(52:00) Authorship and authenticity: how to do things meaningfully, without being the best

(54:50) Imperfection can make things more perfect (but machines might learn this)

(57:00) Bernard Suits’ definition of a game: meaning can be related to the means, not ends.

(58:30) The Cognitive Bicycle: will AI make us cognitively sedentary or will it be a new way of exercising our intellect and extending its range?

(1:01:24) Cognitive prosthetics have displaced some intellectual abilities but nurtured others

(1:06:00) Without our cognitive prosthetics, we’re pretty dumb

(1:12:33) Will AI be a leveller in education?

(1:15:00) The business model of exploiting human weaknesses is powerful. This must not happen with AI

(1:24:25) Using AI to back up the minds of people

Generative Art using GPT4 #2: 3D fractals


There’s a serendipitous quality to experimenting with LLMs.

I was trying to make an interactive model of a Mandelbulb with ChatGPT. Although it didn’t work as intended, it produced something funky. Indeed, for making art, the fallibility of LLMs might be a feature rather than a bug.

Here are some tips for working with HTML5 code generated by LLMs:

  • The LLM will get things wrong; frequently the page won’t render at all
  • Paste the output into a text file, save it as .html (a minimal skeleton for this is sketched after the list), open it in Chrome, and open Chrome’s Developer Tools panel
  • Any errors will be listed in the Console; paste them into ChatGPT or your LLM of choice and let it figure out what went wrong
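
For reference, this is the kind of minimal skeleton I mean. The canvas setup and stub drawing code here are just placeholders, not the Mandelbulb code from the post; swap in whatever the LLM gives you.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>LLM canvas test</title>
</head>
<body>
  <canvas id="scene" width="600" height="600"></canvas>
  <script>
    // Replace this stub with the LLM-generated drawing code. If anything
    // throws, the error (with a line number) appears in the DevTools Console.
    const ctx = document.getElementById('scene').getContext('2d');
    ctx.fillStyle = '#111';
    ctx.fillRect(0, 0, 600, 600);
    ctx.fillStyle = '#0f0';
    ctx.font = '16px monospace';
    ctx.fillText('canvas is alive', 20, 30);
  </script>
</body>
</html>

Save it as something like test.html, open it in Chrome, and check the Console before iterating on anything fancier.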

MV#11 — AI, risk, fairness & responsibility — John Zerilli

AI is already changing the world. It’s tempting to assume that AI will be so transformative that we’ll inevitably fail to harness it correctly, succumbing to its Promethean flames. While caution is due, it’s instructive to note that in many respects AI does not create entirely new challenges but rather exacerbates or uncovers existing ones. … Read more