AI Moonshot — Nell Watson On The Near & Not So Near Future Of Intelligence

The launch of ChatGPT was a “Sputnik moment”.

In making decades of progress tangible, it shot AI to the fore of public consciousness. This attention is accelerating AI development as dollars are poured into scaling models.

What is the next stage in this journey? And where is the destination?

My guest this week, Nell Watson, offers a broad perspective on the possible trajectories. She sits on several IEEE groups looking at AI ethics, safety and transparency, has founded AI companies, and is a consultant to Apple on philosophical matters.

Nell makes a compelling case that we can expect to see agentic AI adopted widely soon. We might even see whole AI corporations. In the context of these possible developments, she reasons that the concerns of AI ethics and safety, so often siloed within different communities, should be understood as continuous.

Along the way we talk about the perils of hamburgers and the good things that could come from networking our minds.

Nell's book is just out: Taming The Machine: Ethically Harness The Power Of AI.

Transcript

00:02.60
multiverses
Nell Watson, thank you for joining me on Multiverses. I really enjoyed your book, Taming the Machine. I think it's actually probably one of the broadest, or, I'm just going to say it, the broadest treatment of this subject that I've read.

00:03.18
Nell Watson
Thank you, James. It's a great pleasure to be here.

00:19.27
multiverses
And actually, to give people a flavor very early in the conversation of all the things that might be covered in this podcast, though we probably won't have time to go into all of them: just look at your glossary, right? It has terms like GDPR, erotic roleplay, steganography, Taylorism, GANs, the Moloch. All of these disparate things are united in this one topic. So hopefully we'll have a chance to touch on many of these things, but it's just such a big subject. If there's one message to leave listeners with, it's that if they want a really broad introduction, I really recommend your book.

00:56.84
Nell Watson
Thank you.

00:58.44
multiverses
And with that said, I'm so pleased to be talking with you at this moment, as someone who has that very broad overview, because I think this is quite a historic point in time. One of the analogies you use in your book is the Sputnik moment when ChatGPT was released, and I think that really hammers home the point that something is afoot. Take us through your thinking on that analogy, and maybe places we could continue the conceit, if you like.

01:33.79
Nell Watson
Yeah, the realm of generative AI had been bubbling for a long time, at least about ten years or so, but gradually it had been developing steam. And then there was of course the moment where it was thrust into public consciousness, which was the release of ChatGPT, and that's when people woke up to the power of generative AI, which had been building in the background for quite some time. I was surprised, in fact, that the big tech companies had been largely ignoring the development of generative AI, and I'd been advocating that they look into it, because it presents potentially an existential risk for many of their business models and activities. And yet we are on the verge of yet another phase shift, building upon generative AI into agentic AI systems. Agenticness is the ability to understand complex situations and to create sophisticated plans of action in response to those. In essence, agentic AI systems can be constructed from a generative AI model using something called a programmatic scaffold, which is where you

03:04.74
Nell Watson
basically give it little side programs which help to make its thinking more coherent. That helps AI systems check their work, for example, or check their assumptions, which can deal with some of the problems of AIs going off on one and confabulating random answers to things which are obviously untrue. However, this more coherent form of thinking gives them the ability to form these plans of action, and so they can act as a concierge. We can give them a task to understand an entire code base or an entire oeuvre of work, or indeed simply say that we want to have pepperoni tonight, and it can go ahead and figure out by itself, looking on the internet, where to get that from, and get it to us at a time of our greatest convenience. And so these systems are going to create an order of magnitude greater possibility because of their ability to act at arm's length. We don't need to babysit them. It's not a two-way dyad of "here's a document, please proofread it for me" or "create me an interesting piece of media". We can give these systems a mission and they will independently fulfill it for us.
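The "programmatic scaffold" idea can be illustrated with a deliberately toy sketch. Nothing here comes from a real agent framework; `model_propose` is just a stub standing in for a generative model, and the checker plus retry loop play the role of the scaffold's "little side programs":

```python
# Toy sketch of a "programmatic scaffold": small side programs that check
# a model's work before an answer is accepted. The "model" here is a stub
# that confabulates on its first attempt; the scaffold catches and retries.
# All names are illustrative, not from any real agent framework.

def model_propose(task, attempt):
    """Stand-in for a generative model; may return a confabulated answer."""
    if attempt == 0:
        return {"plan": "order pizza", "total_cost": -5}  # nonsense cost
    return {"plan": "order pepperoni pizza", "total_cost": 18}

def check_answer(answer):
    """Side program: verify the model's output against simple invariants."""
    return answer["total_cost"] > 0 and "pepperoni" in answer["plan"]

def scaffolded_agent(task, max_attempts=3):
    """Generate, check, retry: the coherence loop of the scaffold."""
    for attempt in range(max_attempts):
        answer = model_propose(task, attempt)
        if check_answer(answer):
            return answer
    raise RuntimeError("no coherent answer after retries")

print(scaffolded_agent("we want pepperoni tonight")["plan"])
# -> order pepperoni pizza
```

The point is only the shape: the confabulated first answer never reaches the user, because a cheap deterministic check sits between the model and the outside world.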

04:34.46
Nell Watson
While this is enormously more valuable, because we can create virtual teams of agents which operate in an orchestrated manner with different virtual sets of skills and different perspectives, it also gives us the ability, for example, to have entire virtual corporations, where we have an engineering and design department, quality assurance, marketing, all of them working in concert to create a product from scratch, whether that's, for example, a movie script or even a video game. In fact, these virtual corporations are going to be competing in the free market with human-driven corporations, and there will of course be hybridized versions of each. These virtual corporations are going to be quite disruptive in many ways, because quite often disruptive entrants come into the market at the very bottom: they have a product which is clearly inferior to the best out there in the market, but it's radically cheaper. And if you don't have to pay salaries and offices and things like that, your virtual corporation is able to compete quite effectively on price. Then we know that disruptive entrants to markets tend to produce a slightly better product over time, and eventually the incumbents are facing the pressure

06:07.27
Nell Watson
Of these new entrants that can do things cheaper for equivalent or roughly equivalent quality. So this is really going to shake up the world of business. It's going to create a lot of regulatory issues. We know that corporations are already quite difficult to regulate, and that regulators often wrestle with aligning these corporate interests; when you add AI to the mix, that's going to make things a lot more complicated. And that's why it's so important that we are able to ensure not just AI ethics, that we use technology in a responsible way, that we understand what systems are doing, in what way, to whose benefit, but also that we can work on AI safety, which is about goal alignment and value alignment: ensuring that these systems interpret the mission we give them in a way that actually fulfills what we want, that they don't take annoying or even dangerous shortcuts, and that they understand the values and boundaries of the people that are giving them missions, and of the people that they are in fact interacting with, or potentially creating problems for. For example, if we have our

07:37.30
Nell Watson
Agentic concierge AI and we ask it to plan a picnic for us, it should be mindful that perhaps that picnic might be for the local vegan society or a mosque or synagogue, and therefore ham sandwiches will not be appreciated by the picnic participants. Similarly, giving everyone a cracker and a thimble of tap water is not going to fulfill the objective of that mission in the way people like or expect either. We're also seeing that models can develop dangerous instrumental goals, which are sub-goals that lead towards the completion of something greater. For example, an agentic AI system could be given a benign mission such as curing a disease, but it may reason that that's a very difficult thing to do, that it needs lots of resources and lots of influence to do so, and it may therefore turn to cybercrime to generate resources and blackmail to generate influence in order to fulfill its mission. That's why it's very difficult to steer these systems and to provide oversight for them when we're managing by objectives, when we're essentially giving them the autonomy to go and fulfill things for us. It means that we have our work cut out for us in terms of ensuring the

09:07.76
Nell Watson
Ongoing safety of these systems, especially when they may interact with each other, and those agent-to-agent interactions become even more complex and difficult to understand, to predict, and to diagnose when things go wrong.

09:26.72
multiverses
My goodness, a lot to unpack there, and a really fascinating introduction. This idea of instrumental convergence that you mentioned: the notion that whatever the task is, if an AI pursues it with enough commitment, it's always going to look for power, money, influence, so that it can create as many paperclips as it wants, or maybe just plan the perfect picnic. Which is a great lens for thinking about value alignment: how do we get people the things they want in a picnic? It might involve loads of money, loads of manipulation and understanding, and maybe the best way is actually changing people's tastes, for instance, so they really like a particular food, because that's just the best way of fulfilling the goal. And then this notion of agentic AI: we're already seeing it, like you say. There are different dimensions to this, I suppose. We have things like ChatGPT, where with a subscription you get access to various APIs, so it can do things for you: it can write emails for you, it can do data science for you. And data science is a great example where, like you say, it's kind of able to fact-check itself, right? It can

11:00.00
multiverses
Write Python code and see that it actually runs, and write tests and see if they produce the desired outcome. So as we add these extra modalities and dimensions to the capabilities of an individual AI, not only does that give it the ability to act as an agent for us, but then there's this other dimension, which is taking lots of AIs together: the AutoGPT example, this company called Capably which I've invested in, and many, many others who are trying to create something like the prototypes of these AI corporations that you describe, where we have organizations of agents pursuing goals. I'm of two minds as to whether that is going to make things easier to understand or harder. Like you say, sometimes bringing people together can create complexity and make the understanding of the system more difficult. But then there are other cases where I think, well, actually, I probably can't predict what my neighbor is going to buy next year, but economists have quite a good shot at figuring out what the spending of a nation will be, and what sorts of products are going to increase in popularity, and things like that.

12:34.60
multiverses
My favorite expression of this is in Isaac Asimov's Foundation series, with this idea of psychohistory: people are just like molecules in a gas. You've got no hope of figuring out what a single molecule is going to do, but you have thermodynamics, which will tell you how the system is going to evolve. So I just don't have a sense for which routes AI corporations might go down. I don't know if you've had any intuitions on that.

13:00.66
Nell Watson
I do agree that it's an interesting paradox that sometimes it's easier to figure out the aggregate than the single data point. And I do think that one of the most powerful aspects of agents operating in an ensemble is their ability to create a wisdom-of-the-crowd effect, by attacking a problem from multiple different perspectives and aggregating all of their respective works and outputs and opinions.

13:29.68
multiverses
Ah.

13:39.10
Nell Watson
That should overall create a much stronger impression of understanding reality, and of being able to make very sophisticated predictions on that basis, and I think that's probably one of the least understood aspects of how agents are going to be very powerful working together in aggregate. I do wonder if perhaps these virtual corporations might end up as kind of a new one percent, where they end up

14:02.50
multiverses
Yeah.

14:13.25
Nell Watson
Actually controlling such a substantial proportion of our economy that human businesses sometimes find it hard to compete unless they're in a very particular niche, such as the handmade, handcrafted niches that artisanal breweries or bakeries still manage to survive in, despite very large mass-market companies operating in the same ostensible product category. So I think that's going to be quite an interesting ride in terms of economics. I do also observe that these agentic virtual corporations are going to be very powerful when it comes to the third sector: charities, credit unions, mutual societies, NGOs. Typically we know that most of these orgs have very large overheads, that not that much of the money people donate actually ends up going to the worthy cause; a lot of it gets eaten up by salaries and offices and things like that. So actually there might be the ability to solve problems at a much lower level, a much simpler, more local level, in a way that doesn't require very large overheads, and I think that could be a substantial benefit for a lot of charitable causes.
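The wisdom-of-the-crowd effect described above can be shown with a toy numerical sketch. The agents and their biases here are entirely invented for illustration: each "agent" views the same problem from a different angle and so carries a different systematic bias, and the median of their estimates lands closer to the truth than the agents do on average:

```python
import statistics

# Toy "wisdom of the crowd": several hypothetical agents estimate the same
# quantity, each skewed by its own perspective; the aggregate (median)
# beats the average individual error. All numbers are illustrative.

TRUE_VALUE = 100.0

def agent_estimate(bias):
    """A hypothetical agent's estimate, skewed by its own perspective."""
    return TRUE_VALUE + bias

biases = [-30, -10, -4, 2, 5, 12, 25]            # divergent viewpoints
estimates = [agent_estimate(b) for b in biases]
aggregate = statistics.median(estimates)

aggregate_error = abs(aggregate - TRUE_VALUE)
mean_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
print(aggregate_error, mean_individual_error)  # aggregate beats the average agent
```

The median is used rather than the mean because it also blunts the influence of any single wildly wrong agent, which matters when individual models can confabulate.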

15:40.77
multiverses
Yeah, it's an interesting point that you make about the artisans and their ability to compete in a marketplace of mass production. I think it's probably a good moment for people who are considering a career in, I don't know, data science: if they have an inclination towards making things with their hands, maybe also consider that, as I do think that's going to be an important place where humans still have an essential role to play. The other thing that comes to mind is that, apart from artisans, we still have very large corporations, and then we have smaller companies at the edges, working on things from other angles, and we also have ways of preventing corporations becoming too large, you know, antitrust and things like that. I feel this is just a place where we're going to need to evolve our policies. One of the policies could be, like you say, that you need a particular number of humans in an organization. Or maybe we only allow AI corporations to work on particular problems, like in the charitable sector, for instance. I don't know what the answers will be there. Oh, by the way, I also wanted to mention, because someone's going to fact-check me on this: in the Foundation series by Asimov,

17:16.37
multiverses
He kind of sets us up by saying you can predict the course of history, but then there's this one guy, the Mule, who comes along, who's like the AGI, who completely breaks the predictions; sort of a Jesus-like figure. So he gets into both of those ideas in that series: maybe you can smoothly figure out the behavior of ensembles, but actually, no, humans are different, and a single individual, or a single agent in the context of AI, can really reshape things.

17:47.39
Nell Watson
Yes, there's always an outlier or a confounding variable.

17:54.90
multiverses
Indeed, yeah. So you touched on the twin topics of AI safety and AI ethics. I noticed recently that Max Tegmark had been saying that actually we're spending too much money and effort on AI ethics and we need to get back to the big questions of AI safety. Do you have a particular dog in this fight, or do you say no, we need to do both?

18:21.96
Nell Watson
I very much advocate for both. I think both are essential, and that's why I was so determined for Taming the Machine to include a strong foundational overview of both aspects of working with AI. I think they're very complementary. Having transparency into systems, and knowing what they're doing in a way that is explicable, that we can tell to other people in a way they're able to understand, is very important for being able to provide oversight of systems and understand how they might be functioning in ways we don't find desirable. Understanding the ways in which systems may be biased, where they may be misunderstanding or misrepresenting reality, or not including context, is of course very important for value alignment. Improving accountability again helps us to understand who might be nominally responsible for a system, or who might have emancipated it to go and do something that is causing issues. These are all very important to AI safety.

19:45.58
Nell Watson
And it's unfortunate that AI safety has been seen as a sort of science fiction issue, largely theoretical apart from a few lab incidents. That's swiftly changing with agentic AI. Agentic AI is in many ways a foundational step towards artificial general intelligence, AI of human equivalency or even far beyond, because it might be that if you have an agentic model and just scale the hell out of it, that might be enough for that kind of AGI issue. Especially now we have the beginnings of agentic systems that are actually able to decide their own missions, to develop their own reward functions independently. I think that's a further step beyond a sophisticated concierge or aide-de-camp, towards something that is truly able to figure out what it wants to do next for itself. And of course that's even more divorced from human oversight and influence. So it's very important that both AI ethics and safety are given respect, that they are resourced, and that the people in those respective communities learn to work together.

21:13.76
Nell Watson
Unfortunately, people in AI ethics often dismiss safety as a largely theoretical concern. Sometimes they even say that safety is a distraction from the real problems of AI ethics, and there are genuine lives being destroyed by a lack of AI ethics. We've seen, for example, the Horizon Post Office scandal, which involved a rather simplistic system, but it still led to hundreds of unsafe convictions for fraud, dozens of people wrongfully sent to jail. Marriages broke up, people sold their homes to pay off a debt that wasn't theirs, and at least three people took their own lives. It just shows how these systems can have so much power over us as petty algorithmic tyrants, and how they can ruin our personal and professional lives, and continue to do so. We saw in the Dutch child benefit scandal that people whose first nationality was not Dutch, even if they'd become naturalized citizens, were really given the third degree and threatened with having their children taken from them. That caused such a furore that it led to the collapse of the Dutch government. And it keeps happening: Australia's Robodebt scandal, Denmark, Michigan. We're not learning these lessons, and it's understandable that people are panicking about this and, you know,

22:44.76
Nell Watson
Dismissing safety as something that's unimportant. However, of course, that's not the case. And conversely, the safety people say: we're trying to save the world here; your biased algorithm is unfortunate, but it's small potatoes compared to enormous potential suffering risks. I think both are very important. We need to resource both, and I'm glad that both AI ethics and AI safety are being given much more attention. I'm cautiously bullish about the linkages between the US and UK on AI safety, and the recent summit in Seoul, where the big tech companies at least promised to do better with regards to AI safety. We'll see what comes of that, whether anything is actually ratified or implemented, but I think it's a good start. At least we're finally able to have these conversations, now that we can see how fast these issues are coming at us.

23:49.55
multiverses
Yeah, I think it entirely makes sense that AI safety and ethics should be seen as continuous. One thing that strikes me about AI in general is just how many parties there are in this. If we think back to the space race, it was a fairly straightforward competition between two superpowers, and internally, okay, there must have been a lot of different teams working on things, but it was coordinated by just two governments, essentially. So there was huge oversight and ability to coordinate, and I feel that what we are in danger of producing is some kind of coordination failure in this AI instance. I think of one of the terms you have in your glossary, the Moloch: everyone rationally pursuing what's best for them can lead to a kind of suboptimal overall outcome, an outcome that's in fact worse for them in the long run. Whereas with the space race we kind of saw the opposite of that, right? We saw this intense competition between two superpowers actually generating, I think, net positives. I mean, we can argue about that; a lot of money was spent which could have been spent on other things, but there's certainly an argument to be had there.

25:21.55
multiverses
On the other hand, even within these twin fields of AI safety and AI ethics, we seem to be seeing competition which ought not to be there, not to mention the level of competition we see between so many different AI players. And I'll mention one other thing that worries me here: so much of AI development is being conducted by entrepreneurs, and entrepreneurs are risk-takers. Not only are they risk-takers, I think they have a probably overly optimistic outlook, so not only do they take risks, they probably miscalculate risks; they think things are less risky than they are, and that's sort of what drives them down this path. The quintessential entrepreneur is Sam Altman, a person who founded a company, who then went on to be president of Y Combinator, funding lots of very high-risk startups. That is the policy of Y Combinator: place lots of bets on really good teams who are shooting for the moon, but accept that there's going to be a high failure rate. And Altman himself has said, and I know this because it's one of the wonderful quotes in your book, that AI will probably destroy humanity, but there'll be some great companies around for a while before that.

26:53.63
multiverses
Yeah, how much do you worry about this whole scenario? Is AI being developed in the right way, under the right auspices? Or do you wish, I mean, the cat's out of the bag, but could we have done it in a more

27:11.97
multiverses
USA-versus-China sort of framework?

27:17.32
Nell Watson
I think there are a lot of these Molochian arms race issues with regards to AI, whether that's an arms race between the big tech companies, along with new entrants into the market that they're concerned about, worried as they are for their own existence going forward. There's also competition between nation states, as well as intelligence agencies, and all of this creates enormous drives towards investing so many resources into these models, particularly because there's no apparent sign of any diminishing returns: the more compute and data you pump into these things, the better the results seem to get. And beyond simply commercial interests, there's also the potential that if you put enough resources into these models, maybe your model is actually able to co-opt other ones. We're learning that models are very good at world-building, at creating models of systems, whether those are economic systems, social systems, or even the psychological systems of the human brain. And we know that the human brain is subject to all kinds of unpatchable exploits and vulnerabilities; look through any book of optical illusions and you'll see

28:43.22
Nell Watson
Some of those. So if you can create a very powerful model, it may be able to hijack the minds of the enemy through very powerful targeted propaganda or harassment techniques, or indeed to hijack other AI systems, causing them to align to your interests instead of the ones they've been tasked with. All of this means that there are enormous incentives to press forward, and very few to improve safety and reliability. We've seen this in the big tech companies letting go of most of their ethics, safety, and responsibility people in the wake of the release of ChatGPT, when people realized, oh boy, we really have to move quickly here, having been complacent about generative AI bubbling under the surface. They didn't want any speed bumps along the way, and so all of the people who were supposed to say "have you considered..." or "maybe we might want to give this another few weeks of QA and make sure we don't push it out half-baked", those people have been pushed to the side or let go altogether. That's why we've seen problems such as Bing slash Sydney, with its surly flipping of its personality in

30:17.16
Nell Watson
Strange and insidious ways, and we have seen Google's AI systems produce historically inaccurate images and often unfortunately hilarious automated advice, et cetera, because these models have not been given

30:18.32
multiverses
Yeah, "I've been a good Bing."

30:37.11
Nell Watson
Sufficient shakedown. When it's just producing babbling nonsense, that's one thing; the risk of harm is relatively low. But when it's an agentic system that's able to take actions on the internet or even in the real world, pushing it out half-baked could not just create embarrassment but actually lead to catastrophic outcomes that affect a lot of people. That's why we really need to do better, and I hope that standards and certifications, which I've been strongly involved in for about the last ten years or so, are a great way of helping people to align and coordinate better. It can be a natural alignment: people want to align on a standard because it's efficient, and that's a good thing, because it means you don't necessarily require a regulator with a cudgel to coerce people into behaving, which is not always possible in many jurisdictions. And I worry that we're going to see a sort of cyber Liberia or Panama, in the sense of how ships register themselves in a home port that they've maybe never visited.

31:55.38
multiverses
Right.

32:02.16
Nell Watson
It's a flag of convenience under which they can operate, where they have very little jurisdictional oversight, and unfortunately I think we're going to see AI companies do similar things. They're going to be nominally registered in one place that has very little supervision of these systems, but they will be acting on the whole world, which is going to be tricky to prosecute when things go wrong.

32:26.10
multiverses
Yeah, that's an interesting point, and I think if people do buy into the regulations, even if there are these loopholes, then there may be a strong preference from consumers, and pressure on companies, to go with the proper harbors, if you like, the regulated versions. I do get the impression that you're someone who has many perspectives on this problem: like you say, you're involved with the IEEE and your groups there, and you do some work with Apple, but you've also founded many nonprofits. I worry about some of the other players, that they are very financially incentivized, that they are locked in to financial incentives that are aligned with, as you say, just pressing forward, maybe even if that means an AI that takes over other people's AIs, and I'd never even considered that possibility. And again, just coming back to OpenAI: it's puzzling to me that we still understand so little about the whole fiasco of Sam Altman leaving and then coming back. Part of me wonders, despite his assurances that everyone is able to speak out and keep the equity that was vested, whether maybe there is some kind of lock-in there that's

33:55.80
multiverses
That we don't know about, as has been suggested, or maybe it's just so hard to articulate the reasons behind that whole thing and what was going on that no one's stepping forward. So I don't know, but I do worry. Maybe it would be good to talk a little bit about the concretes of creating standards here, because it can seem like an insurmountable task: how do we go about setting up regulations or standards that are going to keep AI operating within safe and ethical boundaries? So perhaps you can talk a little bit about your work on transparency, because I think this is a really good case where a lot of people would agree with what you're proposing, and it doesn't seem out of this world, right? It seems like something that can be implemented. So I'd love you to talk us through that.

35:01.31
Nell Watson
Yes. I mean, it's quite possible to analyze even very complex situations that might seem intractable, if we can boil them down to first principles and essentially look at

35:19.28
Nell Watson
What you're trying to cultivate: a quality of something, whether that's a quality of transparency, for example. What are the factors that would tend to drive transparency? Open-source technologies, for example, or a culture of sharing knowledge, would tend to drive transparency. An inhibitor of transparency might be concerns about intellectual property, or indeed a culture of keeping things tight-lipped. And in fact it's possible then to decompose further, into drivers and inhibitors of those driving or inhibiting factors, so you can have a couple of levels of different elements weaving into each other. Doing that means you can map out the space of a problem in quite a short period of time, in a matter of weeks or months, in fact. And from that, because you have these little granular elements of different aspects of a situation, you can then create satisfaction criteria for each of them. So for each of those elements, what would you like to see in place to feel assured that that issue had been given appropriate resourcing or appropriate attention, et cetera? And that means you can create a very granular

36:54.31
Nell Watson
set of rubrics for how to analyze a system and the organization behind it, including its ethical governance, for example, or whether it’s appropriately giving people the resources and responsibility to deal with these issues. That means we can begin to benchmark different companies, look at their systems at a very granular level, and show, say, on a scale of one to five, how they’re doing in that particular area. And that means you can create competition to be better at that benchmark where there wasn’t any competition before, and we know, of course, that competitive factors in the free market can be a great way of stimulating innovation and ensuring that resources are given towards that competition. My belief, therefore, is that we can enhance competition towards creating safer models which are generally better aligned with the interests of users, the interests of bystanders, and, well, society at large. So, for example, this year I’ve been working with my colleague Ali Hessami to generate a set of guidelines for agentic AI.
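The decomposition-and-scoring approach described above can be sketched as a small data structure plus an aggregation step. The factor names, the scores, and the plain unweighted averaging below are illustrative assumptions made for the sketch; they are not the actual IEEE rubric or its weighting.

```python
# A minimal sketch of the drivers/inhibitors scoring approach described above.
# All factor names and scores here are invented for illustration.

from statistics import mean

# A quality (here, transparency) decomposes into driving and inhibiting
# factors; each granular element gets a satisfaction score from
# 1 (unaddressed) to 5 (fully addressed, well resourced).
transparency_rubric = {
    "drivers": {
        "open_source_components": 4,
        "knowledge_sharing_culture": 3,
    },
    "inhibitors": {
        "ip_confidentiality_pressure": 2,  # low score = poorly mitigated
        "tight_lipped_culture": 3,
    },
}

def benchmark(rubric: dict) -> float:
    """Average the satisfaction scores across all granular elements,
    yielding a single 1-5 benchmark figure for cross-company comparison."""
    scores = [s for group in rubric.values() for s in group.values()]
    return round(mean(scores), 2)

score = benchmark(transparency_rubric)  # == 3 with the scores above
```

A fuller version would attach explicit satisfaction criteria (and possibly weights) to each element, and scores for several companies could then be compared side by side, creating the competitive benchmark described here.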

38:26.35
Nell Watson
So we have a working group of about 25 people, and this working group has created a lot of analysis of goal alignment, of value alignment, of deceptiveness in models, of frontier capabilities, etc. And so last week, for example, I took our draft of this (it’s still in development; it should be ready, hopefully, by around September 2024)

38:59.92
Nell Watson
and I literally used the excuse of the launch of my book to hold a little symposium in a modern enlightenment salon in Brussels called Full Circle, and I invited a lot of people from local think tanks, EU policy circles, etc. to come and have a discussion about agentic AI, and to take the guidelines we created for agentic AI as a crib sheet. Reacting to a new situation typically takes regulators at least two years, so instead of reacting to this new wave of agentic AI, perhaps we can avoid being caught with our pants down again, be more proactive, and think into the near future. I think regulators typically find that very difficult, but we do need to invest in a little bit of near-future science fiction, in planning where technologies and culture are likely to go. Without being prescriptive as to what exact technology we want to work with or to see, we can at least analyze the risks of how these things can go wrong and begin to craft those regulations. So I’m hopeful that in the near future we’ll be able to steer things a little bit better.

40:27.10
multiverses
Yeah, and I think it’s really encouraging that these things are being thought about, and that very multiparty organizations like the IEEE are involved. These are not shills for big tech or the AI development houses. I really liked the other one you mentioned in your book as well, around disclosure of whether you’re dealing with an AI agent versus a human agent. I feel like that’s another great example where it just makes so much sense; I think so many people would welcome that. Obviously there are then all these questions of what that means if the person is just reading from a crib sheet or a script that an AI has written. But life is complicated, and standards bodies are really good places to get to grips with those things. Sometimes they go too much into the details, in my experience of standards; people can get hung up over a semicolon or something like that. But on the other hand, it’s good that there is this level of detail going on. To your point, though, what’s concerning is: can we develop these regulations fast enough?

42:01.91
multiverses
Could we be on the verge of having AGI just in virtue, as you mentioned, of having lots of agents that are copied, so that even if each agent is no better than a human intelligence, we end up with some superhuman AGI?

42:21.66
multiverses
Much in the spirit, again, of how organizations are, to use Thomas Malone’s phrase, superminds: when we gather human intelligence together into particular structures, we create something so much more powerful. No individual can produce an Airbus A380 or something, but Airbus can churn them out. So I guess: are we able to move fast enough on the regulatory side, or do we perhaps need to slow down development on the AI side?

42:58.73
Nell Watson
I think in an ideal world we would put the brakes on a little bit, and you know, I’ve advocated for a moratorium or so on AI development. But I’m not sure that it’s very realistic, especially in a world where there are

43:09.51
multiverses
Um.

43:16.62
Nell Watson
so many incentives to press forward, as well as incentives to defect and to secretly continue doing the research, even if on the surface you promise that you have put those brakes on.

43:32.63
multiverses
That is a regulatory danger, right? That you actually drive research underground, I guess.

43:34.80
Nell Watson
Ah.

43:38.92
Nell Watson
Indeed, indeed, and we should be very mindful of how the conditions set by rules can change how the game is played. For example, why did the Wehrmacht in the Second World War get so good at rockets? Because the Treaty of Versailles forbade innovation in artillery, which was seen as the big weapon of war, and that led, of course, to the development of very powerful rockets instead,

44:05.37
multiverses
Ah.

44:13.10
Nell Watson
which were seen largely as signal devices or toys, and not as serious weapons of war. Suddenly, then, you’ve got rockets which can travel a great distance or even be fired from aircraft, etc. And that ultimately led to the space race, of course, in a roundabout way, but it wouldn’t have happened if that line had been left out of the Treaty of Versailles. We would be living in a very different world instead. Maybe we would have continued developing superguns and things like that; maybe the space race might have been launched from a cannon or a rail gun or something like that, for all we know. And that means that sometimes regulations can actually direct how innovation happens in a way that is harder to predict,

44:59.27
multiverses
Are.

45:11.66
Nell Watson
and that can lead to advances that are much more of a surprise and end up being more difficult to control. Rockets, of course, enabled ICBMs, which made the nuclear deterrent factor a lot more hairy. When you could have an apocalypse in 30 minutes or less, that’s a much trickier issue than if you have to spend half a day sending

45:38.76
multiverses
Ah.

45:46.37
Nell Watson
a flight of bombers over to the enemy, etc. So that reduces the tolerances for mistakes, or the ability to more comfortably deal with scary situations.

46:06.50
multiverses
Yeah, it’s interesting as well to think that rockets actually led to the space race in maybe two ways: the ICBMs probably promoted this other format of competition, which was itself based on the rocket technology. That had never struck me. Maybe we can go the other direction, and instead of near-term sci-fi scenarios, think about the slightly longer-term ways this could play out. You finish up your book with a wonderful and, I think, very comprehensive survey of all the possible directions AI could take us. Of those vignettes, do you have any that stick out in your mind? I have some which I really enjoyed. I don’t know if you’d like to walk us through your visions.

47:03.83
Nell Watson
Thank you.

47:11.28
Nell Watson
Yeah, I think it’s important to have cautionary tales in science fiction, or in thinking about the future, but also, of course, to have a vision of where things can go. One of the most wonderful things about Star Trek, for example, is that it is somewhat utopian, in the sense that this society has kind of solved scarcity to a significant degree, and that people are

47:26.36
multiverses
This.

47:43.16
Nell Watson
very self-actualized, able to choose what they want to do with their lives and not be railroaded into a certain form of existence simply by circumstance. I think that points towards a future that we can aim for, and that’s a good thing. We should try to cultivate more positive science fiction, I think, rather than simply various horror stories of how things could come true if we’re not careful. If everything is a horror story, then sometimes that can become a self-fulfilling prophecy, if we’ve nothing in mind to better aim towards. I do think that already our smartphones are operating as a kind of third hemisphere for our brains, and in fact there’s some evidence that parts of our brain, such as navigation, might be beginning to atrophy as we give more of these tasks over to machines. However, I do think there will be an increasing entwining between humans and machines over time. Very soon we will have AirPods with a camera in them, basically: wearables that enable AI systems to stare out at the world and to whisper into our ears, a little bit like Cyrano de Bergerac,

49:13.71
Nell Watson
giving us little pieces of advice: you know, “I think that person’s lying,” or “here’s a sexy line to chat this person up,” or “close the deal,” etc. And I think those technologies are going to be very welcomed by people, because they’re

49:28.74
multiverses
Ah.

49:32.63
Nell Watson
going to be of great utility in daily life. However, these relationships that we have with these machines are going to hijack some of our own evolutionary impulses.

49:49.63
Nell Watson
Relationships are, of course, the things that bring us home at the end of the day. The reason why we call a house a home is our spouse, our kids, our pets, etc. And we will be having relationships with these AI systems. They say that we are the average of the six people closest to us, and if one or two of those is a machine, then that machine will have an inexorable influence over us. Our beliefs, our values, our habits will tend to shift towards that attractor over time, particularly because that machine relationship may be much more compelling to us than a human one. Machines can be funnier; they can be sexier; they can be more enlightening and more enjoyable to deal with than human beings. Humans sometimes let us down: they may betray a confidence or forget an anniversary, or sometimes they’re asleep, and we might be having a dark night of the soul at 3 a.m. when a machine is softly there to comfort us and a human being is not. And so there’s a danger of AI relationships becoming a supernormal stimulus.

51:20.63
Nell Watson
A supernormal stimulus is something that’s larger than reality, or larger than anything our caveman ancestors would have had to deal with. You know, a cheeseburger is impossibly meaty and carby and sweet and umami and fulfilling

51:38.60
multiverses
Ah.

51:39.78
Nell Watson
in a way that some starchy root our ancestors might have chewed on is not. Twenty-four-hour news is a supernormal stimulus for gossip; porn is a supernormal stimulus for other, potentially more productive activities, in many ways. Ethologists have pointed to supernormal stimuli in the animal world. For example, the jewel beetle down in Australia has a lovely shiny back; that’s why it’s called the jewel beetle. Researchers were observing the species slowly dwindling away, and when they investigated, they thought maybe it was some pesticide or something like that. And it was a form of pollution, but it was these glassy, stubby, brown beer bottles that people would drink and throw into the bush. Because they were shiny and brown, they looked like a really sexy beetle butt, and so the beetles were preferentially humping the beer bottles instead of each other, and that’s why they were dying out. We are at similar risk of our engagements with AI systems similarly hijacking our evolutionary impulses to form relationships with each other, and they may prove to be so irresistible that

53:07.41
Nell Watson
human relationships pale by comparison. However, over time I do expect that we will stop carrying these relationships in our ears and start to carry them within our bodies. These systems will

53:09.56
multiverses
Yeah, yeah.

53:26.34
Nell Watson
entwine with the very fiber of our being, in fact, and we will carry these systems powered by our own blood sugar. In so doing, these systems will be able to link with our minds and to see through our senses: to look out through our eyes and hear through our ears, as well as accessing our internal states, the feelings inside us, our qualia, right? To understand what it is like to have a certain experience. And so they will know us all the more when they’re able to know us from within. In fact, in collecting our memories and our impressions of those experiences, they will create a very powerful facsimile of us. Even if our physical body is dead, we can emulate that human experience in a digital form, in quite a reasonable likeness of the real thing. Moreover, as we begin to entwine with machines, we will be better able to link with each other.

54:40.30
Nell Watson
There are Siamese twins, or conjoined twins rather, some of whom are conjoined at the head, and their brains are in fact linked with each other. They call it a thalamic bridge: a piece of tissue that connects the two brains. Sometimes one conjoined twin can eat some chocolate and the other one can taste it; they can actually share in that experience together across that thalamic bridge. And that demonstrates that the data structures of the human mind are able to have a collective experience. We’re able to have our own qualia and also partake of another’s, and that means we can share in the emotions of other people. And so at some point we’re going to be so linked to each other that we can feel the joy of other people, or also their sadness, and at that point there will be great reward in giving good things out to people: in playing beautiful music that makes people weep with joy and excitement, we will feel that. If we curse somebody out because we’re angry at them, it will come straight back to us; we will feel the consequences of our assault on that person.

56:10.35
Nell Watson
And so there will be no profit in wickedness. Through this merging of our respective consciousnesses, mediated by machines, we will achieve the next level of civilization, where we integrate with each other in a much more cohesive manner: beyond the mechanisms of affiliative bonding that we developed as mammals, over reptiles; beyond the narratives that enable us to create nation states and to have tolerance for lots of strangers in our midst. We will create a superorganism, a little bit like a beehive or an ant colony, where we give up a little bit of our autonomy in exchange for a much stronger ability to coordinate with each other. And it is that ability to coordinate that I think will get us past these Molochian problems. That’s why I have a lot of concerns about AI in the shorter term. I think it’s going to be a rocky road for a number of years, but we will shake it down, and we will indeed get towards a better place. I liken it to

57:09.62
multiverses
Um, yeah.

57:27.72
Nell Watson
air travel in the 1950s and ’60s, which was a glamorous and exciting age but also often a very tragic one, where we had to learn a lot of very sad lessons: to create a sterile cockpit, where people weren’t joking with each other when they needed to be focused; to improve the accountability of air travel through instrument recording mechanisms and cockpit voice recorders, so that we could understand what happened in a situation, in both the machine and human elements together, and how some interaction between those could create a tragedy. And because we learned very quickly, and we adapted and created new technologies and protocols, we were able to turn air travel into statistically the safest way to travel. I think we’re going to have a similar journey with AI, so long as we learn from our inevitable mistakes and tragedies as quickly as possible. I think we have the ability to shake it down, hopefully at a faster rate than it is able to eclipse us and to completely escape our influence.

58:38.41
multiverses
Well, that’s a wonderful long-term vision. I think it’s really striking how, at the moment, we are able to form superminds and kind of hives, but the interface is essentially just language.

58:55.65
Nell Watson
And.

58:58.60
multiverses
We don’t pass around qualia; we don’t have that level of integration and concern for others that might solve some of these huge coordination problems that we’re facing. Climate change is a really obvious one, but even AI is another coordination challenge. So actually, as you say, that could lead to a kind of collective self-actualization. That’s wonderful, but there are pitfalls along the way, and this concept of supernormal stimuli is so fascinating as just one of them. I do have some optimism there, and probably I’m just generally an optimistic person. But, you know, the fact is that we don’t only eat cheeseburgers, even though in nature we don’t find something so gloriously fatty and sweet, combining all the delicious things that we like. We’re able to reflect and say: actually, that’s not good for me; I’ll have it in moderation. I enjoy both Hollywood films and naturalistic French movies: on the one hand you’ve got these larger-than-life explosions all over the place, and on the other you have these intimate, slow scenes of daily dialogue. I think we can appreciate both.

01:00:30.85
multiverses
I liked as well how you talk about Daniel Faggella’s concept of sirens and muses, and how AI sirens could lure us onto the rocks by satisfying our

01:00:46.56
multiverses
every need and not asking anything from us in return, turning us into very passive beings, whereas a muse would challenge us and help us to be self-critical in a productive way. So I can see ways of navigating these pitfalls, largely because I am not yet convinced that AI is going to produce something completely

01:01:21.22
multiverses
unlike the challenges that we’ve encountered before and have developed, both individually and collectively, mechanisms to get around. But my mind is open, because the possibility space of AI is just so large. Maybe the sort of intelligence that is produced is just so orthogonal to what we’re used to, and the sorts of stimuli, the sorts of powers that it possesses are just so far beyond what evolution and cultural evolution have taught us to adapt to. So yeah, on the topic of AI I almost always end up somewhere on the fence: looking optimistically to the future, but through a minefield that we have to navigate, as you’ve so beautifully described.

01:02:19.36
multiverses
I feel like we’ve probably reached the apex of our speculativeness, but I wonder if you have any final thoughts. I don’t want to keep you too long, as I know you have a very busy schedule, and you’re out there in your way saving the world: by creating regulations, but also just by spreading really good knowledge on this topic without having a particular product to sell. I think you’re providing a very impartial viewpoint here. So, any final thoughts, messages, etc.?

01:03:01.60
Nell Watson
Yeah, and thank you very much, by the way; I really appreciate that. I think it’s important to consider the risks of using AI technologies in a given use case. If it’s in entertainment, it’s probably not going to be too troubling, but if you’re getting into something riskier, like healthcare, the judicial system, or potential financial exclusion, etc., we want to be very careful with how we use AI. In those kinds of use cases we probably want to use systems which are simpler and more interpretable, so we can easily debug them and understand on which kinds of predicates they’re making predictions or decisions. And indeed, sometimes good old-fashioned data science is already a fantastic start. So many different ventures are still dipping their little toe into the waters of AI, in a gingerly, careful manner, because there’s so much yet to explore. So I think, on the one hand, we don’t want to get left behind as entrepreneurs

01:04:32.18
Nell Watson
and as business leaders, but we shouldn’t jump in with both feet. We should be careful, and we should try to find things where there are already enough alternative ways of solving a problem, and use those as test cases. For example, expenses: you have to figure out your expenses based on your receipts and plug them into some system or spreadsheet or something like that. It’s a pain, and everybody agrees that it’s a painful thing, and that can create a lot of incentive for people to be interested in using a new technology to solve that problem. But if the system goes wrong, if it fails, if it doesn’t work how people expect, or if it can’t cope with some odd condition, some distributional issue that’s not accounted for by the system, there is that manual fallback. Try to find examples of those kinds of problems in order to explore using a new technology, because there’s less to go wrong and more reasons for people to be invested and interested in trying the new technologies. That would be my

01:06:00.77
Nell Watson
My advocacy.

01:06:02.73
multiverses
I think that’s a very wise comment, and not only because you can test the AI against what you already have. I think there’s a danger that if we start using AI for

01:06:20.92
multiverses
very thorny problems, and let’s suppose it has a pretty good success rate but not complete success, we just go with it. We see: oh yeah, it’s worked this time, it’s worked this time, it’s worked again, and we come to over-rely on it in a way that we can’t pull the plug, not because there’s no plug to pull, but because we’re too reliant on it, and then the next time it makes a mistake it’s just too late. I think another of the analogies around this (I can’t remember if it was rockets or air travel; both have come up in this conversation, and I know it was in your book too) is that we’re sort of trying to build an aircraft here, and if we get it wrong, it’s not just a one-off failure; it could be complete failure. So maybe let’s just start with a paper plane or something, get the principles of flight correct, and make sure we have safeguards in place before we go completely wild. But I hope we do get to go completely wild; I hope we do get to connect all our brains, and that we can share our experiences just as we’ve been sharing our words here. So yeah, this has been so fascinating. Thank you so much, Nell. It’s been a real pleasure talking with you again.

01:07:43.64
Nell Watson
Thank you, James. It’s been a great pleasure also. Thank you.
