up:: Sources
type:: #π₯/π
status:: #π₯/π¨
tags:: #on/podcasts, #on/ai
topics:: Artificial Intelligence
Author:: Machine Learning Street Talk (MLST)
Title:: #99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism
URL:: "https://share.snipd.com/episode/b6cb9549-1d04-4cd3-a5ef-562b69e6c158"
Reviewed Date:: 2023-02-08
Finished Year:: 2023
99 - CARLA CREMER IGOR KRAWCZUK - X-Risk Governance Effective Altruism
Episode metadata
- Episode title:: #99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism
- Show:: Machine Learning Street Talk (MLST)
- Owner / Host:: Machine Learning Street Talk
- Episode link:: open in Snipd
- Episode publish date:: 2023-02-05
Show notes
> YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0
> Support us! https://www.patreon.com/mlst
>
> MLST Discord: https://discord.gg/aNPkGUQtc5
>
> Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The interaction between FTX and the Effective Altruism community has sparked a lot of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity.
>
> Carla's "Democratizing Risk" paper found that certain types of risks fall through the cracks if they are just categorized into climate change or biological risks. Deliberative democracy has been found to be a better way to make decisions, and AI tools can be used to scale this type of democracy and be used for good, but the transparency of these algorithms to the citizens using the platform must be taken into consideration.
>
> Aggregating people's diverse ways of thinking about a problem and creating a risk-averse procedure makes it highly probable that the group converges on the best policy. There needs to be a good reason to trust one organization with the risk management of humanity, and all the different ways of thinking about risk must be taken into account. AI tools can help to scale this type of deliberative democracy, but the transparency of these algorithms must be taken into consideration.
>
> The ambition of the EA community and Altruism Inc. is to protect and do risk management for the whole of humanity, and this requires an institutional turn in order to do it effectively. The dangers of optimization are real, and it is essential that the risk management of humanity is done properly and ethically. By aggregating people's diverse ways of thinking about a problem and creating a risk-averse procedure, it becomes highly probable that the process converges on the best policy.
>
> Carla Zoe Cremer
>
> https://carlacremer.github.io/
>
> Igor Krawczuk
>
> https://krawczuk.eu/
>
> Interviewer: Dr. Tim Scarfe
>
> TOC:
>
> [00:00:00] Introduction: Vox article and effective altruism / FTX
>
> [00:11:12] Luciano Floridi on Governance and Risk
>
> [00:15:50] Connor Leahy on alignment
>
> [00:21:08] Ethan Caballero on scaling
>
> [00:23:23] Alignment, Values and politics
>
> [00:30:50] Singularitarians vs AI-theists
>
> [00:41:56] Consequentialism
>
> [00:46:44] Does scale make a difference?
>
> [00:51:53] Carla's Democratising Risk paper
>
> [01:04:03] Vox article - How effective altruists ignored risk
>
> [01:20:18] Does diversity breed complexity?
>
> [01:29:50] Collective rationality
>
> [01:35:16] Closing statements
- Show notes link:: open website
- Tags: #podcasts #snipd
- Export date:: 2023-02-08T19:37
Snips
[08:30] Effective Altruism
🎧 Play snip - 1min (07:13 - 08:34)
✨ Summary
The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Carla said she is "sympathetic to the type of greed that drives us beyond wanting to be good." She says institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty.
📚 Transcript
Speaker 3
The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay, according to Carla. She closed the introduction by saying institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty. So to finish off summarizing Carla's article on effective altruism, I mean, first of all, I should clarify these are Carla's words, not mine. I don't really have any skin in the game of effective altruism. I mean, if anything, my intuition tells me to be skeptical of it. It seems to have an air of elitism and grandiosity. It also has cult-like qualities that make me uneasy. Nevertheless, I think there are some very interesting topics to explore here in terms of governance, the effect of AI and other technologies, and of course, the power of information in society. Carla said, I'm quoting, she's sympathetic to the type of greed that drives us beyond wanting to be good, to instead be certain that we are good.
[12:14] Epistemic Collaboration Is Not a Nice to Have
🎧 Play snip - 1min (10:45 - 12:16)
✨ Summary
Despite the fears of bureaucratization, failures and wasted resources are inevitable. Carla Cremer concluded that in order to make EA members take responsibility for their decisions, Open Philanthropy should fund a new organization to research epistemic mechanism design.
📚 Transcript
Speaker 3
Carla concluded that in order to make EA members take responsibility for their decisions, Open Philanthropy should fund a new organization to research epistemic mechanism design. Despite the fears of bureaucratization, failures and wasted resources are inevitable. Epistemic collaboration is not a nice-to-have. According to Carla Cremer, it's essential. This is a clip from Professor Luciano Floridi from Oxford University. I spoke with him last week, by the way.
Speaker 4
I published that episode. Similarly to Carla, he thinks that the real problem here isn't superintelligence or even AI alignment per se.
Speaker 3
It's more of a classical problem of governance and having the right institutions, the right policies, having more friction between the legal landscape and the technology landscape.
Speaker 4
And it should not be left to individuals to make the right decisions.
Speaker 5
You can devalue human skills. You can remove human responsibility. You can reduce human control. You can erode human self-determination. The rock that thought that little drop of water was nothing, years later, has a big hole in it, because drop after drop after drop, the drop will shape the stone, or shape the rock. And I'm a bit worried that here, human nature, no, we philosophers, we know something about human nature, will kick in.
[15:09] Why I Do Not Worry About AGI Risk
🎧 Play snip - 1min (13:20 - 15:09)
✨ Summary
Igor Krawczuk is a researcher and PhD student at the LIONS lab at EPFL. His current research focuses on generative models and reinforcement learning for combinatorial domains, in particular their application to integrated circuit design. He's particularly interested in the interaction of specific algorithmic systems and their biases with humans and society at large. In particular, he wants to contribute to the mechanisms of accountability that ensure benefits of AI systems are not monopolized or abused.
📚 Transcript
Speaker 3
Also joining us this evening is Igor Krawczuk. He is a researcher and PhD student at the LIONS lab at EPFL. His current research focuses on generative models and reinforcement learning for combinatorial domains, in particular their application to integrated circuit design. On the humanities side, he's particularly interested in the interaction of specific algorithmic systems and their biases with humans and society at large. In particular, he wants to contribute to the mechanisms of accountability that ensure benefits of AI systems are not monopolized or abused, and improve our understanding of governance structures in general. Somewhere in between is the subject of algorithmic mechanism design and game-theoretical analysis of existing political economic systems. Now, I have had privileged access to a position paper that Igor is in the business of writing. As far as I can tell, he's almost finished writing it, but it's called Why I Do Not Worry About AGI Risk. He said that he's unhappy with the current state of the discussion about AGI risk that he encounters in the effective altruism community and its adjacent circles. On the first page of his paper, he gives an outline. He says that he thinks alignment is either a non-problem or identical with the principal-agent, social choice, and good corporate governance problems in politics.
Speaker 4
He said that he doesn't agree that the risks posed by losing control of human-level or beyond-human-level AGI are plausible, or indeed large enough to pose an existential risk.
[22:34] What Do You Think Are the Most Dangerous Steps in AI?
🎧 Play snip - 1min (21:16 - 22:40)
✨ Summary
There's a notion that an exponential is an exponential forever until it hits the physical bottleneck. If you scale so far that you're scaling even further than the fastest future supercomputers, then you hit the bottleneck of compute on the x-axis. So what do you think the bottlenecks will be in the world that we live in? We're talking about extrapolating timelines. For video, compute is going to be the bottleneck at least until the 2040s. And then at that point, it's going to switch to data being the bottleneck.
📚 Transcript
Speaker 3
I think it's interesting to compare Ethan's opinion (he is an AGI scaling maximalist) with Igor's opinion.
Speaker 8
But do you think there's something like, because we're talking about scaling bottlenecks, is that an intrinsic thing in systems?
Speaker 7
I mean, there's a notion of like, there's an exponential, and an exponential goes, it's an exponential forever until it hits the physical bottleneck and then turns into a sigmoid. So there's that notion of bottleneck. If you scale so far that you're even scaling further than the fastest future supercomputers, then you hit the bottleneck of compute on the x-axis.
Speaker 8
So what do you think the bottlenecks will be in the world that we live in?
Speaker 7
We're talking about extrapolating timelines at Valk AI, JACO, Trist stuff. I mean, basically the current narrative is like, for the next 10 to 20, okay, for text, it's like, it's basically, compute's going to be the bottleneck until the mid-2020s or mid-2030s. For video, compute's going to be the bottleneck at least until the 2040s. And then at that point, it's going to switch to data being the bottleneck.
Speaker 3
Okay. Okay.
Speaker 7
And that's just for the capability. For the alignment part, it gets more complicated because then you're like bottlenecked by how fast humans can send their preference data to the machine, do like RL from human feedback or whatever.
[36:48] The GDPR Is a Big Momentous Thing
🎧 Play snip - 1min (35:29 - 36:48)
✨ Summary
The GDPR is a response that is coming just in time. But again, these are the same types of processes that have been happening in similar ways throughout history. We're acting as if it is a big momentous thing, because it's always nice to live at the end of history: this is the final thing we need to solve, and then either utopia or disaster.
📚 Transcript
Speaker 1
Like it used to be that you were fine if you couldn't read, and then that became mandatory. And then it was fine to not really know anything about computers, and now that is mandatory, like if you want to participate and be like a fully kind of emancipated human in these societies. And in the same way, it's gonna probably be a requirement to enable people to own their own data, own their own, you call it like a digital footprint, and have a type of sovereignty. And things like the GDPR are like a response that is coming like just in time. But again, these are the same types of processes that have been happening in similar ways throughout history. And we're acting as if it is a big momentous thing, because it's always nice to live at the end of history. Like this is the final thing we need to solve, and then either utopia or disaster. But actually, usually things are very boring. And we will like have lots of lawsuits and lots of regulation, lots of political debates. But like, there's no magic. And if you go into this with this mindset of like, oh, this is different, you're kind of implicitly hyping up the technology. So like the companies selling the technology to deal with this data, they love it. Like it's a really good marketing thing.
[01:36:49] AI and the Future of Democracy
🎧 Play snip - 1min (01:35:31 - 01:36:52)
✨ Summary
The average person is full of biases, and it's all about trying to avoid those biases. We can also think about the complementary capacity that AI can bring to this: surely there will be spaces that no aggregation and deliberation procedure can properly cover. That is where we can complement now with algorithmic tools, whether they're called AGI or not.
📚 Transcript
Speaker 2
Bring it back full circle, because this idea which often crops up in the rationality crowd as well, right, the LessWrong community and so on, that the voter, the average person, is full of biases and it's all about trying to avoid those biases, on browser and others have done fantastic work to show that a lot of the biases that we see in human psychology obviously have very good reasons for being there, because they're very adapted to particular environments, and even in the current environments they make a lot of sense, right, and you can read those papers, they're great. But when we're thinking about systems and environments that aggregate those biases together, allow them to exchange information through these deliberation tools, we can also think about the complementary capacity that AI can bring to this, right? Surely there will be spaces that no aggregation and deliberation procedure can properly cover, and that is where we can complement now with algorithmic tools. Yeah, we can call them AI or AGI; they will be generally intelligent in different ways. And I think Chris is making this argument, right, that there's no reason to replicate humans. We have a lot of humans around. Let's try and find a space where actually we are benefiting from having a different kind of bias, which we have in these algorithms, to complement the kind of
[01:38:41] The overblown concern about AI alignment risks
🎧 Play snip - 1min (01:37:22 - 01:38:41)
✨ Summary
So one thing that I would really like to hear, almost like my personal take on this AI risk, alignment risk thing, is that the framing of this as a technical problem is extremely overblown. The really careful, really competent researchers that I meet and have in my lab, the most competent in my eyes, don't worry about the technical side; they see technical problems to solve. These are small incremental improvements that we have tools to solve; the problem is when you blow it up to the degree that is being discussed in the LessWrong alignment crowd.
📚 Transcript
Speaker 1
So one thing that I would really like to hear, almost like my personal take on this AI risk, alignment risk thing, is that the framing of this as a technical problem is extremely, like, overblown, and that like the really careful, really competent researchers that I meet and that I have in my lab, like the most competent in my eyes, they don't worry about the technical side, they see technical problems to solve. And like my lab, because I'm biased, like they have work on robust AI, safe AI, like fairness and so on, but these are like small incremental improvements that we have tools to solve, when you blow it up to a degree that is being discussed in the LessWrong alignment and so on crowd, which are all like trying to do the best. And I think I'm actually like happy that they exist, they have like a different opinion than me and they're taking this more seriously, but my critique is that they really seem to buy into the hype that this is something different. And I remember joining my lab, my PI actually kind of being a bit aggressive about my enthusiasm, or this is something
up:: Sources