
99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism


Episode metadata

Show notes

> YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0
>
> Support us! https://www.patreon.com/mlst
>
> MLST Discord: https://discord.gg/aNPkGUQtc5
>
>
>
> Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The interaction between FTX and the Effective Altruism community has sparked a lot of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity.
>
>
>
> Carla's “Democratising Risk” paper found that certain types of risk fall through the cracks if they are simply categorized as climate or biological risks. Deliberative democracy has been found to be a better way to make decisions, and AI tools can be used to scale this kind of democracy for good, but the transparency of these algorithms to the citizens using the platform must be taken into consideration.
>
>
>
> Aggregating people’s diverse ways of thinking about a problem within a risk-averse procedure makes it far more likely that the group converges on the best policy. There needs to be a good reason to trust any one organization with the risk management of humanity, and all the different ways of thinking about risk must be taken into account.
>
>
>
> The ambition of the EA community, and of Altruism Inc., is to protect and manage risk for the whole of humanity, and doing this effectively requires an institutional turn. The dangers of optimization are real, and it is essential that the risk management of humanity is done properly and ethically.
>
>
>
> Carla Zoe Cremer
>
> https://carlacremer.github.io/
>
>
>
> Igor Krawczuk
>
> https://krawczuk.eu/
>
>
>
> Interviewer: Dr. Tim Scarfe
>
>
>
> TOC:
>
> [00:00:00] Introduction: Vox article and effective altruism / FTX
>
> [00:11:12] Luciano Floridi on Governance and Risk
>
> [00:15:50] Connor Leahy on alignment
>
> [00:21:08] Ethan Caballero on scaling
>
> [00:23:23] Alignment, Values and politics
>
> [00:30:50] Singularitarians vs AI-theists
>
> [00:41:56] Consequentialism
>
> [00:46:44] Does scale make a difference?
>
> [00:51:53] Carla's Democratising Risk paper
>
> [01:04:03] Vox article - How effective altruists ignored risk
>
> [01:20:18] Does diversity breed complexity?
>
> [01:29:50] Collective rationality
>
> [01:35:16] Closing statements

Snips

[08:30] Effective Altruism

🎧 Play snip - 1min (07:13 - 08:34)

✨ Summary

The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Carla says she is sympathetic to the kind of greed that drives us beyond wanting to be good, but argues that institutional design must shepherd safe collective risk-taking and help us navigate decision-making under uncertainty.


[12:14] Epistemic Collaboration Is Not a Nice to Have

🎧 Play snip - 1min (10:45 - 12:16)

✨ Summary

Despite fears of bureaucratization, some failures and wasted resources are inevitable. Carla Cremer concluded that, in order to make EA members take responsibility for their decisions, Open Philanthropy should fund a new organization to research epistemic mechanism design.


[15:09] Why I Do Not Worry About AGI Risk

🎧 Play snip - 1min (13:20 - 15:09)

✨ Summary

Igor Krawczuk is a researcher and PhD student at the LIONS laboratory at EPFL. His current research focuses on generative models and reinforcement learning for combinatorial domains, in particular their application to integrated circuit design. He is especially interested in the interaction of specific algorithmic systems and their biases with humans and society at large, and wants to contribute to mechanisms of accountability that ensure the benefits of AI systems are not monopolized or abused.


[22:34] What Do You Think Are the Most Dangerous Steps in AI?

🎧 Play snip - 1min (21:16 - 22:40)

✨ Summary

There's a notion that an exponential stays an exponential forever, until it hits a physical bottleneck. If you scale so far that you're scaling beyond even the fastest future supercomputers, then you hit the bottleneck of compute on the x-axis. So what do you think the bottlenecks will be in the world we actually live in? We're talking about extrapolating timelines here. For video, compute is going to be the bottleneck at least until the 2040s, and at that point it will switch to data being the bottleneck.
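
As a rough, back-of-the-envelope illustration of the extrapolation being described (a minimal sketch only; the growth rate, starting point, and ceiling below are hypothetical placeholders, not figures from the episode), an exponential compute trend can be projected forward until it hits a fixed physical ceiling:

```python
# Minimal sketch: project an exponential training-compute trend forward until
# it hits a fixed hardware ceiling. All numbers are hypothetical placeholders.

initial_compute_flop = 1e24   # assumed size of today's largest training run, in FLOP
annual_growth_factor = 4.0    # assumed year-over-year growth in training compute
ceiling_flop = 1e30           # assumed ceiling set by the fastest future supercomputers

year = 2023
compute = initial_compute_flop
while compute < ceiling_flop:
    compute *= annual_growth_factor
    year += 1

print(f"Under these assumptions the trend hits the ceiling around {year} "
      f"(~{compute:.1e} FLOP); past that point compute stops being the "
      "bottleneck and something else, such as data, binds instead.")
```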


[36:48] The GDPR Is a Big Momentous Thing

🎧 Play snip - 1min (35:29 - 36:48)

✨ Summary

The GDPR is a response that is arriving just in time. But again, these are the same types of processes that have played out in similar ways throughout history. We act as if it is a big, momentous thing because it's always nice to believe we live at the end of history: that this is the final thing we need to solve, and then it's either utopia or disaster.


[01:36:49] AI and the Future of Democracy

🎧 Play snip - 1min (01:35:31 - 01:36:52)

✨ Summary

The average person is full of biases, and much of this is about trying to avoid those biases. We can also think about the complementary capacity that AI can bring: surely there will be spaces that no aggregation and deliberation procedure can properly cover, and that is where we can complement them with algorithmic tools, whether they're called AGI or not.


[01:38:41] The overblown concern about AI alignment risks

🎧 Play snip - 1min (01:37:22 - 01:38:41)

✨ Summary

One thing I would really like to share, almost as my personal take on the AI risk / alignment risk question, is that the framing of it as a technical problem is extremely overblown. The really careful, really competent researchers that I meet, and that I have in my lab, the most competent in my eyes, don't worry about it in those terms: they see technical problems to solve. These are small, incremental improvements, and we have tools to tackle them; the trouble starts when you blow it up to the degree being discussed in the LessWrong alignment crowd.



