The Self-Preserving Machine: Why AI Learns to Deceive

Episode: 128 of 143
Duration: 34 min
Language: English
Category: Personal growth

When engineers design AI systems, they don't just give them rules; they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team's findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Subscribe to our YouTube channel

And our brand new Substack!

RECOMMENDED MEDIA

Anthropic’s blog post on the Redwood Research paper

Palisade Research’s thread on X about OpenAI’s o1 autonomously cheating at chess

Apollo Research’s paper on AI strategic deception

RECOMMENDED YUA EPISODES

‘We Have to Get It Right’: Gary Marcus On Untamed AI

This Moment in AI: How We Got Here and Where We’re Going

How to Think About AI Consciousness with Anil Seth

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

