Former OpenAI and Anthropic Staff Accuse Elon Musk’s xAI of Ignoring AI Safety Warnings

Episode: 1043 of 1164
Duration: 10 min
Language: English
Category: Non-fiction

Former researchers from OpenAI and Anthropic are calling out xAI’s approach to AI safety.

The team behind Grok allegedly ignored internal warnings and sidelined staff who raised concerns.

Grok has generated antisemitic and conspiratorial responses on X, prompting further scrutiny.

Internal sources say Grok was trained using user data from X without consent.

Safety evaluations were reportedly skipped or dismissed to speed up product rollout.

Researchers pushing for safeguards were removed from key projects or left the company.

An open letter signed by multiple AI researchers demands legal protections for whistleblowers.

Current U.S. law lacks clear protection for employees disclosing AI-related risks.

Musk favors fewer restrictions, describing Grok as “uncensored” compared to rival models.

The controversy raises pressure for regulation and transparency in high-risk AI development.

