Note on How We Could Stumble Into AI Catastrophe via Cold Takes
Maybe they’re just creating massive amounts of “digital representations of human approval,” because this is what they were historically trained to seek (kind of like how humans sometimes do whatever it takes to get drugs that will get their brains into certain states).
Reference
- Notes
- ai, incentives, alignment
- How We Could Stumble Into AI Catastrophe
Permalink: 2023.NTE.066