Welcome to the longest-running Rationalist podcast on the internet.

Support us and get Extra Content at our Patreon! Or if you prefer, our SubStack!

We’re in partnership with The Guild Of The Rose to level up your life.

Hey look, we have a frequently-used Discord! And a rarely-used Merch store.

We discuss how far the right to be wrong goes, a recent intra-rat kerfuffle, and why random violence is bad. And it’s secretly all about AI Doom.

Fill out our 2-question survey!

LINKS
One Week in the Rat Farm
Kelsey’s religious liberalism tweet
Scott’s tweet in response to Robby from MIRI
Scott’s tweet in reply to Ronny from Lightcone
Only Law Can Prevent Extinction by Eliezer Yudkowsky
Audio version of above

Paid Bonus content for the week – Video

00:00:01 – Quick life catch-up
00:06:29 – Right to be Wrong about Doom
01:23:47 – Steven likes Huel a lot
01:25:28 – Guild of the Rose
01:27:50 – Thank the Supporter!


Our Patreon, or if you prefer Our SubStack

Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.


LessWrong Sequence Posts Discussed in this Episode:

on hiatus, returning someday

Matt returns to discuss a post urging us to chill out on AI if there isn’t imminent doom in 2028.
There is no video or preshow chat today, due to user error in setting up an in-person video recording. Sorry. 🙁

LINKS
Consider chilling out in 2028
The Scary Bridge
Kill Your Friend’s Cat
The UnSlop AI Fiction competition
The new Guild of the Rose

Paid Bonus content for the week – none, sorry!

00:00:50 – Feedback
00:29:03 – Consider chilling out in 2028
01:32:13 – Guild of the Rose
01:51:24 – Thank the Supporter!



We talk about three EA-adjacent articles, hitting veganism, directing your donations, and community.

LINKS
Why you should eat meat – even if you hate factory farming by KatWoods
Donations, The Fifth Year, by Jenn
Promises, by Harri
Bayesian Conspiracy 28 – Effective Altruism
The Case For Sardines
“oh noes, Anthropic employees gonna give money to EA stuff” article
Gwern on AI Slop
Jenn on AI Slop & Virginia Woolf

Paid Bonus content for the week – Full Video, Preshow Chat

00:02:23 – Feedback
00:17:20 – You Should Eat Meat
00:59:33 – Directing Donations
01:23:04 – Promises of EA
01:38:06 – Guild of the Rose
01:40:12 – Thank the Supporter!



We relive the last 48 hours of the future of humanity being wrestled over. The Pentagon wants to use Claude for comprehensive mass surveillance of Americans and autonomous kill-bots, and Anthropic says no. The Pentagon retaliates with extreme prejudice. With guest star Matt.

LINKS
Washington Post summary
Anthropic’s response
Trump’s response
Hegseth’s unhinged lunacy
We Will Not Be Divided – Google and OpenAI employees open letter
Eliezer on the tech/govt war
Scott Alexander tweet
RSP comment
Opus 3 Retirement

Paid Bonus content for the week – Full Video, Preshow Chat



We are inspired by Andrew Cutler’s Writing for AI to consider the value of writing for LLMs.

LINKS
Andrew Cutler’s Writing for AI
Gwern’s Writing for LLMs
Tracing Woodgrain’s Reliable Sources
Shambaugh’s An AI Agent Published a Hit Piece on Me
Eneasz’s Stone Age Billionaire Can’t Word Good
InkHaven
LessOnline

The main purpose of the AFFINE Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, making them better equipped for work on the mitigation of AI existential risk.
AFFINE Alignment Seminar

Paid Bonus content for the week – Preshow chatter, Full Show Video

00:00:49 – Announcements & Feedback
00:42:15 – Writing for AI
01:23:15 – AFFINE Alignment Seminar
01:31:11 – Guild of the Rose
01:33:37 – Thank the Supporter!



Eneasz talks a bit about his CFAR experience, and we discuss DaystarEld’s Epistemically Honest Reassurance.

LINKS
CFAR’s home page
Upcoming CFAR workshops
our episode 152 – Frame Control with Aella
Epistemically Honest Reassurance
Pokémon, Origin of the Species (also in audio)

Paid Bonus content for the week – Preshow chatter, Full Show Video

00:00:56 – Announcements & Feedback
00:13:15 – Eneasz at CFAR
00:48:15 – Epistemically Honest Reassurance
01:29:49 – Guild of the Rose
01:33:20 – Thank the Supporter!

