You have a static copy of paperclipper on your HDD. You decide to smash it with a hammer. Then OpenAI or whoever randomly figures out seed AI and the universe gets tiled with reward tensors.
This seems like a really pathetic way to go for a paperclipper, to lose a finger in this reality. If only there were some way to coordinate! We both got 0 out of this encounter; that's ridiculous.
Do you think the slobbering thoughts were lengthy enough to trigger the summarizer?
I don't think those are raw CoTs; they have a summarizer model.
I remember one twitter post with erotic roleplay ("something something slobbering for mommy"??? I don't remember) where the summarizer model refused to summarize such perversion. Please help me find it?
EDIT: HA! Found it, despite twitter search being horrendous. Fucking twitter, wasted 25 minutes.
https://x.com/cis_female/status/2010128677158445517
The Beirut explosion looked pretty spherical tbh.
https://www.reddit.com/r/gifs/comments/i3lzno/huge_explosion_in_beirut_happened_30_min_ago/
especially this one: https://www.reddit.com/r/gifs/comments/y0mvw2/beirut_shockwave/
https://www.reddit.com/r/gifs/comments/i41aj4/beirut_explosion_7_angles_at_once/
Okay. You know how the streets you see tend to be more crowded, the airplanes you fly on have more seats taken, and the restaurants you're in have more people, on average in your observations, than they actually do on average? It's not at all esoteric; you have to do such corrections in ordinary modelling. Anthropic reasoning is a straightforward extension of this onto rather uncertain base territory (and with attempts to do it in a principled way).
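A minimal sketch of that kind of correction, with made-up restaurant numbers (this is just size-biased sampling, a.k.a. the inspection paradox): if you sample by picking a random diner rather than a random restaurant, crowded places get over-counted in proportion to how crowded they are, and reweighting each observation by 1/size recovers the true average.

```python
import random

# Hypothetical occupancies of four restaurants; the "true" average is their plain mean.
occupancies = [2, 5, 10, 40]
true_mean = sum(occupancies) / len(occupancies)

# What a random diner sees: sampling a person, not a restaurant,
# over-counts crowded restaurants in proportion to their occupancy.
diners = [n for n in occupancies for _ in range(n)]
samples = [random.choice(diners) for _ in range(100_000)]
observed_mean = sum(samples) / len(samples)

# Correction: weight each observation by 1/size (i.e. take the harmonic mean of the samples).
corrected_mean = len(samples) / sum(1 / s for s in samples)

print(f"true mean:      {true_mean:.2f}")    # 14.25
print(f"observed mean:  {observed_mean:.2f}")   # biased high, around 30
print(f"corrected mean: {corrected_mean:.2f}")  # back near 14.25
```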
Well, you can imagine yourself updating on all the evidence as it came in, in series. Like when you are a child and learn for the first time what year it is.
You get a similar situation overall.
Suppose the next thing you experience is waking up in a room. There is writing: "You had either a 1/100 or a 99/100 chance of being killed in your sleep before waking up, corresponding to the door being painted green or red from the outside." Before opening the door and walking out, what color do you anticipate it to be from the outside?
You should probably think you are in the 1/100 room?
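A quick worked version, assuming a uniform 1/2 prior over which room you were put in (the prior isn't stated above, so this is just an illustration). The only evidence is that you woke up at all:

P(green | awake) = 0.99 · 0.5 / (0.99 · 0.5 + 0.01 · 0.5) = 0.99

so roughly 99:1 odds that the door is green, i.e. that you were in the room with only a 1/100 chance of being killed.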
>other than "being smart".
More like, being smarter than average. If you are that exact level of smart but in a population whose mean is higher than your smarts, then the memes will target you as a primary substrate. You could argue that in that case there are fewer such memes, but I don't know; that probably matters less than positional smartness.
Isn't decision theory pretty closely related to AIXI stuff? Or to other simple frameworks that try to take a stab at the core of intelligence. I would expect something like this to show up in groups that try to understand intelligence from first principles, from a more abstract standpoint, rather than something more like applied animal breeding.
Then it's not surprising that the groups that tried to do that had an interest in that particular area.
The Origami Men by Tomás B.