Re: biosignatures detected on K2-18b, there have been a couple of popular takes saying this solves the Fermi Paradox: K2-18b is so big (8.6x Earth mass) that you can't launch to orbit from it, and maybe most life-bearing planets are like that.
This is wrong on several counts:
I would say "agent harness" is a type of "scaffolding". I used it in this case because it's how Logan Kilpatrick described it in the tweet I linked at the beginning of the post.
I'm not sure that a TAS counts as "AI" since they're usually compiled by humans, but the "PokeBotBad" you linked is interesting; I hadn't heard of that before. It's an Any% Glitchless speedrun bot that ran until ~2017, and on 2/25/17 it managed a solid 1:48:27, which beat the human world record until 2/12/18. Still, I'd say this is more a programmed "bot" than an AI in the sense we care about.
Anyway, you're right that the whole reason the Pokémon benchmark exists is because it's interesting to see how well an untrained LLM can do playing it.
since there's no obvious reason why they'd be biased in a particular direction
No, I'm saying there are obvious reasons why we'd be biased towards truthtelling. I mentioned "spread truth about AI risk" earlier, but more generally, one of our main goals is to get our map to match the territory as a collaborative community project. Lying makes that harder.
Besides sabotaging the community's map, lying is dangerous to your own map too. As OP notes, to really lie effectively, you have to believe the lie. Well is it said, "If you once tell a lie, the truth is ever after your enemy."
But to answer your question, it's not wrong to do consequentialist analysis of lying. Again, I'm not a Kantian: if a guy is here to randomly murder you, tell him whatever lie you need to survive. But in less thought-experimenty cases, I think there are a lot of long-term consequences that would be tough to measure.
I'm not convinced SBF had conflicting goals, although it's hard to know. But more importantly, I don't agree that rationalists "tend not to lie enough". I'm no Kantian, to be clear, but I believe rationalists ought to aspire to a higher standard of truthtelling than the average person, even if there are some downsides to that.
Have we forgotten Sam Bankman-Fried already? Let’s not renounce virtues in the name of expected value so lightly.
Rationalism was founded partly to disseminate the truth about AI risk. It is hard to spread the truth when you are a known liar, especially when the truth is already difficult to believe.
Huh, seems you are correct. They also apparently are heavily cannibalistic, which might be a good impetus for modeling the intentions of other members of your species…
Oh okay. I agree it's possible there's no Great Filter.
Dangit I can't cease to exist, I have stuff to do this weekend.
But more seriously, I don't see the point you're making? I don't have a particular objection to your discussion of anthropic arguments, but also I don't understand how it relates to the "what part of evolution/planetary science/sociology/etc. is the Great Filter" scientific question.
Actually, another group released VideoGameBench just a few days ago, which includes Pokémon Red among other games. It's just a basic scaffold for Red, but that's fair.
As I wrote in my other post:
I think VideoGameBench has the right approach, which is to give only a basic scaffold (less than described in this post), and when LLMs can make quick, cheap progress through Pokemon Red (not taking weeks and tens of thousands of steps) using that, we'll know real progress has been made.