Vaniver

Comments
Vaniver · 2811

Blue Prince came out a week ago; it's a puzzle game in which a young boy receives a mysterious inheritance from his granduncle the baron: a giant manor house that rearranges itself every day, which he can keep if he manages to find the hidden 46th room.

The basic structure--slowly growing a mansion through the placement of tiles--is simple enough and will be roughly familiar to anyone who's played Betrayal at House on the Hill in the last twenty years. It's atmospheric and interesting; I've heard it suggested that it might be this generation's Myst.

But this generation, as you might have noticed, loves randomness and procedural generation. In Myst, you wander from place to place, noticing clues; nearly all of the action happens in your head, in your growing understanding of the world. If you know the solution to the final puzzle, you can speedrun Myst in less than a minute. Blue Prince is very nearly a roguelike instead of a roguelite, with accumulated clues driving most of your progression instead of in-game unlocks. But it's a world you build out through play, giving you only stochastic access to the puzzlebox.

This also means a lot of it ends up feeling like padding or filler. Many years ago I noticed that some games are really books or movies wrapped in a game for some reason, and that I should check whether I actually like the book or movie enough to play the game. (Or, with games like Final Fantasy XVI, whether I was happier just watching the cutscenes on YouTube, because that would let me watch them at 2x speed.) Eliezer had a tweet a while back:

My least favorite thing about some video games, many of which I think I might otherwise have been able to enjoy, is walking-dominated gameplay. Where you spend most of your real clock seconds just walking between game locations.

Blue Prince has walking-dominated gameplay. It has pointless animations which are neat the first time but aggravating the fifth. It ends up with a pace more like a board game's, where rather than racing from decision to decision you leisurely walk between them.

This is good in many ways--it gives you time to notice details, it gives you time to think. It wants to keep you from getting lost in resource management and tile placement, and to keep you lost in the puzzles instead. But often you end up with a lead on one of the puzzles--"I need Room X to activate Room Y to figure out something"--but don't actually draw one of the rooms you need, or finally get both of the rooms but are missing the resources to actually use them.

And so you call it a day and try again. It's like Outer Wilds in that way--you can spend as many days as you like exploring and clue-hunting--but Outer Wilds is the same every time, and if you want to chase down a particular clue you can, if you know what you're doing. But Blue Prince will ask you for twenty minutes, and maybe deliver the clue; maybe not. Or you might learn that you needed to take more detailed notes on a particular thing, and now you have to go back to a room that doesn't exist today--exploring again until you find it, and then exploring again until you find the room that you were in originally.

So when I found the 46th room about 11 hours in--like many puzzle games, the first 'end' is more like a halfway point (or less)--I felt satisfied enough. There's more to do--more history to read, more puzzles to solve, more trophies to add to the trophy room--but the fruit are so high on the tree, and the randomly placed branches make it a bothersome climb.

Vaniver · 200

The grass that can be touched is not the true grass.

Vaniver* · 460

What convinced me this made sense? 

  • One of EA's most popular and profitable games is The Sims, which famously benefits from Sim irrationality. In The Sims 5, there will be bold and exciting new ways for your Sims to behave, and they'll be able to use our memetic virality model to have controversies and factional alignment. (Generating scissor statements is ethical so long as you're doing it in Simlish.)
  • EA is investing in the hypothesis that bad writing drives underperformance. Having ratfic writers and philosophers look at Mass Effect 3 could have turned it from a disappointing series-ender (did you play Andromeda?) into a resounding triumph, and Dragon Age: Veilguard, despite being positively reviewed in general, was panned for its weak writing and became embroiled in culture-war controversy. We've thought a lot about how misbehaving gods would act, in a way that I think would have made for a more compelling story and user experience.
  • I didn't expect we could do anything relating to EA's flagship sports games (FIFA, NHL, Madden, etc.), but what astonished me was the potential to do the reverse. I don't know if we'll be able to get Gwern 2025 out in time, but look forward to Gwern 2026. They were practically salivating at the idea of being able to take a normally annual product, tied to sports schedules that won't be adjusted by advancing AI progress, and adapt it to a domain which, as part of an overall hyperbolic growth curve, will generate enough new content for a new release in ~half the time of the previous one (a quick check of that arithmetic follows this list).
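A quick check of that cadence (my arithmetic, not anything from the announcement): if the gap before the first new release is $t_0$ and each subsequent gap halves, the total time to ship arbitrarily many releases converges:

$$\sum_{n=0}^{\infty} t_0 \left(\frac{1}{2}\right)^n = 2t_0$$

That finite-time blowup is the signature of a hyperbolic growth curve; under merely exponential growth the gaps would shrink roughly harmonically, and the release times would never converge.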
Vaniver · 20

The short version is they're more used to adversarial thinking and security mindset, and don't have a culture of "fake it until you make it" or "move fast and break things".

I don't think it's obvious that it goes that way, but I also don't think it's obvious that it goes the other way.

Vaniver · 70

This project is extremely neglected, since normal people don’t seriously consider whether orcas might be that smart.

Ok, but what matters is not what normal people are doing, but what specialists are doing. Why not try to do this as part of Project CETI?

Vaniver · 20

It looks like you only have pieces with 2 connections and 6 connections, which works for maximal density. But I think you need some slack space to create pieces without the six axial lines. I think you should include the tiles with 4 connections as well (and maybe even the 0-connection tile!) and the other 2-connection tiles; it increases the number of tiles by quite a bit, but I think it will let you make complete knots.
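For a sense of how much the number grows, here's a minimal enumeration sketch, under my assumption that a tile is a subset of the six hexagon edges carrying connections, counted up to rotation (mirror images treated as distinct):

```python
from itertools import product

def canonical(tile):
    """Canonical form of a tile: the lexicographically smallest
    of its six rotations."""
    return min(tile[i:] + tile[:i] for i in range(6))

# Enumerate all 2^6 connection patterns, deduplicating rotations.
distinct = {canonical(bits) for bits in product((0, 1), repeat=6)}

counts = {}
for tile in distinct:
    counts[sum(tile)] = counts.get(sum(tile), 0) + 1

for k in sorted(counts):
    print(f"{k} connections: {counts[k]} distinct tiles")
# 0:1, 1:1, 2:3, 3:4, 4:3, 5:1, 6:1 -- 14 tiles in total.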

Vaniver · 20

I haven't thought deeply about this specific case, but I think you should consider this like any other ablation study--like, what happens if you replace the SAE with a linear probe?
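To make that concrete, here's a minimal sketch of the comparison I have in mind (the data and names are illustrative placeholders, not from any particular codebase): fit a plain logistic-regression probe on the same activations the SAE sees, and treat its performance as the baseline the SAE-based method needs to beat.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for real data: model activations and binary labels for
# whatever concept the SAE feature is supposed to capture.
rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 512))
labels = rng.integers(0, 2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(
    acts, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear probe accuracy:", probe.score(X_te, y_te))
# If the SAE-based pipeline doesn't clearly beat this number on the
# same split, the SAE isn't earning its extra complexity.
```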

Vaniver · 40

And then a lot of the post seems to make really quite bad arguments against forecasting AI timelines and other technologies, doing so with... I really don't know, a rejection of bayesianism? A random invocation of an asymmetric burden of proof?

I think the position Ben (the author) has on timelines is really not that different from Eliezer's; consider pieces like this one, which is not just about the perils of biological anchors.

I think the piece spends less time than I would like on what to do in a position of uncertainty--like, if the core problem is that we are approaching a cliff of uncertain distance, how should we proceed?--but I think it's not particularly asymmetric.

[And--there's something I like about realism in plans? If people are putting heroic efforts into a plan that Will Not Work, I am on the side of the person on the sidelines trying to save them their effort, or direct them towards a plan that has a chance of working. If the core uncertainty is whether or not we can get human intelligence advancement in 25 years--I'm on your side of thinking it's plausible--then it seems worth diverting what attention we can from other things towards making that happen, and being loud about doing that.]

Vaniver* · 158

Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes.

I think this is my biggest disagreement with the piece. I think this is the belief I most wish 10-years-ago-us didn't have, so that we would try something else, which might have worked better than what we got.

Or--when shopping the message around to Silicon Valley types, we might have thought more about the ways that Silicon Valley is the child of the US military-industrial complex, and will overestimate its ability to control what it creates (or its lack of desire to!). Like, I think many more 'smart nerds' than military types believe that human replacement is good.

Vaniver · 40

The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress. Which seems incorrect as an interpretation of the motivation. Most people that I hear talk about buying time are talking about buying time for technical progress in alignment.

I think you need both? That is--I think you need both technical progress in alignment, and agreements and surveillance and enforcement such that people don't accidentally (or deliberately) create rogue AIs that cause lots of problems.

I think historically many people imagined "we'll make a generally intelligent system and ask it to figure out a way to defend the Earth" in a way that seems less plausible to me now. It seems more like we need to have systems in place already playing defense, which ramp up faster than the systems playing offense.
