PoignardAzur


(according to claude)

I wish people would stop saying this. We shouldn't normalize relying on AI to have opinions for us. These days they can even link their sources! Just look at the sources.

I mean, I suppose the alternative is that people use Claude without checking and just don't mention it, so I don't really have a solution. But at least in that scenario it would be considered embarrassing. We should stay aware that there are better practices that don't require much more effort.

Likewise, Ev put in some innate drives related to novelty and aesthetics, with the idea that people would wind up exploring their local environment. Very sensible! But Ev would probably be surprised that her design is now leading to people “exploring” open-world video game environments while cooped up inside.

I think it's not obvious that Ev's design is failing to work as intended here.

Video games are a form of training. Some games can get pretty wireheady (Cookie Clicker), but many of the most popular, most discussed, most played games exercise parts of your brain in useful ways. The best-selling game of all time is Minecraft.

Moreover, I wouldn't be surprised if people who played Breath of the Wild were statistically more likely to go hiking afterward.

First, I could have just gone and talked to Oliver earlier.

I'd say you're underrating that option.

Part of it is domain-specific: a lot of developers start off with very little agency, and get "try to do the thing yourself before asking the teacher how to do it" drilled into them; it's easy to overcorrect to "never ask for help". Learning to ask for help faster is a valuable senior developer skill.

On a more general level, "asking for help faster" is a disgustingly common answer to the question "how could I have found the solution sooner?". Life isn't an exam; you don't get points off for talking to people. (Or for using ChatGPT, StackOverflow, etc.)

I recommend Thinking Physics and Baba is You as sources of puzzles to start grinding on this

Mhh. Interesting. I haven't played Baba Is You in a while. It has enough puzzles that I think you could actually practice some skills with it.

I might try your routine, though I'm a bit skeptical of it.

Yeah, it seems a little weird to me that the post includes Eliezer's claim uncritically. "I totally train myself to improve on the spot, all the time" seems like a bold claim for someone who's admitted to having unusually low willpower?

If they are good at explaining or you are good at interviewing, you can learn the details about what you'd do differently if you yourself made the update, and ask yourself whether the update actually makes sense.

I would be extremely surprised if this didn't help at all.

I wouldn't be very surprised. One, it seems consistent with what the world looks like.

Two, I suspect for the kind of wisdom / tacit knowledge you want, you need to register the information in types of memory that are never activated by verbal discussion or visceral imagining, by design.

Otherwise, yeah, I agree that it's worth posting ideas even if you're not sure of them, and I do appreciate the epistemic warning at the top of the post.

To be clear, I think you should treat this as bogus until you have evidence better than what you listed.

You're trying to do a thing where, historically, a lot of people have had clever ideas that they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it didn't amount to anything. So you should treat your own sincere enthusiasm and your own "subjectively it feels like it's working" vibes with suspicion. You should actively be looking for ways to falsify your theory, which I'm not seeing anywhere in your post.

Again, I note that you haven't tried the chess thing.

I don't really know what to tell you. My mindset basically boils down to "epistemic learned helplessness", I guess?

It's like, if you see a dozen different inventors try to elaborate ways to go to the moon based on Aristotelian physics, and you know the last dozen attempts failed, you're going to expect them to fail as well, even if you don't have the tools to articulate why. The precise answer is "because you guys haven't invented Newtonian physics and you don't know what you're doing", but the only answer you can give is "Your proposal for how to get to the moon uses a lot of very convincing words but the last twelve attempts used a lot of very convincing words too, and you're not giving evidence of useful work that you did and these guys didn't (or at least, not work that's meaningfully different from the work these guys did and the guys before them didn't)."

And overall, the general posture of your article gives me a lot of "Aristotelian rocket" vibes. The scattershot approach of making many claims, supporting them with a collage of stories, and building a skill tree where you need ~15 (fifteen!) skills to supposedly unlock the final skill, strikes me as the kind of model you construct when you're trying to build redundancy into your claims because you're not extremely confident in any one part. In other words, too many epicycles.

I especially notice that the one empirical experiment you ran, trying to invent tacit knowledge transfer with George in one hour, seems to have failed a lot more than it succeeded, and you basically didn't update on that. The start of the post says:

"I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn't work I can still go do the chess thing later."

The end says:

That all sounds vaguely profound, but one thing our conversation wrapped up without is "what actions would I do differently, in the worlds where I had truly integrated all of that?"

We didn't actually get to that part.

And yet (correct me if I'm wrong) you didn't go do the chess thing.

Here are my current guesses:

No! Don't!

I'm actually angry at you there. Imagine me saying rude things.

You can't say "I didn't get far enough to learn the actual lesson, but here's the lesson I think I would have learned"! Emotionally honest people don't do that! You don't get to say "Well this is speculative, buuuuuut"! No "but"! Everything after the "but" is basically Yudkowsky's proverbial bottom line.

if you (PoignardAzur) set about to say "okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?"

Through failure. My big theory of learning is that you learn through trying to do something, and failing.

So if you want to teach someone something, you set up a frame for them where they try to do it, and fail, and then you iterate from there. You can surround them with other people who are trying the same thing so they can compare notes, do post-mortems together, etc. (that's how my software engineering school worked), but in every case the secret ingredient is failure.

This post is very evocative, it touches on a lot of very relatable anxieties and hopes and "things most rationalists are frustrated they can't do better" type of stuff.

But its ratio of useful or actionable content to personal anecdote seems very low, extremely low for a post that made the curated list. To me it reads as a collection of mini-insights, but I don't really see any unifying vision to them, nothing giving better handles on why pedagogy fails or why people fail to learn from other people's wisdom.

It's too bad, because the list of examples you give at the start is fairly compelling. It just doesn't feel like the rest of the article delivers.

Not necessarily.

I think any method that calculates the value/utility of your wealth as a timeless function of the amount will be pretty disconnected from how people behave in practice. It doesn't account for people making plans and having to scrap them because an accident cost them their savings, for instance.

(But then again, I'm not an economist, maybe there are timeless frameworks that account for that.)

Something about the article felt off to me, and "should I buy insurance if interest rates are zero" is a good intuition pump for why.

Yes, I think you should still buy insurance. The reason I'd come up with, peace of mind aside, is that losing a lot of money at once is worse than losing a little money over time: when you lose a lot at once, your options are more limited. You have less flexibility, less ability to capitalize on opportunities, less ability to withstand other catastrophes, etc.
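The usual way to formalize "losing a lot at once is worse" is concave utility (e.g. log utility of wealth): even at zero interest, and even when the premium exceeds the expected loss, insurance can raise your expected utility. A minimal sketch, with made-up numbers (the wealth, loss, probability, and premium below are all hypothetical, not from the article):

```python
import math

# Hypothetical numbers: $100k wealth, 1% chance of a $50k loss,
# and a $600 premium (above the $500 expected loss, so the
# insurer makes money in expectation).
wealth = 100_000
loss = 50_000
p_loss = 0.01
premium = 600

def expected_log_utility(insured: bool) -> float:
    """Expected log-utility of end-of-period wealth. Log utility is a
    standard stand-in for 'a big loss hurts more than its dollar value'."""
    if insured:
        # Insured: you pay the premium; the loss is covered either way.
        return math.log(wealth - premium)
    # Uninsured: keep the premium, but eat the loss with probability p.
    return (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)

# Despite the premium being worse than the expected loss in dollar
# terms, the insured outcome has higher expected log-utility.
print(expected_log_utility(True) > expected_log_utility(False))  # True
```

With these numbers the dollar expectation favors skipping insurance ($600 premium vs. $500 expected loss), but log utility flips the verdict, which matches the intuition above about big losses limiting your options.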
