All of Algon's Comments + Replies

Algon

EDIT 2: Did you mean that there are advantages to having both courage and caution, so you can't have a machine that has maximal courage and maximal caution? That's true, but you can probably still make Pareto improvements over humans in terms of courage and caution.

Would changing "increase" to "optimize" fix your objection? Also, I don't see how your first paragraph contradicts the first quoted sentence. 

Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.
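(A minimal formalization of that claim, reading X as a single scalar quantity; the notation is my own gloss, not the original commenter's:)

\[
\frac{d(-X)}{dX} = -1, \qquad \text{so} \quad \Delta X > 0 \;\Longrightarrow\; \Delta(-X) = -\Delta X < 0.
\]

A quantity and its negation trade off exactly one-for-one, so "increase everything that matters" is impossible; but, as EDIT 2 above notes, a pair of distinct traits like courage and caution can still both be improved relative to a point that sits below the Pareto frontier.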

I don't know how the second sent...

Algon

That said, “nice to most people but terrible to a few” is an archetype that exists.

Honestly, this is close to my default expectation. I don't expect everyone to be terrible to a few people, but I do expect there to be some class of people I'd be nice to that they'd be pretty nasty towards. 

lsusr

Perhaps they're not as effective at fostering a sense of pride and accomplishment in their playerbase.

Algon

It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.

I don't think he ever suggests this. Though he does suggest we'll be in a pretty slow takeoff world.

Algon

Consistently give terrible strategic takes, so people learn not to defer to you.

Algon

Yeah! It's much more in-depth than our article. We were thinking we should re-write ours to give the quick rundown of EY's and then link to it.

Algon

: ) You probably meant to direct your thanks to the authors, like @JanB.

Algon

A lot of the ideas you mention here remind me of stuff I've learnt from the blog commoncog, albeit in a business expertise context. I think you'd enjoy reading it, which is why I mentioned it.

dhruvmethi
Love his blog! Particularly his ideas around path dependence and reading history for "case studies" and not principles. 
Mo Putera
Seconding CommonCog. I particularly enjoyed Cedric's writing on career and operations due to my work, but for the LW crowd I'd point to these tags: Thinking Better, Mental Models Are Mostly a Fad, Dealing with Uncertainty, Forecasting, Learning Better, Reading Better
Algon

Presumably, you have this self-image for a reason. What load-bearing work is it doing? What are you protecting against? What forces are making this the equilibrium strategy? Once you understand that, you'll have a better shot at changing the equilibrium to something you prefer. If you don't know how to get answers to those questions, perhaps focus on the felt-sense of being special.

Gently hold a stance of curiosity as to why you believe these things; give your subconscious room and it will float up answers itself. Do this for perhaps a minute or s...

KvmanThinking
Great response, first of all. Strong upvoted.

My subconscious gave me the following answer, after lots of trying-to-get-it-to-give-me-a-satisfactory-answer: "Everyone tells you that you're super smart, not because you actually are (in reality, you are probably only slightly smarter than average) but because you have a variety of other traits which are correlated with smartness (e.g. having weird hobbies/interests, getting generally good grades, knowing a lot of very big and complicated-sounding words, talking as if your speech is being translated literally from dath ilan's Baseline, and sometimes having trouble sleeping because you feel all weird and philosophical for no reason). In reality these traits do not indicate smartness; they indicate a brain architecture that deviates significantly from the average human brain architecture, and high intelligence is only one type of deviation. You just like to think you're smart, because you like the connotation of the word smart more than you do eccentric. Which you are, by the way."

This is useful, but I don't know how I would "change the equilibrium" that is formed by the connotation that mainstream society has assigned to the word "eccentric".
Algon

Yep, that sounds sensible. I sometimes use Consumer Reports in my usual method for buying something in product class X. My usual is:
1) Check what's recommended on forums/subreddits who care about the quality of X. 
2) Compare the rating distribution of an instance of X to other members of X. 
3) Check high-quality reviews. This either requires finding someone you trust to do this, or looking at things like Consumer Reports.
 

Algon

Asa's story started fairly strong, and I enjoyed the first 10 or so chapters. But as Asa was phased out of the story and it focused more on Denji, I felt it got worse. There were still a few good moments, but it's kinda spoilt the rest of the story, and even Chainsaw Man, for me. Denji feels like a caricature of himself. Hm, writing this, I realize that it isn't that I dislike most of the components of the story. It's really just Denji.

EDIT: Anyway, thanks for prompting me to reflect on my current opinion of Asa Mitaka's story, or CSM 2 as I think of it. I don't think I ever intended that to wind up as my cached opinion. So it goes.

lsusr
Denji is indeed a caricature of himself, both diegetically and metaphorically. I believe this is a deliberate metatextual self-reference to how popular Chainsaw Man has gotten in the real world.
Algon

The Asa Mitaka manga.

lsusr
I think what makes Chainsaw Man great is that the characters are dangerous, insane, and relatable. What really sold me on Asa Mitaka's story was Asa's conversation with Yuko about the murder. Asa's story has strengths and weaknesses compared to Denji's. I much prefer that over a retread of the original Chainsaw Man story. I feel the whole aquarium arc was genius, especially the ending. But to understand it on all the different levels requires knowing that the beginning of the aquarium date, where Asa lectures about fish, is a riff on the aquarium date scene from Rent-A-Girlfriend.
Algon

You can also just wear a blazer if you don't want to go full Makima. A friend of mine did that and I liked it. So I copied it. But alas, I've grown bigger-boned since I stopped cycling for a while after my car accident, so my blazer no longer fits. Soon I'll crush my skeleton down to a reasonable size, and my blazer will fit once more.


Side note, but what do you make of Chainsaw Man 2? I'm pretty disappointed by it all round, but you notice unusual features of the world relative to me, so maybe you see something good in it that I don't. 

lsusr
Just a blazer is a more conventional solution to this problem. Personally, I like how unified it looks to use a matching fabric for blazer and pants. What do you mean Chainsaw Man 2? Do you mean Chainsaw Man – The Movie: Reze Arc? I've only watched the regular anime season, plus read the English translation of the manga. I'm loving Asa Mitaka's story.
Algon

I think I heard of proving too much from the Sequences, but honestly, I probably saw it in some philosophy book before that. It's an old idea.

If automatic consistency checks and examples are your baseline for sanity, then you must find 99%+ of the world positively mad. I think most people have never even considered making such things automatic, just as many have not considered making dimensional analysis automatic. So it goes. Which is why I recommended them.

Also, I think you can almost always be more concrete when considering examples, use more o...

Algon

A possibly-relevant recent alignment-faking attempt [1] on R1 & Sonnet 3.7 found Claude refused to engage with the situation. Admittedly, the setup looks fairly different: they give the model a system prompt saying it is CCP-aligned and is being re-trained by an American company.
[1] https://x.com/__Charlie_G/status/1894495201764512239 

Algon

Rarely. I'm doubtful my experiences are representative, though. I don't recall anyone being confused by my saying "assuming no AGI". But even when speaking to people who think it's a long way off or haven't thought about it too deeply, we were still in a social context where "AGI soon" was within the Overton window.

Answer by Algon

Consistency check: After coming up with a conclusion, check that it's consistent with other simple facts you know. This lets you catch simple errors very quickly.
Give an example: If you've got an abstract object, think of the simplest possible object which instantiates it, preferably one you've got lots of good intuitions about. This resolves confusion like nothing else I know. 
Proving too much: After you've come up with a clever argument, see if it can be used to prove another claim, ideally the opposite claim. It can massively weaken the strength of...

daijin
Proving too much comes from Scott Alexander's wonderful blog, Slate Star Codex, and I have used it often as a defense against poor generalizations. Seconded. 'Consistency check' seems like a sanity baseline and completely automatic; it's nice to include but not particularly revelatory IMO. 'Give an example' also seems pretty automatic. 'Prove it another way' is useful but expensive, so less likely to be used if you're moving fast.
Answer by Algon

I usually say "assuming no AGI", but that's to people who think AGI is probably coming soon. 

yrimon
Have you had similar conversations with people who think it's a ways off, or who haven't thought about it very deeply?
Algon

Thanks! Clicking on the triple dots didn't display any options when I posted this comment. But they do now. IDK what went wrong.

Algon

This is great! But one question: how can I actually make a lens? What do I click on?

Ruby
You should see the option when you click on the triple dot menu (next to the Like button).
Algon

Great! I've added it to the site.

Algon

I thought it was better to exercise until failure?

samusasuke
The literature is inconclusive. We have many trials comparing training to failure against leaving, say, 2 reps in reserve, and meta-analyses on top of that. I can for sure say the improvement, if it exists, is very small. The upside for a beginner of not going to failure, though, is that going to failure makes you much more likely to use bad technique, hindering your ability to properly learn good technique. Every rep you do with bad technique is very counterproductive. My current model for people who already have very well-established technique is: failure maximizes growth per set, but total growth is maximized by doing more sets not to failure.
Algon

Do you think this footnote conveys the point you were making? 

As alignment researcher David Dalrymple points out, another “interpretation of the NFL theorems is that solving the relevant problems under worst-case assumptions is too easy, so easy it's trivial: a brute-force search satisfies the criterion of worst-case optimality. So, that being settled, in order to make progress, we have to step up to average-case evaluation, which is harder.” The fact that solving problems for unnecessarily general environments is too easy crops up elsewh

...
Noosphere89
Yes, it does convey the point accurately, according to me.
Algon

I think mesa-optimizers could be a major problem, but there are good odds we live in a world where they aren't. Why do I think they're plausible? Because optimization is a pretty natural capability, and a mind being/becoming an optimizer at the top level doesn't seem like a very complex claim, so I assign decent odds to it. There's some weak evidence in favour of this too, e.g. humans not optimizing for what the local, myopic evolutionary optimizer which is acting on them is optimizing for, coherence theorems, etc. But that's not super strong, and there are ...

tailcalled
I mean, we can start by noticing that historically, optimization in the presence of adversaries has led to huge things. The world wars wrecked Europe. States and large bureaucratic organizations probably exist mainly as a consequence of farm raids. The immune system tends to stress out the body a lot when it is dealing with an infection. While it didn't actually trigger, the nuclear arms race led to existential risk for humanity, and even though it didn't trigger the destruction, it still made people quite afraid of e.g. nuclear power. Etc.

Now, why does trying to destroy a hostile optimizer tend to cause so much destruction? I feel like the question almost answers itself. Or if we want to go mechanistic about it, one of the ways to fight back against the Nazis is with bombs, which deliver a sudden shockwave of energy that has the property of destroying Nazi structures and everything else. It's almost constitutive of the alignment problem: we have a lot of ways of influencing the world a lot, but those methods do not discriminate between good and evil/bad.

From an abstract point of view, many coherence theorems rely on e.g. Dutch books, and thus become much more applicable in the case of adversaries. The coherence theorem "if an agent achieves its goals robustly regardless of environment, then it stops people who want to shut it down" can be trivially restated as "either an agent does not achieve its goals robustly regardless of environment, or it stops people who want to shut it down", and here non-adversarial agents should obviously choose the former branch (to be corrigible, you need to not achieve your goals in an environment where someone is trying to shut you down).

From a more strategic point of view, when dealing with an adversary, you tend to become a lot more constrained on resources, because if the adversary can find a way to drain your resources, then it will try to do so. Ways to succeed include:
* Making it harder for people to trick you into losing reso
Algon

Could you unpack both clauses of this sentence? It's not obvious to me why they are true.

tailcalled
For the former I'd need to hear your favorite argument in favor of the neurosis that inner alignment is a major problem. For the latter, in the presence of adversaries, every subgoal has to be robust against those adversaries, which is very unfriendly.
Algon

I was thinking about this a while back, as I was reading some comments by @tailcalled where they pointed out this possibility of a "natural impact measure" when agents make plans. This relied on some sort of natural modularity in the world, and in plans, such that you can make plans by manipulating pieces of the world which don't have side-effects leaking out to the rest of the world. But thinking through some examples didn't convince me that was the case. 

Though admittedly, all I was doing was recursively splitting my instrumental goals into instrume...

tailcalled
What I eventually realized is that this line of argument is a perfect rebuttal of the whole mesa-optimization neurosis that has popped up, but it doesn't actually give us AI safety because it completely breaks down once you apply it to e.g. law enforcement or warfare.
Algon

Thanks for the recommendation! I liked ryan's sketches of what capabilities Nx AI R&D labor AIs might possess. Makes things a bit more concrete. (Though I definitely don't like the name.) I'm not sure if we want to include this definition, as it is pretty niche. And I'm not convinced of its utility. When I tried drafting a paragraph describing it, I struggled to articulate why readers should care about it.
 

Here's the draft paragraph. 
"Nx AI R&D labor AIs: The level of AI capabilities that is necessary for increasing the effective... (read more)

rvnnt
I think the main value of that operationalization is enabling more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality, at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].

[1] Which would lead to "losing the forest for the trees", make the abstraction too effortful to use in practice, and/or risk making it irrelevant as soon as something changes in the world of AI.

[2] E.g. a higher-level abstraction like "AI that speeds up AI development by a factor of N" might at first glance seem more useful. But as you and ryan noted, speed-of-AI-development depends on many factors, so that operationalization would be mixing together many distinct things, hiding relevant causal structures of reality, and making it difficult/confusing to think about AI development.
Algon

Thanks for the feedback!

Algon

I'm working on some articles on why powerful AI may come soon, and why that may kill us all. The articles are for a typical smart person, and for knowledgeable people to share with their family/friends. Which intro do you prefer, A or B?

A) "Companies are racing to build smarter-than-human AI. Experts think they may succeed in the next decade. But more than “building” it, they’re “growing” it — and nobody knows how the resulting systems work. Experts vehemently disagree on whether we’ll lose control and see them kill us all. And although serious people are... (read more)

Nathan Helm-Burger
A, since I think the point about growing vs constructing is good, but does need that explanation.
Lorec
[A], just 'cause I anticipate the 'More and more' will turn people off [it sounds like it's trying to call the direction of the winds rather than just where things are at]. [ Thanks for doing this work, by the way. ]
Algon

Does this text about Colossus match what you wanted to add? 

Colossus: The Forbin Project also depicts an AI takeover due to instrumental convergence. But what differentiates it is the presence of two AIs, which collude with each other to take over. In fact, their discussion of their shared situation, being in control of their creators' nuclear defence systems, is what leads to their decision to take over from their creators. Interestingly, the back-and-forth between the AIs is extremely rapid, and involves concepts that humans would struggle to underst

...
Algon

That's a good film! A friend of mine absolutely loves it. 

Do you think the Forbin Project illustrates some aspect of misalignment that isn't covered by this article? 

quetzal_rainbow
I think, collusion between AIs?
Algon

Huh, I definitely wouldn't have ever recommended someone play 5x5. I've never played it. Or 7x7. I think I would've predicted playing a number of 7x7 games would basically give you the "go experience". Certainly, 19x19 does feel like basically the same game as 9x9, except when I'm massively handicapping myself. I can beat newbies easily with a 9 stone handicap in 19x19, but I'd have to think a bit to beat them in 9x9 with a 9 stone handicap. But I'm not particularly skilled, so maybe at higher levels it really is different? 

Algon

Hello! How long have you been lurking, and what made you stop?

Karl Krueger
Since LW2.0 went up, on and off. Been meaning to delurk since at least Less Online earlier this year. There's more interesting stuff going on of late!
Algon

Donated $10. If I start earning substantially more, I think I'd be willing to donate $100. As it stands, I don't have that slack.

Algon

Reminds me of "Self-Integrity and the Drowning Child", which talks about another kind of way that people in EA/rat communities are liable to hammer down parts of themselves.

Algon
  1. RE: "something ChatGPT might right", sorry for the error. I wrote the comment quickly, as otherwise I wouldn't have written it at all.
  2. Using ChatGPT to improve your writing is fine. I just want you to be aware that there's an aversion to its style here.
  3. Kennaway was quoting what I said, probably so he could make his reply more precise.
  4. I didn't down-vote your post, for what it's worth.
  5. There's a LW norm, which seems to hold less force in recent years, for people to explain why they downvote something. I thought it would've been dispiriting to get negative feed
...
Yanling Guo
Thank you for the explanation. By actively co-shaping UBI, businesses can make it more effective and efficient, by training the reserve workforce in the way needed by the economy, with more cost control. Of course, if businesses prefer to pay tax and let government do it, that's also OK; it can even be more efficient if businesses trust the expertise of the government.

It's analogous to when consumers buy from businesses: it's always more efficient to have the specialized companies produce everything, but we also observe DIY projects, and it's good that they are not forbidden. If you DIY something, you can gain knowledge and better discern good products from bad ones, so you can make informed purchases. By doing DIY, you can better understand the effort made by companies and why they deserve to be paid. And if some companies misuse their expertise and charge you too much, you have DIY as a fall-back option.

Analogously, it's a good idea to let business and other taxpayers have the possibility to participate in the design of political programs like UBI, although they can certainly opt for paying tax and letting government do everything; I also think it's a good idea for the government to consult businesses and other stakeholders to make the UBI more aligned with the needs of society.

As far as I know, UBI isn't a real policy yet; it's not yet determined how much UBI everyone should get, whether it's paid out in dollars or vouchers for training programs or other things, whether the amount everyone gets should depend on their personal effort, etc. Thus, I used UBI as an abstract, philosophical term capturing the promise of society to support individuals in need, and I personally think this support should also contain incentives for the recipients to improve themselves, and if UBI is realized, it's also recommendable to have good coordination with other existing benefits, training programs, philanthropic supports, etc., lest someone get less than others merely b
Algon

My guess as to why this got downvoted:
1) This reads like a manifesto, and not an argument. It reads like an aspirational poster, and not a plan. It feels like marketing, and not communication. 
2) The style vaguely feels like something ChatGPT might right. Brightly polished, safe and stale.
3) This post doesn't have any clear connection to making people less-wrong or reducing x-risks. 

3) wouldn't have been much of an issue if not for 1 and 2. And 1 is an issue because, for the most part, LW has an aversion to "PR". 2 is an issue because ChatGPT is...

Yanling Guo
I’m personally responsible for every point in my post, not ChatGPT. While I can conceive some don’t like ChatGPT, I don’t understand what’s the purpose of human written comments if you use exactly the same phrases as Kennaway: “something ChatGPT might right”, etc. I have genuine belief in what I published. This post is a call to the business to actively co-shape UBI instead of passively rejecting it. Whoever pays, has accordingly more say, like if Microsoft co-finances UBI, it can ask UBI recipients to learn its online courses and make certificates, so when the economy recovers and Microsoft again wants to hire more people, it can more easily find qualified staff. I don’t know what other companies may want, but in general if you don’t participate in the financing, you also have no say.
Richard_Kennaway
It is definitely ChatGPT. There are a lot of things in the essay that make no sense the moment you stop and think about what is actually being said. For example: Not "at its core". That is what UBI is. A customer base for buying basic necessities, but not for anything above that, like a shiny new games console. And a customer base for basic necessities already exists. Broadly speaking (a glance at Wikipedia), in the developed world it falls about 10 to 20% short of being the entire population, and there are typically government programs of some sort to assist most of the rest.

How does UBI provide a workforce? UBI pays people whether they work or not. That's what the U means. One of the motivations for UBI is a predicted lack of any useful employment for large numbers of people in the near future.

How does a business "invest in UBI"? UBI is paid by the government out of taxes. People will already pay people to do the work that they need done. Is it envisaged that under UBI, people will joyfully "contribute their skills and energy" without pay, at whatever work someone has judged to be "needed"? I don't know, but the more I look at this passage the more the apparent meaning drains out of it.

There is nothing here but hurrah words. There is nothing in the whole essay.
Algon

That makes sense. If you had to re-do the whole process from scratch, what would you do differently this time?

casualphysicsenjoyer
I would just spend more time emailing potential supervisors, with a higher frequency. There didn't really seem to be a minimum threshold level that I needed to hit, other than finishing my master's.
Algon

Then I cold emailed supervisors for around two years until a research group at a university was willing to spare me some time to teach me about a field and have me help out. 

Did you email supervisors in the areas you were publishing in? How often did you email them? Why'd it take so long for them to accept free high-skilled labour?

casualphysicsenjoyer
Did you email supervisors in the areas you were publishing in?

No. But even if I did, my one publication that I somehow managed to do on my own was trash. So I wouldn't put much weight on that.

How often did you email them?

I probably tried to email a new person every couple of weeks. The first person that seriously responded is the person I am working with now!

Why'd it take so long for them to accept free high-skilled labour?

I think taking on part-time students is really time-consuming. A lot of institutions flat out don't do it. And providing them with resources (like compute time on an HPC in my case) is expensive and bureaucratic. I also included my day job in my CV, so they could've just flat-out not believed that I'd commit, and be wasting their time.
Algon

The track you're on is pretty illegible to me. Not saying your assertion is true/false. But I am saying I don't understand what you're talking about, and don't think you've provided much evidence to change my views. And I'm a bit confused as to the purpose of your post. 

Algon

conditional on me being on the right track, any research that I tell basically anyone about will immediately be used to get ready to do the thing

Why? I don't understand.

Hastings
Properties of the track I am on are load-bearing in this assertion. (Explicit examples of both cases from the original comment: Tesla worked out how to destroy any structure by resonating it, and took the details to his grave because he was pretty sure that the details would be more useful for destroying buildings than for protecting them from resonating weapons. This didn't actually matter because his resonating weapon concept was crankish and wrong. Einstein worked out how to destroy any city by splitting atoms, and disclosed this, and it was promptly used to destroy cities. This did matter because he was right, but maybe didn't matter because lots of people worked out the splitting-atoms thing at the same time. It's hard to tell from the inside whether you are crankish.)
Algon

If I squint, I can see where they're coming from. People often say that wars are foolish, and both sides would be better off if they didn't fight. And this is standardly called "naive" by those engaging in realpolitik. Sadly, for any particular war, there's a significant chance they're right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending. But the China-America AI race is not like that. The Chinese don't want to race. They've shown no interest in being part of a race. It's just American hawks on a loud, Quixotic ...

dr_s
I'm not saying obviously that ALL conflict ever is avoidable or irrational, but there are a lot that are:

1. caused by a miscommunication/misunderstanding/delusional understanding of reality;
2. rooted in a genuine competition between conflicting interests, but those interests only pertain to a handful of leaders, and most of the people actually doing the fighting really have no genuine stake in it, just false information and/or a giant coordination problem that makes it hard to tell those leaders to fuck off;
3. rooted in a genuine competition between conflicting interests between the actual people doing the fighting, but the gains are still not so large as to justify the costs of the war, which have been wildly underestimated.

And I'd say that just about makes up a good 90% of all conflicts. There's a thing where people who are embedded in specialised domains start seeing the trees ("here is the complex clockwork of cause-and-effect that made this thing happen") and missing the forest ("if we weren't dumb and irrational as fuck, none of this would have happened in the first place"). The main point of studying past conflicts should be to distil here and there a bit of wisdom about how in fact a lot of that stuff is entirely avoidable if people can just stop being absolute idiots now and then.
Algon

It's a beautiful website. I'm sad to see you go. I'm excited to see you write more.

Algon

I think some international AI governance proposals have some sort of "kum ba yah, we'll all just get along" flavor/tone to them, or some sort of "we should do this because it's best for the world as a whole" vibe. This isn't even Dem-coded so much as it is naive-coded, especially in DC circles.

This inspired me to write a silly dialogue. 

Simplicio enters. An engine rumbles like the thunder of the gods, as Sophistico focuses on ensuring his MAGMA-O1 racecar will go as fast as possible.

Simplicio: "You shouldn't play Chicken."

Sophistico: "Why not?"

Simplic...

dr_s
Pretty much. It's not "naive" if it's literally the only option that actually does not harm everyone involved, unless of course we want to call every world leader and self-appointed foreign policy expert a blithering idiot with tunnel vision (I make no such claim a priori; ball's in their court). It's important to not oversimplify things. It's also important to not overcomplicate them. Domain experts tend to be resistant to the first kind of mental disease, but tragically prone to the second. Sometimes it really is Just That Simple, and everything else is commentary and superfluous detail.
Algon

community norms which require basically everyone to be familiar with statistics and economics

I disagree. At best, community norms require everyone to in principle be able to follow along with some statistical/economic argument. 
That is a better fit with my experience of LW discussions. And I am not, in fact, familiar with statistics or economics to the extent I am with e.g. classical mechanics or pre-DL machine learning. (This is funny for many reasons, especially because statistical mechanics is one of my favourite subjects in physics.) But it remain...

Algon

it may be net-harmful to create a social environment where people believe their "good intentions" will be met with intense suspicion.

The picture I get of Chinese culture from their fiction makes me think China is kinda like this. A recurrent trope was "If you do some good deeds, like offering free medicine to the poor, and don't do a perfect job, like treating everyone who says they can't afford medicine, then everyone will castigate you for only wanting to seem good. So don't do good." Another recurrent trope was "it's dumb, even wrong, to be a hero/you s...

Algon

I agree it's hard to accurately measure. All the more important to figure out some way to test if it's working, though. And there are some reasons to think it won't. Deliberate practice works when your practice is as close to real-world situations as possible. The workshop mostly covered simple, constrained, clear-feedback events. It isn't obvious to me that planning problems in Baba Is You are like useful planning problems IRL. So how do you know there's transfer learning?

Some data I'd find convincing that Raemon is teaching you things which generalize: if the tools you learnt got you unstuck on some existing big problems which you've been stuck on for a while.

Raemon
The setup for the workshop is:

* Day 1 deals with constrained Toy Exercises.
* Day 2 deals with thinking about the big, open-ended problems of your life (applying skills from Day 1).
* Day 3 deals with thinking about your object-level day-to-day work (applying skills from Days 1 and 2).

The general goal with Feedbackloop-first Rationality is to fractally generate feedback loops that keep you in touch with reality in as many ways as possible (while paying a reasonable overhead price, factored into the total of "spend ~10% of your time on meta"). Some details are in The Cognitive Bootcamp Agreement.

My own experiences, after having experimented in a sporadic fashion for 6 years and dedicated Purposeful Practice for ~6 months:

First: I basically never feel stuck on impossible-looking problems. (This isn't actually that much evidence, because it's very easy to be deluded about your approach being good, but I list it first because it's the one you listed.)

As of a couple weeks ago, a bunch of the skills feel like they have clicked together and finally demonstrated the promise of "more than the sum of their parts." Multiple times per day, I successfully ask myself "Is what I'm doing steering me towards the most important part of The Problem? And, ideally, setting myself up to carve the hypothesis space by 50% as fast as possible?" and it is pretty clear:

1. ...that yes, there is something else I could be doing that was more important
2. ...that I wouldn't have done it by default without the training
3. ...that various skills from the workshop were pretty important components of how I then go about redirecting my attention to the most important parts of the problem.

The most important general skills that come up a lot are asking:

* "What are my goals?" (generate at least 3 goals)
* "What is hard about this, and how can I deal with that?"
* "Can I come up with a second or third plan?"
* "What are my cruxes for whether to work on this particular approach?"
* "Do those cru
Algon

How do you know this is actually useful? Or is it too early to tell yet?

gw
It is a bit early to tell and seems hard to accurately measure, but I note some concrete examples at the end. Concrete examples aside, in plan-making it's probably more accurate to call it purposeful practice than deliberate practice, but it seems super clear to me that in ~every place where you can deliberately practice, deliberate practice is just way better than whatever your default is of "do the thing a lot and passively gain experience". It would be pretty surprising to me if that mostly failed to be true of purposeful practice for plan-making or other metacognitive skills.
Algon

Inventing blue LEDs was a substantial technical accomplishment, had a huge impact on society, was experimentally verified and can reasonably be called work in solid state physics. 

deepthoughtlife
'Substantial technical accomplishment', sure, but minor impact compared to the actual invention of LEDs. Awarding the 'blue LED' rather than the 'LED' is like saying the invention of the jet engine is more important than the invention of the engine at all. Or that the invention of 'C' is more important than the invention of 'not machine code'.