All of joshuabecker's Comments + Replies

It occurs to me that 'rudeness' in this framework is a sort of protective charm; by casting the person as rude, you discount their credibility and therefore don't have to update your beliefs.

That can end up feeling like the information makes your cooking worse, because you update your belief about your cooking after receiving the information.

2LESS
That's very much true. However, it appears to me that the object of frustration is the gesture's sentiment (as evidenced by the girlfriend's focus on the gesture specifically). Thus, I find it dubious that the girlfriend's primary concern was the change in her own beliefs regarding her cooking.

I'm not sure the simulacrum model is quite necessary to understand people's responses to information. Particularly in the first 3, I think the responses can be explained by cognitive dissonance. In 1 & 3, the hearer holds the belief "I offer a good product" and is confronted with the information "someone is not satisfied with my product." In the gym example, the alternatives (skipping entirely, 10-minute self-warmup) are easily explained by "this person is busy." In 2, you perhaps hold the belief "I am a ... (read more)

Perhaps it is possible in practice/process to disentangle value-alignment issues from factual disagreements. Double-crux seems well suited to reaching consensus on factual truths (e.g., which widget will have a lower error rate?) and would at least *uncover* Carl's crux, if everybody participates in good faith, and therefore make it possible to nonetheless discover the factual truth. Then maybe punt the non-objective argument to a different process, like incentive alignment, as you discuss.

I absolutely love this, and it leaves me wondering about the role of the social in the Sabbath. This post mentions early on, "Most want more social events, but coordination is hard and events are work. Now there’s always Friday night," but the subject does not come up again. And yet with regard to the historical referent, sociality is baked deeply into the Sabbath. For the Orthodox version, the minyan rule (plus the no-driving rule) requires that people live close together and that they see each other once a week.

On the one hand, community h... (read more)

1bfinn
One issue I have with the regular meal with friends/family bit is that (aside from those in your household, who you would see anyway) this potentially sets up a regular commitment which could well become onerous. That is, if you establish a pattern of seeing the same people week after week, you may after a while start to get bored of it/them (even close friends can pall if seen too often), and want to see other people, or no-one, for the Sabbath meal. This starts to make the Sabbath tedious/stressful if not dealt with, and even if dealt with delicately can create an awkward situation.

(Cf. during COVID lockdowns my family set up a weekly Zoom meeting, with its own 'rituals' such as a quiz. This got quite tedious after a few months, with nothing much new to say, connection problems, the calls tending to overrun for no good reason, etc. etc. It was eventually broken only by people starting to drop out with excuses, after a few weeks of which everyone got the hint and it finally ended, having long outstayed its welcome.)

But that just kicks the can down the road, leaving the question: "Could I have wanted X?"

1TAG
It does, but that isn't a decisive refutation of anything.
1AprilSR
Eh. The next question to ask is going to depend entirely upon context. I feel like most of the time people use it in practice, they're talking about the extent of capabilities, where whether you were able to want something is irrelevant. There are other cases though.
Reading all this has led me to think a lot about using MMOs as a testing ground for sociology.

I think you are on the right track: a Google Scholar search reveals an enormous amount of social science conducted on virtual worlds, including topics like teamwork, economics, and religion. I don't know about governance systems, though.

I find myself wondering about disagreements (or subcomponents of disagreements) where appealing to objective reality may not be possible.

It seems like this is a special case of a broader type of process, fitting into the more general category of collaborative decision-making. (Here I'm thinking of the 5 approaches to conflict: compete, collaborate, avoid, accommodate, and compromise).

In the explicit product-as-widget case, there may always be an appeal to some objectively frameable question: what will make us more money? But even this can ignit... (read more)

2Raemon
Yeah, there are plenty of cases where people actually want different things. I think I agree that some kind of hybrid technique involving negotiation and doublecrux (among other things) might help.

Random exploration, don't really have a point yet: Another case might be two people arguing over how to design a widget, where Carl wants to build a widget using the special Widget Design Technique that he invented. Damien wants to build the widget using Some Other Technique. And maybe it turns out Carl's crux is that if they use Carl's Special Widget Design Technique, Carl will look better and get more promotions.

I think resolving that sort of situation depends on other background elements that Double Crux won't directly help with. If you're the CEO of a small organization, maybe you can manage to hire people who buy into the company's mission so thoroughly they won't try to coopt Widget Design processes for their personal gain. Or, you might also somehow construct incentives to keep skin-in-the-game, such that it's more in Carl's interest to have the company do well than to get to look good using his Special Widget Design Technique. Ideally, he's incentivized to actually have good epistemics about his technique, and see clearly whether it's better than Damien's Generic Technique (or Damien's own special technique).

This is all pretty hard though (especially as the company grows). And there's a bunch of stuff outside your control as CEO, because the outside world might still reward Carl more if he can tell a compelling story about how his special technique saved the day.
3Pattern
"Why do you believe X?" -> "Why do you want X?" That's a useful distinction, thanks for pointing it out.

I read "At Home In The Universe: The Search for the Laws of Self Organization and Complexity" which is a very accessible and fun read---I am not a physicist/mathematician/biologist, etc, and it all made sense to me. The book talks about evolution, both biological and technological.

And the model described in that book has been quite commonly adapted by social scientists to study problem solving, so it's been socially validated as a good framework for thinking about scientific research.

Sure, though "why is science slowing down?" and "what should we do now?" are two different questions. If the answer to "why is science slowing down?" is simply that it's getting harder, then there may be absolutely nothing wrong with our coordination, and no action is required.

I'm not saying we can't do even better, but crisis-response is distinct from self-improvement.

It's worth considering the effects of the "exploration/exploitation" tradeoff: decreasing coordination/efficiency can increase the efficacy of search in problem space over the long run, precisely because efforts are duplicated. When efforts are duplicated, you increase the probability that someone will find the optimal solution. When everyone is highly coordinated, people all look in the same place and you can end up getting stuck in a "local optimum"---a place that's pretty good, but can't be easily improved without s... (read more)
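Here's a minimal simulation sketch of that intuition (not from the original comment; all names and parameters are made up, and the landscape is a maximally rugged random landscape rather than a proper NK model): a group of independent hill climbers, stuck on different local optima, tends to end up with a better best solution than a group that keeps copying its current best.

```python
import random

# Toy sketch: coordinated vs. independent search on a rugged landscape.
# All parameters below are arbitrary illustration values.

N_BITS = 15      # each solution is an integer encoding a bit string of this length
N_AGENTS = 20
N_STEPS = 50
N_TRIALS = 100

def make_landscape(seed):
    """Maximally rugged landscape: every solution gets an independent random fitness."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(2 ** N_BITS)]

def local_step(solution, landscape, rng):
    """Flip one random bit and keep the change only if fitness improves (hill climbing)."""
    candidate = solution ^ (1 << rng.randrange(N_BITS))
    return candidate if landscape[candidate] > landscape[solution] else solution

def best_found(coordinated, seed):
    rng = random.Random(seed + 1)
    landscape = make_landscape(seed)
    solutions = [rng.randrange(2 ** N_BITS) for _ in range(N_AGENTS)]
    for _ in range(N_STEPS):
        solutions = [local_step(s, landscape, rng) for s in solutions]
        if coordinated:
            # Full coordination: everyone copies the current best solution,
            # so the whole group converges on (and gets stuck at) one local optimum.
            best = max(solutions, key=lambda s: landscape[s])
            solutions = [best] * N_AGENTS
    return max(landscape[s] for s in solutions)

independent = sum(best_found(False, t) for t in range(N_TRIALS)) / N_TRIALS
coordinated = sum(best_found(True, t) for t in range(N_TRIALS)) / N_TRIALS
print(f"independent search: {independent:.3f}   coordinated search: {coordinated:.3f}")
```

With settings like these, the independent condition should typically report a higher average best fitness, which is the "duplicated effort finds better optima" effect described above.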

1Nicholas / Heather Kross
I'd be interested in more resources regarding the "low-hanging fruit" theory as related to the structure of ideaspace and how/whether NK space applies to that. Any good resources for beginners on Kauffman's work?

Can you help me understand why you see this as a coordination problem to be solved? Should I infer that you don't buy the "lowest hanging fruit is already picked" explanation?

3ryan_b
I can't speak for Raemon, but I point out that how low the fruit hangs is not a variable upon which we can act. We can act on the coordination question, regardless of anything else.

Regarding the apparent non-scaling benefits of history: what you call the "most charitable" explanation seems to me the most likely. Thousands of people work at places like CERN and spend 20 years contributing to a single paper, doing things that simply could not be done by a small team. Models of problem-solving on "NK Space" type fitness landscapes also support this interpretation: fitness improvements become increasingly hard to find over time. As you've noted elsewhere, it's easier to pluck low-hanging fruit.

I assume by 'linear' you mean directly proportional to population size.

The diminishing marginal returns of some tasks, like the "wisdom of crowds" (concerned with forming accurate estimates), are well established, and taper off quickly regardless of the difficulty of the task---it basically follows the law of large numbers and sampling error (see "A Note on Aggregating Opinions", Hogarth, 1978). This glosses over some potential complexity, but you're probably unlikely to ever get much benefit from more than a few ... (read more)
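As a rough illustration of that law-of-large-numbers point (the numbers below are made up, and the sketch assumes independent, unbiased individual estimates, which is where the argument applies most cleanly):

```python
import random
import statistics

# Toy sketch: the error of the averaged estimate shrinks roughly like
# sigma / sqrt(n), so most of the benefit comes from the first few judges.
# TRUTH and SIGMA are arbitrary illustration values.

TRUTH = 100.0    # the quantity being estimated
SIGMA = 20.0     # spread of individual (unbiased) estimates
TRIALS = 20_000
rng = random.Random(0)

for n in (1, 2, 5, 10, 25, 100):
    sq_errors = []
    for _ in range(TRIALS):
        group_mean = statistics.fmean(rng.gauss(TRUTH, SIGMA) for _ in range(n))
        sq_errors.append((group_mean - TRUTH) ** 2)
    rmse = statistics.fmean(sq_errors) ** 0.5
    print(f"n={n:>3}  RMS error of group average ≈ {rmse:5.2f}"
          f"  (sigma/sqrt(n) = {SIGMA / n ** 0.5:5.2f})")
```

Going from 1 to 10 judges cuts the expected error by roughly two-thirds; going from 10 to 100 buys comparatively little in absolute terms, which is the quick taper described above.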

Update: this is a pretty large field of research now. The Collective Intelligence Conference is going into its 7th year.

Are you still in Chicago? There was recently a gathering at the Art Museum garden with ~30 people in attendance, and a few people were discussing trying to keep the momentum, myself included. If you are around, I would like to invite you to give it another go. Regardless of your current location, I'd be curious to hear more details about your particular experience in this locale.

2mingyuan
Hi Joshua, sorry I missed this comment! I'm not in Chicago anymore, though I'm still invested in the group's success. I've been meaning to write up a postmortem of the version of Chicago Rationality that I ran (and later passed off to Peter), but I recently started working full-time so I'm not sure when I'll get around to that - for now I can just say a couple of things here.

One is that I think Chicago is more spread-out than a lot of major cities, which makes it really hard to pick a good location - e.g., we initially had our meetups in Hyde Park, but we had very low regular attendance because most people weren't willing to trek all the way down to the south side; and when meetups were moved to Harold Washington Library we basically lost all the UChicago students. Similarly Northwestern is way far from any central location you might pick, but in the opposite direction. This makes it really hard to sustain a core group of people who will regularly show up.

Another is sort of a catch-22, where because there's not a lot happening in Chicago in terms of rationality or EA, there's not a lot keeping really hardcore rationalists/EAs in the city. Like, I joined a version of Chicago Rationality in March of 2015 that only lasted for three months, at the end of which all four other members moved to the Bay to work for EA organizations. I think this was almost certainly the right call for all four of them, because they didn't have strong roots in Chicago, weren't well-positioned to earn to give, and were good fits for the culture and organizations they joined in the Bay, and most importantly because there was no community for them in Chicago and no way for them to have an impact there.

As for why I think there hasn't been a lasting community in Chicago so far: Chicago isn't flooded with programmers like some other cities - which is a big deal because programmers are way disproportionately into rationality/SSC - and it doesn't have particularly unique industries that would

E[x] = 0.5 even for the frequentist, and that's what we make decisions with, so focusing on p(x) is a bit of misdirection. The whole frequentist-vs-Bayesian culture war is fake. They're both perfectly consistent with well-defined questions. (They have to be, because math works.)
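Spelled out (and assuming, as a guess about the setup rather than anything stated here, that x is a binary outcome like a coin flip, scored 1 for heads and 0 for tails):

E[x] = 1·P(x=1) + 0·P(x=0) = 0.5

whether P(x=1) = 0.5 is read as a long-run frequency over repeated flips or as a degree of belief given the available information. Any decision rule that depends only on the expected value therefore comes out the same under either reading.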

And yes to everything else, except...

As to whether God plays dice with the universe... that is not in the scope of probability theory. It's math. Your Bayesian is really a pragmatist, and your frequentist is a straw person.

Great post!

Good thing the author is dead!

I like to think that... facing the only true existential threat, the author found the cognitive limits of rationality and got the fear. So, unbeknownst to themself, they summoned ex machina an article of Faith to keep them warm, for the night is dark and full of terror.

If I'm interested in learning about the claims made by the science/study of decision-making, and not looking to make decisions myself (so perhaps exercises don't matter?), would that change your recommendation? You can further assume that I am moderately well trained in probability theory.

6Scott Garrabrant
No, sorry. It wouldn't be very readable, and it is easy to do yourself.