All of kvas_it's Comments + Replies

kvas_it30

Ideally, the kinds of misunderstandings described in this post shouldn't happen between humans because of shared context and common sense. However, they do: not all humans have the same mental models and the same information context, and common sense is not as common as it might seem at first.

kvas_it10

AFAIK, in the countries where air conditioning is useful, people have it. I live in Germany, and here we mostly think it's not worth the noise pollution and making the facade less pretty. But this too might change now that many people are switching from gas heating to heat pumps (which are basically air conditioners with extra functionality).

kvas_it30

For me your method didn't work, but I found another one. I wave a finger (pointing down) in front of the image in a spinning motion, synchronized with the leg movement and going in the direction that I want. The finger obscures the dancer quite a bit, which makes it easier for me to imagine it spinning in the "right" direction. Sometimes I'd see it spin in the "right" direction for like 90 degrees and then stubbornly go back again, but eventually it complies and starts spinning the way I want. Then I can remove the finger and it keeps going.

kvas_it4037

In many parts of Europe nobody has to work 60-hour weeks just to send their kids to a school with a low level of violence. A bunch of people don't work at all, and still their kids seem to have all their teeth in place and get some schooling. I'm not sure what we did here that the US is failing to do, but I notice that the described problem of school violence is a cultural problem -- it's related to poverty, but not directly caused by it.

3eggsyntax
I think a more central question would be: do a nontrivial number of people in those parts of Europe work at soul-crushing jobs with horrible bosses? If so, what is it that they would otherwise lack that makes them feel obligated to do so?
3A1987dM
Yes (though OTOH conversely there are also things that many Europeans struggle to afford but Americans take for granted, e.g. air conditioning)
Viliam15-5

I agree that there seems to be something uniquely wrong with the USA (or maybe it's just a different trade-off than other countries have -- it's difficult to guess which problems are part of a greater equation and which ones are accidental), but that doesn't answer the central question: if, judging by some economic numbers, poverty hasn't existed for centuries, why do we feel so poor; or rather, why do we act as if we were poor.

  • it could be that some important numbers are missing from the official set (the oxygen in Anoxia);
  • it could be th
... (read more)
2[comment deleted]

I think value claims are more likely to be parasitic (mostly concerned with copying themselves or participating in a memetic ensemble that's mostly copying itself) than e.g. physics claims, but I don't think you have good evidence to say "mostly parasitic".

My model is that parasitic memes that get a quick and forceful pushback from reality would face an obstacle to propagation compared to parasitic memes for which the pushback from reality is delayed and/or weak. Value claims and claims about longevity (as in your example, although I don't think those are value claims) are good examples of a long feedback cycle, so we should expect more parasites.

I took the survey. It was long but fun. Thanks for the work you've put into designing it and processing the results.

What can I say, your prior does make sense in the real world. Mine was based on the other problems featuring Omega (Newcomb's problem and Counterfactual mugging) where, apart from messing with your intuitions, Omega was not playing any dirty tricks.

0Dagon
I think this is a different guy named Omega. No mention of prediction or causality tricks, which are the hallmarks of Newcomb's problem.

There's no good reason for assigning 50% probability to game A, but neither is there a good reason to assign any other probability. I guess I can say that I'm using something like a "fair Omega prior" that assumes Omega is not trying to trick me.

You and Gurkenglas seem to assume that Omega would try to minimize your reward. What is the reason for that?

0Dagon
Base rate pessimism and TANSTAAFL. Offers of free money are almost always tricks, so my prior is that the next offer is also a trick. I expect not to be paid at all, so choosing the option that's clearly a violation if I'm not paid is a much clearer cheat than choosing the one where Omega can claim to play by the rules and not pay me. If you state that I don't know a probability, I have to use other assumptions. 50/50 is a lazy assumption. Note: this boils down to "where do you get your priors?", which is unsolved in Bayesian rationality.

You could also make a version where you don't know what X is. In this case the always-reject strategy doesn't work, since you would reject k*X in real life after the simulation rejected X. It seems like if you must precommit to one choice, you would have to accept (and get (X+X/k)/2 on average), but if you have a source of randomness, you could try to reject your cake and eat it too. If you accept with probability p and reject with probability 1-p, your expected utility would be (p*X + (1-p)*p*k*X + p*p*X/k)/2. If you know the value of k, you can calculate the best p and see if the random strategy is better than always-accept. I'm still not sure where this is going, though.
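As a sketch of that calculation (assuming illustrative values for X and k, which the setup leaves open):

```python
# Sketch of the mixed-strategy calculation above:
# EU(p) = (p*X + (1-p)*p*k*X + p*p*X/k) / 2

def expected_utility(p, X, k):
    return (p * X + (1 - p) * p * k * X + p * p * X / k) / 2

def best_p(X, k, steps=10_000):
    # Brute-force sweep over p in [0, 1]. EU is quadratic and concave in p
    # (for k > 1), so the analytic optimum is p* = k / (2*(k - 1)), clipped
    # to [0, 1]; the sweep just avoids trusting the algebra.
    return max((i / steps for i in range(steps + 1)),
               key=lambda p: expected_utility(p, X, k))

X, k = 1000, 10  # illustrative placeholder values
p_star = best_p(X, k)
print(f"best p ~ {p_star:.4f}, EU ~ {expected_utility(p_star, X, k):.1f}")
print(f"always accept: EU = {expected_utility(1.0, X, k):.1f}")  # (X + X/k)/2
```

For these numbers the mixed strategy (p ≈ 0.56, EU ≈ 1528) comfortably beats always-accept (EU = 550), consistent with the "reject your cake and eat it too" idea.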

I also agree with Dagon's first paragraph. Then, since I don't know which game Omega is playing except that either is possible, I will assign 0.5 probability to each game, calculate expected utilities (reject -> $5000, accept -> $550) and reject.

For the general form, I will reject if k > 1/k + 1, which is the same as k*k - k - 1 > 0, or k > (1+sqrt(5))/2. Otherwise I will accept.
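Spelling out that last step (taking the positive root of the quadratic, since k > 0):

$$k > \frac{1}{k} + 1 \;\Longleftrightarrow\; k^2 - k - 1 > 0 \;\Longleftrightarrow\; k > \frac{1 + \sqrt{5}}{2} \approx 1.618,$$

i.e. the rejection threshold is the golden ratio.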

It seems like I'm missing something, though, because it's not clear why you chose these payoffs and not the ones that give some kind of nice answer.

0Dagon
Either is possible and no mention is made of how it's chosen (in fact, it's explicitly stated that the probability is not known), so why would you assign 50% rather than 0% to the chance of game A? If Omega mentioned a few irrelevant options (games C through K) which favored reject, but which it NEVER used (but you don't know that), would you change your acceptance?

Thank you, this is awesome! I've just convinced my wife to pay more attention to the LW discussion forum.

1SquirrelInHell
I think it's pretty smart to NOT follow LW discussion right now... you could suggest the RSS feeds (by deluks or me) instead.

And then they judge what some high-status members of their group would say about the particular Quantum Mechanics conundrum. Then, they side with them about that. Almost nobody actually ponders what the Hell is really going on with Schrödinger's poor cat. Almost nobody.

I find it harder to reason about the question "what would high status people in group X say about Schrodinger's cat?" than about the question "based on what I understand about QM, what would happen to Schrodinger's cat?". I admit that I suck at modelling other peopl... (read more)

2Thomas
Many, many times more people are good at judging other people than at pondering QM (or any other) conundrums. Even if they are not especially good psychologists, they suck at QM even more.

Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)

After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I've come to the opinion that the "disagreement on priorities", as I originally called it, is more significant than I originally acknowledged.

To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM ... (read more)

You are steelmanning the rationalist position

That could very well be. I had an impression that meta-rationalists are arguing against a strawman, but that would just mean we disagree about the definition of "rationalist position".

I agree that one-true-map rationalism is rather naive and that there are many people who hold this position, but I haven't seen much of this on LW. Actually, LW contains the clearest description of the map/territory relationship that I've seen, no nebulosity or any of that stuff.

0ssica3003
For me, the philosophical implications of: "there is no one true map" was the first quantum leap. How is this statement not a big deal?

Ok, I think I get it. So basically, pissing contests aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I've actually seen a bit of this, for example in the comments to this post).

Judging from the tone of your comment, I gather that that's the opposite of what many of them are doing. Well, that doesn't really surprise me, but it's kind of sad.

0Viliam
This is how it seems to me. I may be horribly wrong, of course. But the comments on what you linked... ...are similar to how I often feel. It's like the meta-rationalists are saying "rationalists are stupid because they don't see X, Y, Z", and I am like "but I agree with X, Y, Z, and at least two of them are actually mentioned in the Sequences, so why did you have to start with the assumption that rationalists obviously must be stupid?"

(I had a colleague at one job who always automatically assumed that other people were idiots, so whenever someone was talking about something this colleague knew about, he interrupted him with: "That is wrong. Here is how it actually is: ..." And a few times other people were like: "Hey, but you just repeated in different words what he was already saying before you interrupted him!" The guy probably didn't notice, because he wasn't paying attention.)

I am aware of my own hostility in this debate, but it is quite difficult for me to be charitable towards someone who pretty much defines themselves as "better than you" (the "meta-" prefix), proceeds with strawmanning you and refuses to update, and concludes that they are morally superior to you (the Kegan scale). None of this seems like evidence that the other side is open to cooperation.

Thank you, this is a pretty clear explanation. I did read a bit more from meaningness.com yesterday and what I gathered was also pointing in the direction of this sort of meta-epistemological relativism.

However, I still don't really see a significant disagreement. The map/territory distinction, which I see as one of the key ideas of rationalism, seems to be exactly about this. So I see rationalism as saying "the map is not the territory and you never have unmediated access to the territory but you can make maps that are more or less useful in differen... (read more)

0ChristianKl
It's not important whether someone can tell you about how the map isn't the territory when you ask them; the important thing is how they reason in practice.
0entirelyuseless
You are steelmanning the rationalist position; many rationalists do say, either explicitly or implicitly, that there is one true map, which they more or less identify with the territory, and that they have it.
1Viliam
There is no such thing as a molehill too small to make a mountain out of. But there are at least two things I noticed you missed here:

First, your description of rationalists is too charitable. On meta-rationalist websites they are typically described as unable to reason about systems, not understanding that their map is not the territory, prone to wishful thinking, and generally as what we call "Vulcan rationalists". (Usually with a layer of plausible deniability, e.g. on one page it is merely said that rationalists are a subset of "eternalists", with a hyperlink to another page that describes "eternalists" as having the aforementioned traits. Each of these claims can be easily defended separately, considering that "eternalists" is a made-up word.) With rationalists defined like this, it is easy to see how the other group is superior.

Second, you miss the implication that people disagreeing with meta-rationality are just immature children. There is a development scale from 0 to 5, where meta-rationalists are at level 5, rationalists are at level 4, and everyone else is at some of the lower levels.

Another way to express this is the concept of fluidity/nebulosity/whatever, which works like this: You make a map, and place everyone you know at some specific point on this map. (You can then arrange them into groups, etc.) The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages; while everyone else is just hopelessly stuck at their one point. This obviously makes you the coolest guy in town -- of course until someone else comes with their map, where you get stuck at one specific point and they get to be the one above the map. (In some sense, this is what Eliezer also tried with his "winning" and "nameless virtue", only to get reduced to "meh, Kegan level 4" regardless.)

I've read the article and then also A first lesson in meta-rationality, but I must confess I still have no idea what he's talking about. The accusations of inscrutability seem to be spot on.

Perhaps I should read more about meta-rationality to get it, but just to keep me motivated, can anyone explain in simple terms what the deal is, or perhaps give me an example of a meta-rationalist belief that rationalists don't share?

3Gordon Seidoh Worley
I'd say the biggest difference you'll notice, affecting the most things, is the change in epistemology. Rationalist epistemology and the epistemology of other similar "rational" systems of thought (cf. scientism, theology) assume there is a single correct way of understanding the world, with rationalists perhaps having the high ground in viewing the project as finding the correct epistemology regardless of what it implies. The meta-rationalist/post-modern position is that this is not possible, because epistemology necessarily influences ontology, so we cannot possibly have a single "correct" understanding of the world. In this view an epistemology and the ontology it produces can at best be useful to some telos (purpose), but we cannot assign one the prime position as the "correct" ontology for metaphysical reality, because we have no way to decide what "correct" is that is independent of the epistemology in which we develop our understanding of "correct". Thus the epistemology of rationality, which seems to target most accurately predicting reality based on known information, is but one useful way of understanding the world within the meta-rationalist/post-modern view, and others may be more useful for serving other purposes. Both stand in contrast to the pre-rational approach to epistemology, which does not assume everything is knowable and will accept mystery where explanation is not available.

Not sure if that really achieves the "simple terms" aim, so maybe I can put it like this: The pre-rational person can't know some things. The rational person doesn't know some things. The meta-rational person knows they can't know some things.

I know that David Krueger is one of the people working with 80,000 Hours on helping people get into the AI safety field. He also organized a related Google group.