It's like virtue and reputation ("honor") were one thing at the time, and now they're two things.
I almost wonder if the problem is less "people stopped caring about being truly-intrinsically-virtuous" and more: People stopped rationalizing their reputation-management as virtuous; which fed into a "it's impractical and uncouth to care about virtue" cycle; which resulted in people having too many degrees of freedom, because it's easier to rationalize arbitrary actions as practical than to rationalize arbitrary actions as virtuous.
Yeah, I was having similar thoughts.
That's a good point. Reputation is less naturally spiritual. I think you can experience it both ways, though. Imagine someone who thinks of their reputation as painted on their heart, versus someone who is fine with trying to manipulate their reputation.
Modernity has made people quite averse to talking about and dealing with spirituality. I think maybe a big part of what's going on is that while PR is a material concept, honor is a spiritual concept. It deals with meaning directly rather than only indirectly. Honor matters for its own sake (or can), matters to your soul. Whereas PR can only ever matter indirectly, only as a consequence of other things. No one has PR in their soul.
That would mean that people end up avoiding thinking about and relating to things like honor and reputation because it just feels weird. It feels like the sort of thing that you're not supposed to deal with. It feels like something that science and technology have vaguely disproven.
I get how honor is a spiritual concept but don't really get how reputation is. It seems like reputation is precisely the thing PR is concerned with while it ignores honor.
This is very confusing to me, since Anna in the original post talks about "reputation," "honor," and "brand" as equivalent. Reputation and brand are precisely about worrying how others think of you (PR), whereas honor is about how you think of yourself.
>Thus, I think that the process is relatively reliable but not totally reliable.
Absolutely. That's exactly right.
>My Christian friend claimed that atheists/rationalists/skeptics/evolutionists cannot trust even their own reason (because, in their opinion, it is the product of their imperfect brains).
It sounds like there's a conflation between 'trust' and 'absolute trust'. Clearly we have some useful notion of trust, because we can navigate potentially dangerous situations relatively safely. So, using plain language, it's false to say that atheis...
Consider how justified trust can come into existence.
You're traveling through the forest. You come to a moldy-looking bridge over a ravine. It looks a little sketchy, so naturally you feel distrustful of the bridge at first. So you look at it from different angles, shake it a bit, and put a bit of weight on it. Eventually, some deep unconscious part of you will either decide that it's untrustworthy, and you'll find another route, or decide that it's trustworthy, and you'll cross the bridge.
We don't understand that process, but it's reliable anyway.
Thanks, I forgot to make it clear I'm looking for digital versions.
I'm making an online museum of ethos (my ethos). I'm using good and bad art and commentary to make my ethos very visible through aesthetics.
I am doing an art criticism project that's very important to me, and I'm looking for high-res digital versions of the art in the following books.
Help with getting these via a university library, or pointers to where I could buy an electronic copy of any of these is much appreciated.
Attempting to blindsight the answer:
In the past I imagine that people were usually trying to 'be a serious person'. And that's still true. But somehow being a serious person is now faker. And I think maybe it's because they're being a very scared serious person. Somehow they're a lot more vulnerable from every direction, or there are a lot more directions they're vulnerable from.
Thanks; this resonates for me and I hadn't thought of it here.
The guess that makes sense to me along these lines: maybe it's less about individual vulnerability to attack/etc., and more that they can sense somehow that the fundamentals of our collective situation are not viable (environmental collapse, AI, social collapse, who knows, from that visceral perspective I imagine them to have), and yet they don't have a frame for understanding the "this can't keep working," so it lands in the "in denial" bucket and their "serious person" is fake. (I ...
Is there a good source for many things we know from the Diamond Princess data? Or even just the numbers so far from DP? I'm not sure how to find that data.
Double crux is hard enough with arguments, and here I'm trying to advocate something like double-cruxing aesthetic preferences, which sounds absurdly ambitious. But: imagine if we could talk about why things seem beautiful and appealing, or ugly and unappealing.
My work is basically about this: extracting aesthetic preferences from people (and S1-based inside views more generally).
I haven't done specifically artistic aesthetics, but most thinking relies heavily on aesthetics about which problems are interesting or important, ways of behaving, wa...
Thanks for writing this. Inferential distance plus inoculation is a huge problem for transmitting large bodies of understanding in domains that previously didn't look like domains to the student. The student frequently gets a smaller version of the ideas before they can get the full version, and that shuts off further interest because they've "got it".
>nobody has done studies measuring hormone levels over time and fitting a differential-equation model of how hormones affect each other's levels
What in the everloving fuck? That really seems like the first thing you should do. Has that at least been done for the shared hormones?
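For concreteness, the kind of model being asked about might look something like the following sketch: a pair of coupled ODEs describing two hormones that influence each other's levels, fit to time-series measurements. Everything here (the interaction terms, rate constants, initial conditions, and data) is invented purely for illustration, not taken from any real endocrinology.

```python
# Hypothetical sketch: fit a two-hormone interaction model to noisy
# time-series data. All quantities here are made up for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def hormone_ode(t, y, a, b):
    h1, h2 = y
    # h1 is produced at a constant rate, suppressed by h2, and slowly cleared;
    # h2 is stimulated by h1 and cleared at a fixed rate.
    dh1 = 1.0 - a * h2 - 0.1 * h1
    dh2 = b * h1 - 0.2 * h2
    return [dh1, dh2]

t_obs = np.linspace(0, 20, 40)
true_params = (0.5, 0.3)
sol = solve_ivp(hormone_ode, (0, 20), [1.0, 0.5], t_eval=t_obs, args=true_params)
rng = np.random.default_rng(0)
data = sol.y + rng.normal(0, 0.02, sol.y.shape)  # noisy "measurements"

def residuals(params):
    fit = solve_ivp(hormone_ode, (0, 20), [1.0, 0.5],
                    t_eval=t_obs, args=tuple(params))
    return (fit.y - data).ravel()

# Recover the interaction strengths from the data.
result = least_squares(residuals, x0=[0.1, 0.1], bounds=([0.0, 0.0], [2.0, 2.0]))
print(result.x)  # recovered rate constants, close to (0.5, 0.3)
```

A real version of this would of course need actual longitudinal hormone assays and a mechanistically motivated model, but the fitting machinery itself is standard.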
So, in general, as a techy person looking at biology, you need to be aware that most biomedical researchers are not educated in quantitative stuff. Like, when I worked at a biotech company, I got frequent questions from the bench biologists that amounted to "how do I test statistical significance in this experiment?" where the answer was "do a t-test."
This means that in any arbitrary field, you're not necessarily going to find that someone has done the "obvious" applied-math/modeling thing.
Some fields, like genetics or ep...
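For anyone unfamiliar, the "do a t-test" answer above really is a one-liner in scipy; the measurements below are simulated stand-ins for bench data, not anything real.

```python
# The "do a t-test" answer, spelled out: compare a treated vs. control
# group of (simulated) measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(10.0, 1.0, size=12)  # e.g. baseline expression levels
treated = rng.normal(11.5, 1.0, size=12)  # e.g. after the intervention

# Welch's t-test (no equal-variance assumption) is the safe default.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(t_stat, p_value)
```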
The point was to raise nominal prices in the first place
That is not how it works.
I don't buy the idea that voters are not the main source of the problem and that it's the voting systems. Voters don't have good incentives to hold sensible opinions that go against their natural prejudices.
It seems to me that if you implemented the median political opinion, you would get much worse policies in a lot of areas. Probably some would be better, but I would be surprised if it were many.
This is great, I love it
Do you know if they normalize for case difficulty? If a hospital's patients are sicker to begin with, it will tend to have worse outcomes regardless of the quality of care.
I just did this and it was pretty easy! And in fact I decided to change the hospital I go to by default.
Genuinely held austerity-type ideologies are popular among people who care a lot about central banking (possibly due to the Great Depression?), and I'm guessing that's what happened at the BoJ. It seems to be what happened in the US, which made similar mistakes, though less badly.
There aren't that many that I know of. I do think it's much more intuitive and lets you build more nuanced models that are useful for the social sciences. You can fit the exact model that you want instead of needing to fit your case into a preexisting box. However, I don't know of many examples where this is hugely important in practice.
The lack of obviously valuable use cases is part of why I stopped being that interested in MCMC, even though I invested a lot in it.
There is one important industrial application of MCMC: hyperparameter sampling in Bayesian optimization (Gaussian processes with priors on the hyperparameters). And the hyperparameter sampling does substantially improve things.
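A minimal sketch of what that hyperparameter sampling can look like: instead of fixing a GP's lengthscale at a point estimate, put a prior on it and sample the posterior, here with plain random-walk Metropolis over the log-lengthscale. The data, kernel settings, and prior are all invented for illustration; real implementations (e.g. PyMC3) use fancier samplers like NUTS.

```python
# Sketch: sample the posterior over a GP lengthscale instead of point-estimating it.
# Toy data and a lognormal prior, chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 25)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

def log_posterior(log_ell):
    ell = np.exp(log_ell)
    # Squared-exponential kernel with fixed signal variance 1 and noise variance 0.01.
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2) + 0.01 * np.eye(x.size)
    _, logdet = np.linalg.slogdet(K)
    log_lik = -0.5 * y @ np.linalg.solve(K, y) - 0.5 * logdet
    log_prior = -0.5 * log_ell**2  # standard lognormal prior on the lengthscale
    return log_lik + log_prior

# Random-walk Metropolis over the log-lengthscale.
samples, cur, cur_lp = [], 0.0, log_posterior(0.0)
for _ in range(2000):
    prop = cur + rng.normal(0, 0.3)
    prop_lp = log_posterior(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:  # Metropolis accept/reject
        cur, cur_lp = prop, prop_lp
    samples.append(np.exp(cur))

print(np.mean(samples))  # posterior-mean lengthscale, rather than a point estimate
```

In Bayesian optimization the acquisition function would then be averaged over these samples, which is where the robustness gain comes from.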
Funny enough, as a direct result of reading the sequences, I got super obsessed with Bayesian stats and that eventually resulted in writing PyMC3 (which is the software used in the book).
If you want to see a billion examples of details mattering, watch anything about shipbuilding by this guy: https://www.youtube.com/watch?v=jM6R81SiKgA
Great description. Yes, I think that's exactly why people are reluctant to see other people's points.
Yeah, I wasn't too specific on that. I do endorse the piece that jb55 quotes below, but I'm still figuring out what to tell people to do. I'll hopefully have more to say in the coming months.
John Maxwell posted this quote:
The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.
-- Daniel Kahneman
Ontology lock-in. If you have nice stuff built on top of something, you'll demand proof commensurate with the value of those things when someone questions the base layer, even if the things built on top could be supported by alternative base layers. S1 is cautious about this, which is reasonable. Our environment is much safer for experimentation than it used to be.
I want you to come up to me, put your arm around me, ask me how I am and start telling me about the idea you’ve got. Show me you ought to be in charge, because right now I’m a little lost and you’re not.
My desire is not for some permanent power structure, but for other people to sometimes, temporarily, take leadership, with the expectation that I will probably do so in the future as well. I think one of the most valuable things I do is sit people down and say, 'Look, there's this problem you have that you don't see, but I think it's fixable. You're stuck...
Yes, I was trying mostly to talk about #2. I like the dominance frame because I think this kind of fluid dominance role is something like the Proper Use of Dominance: dominance as enabling swift changes in status to track changes in legitimate authority.
Seems like that wasn't really very clear though.
I think I want to additionally emphasize people being comfortable temporarily taking responsibility for other people. Sometimes I want someone to come in and tell me I have a problem I don't see, and how to solve it. I try to do this for others because I think it's one of the most valuable services I can provide: letting them see outside themselves.
Thanks :)
Thanks, had to make a new link.
There are certainly people who meet it better than others.
(Sorry for the long delay)
Ah, I see why you're arguing now.
(And an idea that works for central examples but fails for edge cases is an idea that fails.)
Ironically, this is not a universal criterion for the success of ideas. Sometimes it's a very useful criterion (think mathematical proofs). Other times, it's not (think 'choosing friends' or 'mathematical intuitions').
For example, the idea of 'cat' fails for edge cases. Is this a cat? Sort of. Sort of not. But 'cat' is still a useful concept.
Concepts are clusters in thing space, and the ...
Maybe I'm still misunderstanding.
Ahhhh, maybe I see what you're complaining about
Are you primarily thinking of this as applying to creationists etc?
Part of the reason I put in the caveat 'people about as reasonable as you' in the first place was to exclude that category of people from what I was talking about.
That is not the central category of people I'm suggesting this for. Also, I'm not clear on why you would think it was.
There's a point intermediate between "completely new" and "just being difficult".
Fair enough. To me, your previous words pattern-matched very strongly to 'being difficult because they think this is dumb but don't want to say why because it seems like too much work' (or something). My mistake.
I didn't mean new to LW, I meant new to the questions you were posing and the answers you got.
Back on the topic at hand,
...In order to do that I would have to assume that I know what questions are the right ones and that he does not. Assuming t
Your points have what seem to me like pretty obvious responses. If this is actually new to you, then I'm very happy to have this discussion.
But I suspect that you have some broader point. Perhaps you think my overall point is misguided or something. If that's the case, then I think you should come out and say it rather than what you're doing. I'm totally interested in thinking about and responding to actual points you have, but I'm only interested in having arguments that might actually change my mind or yours.
But again, if this is actually new, I'm very ...
At the very least, Jiro believes that they are not as sensible as him on those topics.
From the article
>If Paul is at least as sensible as you are and his arguments sound weak or boring, you probably haven’t grokked his real internal reasons.
Not sure! If it was in the last couple months there's a good chance.
Yup!
>this disparity in strength of beliefs is in itself good evidence that there is information we are missing
That's a nice way of summarizing.
I would emphasize the difference between parsing the arguments they're explicitly making and understanding the reasons they actually hold the beliefs they do.
They may not be giving you the arguments that are the most relevant to you. After all, they probably don't know why you don't already believe what they do. They may be focusing on parts that are irrelevant for convincing you.
By the way, nice job trying to ...
Thanks, this was super useful context.
Seems like it's more that the institutions are broken than that few people care. Or it could be that most scientists don't care that much but a significant minority care a lot. For that to cause lots of change you need money, but to get money you need the traditional funders (who don't care, because most scientists don't care) or you need outside help.
Reddit/HN seem like examples of extreme success, we should probably also not behave as if we will definitely enjoy extreme success.
I make the suggestion precisely because we will definitely lose that war.
I wonder if we could find a scalable way of crossposting Facebook and G+ comments, the way Jeff Kaufmann does on his blog (see the comments: https://www.jefftk.com/p/leaving-google-joining-wave)?
That would lower the friction substantially.
I think you may be misunderstanding why people focus on selection mechanisms. Selection mechanisms can have big effects on both the private status returns to quality in comments (~5x) and the social returns to quality (~1000x). Similar effects are much less plausible with treatment effects.
Claim: selection mechanisms are much more powerful than treatment effects.
I think people are using the heuristic: If you want big changes in behavior, focus on incentives.
Selection mechanisms can make relatively big changes in the private status returns to making high qu...
It turns out Cochrane does provide their data. Very nice of them.
Also, at least in this case, my own meta-analysis based on their data perfectly replicated their results. The inefficiency I thought was there was not there.
Metamed went out of business recently.
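For concreteness, the core of the kind of fixed-effect meta-analysis replication described above is just inverse-variance pooling of per-study estimates. The effect sizes and standard errors below are made up for illustration, not Cochrane's actual numbers.

```python
# Fixed-effect inverse-variance meta-analysis over per-study estimates.
# The effect sizes and standard errors here are invented for illustration.
import numpy as np

effects = np.array([0.30, 0.45, 0.25, 0.40])  # per-study effect estimates
ses = np.array([0.10, 0.15, 0.12, 0.20])      # their standard errors

weights = 1.0 / ses**2                          # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(pooled, pooled_se)  # pooled estimate and its standard error
```

A random-effects version would add a between-study variance term to each weight, but the structure is the same.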
I struggle to satisfyingly interpret ⋅, the 'evaluation function'. Or maybe I'm struggling to interpret W, the world timelines (presumably full world evolutions). Any advice on how to think about them?
In particular, how should I understand a ⋅ e = a_0 ⋅ e = w? The agents are different, but the worlds are the same. So then what's the difference between e and w?
I guess ⋅ is something like "the world partitioned into things that are relevantly different for me"? Would appreciate anyone's clarifying thoughts.