zlrth's Shortform Feed
This is my shortform feed (h/t Hazard and Raemon, thanks!). This is for thoughts that are short or half-baked or both.
This year's ACX Meetups Everywhere gathering in Pittsburgh, PA.
Location: The location may be updated, so (1) email to confirm, (2) thanks in advance for any suggestions, and (3) for now, plan on outside the Frick Park gate: https://goo.gl/maps/q3T7iMmp6HGQnoTv9 – ///pulse.cubes.admiral
Contact: matthewfmarks@gmail.com
Edited: Indeed! It's happening September 11th, at noon, outside the Frick Park gate.
(The following are our suggestions for what kind of information is best to include in your group's welcome post; feel free to replace them with whatever you think is best.)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for your group?
Should they have any particular skills or have done some specific background reading?
Broken link:
http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14
Expected behavior: You can see the comment, a la archive.org:
https://web.archive.org/web/20170424155218/http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14
(do make sure you hit the '+')
Actual behavior: You can't see the comment on the page unless you click "show more comments." Click it, and the page reloads and scrolls down to the comment. Given that lesswrong.com/{...}/2f14 is a direct link to that comment, the page should show that comment immediately.
Some time ago I stopped telling people I'd be somewhere at ish-o'clock. 4PM-ish for example. I really appreciate when people tell me they'll be somewhere at an exact time, and they're there.
I've heard that people are more on-time for a meeting that starts at 4:05 than one at 4:00, and I've used that tactic (though I'd pick the less-obviously-sneaky 4:15).
Yeah--when the person asking the question said, "90 years," and the Turing award winners raised some hands, couldn't they be interpreted to be specifying a wide confidence interval, which is what you should do when you know you don't have domain expertise with which to predict the future?
This intuitively feels epistemically arrogant, but it does resolve the discrepancy in how the two sides talk about probability.
In general I support the thought that you avoid a lot of pitfalls if you're really precise and really upfront about what kinds of evidence you will and won't accept. I suspect that kind of planning isn't discussed enough in rationalist circles, so I appreciate this post! You're upfront about the fact that you'll accept a non-explicit signal. I see nothing wrong with that, given that you're many inferential steps from a shared understanding of probability.
Oh! Thanks.
First: Yes I agree that my thing is a different thing, different enough to warrant a new name. And I am sneaking in negative affect.
Yeah, no kidding it’s easier to catch people doing it—because it’s a completely different thing!
Indeed, I am implicitly arguing that we should be focused on faults-we-actually-have[0], not faults-it's-easy-to-see-we-don't. My example of this is the above-linked podcast, where the hosts hem and haw and, after thinking about it, decide they have no sacred cows, and declare that Good (full disclosure: I like the podcast).
"Sacred-cow" as "well-formed proposition about the world you'd choose to be ignorant of" is clearly bad to LWers, so much so that it's non-tribal.
[0] And especially, faults-we-have-in-common-with-non-rationalists! I said, "The advantage of this definition is that it’s easier to catch rationalists and non-rationalists doing it." Said Achmiz gave examples using the word "people," but I intended to group rationalists with non-rationalists.
I sometimes hear rationalist-or-adjacent people say, "I don't have any sacred-cow-type beliefs." This is the perspective of this commenter who says, "lesswrong doesn't scandalize easily." Agreed: rationalists-and-adjacents entertain a wide variety of propositions.
The conventional definition of sacred-cow-belief is: a falsifiable belief about the world you wouldn't want falsified, given the chance. For example: if a theist had the opportunity to open a box to see whether God existed, and refused, and wouldn't let anyone else open the box, their belief in God is a sacred cow.
A more interesting (to me) definition of sacred cow is: a belief that causes you to not notice mistakes you make. The advantage of this definition is that it's... (read more)
Epistemic effort: I wrote this on a plane flight. I'm often interested in Ribbonfarmian "consider a bad thing. What if it's good? (Here's my favorite example of this.)
As regards updating my beliefs, I'm drawn to motivated snobbery. "Motivated" means "this belief improves my experiences;" "snobbery" means "with this belief, I eliminate a class of problems other people have."
An example of motivated snobbery is "tipping well." Here's my sales pitch: Tipping is an iterated prisoner's dilemma, not an evaluation of their performance! I want servers and bartenders to be happy to see me. A friend said this well: If I'm getting rich we're all getting rich.
That this makes bartenders happy and signals to... (read 308 more words →)
(as Eliezer says, it is dangerous to be half a rationalist; link. There's a better link somewhere, but I can't find it)
This might be it: http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/
Excerpt:
And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws. So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like. This, I suspect, is one of the primary ways that smart people end up stupid.
(It also mentions that it's dangerous to be half a rationalist.)
I'm going to write soon about how I don't care about existential risk, and how I can't figure out why. Am I not a good rationalist? Why can't I seem to care?
In one compound sentence: Personal demons made me a rationalist; personal demons decide what I think/feel is important.
I'm still angsty!
Epistemic status: This is the first time I've expressed these thoughts. I've thought for a long time that people do their jobs well, and are numbskulls in every other area of life. Here I say that it's OK to be a numbskull.
I read Raising the Sanity Waterline some time ago. I thought, "These are great points! I've needed them!" I made arguments that used those points a few times.
When I listened to the Bayesian Conspiracy's episode on it I thought, "How did BC get this article so wrong? RtSW isn't about making oblique attacks on religion by teaching people things like Occam's Razor!"
It is about that!
I think I took... (read 375 more words →)
(Will you all get this comment as an email?) Looking forward to meeting! I'll bring nametags, a sign-up sheet, and some extemporaneously-chosen food.