All of zlrth's Comments + Replies

zlrth (10)

(Will you all get this comment as an email?) Looking forward to meeting! I'll bring nametags and a sign-up sheet and some extemporaneously-chosen food.

Lakin (1)
Nametags are a good call
zlrth (10)

Broken link:

http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14

Expected behavior: You can see the comment, a la archive.org:

https://web.archive.org/web/20170424155218/http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14

(do make sure you hit the '+')

Actual behavior: You can't see the comment on the page unless you click "show more comments." When you do, the page reloads and jumps down to the comment. Given that lesswrong.com/{...}/2f14 is a direct link to that comment, the page should show that comment without the extra step.
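For illustration, here's a minimal, hypothetical sketch (not LessWrong's actual code) of the behavior I'd expect: if the last URL path segment names a comment that's hidden inside a collapsed thread, expand the thread and scroll to the comment. The "collapsed" class and the markup are assumptions on my part.

```typescript
// Hypothetical sketch, assuming collapsed threads are wrapped in elements
// with a "collapsed" class and the comment id is the last URL path segment.
function revealLinkedComment(): void {
  // e.g. ".../should_i_believe_what_the_siai_claims/2f14" -> "2f14"
  const segments = window.location.pathname.split("/").filter(Boolean);
  const commentId = segments[segments.length - 1];
  if (!commentId) return;

  const comment = document.getElementById(commentId);
  if (!comment) return;

  // Expand every collapsed ancestor so the linked comment is visible.
  for (let el = comment.parentElement; el; el = el.parentElement) {
    el.classList.remove("collapsed");
  }

  comment.scrollIntoView({ block: "start" });
}

window.addEventListener("DOMContentLoaded", revealLinkedComment);
```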

zlrth (20)

Some time ago I stopped telling people I'd be somewhere at ish-o'clock. 4PM-ish for example. I really appreciate when people tell me they'll be somewhere at an exact time, and they're there.

I've heard that people are more on-time for a meeting that starts at 4:05 than one at 4:00, and I've used that tactic (though I'd pick the less-obviously-sneaky 4:15).

Hazard (2)
I also like picking sneaky times! I used to be a big fan of starting things at 3:17 or 4:41.
zlrth (30)

Yeah--when the person asking the question said, "90 years," and the Turing Award winners raised some hands, couldn't they be interpreted as specifying a wide confidence interval, which is what you should do when you know you don't have domain expertise with which to predict the future?

zlrth (30)
This intuitively feels epistemologically arrogant, but it succeeds in solving the probability language discrepancy.

In general I support the thought that you avoid a lot of pitfalls if you're really precise and really upfront about what kinds of evidence you will and won't accept. I suspect that that kind of planning is not discussed enough in rationalist circles, so I appreciate this post! You're upfront about the fact that you'll accept a non-explicit signal. I see nothing wrong with that, given that you're many inferential steps from a shared understanding of probability.

zlrth (10)

First: Yes, I agree that my thing is a different thing, different enough to warrant a new name. And I am sneaking in negative affect.

Yeah, no kidding it’s easier to catch people doing it—because it’s a completely different thing!

Indeed, I am implicitly arguing that we should be focused on faults-we-actually-have[0], not faults-it's-easy-to-see-we-don't. My example of this is the above-linked podcast, where the hosts hem and haw and, after thinking about it, decide they have no sacred cows, and declare that Good (full disclosure: I like the podcast...

zlrth (50)

I sometimes hear rationalist-or-adjacent people say, "I don't have any sacred-cow-type beliefs." This is the perspective of this commenter who says, "lesswrong doesn't scandalize easily." Agreed: rationalists-and-adjacents entertain a wide variety of propositions.

The conventional definition of sacred-cow-belief is: A falsifiable belief about the world you wouldn't want falsified, given the chance. For example: If a theist had the opportunity to open a box to see whether God existed, and refused, and wouldn't let anyo...

Hazard (1)
I'm with Said Achmiz on not liking your phrasing/word repurposing, but I do like and agree with "Sacred Cows? That's too easy, what's next to hunt for?" Ex. I've noticed recently that "Don't worry about/keep score of the little things" made it hard for me to have a strong and clear picture of where the pain points in my life are. Now I'm trying to keep the "Don't harmfully ruminate on negatives" and simultaneously record, every day, a list of "things that happened which I don't like or that I want to change".
Said Achmiz (4)
Basically this:

“The conventional definition of [thing widely agreed-upon to be bad] is ‘[normal definition]’. But a more interesting definition is ‘[completely different definition, of which it’s not at all clear that it’s bad]’. The advantage of this definition is that it’s easier to catch people doing it.”

Examples:

“The conventional definition of ‘stealing’ is ‘taking something, without permission, which doesn’t belong to you’. But a more interesting definition is ‘owning something which is unethical to own’. The advantage of this definition is that it’s easier to catch people doing it.”

“The conventional definition of ‘fraud’ is ‘knowingly deceiving people, for profit’. But a more interesting definition is ‘making money by doing something which runs counter to people’s expectations’. The advantage of this definition is that it’s easier to catch people doing it.”

“The conventional definition of ‘adultery’ is ‘having sexual relations with a person other than the one with whom you have a monogamous marriage’. But a more interesting definition is ‘doing something which causes your spouse to experience jealousy’. The advantage of this definition is that it’s easier to catch people doing it.”

Yeah, no kidding it’s easier to catch people doing it—because it’s a completely different thing! Why would you call it by the same term (“sacred cow”, “theft”, “fraud”, “adultery”)—unless you wanted to sneak in negative affect, without first doing the work of demonstrating (or, indeed, even explicitly claiming) that the thing described by your new definition is, in fact, bad?
zlrth (20)
(as Eliezer says, it is dangerous to be half a rationalist, link, there's a better link somewhere, but I can't find it)

This might be it: http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/

Excerpt:

And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid. 

(it also mentions tha...

zlrth (30)

I'm going to write soon about how I don't care about existential risk, and how I can't figure out why. Am I not a good rationalist? Why can't I seem to care?

In one compound sentence: Personal demons made me a rationalist; personal demons decide what I think/feel is important.

I'm still angsty!

zlrth (10)
I personally will have no strong beliefs about the truth value of their hypothesis if I have too much conflicting evidence. However, I won't want to put much effort into testing the hypothesis unless my plans depend on it being true or false.

I like how you said this.

The people with whom I was speaking were successful members of society, so they fell into the uncanny valley for me when they started pushing the idea that everyone has their own truth. I'm not sure if it's better or worse that they didn't quite literally believe that, but
...
zlrth (30)

I think I know what you mean by "rationalists really do wipe the floor with the competition." But in the interest of precision, what do you mean? I'm not convinced they do; I alluded to this in my post here: https://www.lesserwrong.com/posts/dPLLnAJbas97GGsvQ/leave-beliefs-that-don-t-constrain-experience-alone

It may be that the community already has a standard article on this; I'd be happy with a link. It may also be that I should read more rigorously about what exactly a rationalist is. If there is no standard article, I'm curious about your thoughts.

SquirrelInHell (2)
It's not something there are standard materials on, and I duly acknowledge that the bar to make you take off because of rationality is pretty damn high. If you never did, try to personally meet some of the handful of "recognized celebrities" of the rationality community, and take a close look at what they do.
zlrth (10)

You want to do better than a Nobel Prize? Not the prize, of course, but the contribution to society? I'm intrigued. Could you expand on that?

My intrigue comes from my bar-of-what-is-possible, John von Neumann. He probably has more beliefs-that-pay-rent than me, but he also has a "practically unlimited" capacity for work, tons of "mathematical courage," and "awe-inspiring" speed[0]. It'd be so great if those things were simply beliefs-that-pay-rent!

So I tell myself, "To do better than I have been doing, I must inc...

Gordon Seidoh Worley (2)
It of course depends on the Nobel Prize being awarded and for what, but I'm thinking in terms of impact, where being the best of all humanity might not be enough: even if you do the best work of all humanity to address an existential risk, you might still fail to do enough to mitigate the risk.
zlrth (10)

Responding to the prompt for discussion: Once one finds deeply rooted filters with poor calibration, how should you go about fixing them?

I've heard people comment and meta-comment about how rationality seems to help people only in indirect ways. I'd also say that about myself!

I also have rarely asked for help. This is a deeply-rooted, poorly-calibrated filter. My first answer is: do the hard thing the easy way.

"Complaining more" is my way of "asking for help." Specifically, complaining that's directed toward a tractable p

...