First, I don't think conflating blame and "bad person" is necessarily helpful.
OK, yeah, your view of blame as social incentive (skin-in-the-game) seems superior.
The most common case is what is traditionally called "being tempted by sin", e.g., someone procrastinating and not doing what he was supposed to do.
I agree that imposing social costs can be a useful way of reducing this, but I think we would probably have disagreements about how often and in what cases. I think a lot of cases where people blame other people for their failings are...
Well, yes, and I think that's mostly unfortunate. The model of interaction in which people seek to blame each other seems worse -- that is, less effective for meeting the needs and achieving the goals of those involved -- than the one where constructive criticism is employed.
The blame model seems something like this. There are strong social norms which reliably distinguish good actions from bad actions, in a way which almost everyone involved can agree on. These norms are assumed to be understood. When someone violates these norms, the appropriate response...
Edited to "You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame." to try to make the point clearer.
Noticing the things one could be noticing. Reconstructing the field of mnemonics from personal experience. Applied phenomenology. Working toward an understanding of what one's brain is actually doing.
(Commenting in noun phrases. Conveying associations without making assertions.)
I really like the idea, but agree that it is sadly not the right thing here. It would be a fun addition to an Arbital-like site.
These signals could be used outside of automoderation. I didn't focus on the moderation aspect. Automoderation itself really does seem like a moderation system, though. It is an alternate way to address the concerns which would normally be addressed by a moderator.
True, I didn't think about the added burden. This is especially important for a group with frequent newcomers.
I try hard to communicate these distinctions, and distinctions about amount and type of evidence, in conversation. However, it does seem like something more concrete could help propagate norms of making these sorts of distinctions.
And, you make a good point about these distinctions not always indicating the evidence difference that I claimed. I'll edit to add a note about that.
Very cool! I wonder if something like this could be added to a standard productivity/todo tool (thinking of Complice here).
I think the step "how can you prevent this from happening" should perhaps be expanded with something like "or how can you work around this" -- perhaps you cannot prevent the problem directly, but can come up with alternate routes to success.
I found it surprising that the script ended after a "yes" to "Are you surprised?". Mere surprise seems like too low a bar. I expected the next question to be "...
It's also considered the standard in the literature.
Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn't a problem -- but this creates an interesting question as to how the AI is reasoning about the human's behavior in a way that doesn't lead to an infinite loop. One sort of answer we can give is that they're doing logical reasoning about each other, rather than trying to run each other's code. This could run into incompleteness problems, but not always:
http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
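To make the infinite-loop worry concrete, here is a toy sketch (my own construction, not from the paper above) of why naive mutual simulation never terminates: each agent predicts the other by running the other's code, with no base case.

```python
def agent_a(predict_b):
    # A does whatever it predicts B will do.
    return predict_b()

def agent_b(predict_a):
    # B does whatever it predicts A will do.
    return predict_a()

def run_naive():
    # Each "prediction" is just a call back into the other agent's code,
    # so the two agents simulate each other forever.
    predict_a = lambda: agent_a(predict_b)
    predict_b = lambda: agent_b(predict_a)
    return agent_a(predict_b)

try:
    run_naive()
except RecursionError:
    print("Naive mutual simulation never terminates.")
```

Proof-based reasoning sidesteps this regress: rather than running each other's code, the agents search for proofs about each other's outputs, which is where Löbian obstacles (and the bounded Löb result linked above) come in.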
I find this and the smoker's lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent's decision-making. We can perhaps suppose that (in both cases) the agent's preferences are what is affected (by the genes, or by the physics). But then, shouldn't the agent be able to observe this (the "tickle defense"), at least indirectly through behavior? And won't this make it act as CDT would act?
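To spell out the screening-off step, here's a toy version of the smoker's lesion (all numbers made up, purely for illustration):

```python
# G = lesion, D = desire to smoke, C = cancer, S = smoking.
# The lesion causes both the desire and the cancer; smoking itself causes neither.

P_G = 0.3                                  # prior probability of the lesion
P_D_given_G, P_D_given_notG = 0.9, 0.1     # lesion drives the desire
P_C_given_G, P_C_given_notG = 0.8, 0.05    # lesion drives cancer

def posterior_G(desire):
    """P(G | D) by Bayes' rule."""
    like_G = P_D_given_G if desire else 1 - P_D_given_G
    like_not = P_D_given_notG if desire else 1 - P_D_given_notG
    return like_G * P_G / (like_G * P_G + like_not * (1 - P_G))

for desire in (True, False):
    pG = posterior_G(desire)
    p_cancer = P_C_given_G * pG + P_C_given_notG * (1 - pG)
    # Given D, smoking carries no further evidence about G in this model, so
    # P(C | S, D) = P(C | not S, D) -- conditioning on the "tickle" screens off
    # the lesion, and EDT then chooses to smoke, just as CDT would.
    print(f"desire={desire}: P(cancer | D) = {p_cancer:.3f}, whether or not the agent smokes")
```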
But: I find the blackmail letter to be a totally compelling case against EDT.
It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general and yet very informative features of advanced states of the supposedly relevant kind.
Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn't assume a good outcome. But perhaps you're saying that we should at least have a vision of a good outcome in mind to steer tow...
I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you've "made it"; there are always smarter people to make you feel dumb. So at any level, you'd better get used to asking stupid questions.
...And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be val
so maybe we are arguing from the momentum of our first disagreement :P
I think so, sorry!
The people that in the end tested lucid dreaming were the lucid dreamers themselves.
Ah, right. I agree that invalidates my argument there.
Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.
Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)
Based on our rational approach we are at a disadvantage for discovering these truths.
As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder -- not even a little harder -- to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts ...
That's related to Science Doesn't Trust Your Rationality.
What I'd say is this:
Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they've had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they've talked to God in a dream, then I would tend to suspect them of mis-interpreting their experience. I do tend to believe that they're honestly trying to convey an experience they had, though. And it's plausible (though far from certain) that the s...
You must move in much more skeptical circles than me. I've never encountered someone who even "rolled to disbelieve" when told about lucid dreaming (at least not visibly), even among aspiring rationalists; people just seem to accept that it's a thing. But it might be that most of them already heard about it from other sources.
Yes, I think that's right. Especially among those who identify as "skeptics", who see rationality/science as mostly heightened standards of evidence (and therefore lowered standards of disbelief), there can be a tendency to mistake "I have to assign this a low probability for now" for "I am obligated to ignore this due to lack of evidence".
The Bayesian system of rationality rejects "rationality-as-heightened-standard-of-evidence", instead accepting everything as some degree of evidence but requiring us to quantify th...
Malcolm Ocean has also done the "let me see who lives in my head" exercise, inspired by Brienne.
Ah, cool, thanks!
I myself keep a normal journal every day, recording my state of mind and events. This isn't exactly the same thing, but I think it approximates some of the benefits, and it also feeds my desire to record my life so ephemeral things have some concrete backing. I'd recommend that if gratitude journals don't feel right.
For me, regular journalling never felt interesting. I've kept a "research thoughts" journal for a long t...
But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences assuming that they exist.
Discounting the evidence doesn't actually make it any harder for us to discover those experiences. If we don't want to lose out on such things, then we should try some practices which we assign low probability, to see which ones work. Assigning low probability isn't what makes this hard -- what makes th...
We also have to take into account priors in an individual situation. So, for example, maybe I have found that shamanistic scammers who lie about things related to dreams are pretty common. Then it would make sense for me to apply a special-case rule to disbelieve strange-sounding dream-related claims, even if I tend to believe similarly surprising claims in other contexts (where my priors point to people's honesty).
I didn't write the article, but I think "quick modeling" is referring to the previous post on that blog: simple rationality. It's an idiosyncratic view, though; I think the "quick modeling" idea works just as well if you think of it as referring to Fermi-estimate style fast modeling instead (which isn't that different in any case). The point is really just to have any model of the other person's belief at all (for a broad notion of "model"), and then try to refine that. This is more flexible than the double crux algorithm.
Fro...
Seems there's no way to edit the link, so I have to delete.
Disagreements can lead to bad real-world consequences for (sort of) two reasons:
1) At least one person is wrong and will make bad decisions which lead to bad consequences.
2) The argument itself will be costly (in terms of emotional cost, friendship, perhaps financial cost, etc.).
In terms of #1, an unnoticed disagreement is even worse than an unsettled disagreement; so thinking about #1 motivates seeking out disagreements and viewing them as positive opportunities for intellectual progress.
In terms of #2, the attitude of treating disagreements as opportunit...
Yeah, I think the links thing is pretty important. Getting bloggers in the rationalist diaspora to move back to blogging on LW is something of an uphill battle, whereas them or others linking to their stuff is a downhill one.
If double crux felt like the Inevitable Correct Thing, what other things would we most likely believe about rationality in order for that to be the case?
I think this is a potentially useful question to ask for three reasons. One, it can be a way to install double crux as a mental habit -- figure out ways of thinking which make it seem inevitable. Two, to the extent that we think double crux really is quite useful, but don't know exactly why, that's Bayesian evidence for whatever we come up with as potential justification for it. But, three, pinning down su...
Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.
Basically, this:
https://intelligence.org/2016/07/27/alignment-machine-learning/
It's now MIRI's official 2nd agenda, with the previous agenda going under the name "agent foundations".
Not exactly.
(1) What is the family of calibration curves you're updating on? These are functions from stated probabilities to 'true' probabilities, so the class of possible functions is quite large. Do we want a parametric family? A non-parametric family? We would like something which is mathematically convenient, looks as much like typical calibration curves as possible, but which has a good ability to fit anomalous curves as well when those come up. (A toy sketch of one such family is below.)
(2) What is the prior over this family of curves? It may not matter too much if we plan on using a lot of d...
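For concreteness, here is a minimal sketch of one possible answer to (1): a two-parameter family that is linear in log-odds (Platt-scaling-style). The parameterization and names here are illustrative assumptions, not a settled proposal.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def calibration_curve(p_stated, a=1.0, b=0.0):
    """Map stated probabilities to 'true' probabilities.

    a = 1, b = 0 is perfect calibration; a < 1 models overconfidence
    (stated probabilities pulled back toward 0.5); b shifts overall bias.
    """
    return sigmoid(a * logit(p_stated) + b)

# Example: an overconfident forecaster.
p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print(calibration_curve(p, a=0.6))
```

For (2), one natural-seeming default would be a prior on (a, b) concentrated near (1, 0), i.e. near perfect calibration -- though, as noted, the choice may matter less if there is a lot of data.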