All of ProofOfLogic's Comments + Replies

Not exactly.

(1) What is the family of calibration curves you're updating on? These are functions from stated probabilities to 'true' probabilities, so the class of possible functions is quite large. Do we want a parametric family? A non-parametric family? We would like something which is mathematically convenient, looks as much like typical calibration curves as possible, but which has a good ability to fit anomalous curves as well when those come up.

(2) What is the prior over this family of curves? It may not matter too much if we plan on using a lot of d... (read more)
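As a rough sketch of the kind of thing I have in mind for (1) and (2) — the two-parameter family, the grid prior, and the history are all illustrative choices on my part, not a canonical answer:

```python
import numpy as np

# Illustrative two-parameter family: c(p; a, b) = sigmoid(a * logit(p) + b).
# a = 1, b = 0 is perfect calibration; a < 1 means stated probabilities are
# too extreme (overconfidence); b shifts everything up or down.
def calibration_curve(p, a, b):
    logit = np.log(p / (1 - p))
    return 1 / (1 + np.exp(-(a * logit + b)))

# Grid prior over (a, b) -- uniform here, purely for illustration.
a_grid = np.linspace(0.2, 2.0, 40)
b_grid = np.linspace(-1.0, 1.0, 40)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
log_prior = np.zeros_like(A)

# Made-up prediction history: stated probability and whether the event happened.
stated = np.array([0.9, 0.8, 0.9, 0.7, 0.6, 0.9, 0.8])
outcome = np.array([1, 1, 0, 1, 0, 1, 0])

# Bayesian update: log-likelihood of the observed outcomes under each (a, b).
log_lik = np.zeros_like(A)
for p, y in zip(stated, outcome):
    q = calibration_curve(p, A, B)
    log_lik += np.log(q) if y == 1 else np.log(1 - q)

log_post = log_prior + log_lik
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

# Posterior-mean "true" probability when this person next says "90%".
print((calibration_curve(0.9, A, B) * posterior).sum())
```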

0Lumifer
Sure. Any practical implementation will have to figure out all the practical details, including the ones that you mention. But those are implementation issues of something that is still straightforward Bayes, at least for a single individual. If you have a history of predictions and know the actual outcomes, you can even just plot the empirical calibration curve without any estimation involved. Now, if you have multiple people involved, things become more interesting and probably call for something like Gelman's favourite multilevel/hierarchical models. But that's beyond what OP asked for -- he wanted a "rigorously mathematically defined system" and that's plain-vanilla Bayes.
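(A minimal sketch of the "just plot the empirical calibration curve" step mentioned above, with an arbitrary binning choice and a made-up history:)

```python
import numpy as np

# Made-up prediction history.
stated = np.array([0.9, 0.8, 0.9, 0.7, 0.6, 0.9, 0.8, 0.3, 0.2, 0.6])
outcome = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])

# Empirical calibration curve: bin the stated probabilities and report the
# observed frequency of the event within each bin.
bins = np.linspace(0, 1, 6)  # five bins of width 0.2 (an arbitrary choice)
which_bin = np.digitize(stated, bins) - 1
for i in range(len(bins) - 1):
    mask = which_bin == i
    if mask.any():
        print(f"stated {bins[i]:.1f}-{bins[i+1]:.1f}: "
              f"observed frequency {outcome[mask].mean():.2f} (n={mask.sum()})")
```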

First I don't think conflating blame and "bad person" is necessarily helpful.

OK, yeah, your view of blame as social incentive (skin-in-the-game) seems superior.

The most common case is what is traditionally called "being tempted by sin", e.g., someone procrastinating and not doing what he was supposed to do.

I agree that imposing social costs can be a useful way of reducing this, but I think we would probably have disagreements about how often and in what cases. I think a lot of cases where people blame other people for their failings are... (read more)

Well, yes, and I think that's mostly unfortunate. The model of interaction in which people seek to blame each other seems worse -- that is, less effective for meeting the needs and achieving the goals of those involved -- than the one where constructive criticism is employed.

The blame model seems something like this. There are strong social norms which reliably distinguish good actions from bad actions, in a way which almost everyone involved can agree on. These norms are assumed to be understood. When someone violates these norms, the appropriate response... (read more)

1lmn
First I don't think conflating blame and "bad person" is necessarily helpful. The most common case is what is traditionally called "being tempted by sin", e.g., someone procrastinating and not doing what he was supposed to do.

However, goals are not entirely common. A group of people can all agree that something must get done while each one also wants to get the most credit for the least amount of work. And don't get me started on situations where most of the participants are only there for a paycheck, a.k.a. the real world.

As I see it, the blame model is about enforcing skin in the game after the fact. When something objectively bad happens -- a bridge collapses, profits collapse, a car accident -- it's necessary to enforce skin in that game on those whose decisions were responsible for bringing it about, i.e., make the person responsible for a risky decision bear the downside risk, especially if he would have received the benefits had it succeeded. For example, a CEO who decided on a risky strategy and would have gotten a big bonus if it had succeeded should also bear a cost for failure.

Of course, in the question of who is responsible, social norms might be relevant, e.g., if two cars collided in an intersection, the person who ran the red light is the one responsible.

Edited to "You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame." to try to make the point clearer.

0lmn
Of course, if you looked at the kind of responsibility that is compatible with blame, you'd notice it's a lot more in line with the common-sense notion of the term.

Noticing the things one could be noticing. Reconstructing the field of mnemonics from personal experience. Applied phenomenology. Working toward an understanding of what one's brain is actually doing.

(Commenting in noun phrases. Conveying associations without making assertions.)

I really like the idea, but agree that it is sadly not the right thing here. It would be a fun addition to an Arbital-like site.

These signals could be used outside of automoderation. I didn't focus on the moderation aspect. Automoderation itself really does seem like a moderation system, though. It is an alternate way to address the concerns which would normally be addressed by a moderator.

True, I didn't think about the added burden. This is especially important for a group with frequent newcomers.

I try hard to communicate these distinctions, and distinctions about amount and type of evidence, in conversation. However, it does seem like something more concrete could help propagate norms of making these sorts of distinctions.

And, you make a good point about these distinctions not always indicating the evidence difference that I claimed. I'll edit to add a note about that.

Very cool! I wonder if something like this could be added to a standard productivity/todo tool (thinking of Complice here).

I think the step "how can you prevent this from happening" should perhaps add something like "or how can you work around this" instead -- perhaps you cannot prevent the problem directly, but can come up with alternate routes to success.

I found it surprising that the script ended after a "yes" to "Are you surprised?". Mere surprise seems like too low a bar. I expected the next question to be "... (read more)

1[anonymous]
Hello, thanks for the feedback! I'll likely go and change parts of the script this weekend, as well as add an undo button or something of the sort. I agree that the ending prompt of mere surprise isn't good enough or well-defined enough to let people understand that it's the end of the process. Also, fixing the "done" thing would be good (so it's also an acceptable answer).

It's also considered the standard in the literature.

Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn't a problem -- but this creates an interesting question as to how the AI is reasoning about the human's behavior in a way that doesn't lead to an infinite loop. One sort of answer we can give is that they're doing logical reasoning about each other, rather than trying to run each other's code. This could run into incompleteness problems, but not always:

http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
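As a toy illustration of the regress being avoided -- this is just the naive "run each other's code" picture plus a crude depth bound, not the proof-based approach of the linked paper:

```python
# Naive picture: each agent decides by literally simulating the other.
def agent_a(depth=0):
    return "cooperate" if agent_b(depth + 1) == "cooperate" else "defect"

def agent_b(depth=0):
    return "cooperate" if agent_a(depth + 1) == "cooperate" else "defect"

# Calling agent_a() never terminates: each simulation spawns another
# simulation, which is exactly the infinite loop worried about above.

# Crude fix, purely for illustration: bound the depth and fall back to a
# default action, so the mutual reasoning provably terminates.
def bounded_a(depth=0, limit=3):
    if depth >= limit:
        return "cooperate"  # default assumption once reasoning resources run out
    return "cooperate" if bounded_b(depth + 1, limit) == "cooperate" else "defect"

def bounded_b(depth=0, limit=3):
    if depth >= limit:
        return "cooperate"
    return "cooperate" if bounded_a(depth + 1, limit) == "cooperate" else "defect"

print(bounded_a())  # "cooperate", and it halts
```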

I find this and the smoker's lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent's decision-making. We can perhaps suppose that (in both cases) the agent's preferences are what is affected (by the genes, or by the physics). But then, shouldn't the agent be able to observe this (the "tickle defense"), at least indirectly through behavior? And won't this make it act as CDT would act?

But: I find the blackmail letter to be a totally compelling case against EDT.
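To spell out the tickle-defense point with toy numbers (entirely made up): in a population where people smoke exactly when they feel the desire, smoking looks like bad news about cancer -- but once the agent conditions on the desire it can observe in itself, the act of smoking carries no further information about the gene, and EDT recommends what CDT does.

```python
# Toy smoking-lesion numbers, made up for illustration.
p_gene = 0.3
p_cancer_given = {True: 0.8, False: 0.1}   # cancer depends only on the gene
p_desire_given = {True: 0.9, False: 0.2}   # the gene also causes a desire to smoke

def p_joint(gene, desire):
    pg = p_gene if gene else 1 - p_gene
    pd = p_desire_given[gene] if desire else 1 - p_desire_given[gene]
    return pg * pd

# In the background population, people smoke exactly when they feel the desire,
# which is what creates the evidential link between smoking and cancer.
def p_cancer_given_smoking(smokes):
    num = sum(p_joint(g, smokes) * p_cancer_given[g] for g in (True, False))
    den = sum(p_joint(g, smokes) for g in (True, False))
    return num / den

print(p_cancer_given_smoking(True))   # ~0.56: naive EDT sees smoking as bad news
print(p_cancer_given_smoking(False))  # ~0.14

# Tickle defense: condition on the observed desire first. Given the desire,
# smoking adds no information about the gene, so P(cancer | desire, smoke)
# equals P(cancer | desire) and EDT agrees with CDT.
def p_cancer_given_desire(desire):
    num = sum(p_joint(g, desire) * p_cancer_given[g] for g in (True, False))
    den = sum(p_joint(g, desire) for g in (True, False))
    return num / den

print(p_cancer_given_desire(True))    # ~0.56, unchanged by the decision to smoke
```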

2Johannes Treutlein
I agree with all of this, and I can't understand why the Smoking Lesion is still seen as the standard counterexample to EDT. Regarding the blackmail letter: I think that in principle, it should be possible to use a version of EDT that also chooses policies based on a prior instead of actions based on your current probability distribution. That would be "updateless EDT", and I think it wouldn't give in to Evidential Blackmail. So I think rather than an argument against EDT, it's an argument in favor of updatelessness.
1Jiro
The blackmail letter has someone reading the AI agent's source code to figure out what it would do, and therefore runs into the objection "you are asserting that the blackmailer can solve the Halting Problem".

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general yet very informative features of advanced states of the supposed relevant kind.

Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn't assume a good outcome. But perhaps you're saying that we should at least have a vision of a good outcome in mind to steer tow... (read more)

0KristenBurke
Yes. I may just not know of any principled ways of forming a set of outcomes to begin with, so that it may be treated as a lottery and so forth. But it would seem that aesthetics or axiology must still have some role in the formation, since precise and certain truths aren't known about the future and yet at least some structure seems subjectively required—if not objectively required—through the construction of a (firm but mutable) set of highest outcomes. So far my best attempts have involved not much more than basic automata concepts for personal identity and future configurations.

I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you've "made it"; there are always smarter people to make you feel dumb. So at any level, you'd better get used to asking stupid questions.

And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be val

... (read more)
1KristenBurke
It's probably just me, but the Stack Exchange community seems to make this hard.

Yes, that would be nice. And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those -- probably less informed -- intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain. Actually, just your first paragraph in your response seems to have almost done that, if not entirely.

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general yet very informative features of advanced states of the supposed relevant kind.

so maybe we are arguing from the momentum of our first disagreement :P

I think so, sorry!

The people that in the end tested lucid dreaming were the lucid dreamers themselves.

Ah, right. I agree that invalidates my argument there.

Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.

Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)

Based on our rational approach we are at a disadvantage for discovering these truths.

As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder -- not even a little harder -- to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts ... (read more)

1Erfeyah
I don't think there is a gap. I am pointing towards a difficulty. If you are acknowledging the difficulty (which you are) then we are in agreement. I am not sure why it feels like a disagreement. Don't forget that at the start you had a reason for disagreeing, which was my erroneous use of the word rationality. I have now corrected that, so maybe we are arguing from the momentum of our first disagreement :P

That's related to Science Doesn't Trust Your Rationality.

What I'd say is this:

Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they've had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they've talked to God in a dream, then I would tend to suspect them of mis-interpreting their experience. I do tend to believe that they're honestly trying to convey an experience they had, though. And it's plausible (though far from certain) that the s... (read more)

1Erfeyah
Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments. I am using it as a label for a social group and you are using it as an approach to knowledge. Needless to say this is my mistake, as the whole point of this post is about improving the rational approach by becoming aware of what I think of as a difficult space of truth.

That, I feel, is not accurate. Don't forget that my example assumes a world before the means to experimentally verify lucid dreaming were available. The people that in the end tested lucid dreaming were the lucid dreamers themselves. This will inevitably happen for all knowledge that can be verified; it will happen through the people who have it. I am talking about the knowledge that is currently unverifiable (except through experience).

You must move in much more skeptical circles than me. I've never encountered someone who even "rolled to disbelieve" when told about lucid dreaming (at least not visibly), even among aspiring rationalists; people just seem to accept that it's a thing. But it might be that most of them already heard about it from other sources.

Yes, I think that's right. Especially among those who identify as "skeptics", who see rationality/science as mostly heightened standards of evidence (and therefore lowered standards of disbelief), there can be a tendency to mistake "I have to assign this a low probability for now" for "I am obligated to ignore this due to lack of evidence".

The Bayesian system of rationality rejects "rationality-as-heightened-standard-of-evidence", instead accepting everything as some degree of evidence but requiring us to quantify th... (read more)
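For instance (numbers invented just to show the bookkeeping): a friend's sincere report is weak evidence, but it is still a likelihood ratio, and the Bayesian move is to multiply it in rather than discard it.

```python
# Invented numbers, purely to illustrate the bookkeeping.
prior_odds = 0.05 / 0.95          # prior that the strange claim is true: 5%

# A sincere-sounding first-hand report: somewhat more likely if the claim
# is true than if it is false, but people also misreport, so the ratio is modest.
likelihood_ratio = 0.8 / 0.3

posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)   # ~0.12: not belief, but not "ignore due to lack of evidence" either
```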

0Erfeyah
I do like the flexibility of the Bayesian system of rationality and the "assuming is not believing" example you gave. But I do not (at the moment) see how it is any more efficient in cases where the evidence is not clearly quantified or is simply really weak. There seems to me to be a dependence of any system of rational analysis on the current state of quantifiable evidence. In other words, rational analysis can go up to where science currently is. But there is an experimental space that is open for exploration without convincing intellectual evidence. Navigating this space is a bit of a puzzle. But this is a half-baked thought. I will post when I can express it clearly.

Malcolm Ocean has also done the "let me see who lives in my head" exercise, inspired by Brienne.

Ah, cool, thanks!

I myself keep a normal journal every day, recording my state of mind and events. This isn't exactly the same thing, but I think it approximates some of the benefits, and it also feeds my desire to record my life so ephemeral things have some concrete backing. I'd recommend that if gratitude journals don't feel right.

For me, regular journalling never felt interesting. I've kept a "research thoughts" journal for a long t... (read more)

0[anonymous]
Hm, re: journaling, I think it works for a subset of people. I've met friends who, like me, swear by journaling as a great way to keep track of mindspace. But other people seem (outwardly, at least) to do just fine w/o them. To each their own, I guess.

But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences assuming that they exist.

Discounting the evidence doesn't actually make it any harder for us to discover those experiences. If we don't want to lose out on such things, then we should try some practices which we assign low probability, to see which ones work. Assigning low probability isn't what makes this hard -- what makes th... (read more)
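As a toy illustration of why the low probability itself isn't the obstacle (numbers made up): even at 10% credence, a cheap trial can be clearly worth it; the real cost is the number of similar-sounding claims to sift through.

```python
# Made-up numbers: deciding whether to try one low-probability practice.
p_works = 0.10          # our honest credence that the practice works
benefit_if_works = 50   # value of the experience, in arbitrary utility units
cost_of_trying = 2      # a few evenings of effort

ev_single = p_works * benefit_if_works - cost_of_trying
print(ev_single)        # 3.0 > 0: worth trying despite the low probability

# What actually makes it hard: the same low probability spread across many
# similar-sounding claims, each with its own trial cost.
n_claims = 100
print(n_claims * cost_of_trying)   # 200: the sifting cost, not the low credence
```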

0Erfeyah
I think I see what you are saying. I am phrasing the problem as an issue with rationality when I should have been phrasing it as a type of bias that tends to affect people with a rationality focus. Identifying the bias should allow us to choose a strategy which will in effect be the more rational approach. Did I understand you correctly?

P.S.: I edited the opening paragraph and conclusion to address your and entirelyuseless's valid criticisms.

We also have to take into account priors in an individual situation. So, for example, maybe I have found that shamanistic scammers who lie about things related to dreams are pretty common. Then it would make sense for me to apply a special-case rule to disbelieve strange-sounding dream-related claims, even if I tend to believe similarly surprising claims in other contexts (where my priors point to people's honesty).

0Erfeyah
Lucid dreaming is actually an interesting one, where I always have to start an introduction to it with "It has been scientifically proven; I am not crazy or part of a cult." Even then I sometimes get sceptical responses. In most cases, I don't think I would be able to communicate it at all if it had not been scientifically proven.

Meditation is another one that is highly stigmatised because of its associations with (in many cases demonstrably) crazy claims, and has been "thrown out with the bath water", though this seems to be slowly changing as more and more studies are showing its benefits.

These are two instances where scientific evidence has surfaced, so they are easy to talk about. They are good as indicative examples. The post is about experiences that (assuming they exist) have not yet entered the area discoverable by our current scientific tools.

I didn't write the article, but I think "quick modeling" is referring to the previous post on that blog: simple rationality. It's an idiosyncratic view, though; I think the "quick modeling" idea works just as well if you think of it as referring to Fermi-estimate style fast modeling instead (which isn't that different in any case). The point is really just to have any model of the other person's belief at all (for a broad notion of "model"), and then try to refine that. This is more flexible than the double crux algorithm.

Fro... (read more)

0[anonymous]
Hm, okay. I was unsure where it differed with double crux exactly; thanks for the additional info.

Seems there's no way to edit the link, so I have to delete.

Disagreements can lead to bad real-world consequences for (sort of) two reasons:

1) At least one person is wrong and will make bad decisions which lead to bad consequences.

2) The argument itself will be costly (in terms of emotional cost, friendship, perhaps financial cost, etc).

In terms of #1, an unnoticed disagreement is even worse than an unsettled disagreement; so thinking about #1 motivates seeking out disagreements and viewing them as positive opportunities for intellectual progress.

In terms of #2, the attitude of treating disagreements as opportunit... (read more)

0Lumifer
See my reply to Jess.

Yeah, I think the links thing is pretty important. Getting bloggers in the rationalist diaspora to move back to blogging on LW is something of an uphill battle, whereas them or others linking to their stuff is a downhill one.

If double crux felt like the Inevitable Correct Thing, what other things would we most likely believe about rationality in order for that to be the case?

I think this is a potentially useful question to ask for three reasons. One, it can be a way to install double crux as a mental habit -- figure out ways of thinking which make it seem inevitable. Two, to the extent that we think double crux really is quite useful, but don't know exactly why, that's Bayesian evidence for whatever we come up with as potential justification for it. But, three, pinning down su... (read more)

Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.

3Dagon
Upvoted, but I have to say I dislike link posts in discussion. The vast majority are wrong for LW -- either oversimple or just repeating what's already in the wiki or sequences. Some are on-topic and well-written, but even then the community interaction is broken when there are comments both on the hosting site and LW. If you have something LW-appropriate to say, make a post and include the link. Note: since links are now a thing, I'm likely in the minority.
1MrMind
I have upvoted you. Make us proud.

Basically, this:

https://intelligence.org/2016/07/27/alignment-machine-learning/

It's now MIRI's official 2nd agenda, with the previous agenda going under the name "agent foundations".

2scarcegreengrass
Okay, thanks.

Reminds me of the general tone of Nate Soares' Simplifience stuff.