
Comment author: entirelyuseless 24 February 2017 03:26:06PM 0 points [-]

Smoking lesion is "seen as the standard counterexample" at least on LW pretty much because people wanted to agree with Eliezer.

Comment author: ProofOfLogic 24 February 2017 05:07:34PM 1 point [-]

It's also considered the standard in the literature.

[Link] The Monkey and the Machine

5 ProofOfLogic 23 February 2017 09:38PM
Comment author: Jiro 03 February 2017 07:43:21PM 1 point [-]

The blackmail letter has someone reading the AI agent's source code to figure out what it would do, and therefore runs into the objection "you are asserting that the blackmailer can solve the Halting Problem".

Comment author: ProofOfLogic 07 February 2017 03:57:14AM 1 point [-]

Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn't a problem -- but this creates an interesting question as to how the AI is reasoning about the human's behavior in a way that doesn't lead to an infinite loop. One sort of answer we can give is that they're doing logical reasoning about each other, rather than trying to run each other's code. This could run into incompleteness problems, but not always:

http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
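To make the worry concrete, here is a toy Python sketch (my own construction, not from the paper): two agents that decide by naively simulating each other's code recurse forever, and the depth bound below is only a crude stand-in for the bounded proof-based reasoning the paper makes principled.

```python
# Toy model (mine, not from the paper): a blackmailer and an agent that
# each decide by simulating the other. With no budget the mutual
# recursion never terminates; the depth parameter cuts it off.

def blackmailer(depth):
    """Sends the letter iff it predicts the agent would pay."""
    if depth == 0:
        return "blackmail"  # arbitrary guess once the simulation budget runs out
    return "blackmail" if agent(depth - 1) == "pay" else "no blackmail"

def agent(depth):
    """Pays iff it predicts it would be blackmailed anyway."""
    if depth == 0:
        return "refuse"  # arbitrary guess once the simulation budget runs out
    return "pay" if blackmailer(depth - 1) == "blackmail" else "refuse"
```

Note that the verdict flips with the budget (`agent(1)` pays, `agent(2)` refuses): a bare depth cutoff makes the answer depend on an arbitrary parameter, which is part of why reasoning logically about the other's code, rather than running it, is the more satisfying route.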

Comment author: ProofOfLogic 02 February 2017 10:52:28PM 1 point [-]

I find this and the smoker's lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent's decision-making. We can perhaps suppose that (in both cases) the agent's preferences are what is affected (by the genes, or by the physics). But then, shouldn't the agent be able to observe this (the "tickle defense"), at least indirectly through behavior? And won't this make it act as CDT would act?
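For concreteness, here is a toy calculation (the numbers are my own) of how EDT and CDT come apart on the smoker's lesion before any tickle-defense adjustment: smoking is causally harmless but evidentially correlated with the lesion.

```python
# Toy smoker's lesion (illustrative numbers). The lesion causes cancer
# and, in the population, correlates with smoking; smoking itself is
# causally harmless here, but enjoyable.

P_LESION = 0.5                       # prior probability of the lesion
P_CANCER = {True: 0.9, False: 0.01}  # P(cancer | lesion status)
P_LESION_GIVEN = {"smoke": 0.95, "abstain": 0.05}  # observed correlation

U_SMOKING, U_CANCER = 10.0, -1000.0

def edt_value(action):
    """EDT conditions on the action as evidence about the lesion."""
    p_lesion = P_LESION_GIVEN[action]
    p_cancer = p_lesion * P_CANCER[True] + (1 - p_lesion) * P_CANCER[False]
    return (U_SMOKING if action == "smoke" else 0.0) + p_cancer * U_CANCER

def cdt_value(action):
    """CDT holds the lesion probability fixed: the action can't cause it."""
    p_cancer = P_LESION * P_CANCER[True] + (1 - P_LESION) * P_CANCER[False]
    return (U_SMOKING if action == "smoke" else 0.0) + p_cancer * U_CANCER

edt_choice = max(["smoke", "abstain"], key=edt_value)
cdt_choice = max(["smoke", "abstain"], key=cdt_value)
```

EDT abstains (treating smoking as bad news about the lesion), while CDT smokes (the action can't change the lesion, and smoking is worth +10 either way). The tickle defense amounts to saying the EDT agent should already have screened off the correlation by observing its own preferences, collapsing the two answers.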

But: I find the blackmail letter to be a totally compelling case against EDT.

Comment author: JenniferRM 30 January 2017 05:21:28PM 3 points [-]

I appreciate the poll, but a large part of my goal was to just get a lot of comments, hopefully at the "Ping" level, because I want to see how many people are here with at least that amount of "social oomph" when the topic is themselves.

For people responding to this poll, please also leave a very small comment noting that you used the poll.

Comment author: ProofOfLogic 31 January 2017 10:34:39AM 1 point [-]

ping

Comment author: KristenBurke 27 January 2017 01:49:25PM 1 point [-]

So at any level, you'd better get used to asking stupid questions.

It's probably just me, but the Stack Exchange community seems to make this hard.

I think it would be nice if someone wrote a post on "visceral comparative advantage" giving tips on how to intuitively connect "the best thing I could be doing" with comparative advantage rather than absolute notions.

Yes, that would be nice. And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain. Actually, just your first paragraph in your response seems to have almost done that, if not entirely.

I don't think many people on the "front lines" as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don't know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn't think of now.

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general but yet very informative features of advanced states of the supposed relevant kind.

Comment author: ProofOfLogic 29 January 2017 09:20:28PM 0 points [-]

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general but yet very informative features of advanced states of the supposed relevant kind.

Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn't assume a good outcome. But perhaps you're saying that we should at least have a vision of a good outcome in mind to steer toward.

And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain.

Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth) -- it means you're not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.

Or, even more basic: optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can't find any solutions which pass our threshold. It's then useful to remember that the threshold was arbitrary to begin with, and that the real question is which action we prefer: what's relatively best?
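As a toy contrast (the options and numbers are purely illustrative), a threshold-based chooser can return nothing at all when the bar is set too high, while relative optimization always has an answer:

```python
# Toy contrast: acceptability-threshold choice vs. relative optimization.

options = {"plan_a": 3.2, "plan_b": 5.1, "plan_c": 4.4}  # utility estimates

def first_acceptable(options, threshold):
    """Recognition-primed style: take the first option that clears the bar."""
    for name, utility in options.items():
        if utility >= threshold:
            return name
    return None  # spins out: nothing passes the (arbitrary) bar

def relatively_best(options):
    """Optimization proper: compare the options against each other."""
    return max(options, key=options.get)
```

With a threshold of 9.0, `first_acceptable` returns `None` and we're stuck; `relatively_best` still picks `plan_b`, because the comparison is relative rather than against an absolute standard.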

Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it's just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it's ignoring trade-offs by the fallacy of absolute thinking.

Comment author: KristenBurke 26 January 2017 04:09:36PM 1 point [-]

This does help, thank you. I'd come to similar judgments and maybe couldn't sustain them long because I didn't know of anyone else with them.

I think this also happens to help me ask my question better. What I'd also like to know:

What are the intended trajectories of people on the front-lines? Is it merging with super AIs to remain on the front-lines, or is it "gaming" in lower intelligence reservations structured by yet more social hierarchies and popularity contests? Is this a false dichotomy?

Neither is ultimately repugnant to me or anything. Nothing future pharmaceuticals couldn't probably fix. I just truly don't know what they think they can expect. If I did, maybe I could have a better idea of what I can personally expect, so that I don't unnecessarily choose some trajectory in vain.

I guess, above, what I was trying to communicate—if there's something there at all to communicate—is a kind of appreciation for how not-fun it may be to have no choice but to be in a lower intelligence reservation, being someone with analogous first-hand experience. So if all of us ultimately have no choice in such a matter, what would be some things we might see in value journals living in a reservation? (Assuming the values wouldn't be prone to be fundamentally derived from any kind of idolatry.)

Comment author: ProofOfLogic 27 January 2017 08:58:10AM 0 points [-]

I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you've "made it"; there are always smarter people to make you feel dumb. So at any level, you'd better get used to asking stupid questions.

And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. Also why I'm asking.

I think it would be nice if someone wrote a post on "visceral comparative advantage" giving tips on how to intuitively connect "the best thing I could be doing" with comparative advantage rather than absolute notions. I'm not quite sure how to do it myself. The inability to be satisfied by a small niche is something that made a lot more sense when humans lived in small tribes and there was a decent chance to climb to the top.

I don't think many people on the "front lines" as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don't know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn't think of now.

Comment author: Erfeyah 26 January 2017 01:02:28AM 0 points [-]

I don't think there is a gap. I am pointing towards a difficulty. If you are acknowledging the difficulty (which you are) then we are in agreement. I am not sure why it feels like a disagreement. Don't forget that at the start you had a reason for disagreeing, which was my erroneous use of the word rationality. I have now corrected that, so maybe we are arguing from the momentum of our first disagreement :P

Comment author: ProofOfLogic 26 January 2017 09:16:43AM 0 points [-]

so maybe we are arguing from the momentum of our first disagreement :P

I think so, sorry!

Comment author: Erfeyah 26 January 2017 12:50:01AM *  1 point [-]

Yes, that makes sense. I don't think we disagree much. I might just be confusing you with my clumsy use of the word rationality in my comments. I am using it as a label for a social group, while you are using it as an approach to knowledge. Needless to say, this is my mistake, as the whole point of this post is about improving the rational approach by becoming aware of what I think of as a difficult space of truth.

If scientists used the kind of "rationality" described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can't rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.

That, I feel, is not accurate. Don't forget that my example assumes a world before the means to experimentally verify lucid dreaming were available. The people that in the end tested lucid dreaming were the lucid dreamers themselves. This will inevitably happen for all knowledge that can be verified; it will be done by the people who have it. I am talking about the knowledge that is currently unverifiable (except through experience).

Comment author: ProofOfLogic 26 January 2017 09:14:44AM 0 points [-]

The people that in the end tested lucid dreaming were the lucid dreamers themselves.

Ah, right. I agree that invalidates my argument there.

Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.

Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)

Comment author: Erfeyah 25 January 2017 06:34:12PM 0 points [-]

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.

Exactly, that is why I am pointing towards the problem. Based on our rational approach we are at a disadvantage for discovering these truths. I want to use this post as a reference to the issue as it can become important in other subjects.

I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.

Yes, that is the other way in: trust and respect. Unfortunately, I feel we tend to surround ourselves with people who are similar to us, thus selecting our acquaintances in the same way we select ideas to focus on. In my experience (which is not necessarily indicative), people tend to just blank out unfamiliar information or consider it a bit of an eccentricity. In addition, as stated, if a subject requires substantial effort before you can confirm its validity, it becomes exponentially harder to communicate even in these circumstances.

Comment author: ProofOfLogic 26 January 2017 12:09:58AM 0 points [-]

Based on our rational approach we are at a disadvantage for discovering these truths.

As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder -- not even a little harder -- to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts of claims only helps us because it allows us to make good decisions about how much of our time to spend investigating such claims.

What you seem to be missing (maybe?) is that we need to have a general policy which we can be satisfied with in "situations of this kind". You're saying that what we should really do is trust our friend who is telling us about lucid dreaming (and, in fact, I agree with that policy). But if it's rational for us to ascribe a really low probability (I don't think it is), that's because we see a lot of similar claims to this which turn out to be false. We can still try a lot of these things, with an experimental attitude, if the payoff of finding a true claim balances well against the number of false claims we expect to sift through in the process. However, we probably don't have the attention to look at all such cases, which means we may miss lucid dreaming by accident. But this is not a flaw in the strategy; this is just a difficulty of the situation.
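The back-of-envelope version of that policy (numbers purely illustrative): investigate a claim when its prior probability times its payoff exceeds the cost of testing it.

```python
def worth_investigating(p_true, payoff_if_true, cost_to_test):
    """Investigate a claim when its expected payoff exceeds the testing cost."""
    return p_true * payoff_if_true > cost_to_test

# A lucid-dreaming-like claim: low prior, decent payoff, modest cost to try.
lucid_dreaming = worth_investigating(p_true=0.10, payoff_if_true=100.0,
                                     cost_to_test=5.0)

# A superficially similar claim with a tiny prior doesn't make the cut --
# which is the sense in which we can rationally miss some true claims.
long_shot = worth_investigating(p_true=0.001, payoff_if_true=100.0,
                                cost_to_test=5.0)
```

Assigning a low prior doesn't forbid testing the claim; it just determines where the claim falls in the queue of things worth our limited attention.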

I'm frustrated because it seems like you are misunderstanding a part of the response Kindly and I are making, but you're doing a pretty good job of engaging with our replies and trying to sift out what you think and where you start disagreeing with our arguments. I'm just not quite sure yet where the gap between our views is.
