
Comment author: DilGreen 01 October 2010 02:32:50PM 11 points

I think EY's problem with this point of view is a typical one I find here at LW: treating the rational thinker as a lone hero, expected to ignore any context (social, environmental, whatever) that is not explicitly stated as part of the problem presentation. On the other hand, these students were in a physics class, and the question was obviously not part of normal conversation.

In response to comment by DilGreen on Fake Explanations
Comment author: matteyas 03 August 2017 08:59:45PM 0 points

Are you saying that in an environment for learning about and discussing rationality, we should strive for less-than-ideal rationality (that is, some form of irrationality) just because people often run into practical contexts and take the easy way out of them?

Would you become equally suspicious of the math teacher's point of view if a person in a math problem buys 125 boxes of 6 watermelons each, since he wouldn't be able to handle that many in most practical contexts?
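
(The arithmetic answer stands either way, of course: 125 × 6 = 750 watermelons, whether or not anyone could actually carry them home.)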

Comment author: Unnamed2 29 October 2008 03:45:20AM 2 points

It's interesting that Eliezer ties intelligence so closely to action ("steering the future"). I generally think of intelligence as being inside the mind, with behaviors & outcomes serving as excellent cues to an individual's intelligence (or unintelligence), but not as part of the definition of intelligence. Would Deep Blue no longer be intelligent at chess if it didn't have a human there to move the pieces on the board, or if it didn't signal the next move in a way that was readily intelligible to humans? Is the AI-in-a-box not intelligent until it escapes the box?

Does an intelligent system have to have its own preferences? Or is it enough if it can find the means to the goals (with high optimization power, across domains), wherever the goals come from? Suppose that a machine was set up so that a "user" could spend a bit of time with it, and the machine would figure out enough about the user's goals, and about the rest of the world, to inform the user about a course of action that would be near-optimal according to the user's goals. I'd say it's an intelligent machine, but it's not steering the future toward any particular target in outcome space. You could call it intelligence as problem-solving.
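
A hypothetical sketch of that contrast (the function names and toy goal are my own illustration, not anything from the post): the same optimizer can either hand back a recommendation or go on to act on it, and only the second variant is steering the future.

```python
from typing import Callable, Iterable

def recommend(goal: Callable[[str], float], options: Iterable[str]) -> str:
    """Problem-solving: pick the option the user's goal scores highest, but do nothing."""
    return max(options, key=goal)

def steer(goal: Callable[[str], float], options: Iterable[str],
          execute: Callable[[str], None]) -> None:
    """Steering the future: choose an option and also carry it out in the world."""
    execute(recommend(goal, options))

# Toy goal: the user wants the cheapest plan (lower cost = higher score).
costs = {"walk": 0, "bus": 3, "taxi": 15}
goal = lambda plan: -costs[plan]

print(recommend(goal, costs))   # the advisor only reports "walk"
steer(goal, costs, print)       # the agent variant also acts (here, trivially, by printing)
```

The optimization is identical in both cases; only the coupling to the world differs, which is the distinction the comment is asking about.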

Comment author: matteyas 28 July 2017 05:15:58PM * 0 points

First paragraph

There is only action, or interaction to be precise. It doesn't matter whether we experience the intelligence or not, of course, just that it can be experienced.

Second paragraph

Sure, it could still be intelligent. It's just more intelligent if it's less dependent. The definition includes this since more cross-domain ⇒ less dependence.

Comment author: Nebu 16 March 2009 09:37:15PM 17 points

I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm totally missing your point with my response to your 3 scenarios, I assure you that I am well aware that my responses are of the "dodging the question" type you are advocating against. I simply cannot resist exploring these 3 scenarios on their own.

Pascal's Wager

In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.

For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

So then I'd be stuck trying to decide whether God doesn't exist, or whether logic is incorrect (i.e. reality can be logically self-inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb's problem: I want the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would happen here.

But let's say Omega additionally tells me that Catholicism is actually self-consistent, and that I just misunderstood something about it, before flying away. In that case, I guess I'd start to study Catholicism. If my revised view of Catholicism has me believing that it requires some rather cruel stuff (stoning people for minor offenses, etc.), then I'd have to weigh that against my desire not to suffer eternal torture.

I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

The God-Shaped Hole

To clarify your scenario, I'm guessing Omega explicitly tells me that I will be happier if I believe something untrue (i.e. that God exists). I would probably reject God in this case, as Omega is implicitly confirming that God does not exist, and I do care about truth more than happiness. I've already experienced this in other ways, so this is a much easier scenario for me to imagine.

Extreme Altruism

I don't think I can overcome this challenge. No matter how much I think about it, I find myself putting up semantic stop signs. In my "least convenient world", Omega tells me that Africa is so poverty-stricken, and my contribution would be so helpful, that I would be improving the lives of billions of people in exchange for giving up all my wealth. While I might not donate all my money to save 10 people, I think I value billions of lives more than my own life. Do I value them more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

"Okay", I say to Omega, "what if I only donate X percent of my money, and keep the rest for myself?" In one possible "least convenient world", Omega tells me that the charity is run by some nutcase whom, for whatever reason, will only accept an all-or-nothing deal. Well, when I phrase it like that, I feel like not donating anything, and blaming it on the nutcase. So suppose instead Omega tells me "There's some sort of principles of economy of scale which is too complicated for me to explain to you which basically means that your contribution will be wasted unless you contribute at least Y amount of dollars, which coincidentally just happens to be your total net worth." Again, I'm torn and find it difficult to come to a conclusion.

Alternatively, I say to Omega, "I'll just donate X percent of my money." Omega tells me, "That's good, but it's not optimal." And I reply, "Okay, but I don't have to do the optimum." But then Omega somehow convinces me that actually, yes, I really should be doing the optimum. Perhaps something along the lines of: my current "ignore Africa altogether" behaviour is better than the behaviour of going to Africa and killing, torturing, and raping everyone there, but that doesn't mean the "ignore Africa" strategy is moral.

Comment author: matteyas 18 July 2017 11:02:05AM 0 points

For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

The point is that in the least convenient world for you, Omega would say whatever it is that you would need to hear not to slip away. I don't know what that is; nobody but you does. If it is about eternal damnation for you, then you've hopefully found your holy grail, and as some other poster pointed out, why this is the holy grail for you can be quite interesting to dig into as well.

The point raised, as I see it, is just to make your stance on Pascal's wager contend against the strongest possible ideas.

In response to The Modesty Argument
Comment author: matteyas 20 October 2014 08:35:34AM * 0 points

If genuine Bayesians will always agree with each other once they've exchanged probability estimates, shouldn't we Bayesian wannabes do the same?

An example I read comes to mind (it's in dialogue form): "This is a very common error that's found throughout the world's teachings and religions," I continue. "They're often one hundred and eighty degrees removed from the truth. It's the belief that if you want to be Christ-like, then you should act more like Christ—as if the way to become something is by imitating it."

It comes with a fun example, portraying the absurdity and the potential dangers of the behavior: "Say I'm well fed and you're starving. You come to me and ask how you can be well fed. Well, I've noticed that every time I eat a good meal, I belch, so I tell you to belch because that means you're well fed. Totally backward, right? You're still starving, and now you're also off-gassing like a pig. And the worst part of the whole deal—pay attention to this trick—the worst part is that you've stopped looking for food. Your starvation is now assured."
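
To tie this back to the quoted claim: the agreement of genuine Bayesians is a byproduct of the underlying process, not a behaviour to copy. A minimal sketch of the simpler case where two agents pool their actual evidence rather than just their final estimates (the coin hypotheses and flip strings are made-up assumptions for illustration):

```python
from fractions import Fraction

HYPOTHESES = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]  # candidate values of P(heads)
PRIOR = {h: Fraction(1, 3) for h in HYPOTHESES}                # common prior over the bias

def likelihood(bias, flips):
    """P(flips | bias), where flips is a string of 'H' and 'T'."""
    p = Fraction(1)
    for f in flips:
        p *= bias if f == "H" else 1 - bias
    return p

def posterior(flips):
    """Posterior over the bias after seeing the given flips."""
    unnormalized = {h: PRIOR[h] * likelihood(h, flips) for h in HYPOTHESES}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

alice, bob = "HHT", "HTTT"        # each agent's private observations
pooled = alice + bob              # what both know after exchanging evidence
assert posterior(alice + bob) == posterior(bob + alice)  # order of sharing doesn't matter
print(posterior(pooled))          # both agents now hold this same posterior
```

Mimicking the output, i.e. agreeing without actually sharing and updating, would be the belch without the meal.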

In response to Circular Altruism
Comment author: matteyas 18 October 2014 12:22:01AM 0 points

This threshold thing is interesting. Just to make the idea itself solid, imagine this: you have a type of iron bar that bends completely elastically (no deformation) if a force of less than 100 N is applied to it. Say the bars are more valuable if they have no such deformations. Would you apply 90 N to 5 billion bars, or 110 N to one bar?

With this thought experiment, I reckon the idea is solidified and obvious, yes? The question that still remains, then, is whether dust specks in eyes are or are not subject to some such threshold.

Though I suppose the issue could actually be dropped completely, if we now agree that the idea of a threshold is real. If there is a threshold and something falls below it, then the utility of doing it is indeed zero, regardless of how many times you do it. If something is above the threshold, shut up (or don't) and multiply.
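
To make the threshold point concrete, here is a toy calculation of the iron-bar comparison (the numbers are the ones above; the value_lost function is an assumption purely for illustration):

```python
ELASTIC_LIMIT_N = 100.0  # forces below this leave no deformation at all

def value_lost(force_newtons: float) -> float:
    """Value destroyed in one bar by applying the given force once."""
    return 0.0 if force_newtons < ELASTIC_LIMIT_N else 1.0  # one unit per deformed bar

option_a = 5_000_000_000 * value_lost(90.0)   # 90 N applied to five billion bars
option_b = 1 * value_lost(110.0)              # 110 N applied to a single bar

print(option_a, option_b)  # 0.0 1.0
```

Under that assumption the five billion gentle presses destroy nothing at all, while the single 110 N press destroys one bar's worth of value, which is the whole force of the threshold idea.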

Comment author: bigjeff5 03 March 2011 06:53:16PM 19 points

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

Numbers were invented to count things; that is their purpose. The first numbers were simple scratches used as tally marks, circa 35,000 BC. The way the counts add up was derived from the way physical objects add up when grouped together. The only way to change the way numbers work is to change the way physical objects work when grouped together. Physical reality is the basis for numbers, so to change number theory you must first show that it is inconsistent with reality.

Thus numbers have a definite relation to the physical world. Number theory grew out of this, and if putting two objects next to two other objects had only yielded three objects when numbers were invented tens of thousands of years ago, then number theory would have to reflect that fact or it would never have been used. In that world, suggesting 2+2=4 would be completely absurd, and number theorists would laugh in your face at the suggestion. There would, in fact, be a logical proof that 2+2=3 (much like there is a logical proof that 2+2=4 in number theory now).
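
For concreteness, such a proof is a one-liner in a modern proof assistant like Lean (shown here purely as an illustration): the statement holds by simply computing both sides from the definition of addition on the natural numbers.

```lean
-- 2 + 2 = 4 reduces by unfolding the definition of addition on ℕ;
-- `rfl` checks that both sides compute to the same numeral.
example : 2 + 2 = 4 := rfl
```

The proof goes through only because the formal definition of addition was built to mirror how counts of physical objects combine, which is the point above.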

All of mathematics is, in reality, nothing more than extremely advanced counting. If it were not related to the physical world, there would be no reason for it to exist. It follows rules first derived from the physical world, even if the current principles of mathematics have been extrapolated far beyond the bounds of the strictly physical. I think people lose sight of this far too easily (or worse, never recognize it in the first place).

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not separate from it.

Comment author: matteyas 09 October 2014 12:42:25AM 1 point

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

At what point are there two plus two things, and at what point are there four things? Would you not agree that a) the distinction itself between things happens in the brain and b) the idea of the four things being two separate groups with two elements each is solely in the mind? If not, I'd very much like to see some empirical evidence for the addition operation being carried out.

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math.

English is so firmly grounded in physical reality that when observations don't line up with what our English tells us, we must change our understanding of reality, not of English.

I hope the absurdity is obvious, and note that there is no problem with building models of the world using English alone. So, do you find it more likely that math is connected to the world because we link it up explicitly, or because the connection is an intrinsic property of the world itself?

Comment author: matteyas 04 October 2014 08:20:58PM * 0 points

It's a bit unfortunate that these articles are so old, or rather that people aren't as active at present. I'd have enjoyed some discussion on a few thoughts. Take, for instance, #5; I shall paste it for convenience:

If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."

It struck me that this is very deeply embedded in us, or at least in me. I read this and noticed that my thought was along the lines of "yes, how silly, it could be a non-colored egg." What's wrong with this? What's felt is an egg shape, not an egg. Might as well be something else entirely.

So how deep does this one go, and how deep should we unravel it? I guess "all the way down" is the only viable answer. I can assign a high probability that it is an egg; I simply shouldn't conclude anything just yet. When is it safe to conclude something? I take it the only accurate answer would be "never." So we end up with something that I believe most of us hold as true already: nothing is certain.
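
One way to put a number on that "high probability" (using Laplace's rule of succession purely as an illustration, nothing from the article itself): after 11 egg-shaped objects that all turned out blue, P(next egg-shaped object is blue) = (11 + 1) / (11 + 2) = 12/13 ≈ 0.92. High, but never 1, which is the "nothing is certain" point written as a formula.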

It is of course a rather subtle distinction going from 'certain' to 'least uncertain under currently assessed information'. Whenever I speak about physics or other theoretical subjects, I'm always in the mindset that what I'm discussing is on the basis of "as is currently understood," so in that area it feels rather natural. I suppose it's just a bit startling to find that the chocolate I just ate is only chocolate as a best candidate rather than as a true description of reality; that biases can be found in such "personal" places.

Comment author: matteyas 28 September 2014 02:56:26PM * 0 points

I have a question related to the initial question about the lone traveler. When is it okay to initiate force against any individual who has not initiated force against anyone?

Bonus: Here's a (very anal) cop out you could use against the least convenient possible world suggestion: Such a world—as seen from the perspective of someone seeking a rational answer—has no rational answer for the question posed.

Or a slightly different flavor for those who are more concerned with being rational than with rationality: In such a world, I—who value rational answers above all other answers—will inevitably answer the question irrationally. :þ