Comment author: wafflepudding 29 September 2016 02:28:39AM 0 points [-]

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Comment author: hairyfigment 29 September 2016 10:15:19PM 0 points [-]

...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today.

Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.

Comment author: James_Miller 14 September 2016 03:20:51PM 3 points [-]

I agree with your first paragraph, but Adams has described how his Trump writing has decimated his ability to earn money as a public speaker because people who hire such speakers want to avoid controversy. Adams appearing on the podcast of an obscure college professor was an act of altruism.

Comment author: hairyfigment 15 September 2016 04:55:08PM 2 points [-]

So you think he assigns a lower probability to Trump winning than someone unfamiliar with his argument might suppose? In theory he might have lost more than $2 million × 0.38 = $760,000, but the higher that probability goes, the worse your argument sounds.
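A quick check of that expected-loss arithmetic (both the $2 million at stake and the 0.38 probability are the comment's own numbers, not independently verified):

```python
# Expected loss = amount at stake * probability of the losing outcome.
# Both figures below come from the comment above, not from any verified
# source.
amount_at_stake = 2_000_000  # claimed speaking income at risk
probability = 0.38           # probability assumed in the comment
expected_loss = amount_at_stake * probability
print(f"{expected_loss:,.0f}")  # 760,000
```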

Comment author: Gunnar_Zarncke 09 September 2016 07:50:09PM 0 points [-]

On the other hand, fear of an end of the world (as they knew it) seems to be not unlikely at any time.

Creating reference classes as small as you like is easy. But the predictive power diminishes accordingly...

Comment author: hairyfigment 09 September 2016 09:16:57PM 0 points [-]

Double negatives exist to help hide what you're saying. If it's somewhat likely, show me a single clear example that predates Christianity. The story of Noah says such a flood will never happen again. The Kali Yuga was supposed to last more than 400,000 years.

Comment author: entirelyuseless 09 September 2016 12:53:27PM 0 points [-]

See the Gospels for examples.

Comment author: hairyfigment 09 September 2016 05:32:12PM 0 points [-]

That's what I thought you meant. But Christianity has existed for less than 4% of humanity's time, and what we ordinarily call "the ancient world" started 3000-6000 years earlier.

Comment author: entirelyuseless 09 September 2016 02:33:13AM 0 points [-]

"The probability of these ideas occurring to an ancient person..."

In the ancient world it was very common to predict the imminent end of the world.

And in my own case, before ever having heard of the Doomsday argument, the argument occurred to me exactly in the context of thinking about the possible end of the world.

So it doesn't seem particularly unlikely to occur to an ancient person.

Comment author: hairyfigment 09 September 2016 08:25:34AM 0 points [-]

In the ancient world it was very common to predict the imminent end of the world.

How so?

Comment author: reguru 02 September 2016 11:32:52AM *  -2 points [-]

I think he's aware of the stereotype, but obviously, from my perspective, people are getting triggered left and right that rationality might somehow be wrong.

Of course not wrong in the sense that rationality in the matrix might still be considered "superior" over all other Ways in the matrix. But it is still the matrix and we're happy to play that game because it's fun :)

Comment author: hairyfigment 05 September 2016 09:24:35AM 0 points [-]

So, you keep using that word, "rationality," even though we've mentioned that LW uses it to mean something else. I don't know what you or the creator of the video mean by it, but I'm confident it's not the same. Perhaps instead of claiming that "people are getting triggered," you should ask yourself if you've succeeded in getting your most basic point across, or if we might be confused about which subject matter you want to address. Consider throwing out the video's words and finding new ones.

In addition, a good way to establish that your subject matter is real and not imaginary is to show people. When talking about stupidity this can be rude, but sometimes it seems unavoidable. I suppose in principle you could describe a time when you made the mistake in question.

Comment author: entirelyuseless 04 September 2016 03:42:58PM 3 points [-]

Why is this rational? A great deal of the deterrent value of a criminal justice system consists in telling stories. If you simply state the facts, they might be much less deterring. Thus "they are going to lock him away and feed and house him for free for the next ten years," might look more like an additional benefit than a deterrent.

Comment author: hairyfigment 05 September 2016 03:18:11AM -1 points [-]

I believe you're missing the point. Saying "He is going to pay his debt to society" does not tell you much of anything unless you know all the context, because the person who says it often wants not so much to inform you as to influence you or someone else.

Comment author: AlexMennen 18 August 2016 08:57:56PM 0 points [-]

unless you think a 'slight' change in goals would produce a slight change in outcomes.

It depends what sorts of changes. Slight changes in what subgoals are included in the goal result in much larger changes in outcomes as optimization power increases, but slight changes in how much weight each subgoal is given relative to the others can even result in smaller changes in outcomes as optimization power increases, if it becomes possible to come close to maxing out each subgoal at the same time. It seems plausible that one could leave the format in which goals are encoded in the brain intact while getting a significant increase in capabilities, and that this would only cause the kinds of goal changes that can lead to results that are still not too bad according to the original goal.
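The weight-perturbation point can be illustrated with a toy optimizer (my own construction, not from the comment): when the subgoals can be maxed out simultaneously, the chosen outcome is insensitive to small weight changes; when they conflict, a slight weight change flips the outcome entirely.

```python
# Toy sketch: outcomes are (subgoal1, subgoal2) scores; the optimizer
# picks the outcome maximizing the weighted sum w1*subgoal1 + w2*subgoal2.

def best_outcome(outcomes, w1, w2):
    """Return the outcome with the highest weighted subgoal score."""
    return max(outcomes, key=lambda o: w1 * o[0] + w2 * o[1])

# Compatible subgoals: one outcome maxes both, so weights barely matter.
compatible = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
assert best_outcome(compatible, 0.5, 0.5) == best_outcome(compatible, 0.4, 0.6)

# Conflicting subgoals: only one can be maxed, so a slight weight
# change produces a completely different outcome.
conflicting = [(1.0, 0.0), (0.0, 1.0)]
print(best_outcome(conflicting, 0.51, 0.49))  # (1.0, 0.0)
print(best_outcome(conflicting, 0.49, 0.51))  # (0.0, 1.0)
```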

Comment author: hairyfigment 18 August 2016 09:46:05PM 0 points [-]

maxing out each subgoal at the same time

seems kind of ludicrous if we're talking about empathy and sadism.

Comment author: thrawnca 18 August 2016 02:42:02AM *  0 points [-]

The beggars-and-gods formulation is the same problem.

I don't think so; I think the element of repetition substantially alters it - but in a good way, one that makes it more useful in designing a real-world agent. Because in reality, we want to design decision theories that will solve problems multiple times.

At the point of meeting a beggar, although my prospects of obtaining a gold coin this time around are gone, nonetheless my overall commitment is not meaningless. I can still think, "I want to be the kind of person who gives pennies to beggars, because overall I will come out ahead", and this thought remains applicable. I know that I can average out my losses with greater wins, and so I still want to stick to the algorithm.

In the single-shot scenario, however, my commitment becomes worthless once the coin comes down tails. There will never be any more 10K; there is no motivation any more to give 100. Following my precommitment, unless it is externally enforced, no longer makes any sense.

So the scenarios are significantly different.

Comment author: hairyfigment 18 August 2016 08:53:06AM 0 points [-]

So say it's repeated. Since our observable universe will end someday, there will come a time when the probability of future flips is too low to justify paying if the coin lands tails. Your argument suggests you won't pay, and by assumption Omega knows you won't pay. But then on the previous trial you have no incentive to pay, since you can't fool Omega about your future behavior. This makes it seem like non-payment propagates backward, and you miss out on the whole sequence.
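The backward-propagation argument can be sketched as a finite-horizon backward induction (a toy model under my own assumptions: a fair coin each round, a 10,000 reward on heads paid only if Omega predicts you would pay 100 on tails, and an agent who pays on tails only when that purchases positive expected value from future rounds):

```python
# Toy backward-induction model of the repeated Omega coin game. The
# specific payoff structure and agent rule here are assumptions for
# illustration, not taken from the thread.

def rounds_where_agent_pays(n_rounds, reward=10_000, cost=100, p=0.5):
    """Return booleans: does the agent pay on tails in each round?"""
    pays = [False] * n_rounds
    future_value = 0.0  # expected value of all rounds after round t
    # Work backward from the final round.
    for t in reversed(range(n_rounds)):
        # Paying on tails costs `cost` now; its only benefit is keeping
        # the future rounds' rewards alive. In the last round there is
        # no future, so the agent refuses.
        pays[t] = future_value > cost
        if pays[t]:
            future_value += p * reward - (1 - p) * cost
        # else: Omega predicts the refusal, so the round is worth 0.
    return pays

# Non-payment propagates backward from the last round: the agent never
# pays, and so misses out on the whole sequence.
print(rounds_where_agent_pays(5))  # [False, False, False, False, False]
```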

Comment author: hairyfigment 17 August 2016 10:14:14PM 0 points [-]

I consider modified uploads much more likely to result in outcomes worse than extinction. I don't even know what you could be imagining when you talk about intermediate outcomes, unless you think a 'slight' change in goals would produce a slight change in outcomes.

My go-to example of a sub-optimal outcome better than death is (Spoilers!) from Friendship is Optimal - the AI manipulates everyone into becoming virtual ponies and staying under her control, but otherwise maximizes human values. This is only possible because the programmer made an AI to run a MMORPG, and added the goal of maximizing human values within the game. You would essentially never get this result with your evolutionary algorithm; it seems overwhelmingly more likely to give you a mind that still wants to be around humans and retains certain forms of sadism or the desire for power, but lacks compassion.
