
Comment author: AlexMennen 20 July 2017 12:14:20AM 0 points [-]

Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?

Comment author: AlexMennen 28 June 2017 07:58:38PM 0 points [-]

So you might have two sentences A and B shown as two circles, then "A and B" is their intersection, "A or B" is their union, etc. But "A implies B" doesn't mean one circle lies inside the other, as you might think! Instead it's a shape too, consisting of all points that lie inside B or outside A (or both).

There's nothing intuitionistic about this. You can do exactly the same thing with classical logic, if you just forget about the topological "other details" that you alluded to.
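
To make that concrete, here is a minimal Python sketch of the purely set-based (classical) reading, with an invented universe of points and invented sentences A and B; the intuitionistic version would additionally take the topological interior of each result, which is the "other details" being set aside here.

```python
# Sentences as subsets of a small universe of points (all invented for illustration).
universe = set(range(10))
A = {0, 1, 2, 3, 4}   # points where A holds
B = {3, 4, 5, 6}      # points where B holds

A_and_B = A & B                    # "A and B": intersection
A_or_B = A | B                     # "A or B": union
A_implies_B = (universe - A) | B   # "A implies B": outside A, or inside B (or both)

# "A implies B" is itself a region, not the claim that A's circle sits inside B's.
# It covers the whole universe exactly when A really is a subset of B:
print(A_implies_B == universe)                # False here, since A is not a subset of B
print((A <= B) == (A_implies_B == universe))  # True: the two conditions coincide
```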

Comment author: komponisto 21 June 2017 04:04:10AM *  0 points [-]

Decision theory (which includes the study of risks of that sort) has long been a core component of AI-alignment research.

Comment author: AlexMennen 21 June 2017 09:24:08PM 0 points [-]

Decision theory (which includes the study of risks of that sort)

No, it doesn't. Decision theory deals with abstract utility functions. It can talk about outcomes A, B, and C where A is preferred to B and B is preferred to C, but doesn't care whether A represents the status quo, B represents death, and C represents extreme suffering, or whether A represents gaining lots of wealth and status, B represents the status quo, and C represents death, so long as the ratios of utility differences are the same in each case. Decision theory has nothing to do with the study of s-risks.
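
To spell out the point about ratios of utility differences, here is a minimal Python sketch; the outcome labels and numbers are invented, and the second assignment is just a positive affine transformation of the first, which is exactly the freedom expected utility theory leaves open.

```python
# Two utility assignments with the same ratios of utility differences.
# All labels and numbers are invented; u2's values equal 0.5 * u1's values + 5,
# a positive affine transformation, so decision theory cannot tell them apart.
u1 = {"status quo": 0.0, "death": -10.0, "extreme suffering": -30.0}
u2 = {"great wealth": 5.0, "status quo": 0.0, "death": -10.0}

def expected_utility(lottery, u):
    """lottery: dict mapping outcome label -> probability."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# A 50/50 gamble between the best and worst outcome, versus the middle outcome
# for certain: both assignments deliver the same verdict.
print(expected_utility({"status quo": 0.5, "extreme suffering": 0.5}, u1) > u1["death"])  # False
print(expected_utility({"great wealth": 0.5, "death": 0.5}, u2) > u2["status quo"])       # False
```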

Comment author: komponisto 20 June 2017 11:05:52PM *  9 points [-]

I feel a weird disconnect on reading comments like this. I thought s-risks were a part of conventional wisdom on here all along. (We even had an infamous scandal that concerned one class of such risks!) Scott didn't "see it before the rest of us" -- he was drawing on an existing, and by now classical, memeplex.

It's like when some people spoke as if nobody had ever thought of AI risk until Bostrom wrote Superintelligence -- even though that book just summarized what people (not least Bostrom himself) had already been saying for years.

Comment author: AlexMennen 20 June 2017 11:55:36PM 2 points [-]

We even had an infamous scandal that concerned one class of such risks!

Yes, but the claim that that risk needs to be taken seriously is certainly not conventional wisdom around here.

Comment author: AlexMennen 10 June 2017 04:41:41AM 1 point [-]

There's a difference between contradictory preferences and time-inconsistent preferences. A rational agent can both want to live at least one more year and not want to live more than a hundred years, and this is not contradicted by the possibility that the agent's preferences will have changed 99 years later, so that the agent then wants to live at least another year. Of course, the agent has an incentive to influence its future self to have the same preferences it does (i.e., so that 99 years later, it wants to die within a year), so that its preferences are more likely to be satisfied.
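
A toy Python model of that distinction (the year-by-year rule below is invented just to mirror the example, not anything more):

```python
def utility_at(t, death_year):
    """Utility, by the lights of the agent as it is at year t, of dying in
    death_year: it wants to live at least one more year but no more than
    a hundred more.  (A toy rule invented to mirror the example above.)"""
    return 1.0 if t + 1 <= death_year <= t + 100 else -1.0

# The year-0 agent approves of dying in year 100 but not in year 150.
print(utility_at(0, 100), utility_at(0, 150))    # 1.0 -1.0
# The year-99 agent, applying the same rule from where it now stands, approves
# of both: its preferences have shifted over time, but at no single moment does
# it hold contradictory preferences.
print(utility_at(99, 100), utility_at(99, 150))  # 1.0 1.0
```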

Comment author: siIver 08 June 2017 04:01:56PM *  3 points [-]

This is the ultimate example of... there should be a name for this.

You figure out that something is true, like utilitarianism. Then you find a result that seems counterintuitive. Rather than going "huh, I guess my intuition was wrong, interesting" you go "LET ME FIX THAT" and change the system so that it does what you want...

man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.

The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, which we cannot necessarily do for each particular situation on its own. Having a system tells us what is correct in each situation, not vice versa.

The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing, that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.

Comment author: AlexMennen 09 June 2017 05:39:20AM 0 points [-]

Sometimes when explicit reasoning and intuition conflict, intuition turns out to be right, and there is a flaw in the reasoning. There's nothing wrong with using intuition to guide yourself in questioning a conclusion you reached through explicit reasoning. That said, DragonGod did an exceptionally terrible job of this.

Comment author: Dagon 08 June 2017 04:22:15PM 1 point [-]

By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be worth less than 0.5, and a third good thing is pretty much worthless.

Personally, my objection to utilitarianism is more fundamental than this. I don't believe utility is an objective scalar measure that can be compared across persons (or even across independent decisions for a person). It's just a convenient mathematical formalism for a decision theory.

Comment author: AlexMennen 09 June 2017 05:31:29AM 1 point [-]

By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be worth less than 0.5, and a third good thing is pretty much worthless.

If utility is bounded between -1 and 1, then 0.5 is an extremely large amount of utility, not just some generic good thing. Bounded utility functions do not contradict common-sense beliefs about how diminishing marginal returns work.
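
As a concrete illustration, here is a toy bounded utility function with invented numbers: each good thing closes one percent of the remaining gap to the bound, so ordinary good things sit far below 0.5 and the second is worth nearly as much as the first.

```python
# A toy utility function bounded in [0, 1); the 1% step is invented for illustration.
def utility(n_good_things, step=0.99):
    """Each additional good thing closes 1% of the remaining gap to the bound."""
    return 1.0 - step ** n_good_things

for n in (1, 2, 3, 100):
    print(n, round(utility(n), 4))
# 1 0.01   2 0.0199   3 0.0297   100 0.634
# Marginal utility diminishes only gently for ordinary good things; the bound only
# bites for something worth on the order of 0.5, i.e. an extremely large amount of
# utility on this scale.
```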

Comment author: DragonGod 08 June 2017 06:09:00PM 0 points [-]

No. We look at Utility at points in time. One good thing is 0.5. We then calculate subsequent utility of another good thing after receiving that one good thing. You reevaluate the utility again after the first occurrence of the event.

Comment author: AlexMennen 09 June 2017 05:22:40AM 0 points [-]

We look at Utility at points in time.

You shouldn't. That's not how utility works.

Comment author: DragonGod 08 June 2017 08:56:48PM 0 points [-]

An event is any outcome from which an individual can derive utility.
 
The negation of an event is the event not happening.

Comment author: AlexMennen 09 June 2017 05:18:50AM 1 point [-]

Given an outcome X, there are many outcomes other than X, which generally have different utilities. Thus there isn't one utility value for X not happening.
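
A minimal sketch of the point, with invented outcomes, utilities, and probabilities:

```python
# "X doesn't happen" picks out many outcomes with different utilities, so it has
# no single utility of its own.  Everything below is invented for illustration.
u = {"win lottery": 100.0, "ordinary day": 0.0, "lose job": -50.0}

X = "win lottery"
not_X = [o for o in u if o != X]
print([u[o] for o in not_X])   # [0.0, -50.0] -- no single value

# At best you can take an expectation, but that depends on a probability
# distribution over the remaining outcomes, not on the utility function alone.
p_given_not_X = {"ordinary day": 0.9, "lose job": 0.1}
print(sum(p * u[o] for o, p in p_given_not_X.items()))   # -5.0
```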

Comment author: DragonGod 08 June 2017 06:11:01PM 0 points [-]

I didn't say that. Is there any part of the post you want me to clarify?

Comment author: AlexMennen 08 June 2017 07:41:07PM 0 points [-]

The sum of the utility of an event and its negation is 0.

If two events are independent then the utility of both events occurring is the sum of their individual utilities.

Utilities are defined over outcomes, which don't have negations or independence relations with other outcomes. There is no such thing as the utility of an event in standard expected utility theory, and no need for such a concept.
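
For concreteness, a minimal sketch of that standard setup with invented outcomes, utilities, and probabilities: the utility function attaches to complete outcomes, an event only gets a number derivatively via one's beliefs, and even that derived number doesn't obey rules like "an event and its negation sum to zero".

```python
# Standard expected utility: utilities attach to complete outcomes.
# All outcomes, utilities, and probabilities are invented for illustration.
u = {"sunny, promoted": 10.0, "sunny, not promoted": 4.0,
     "rainy, promoted": 7.0, "rainy, not promoted": 1.0}
p = {"sunny, promoted": 0.3, "sunny, not promoted": 0.3,
     "rainy, promoted": 0.2, "rainy, not promoted": 0.2}

def conditional_eu(event):
    """Expected utility given an event (a set of outcomes) -- a derived quantity
    that depends on the beliefs p, not a primitive of the theory."""
    mass = sum(p[o] for o in event)
    return sum(p[o] * u[o] for o in event) / mass

sunny = {o for o in u if o.startswith("sunny")}
not_sunny = set(u) - sunny
print(conditional_eu(sunny), conditional_eu(not_sunny))  # 7.0 4.0 -- not negatives of each other
```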
