In response to Utilons vs. Hedons
Comment author: PhilGoetz 11 August 2009 01:40:56AM 4 points [-]

I have the sense that much of this was written as a response to this paradox in which maximizing expected utility tells you to draw cards until you die.

Psychohistorian wrote:

There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong": it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy.

The paradox is stated in utilons, not hedons. But if your hedons were measured properly, your inability to imagine them now is not an argument. This is Omega we're talking about. Perhaps it will augment your mind to help you reach each doubling. Whatever. It's stipulated in the problem that Omega will double whatever the proper metric is. Futurists should never accept "but I can't imagine that" as an argument.

As for utilons, most people assign a much greater value to "not dying," compared with having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading (may) return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.

We need to look at it purely in terms of numbers if we are rationalists, or let us say "ratio-ists". Is your argument really that numeric analysis is the wrong thing to do?

Changing the value you assign life vs. death doesn't sidestep the paradox. We can rescale the problem by an affine transformation so that your present utility is 1 and the utility of death is 0. That will not change the results of expected utility maximization.
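To make the affine-invariance point concrete, here is a minimal sketch (my own illustration; the specific numbers and function names are assumptions, not part of the thread). Rescaling every outcome by u' = a*u + b with a > 0 preserves every expected-utility comparison, so normalizing death to 0 and present utility to 1 cannot flip the decision:

```python
def decision(u_stop, u_win, u_death, p_survive=0.9):
    """Compare EU of drawing one card (survive with p_survive at the
    doubled utility, otherwise die) against standing pat."""
    eu_draw = p_survive * u_win + (1 - p_survive) * u_death
    return "draw" if eu_draw > u_stop else "stop"

def affine(u, a, b):
    """Positive affine rescaling of a utility value."""
    return a * u + b

# Illustrative original scale: stopping = 5, winning (doubled) = 10,
# death = -100.
u_stop, u_win, u_death = 5.0, 10.0, -100.0
# Rescale so that death -> 0 and stopping -> 1.
a, b = 1 / 105, 100 / 105

same = decision(u_stop, u_win, u_death) == decision(
    affine(u_stop, a, b), affine(u_win, a, b), affine(u_death, a, b))
print(same)  # True: the rescale never flips the decision
```

The same holds for any a > 0 and any b, which is why picking the 0 and 1 points of the utility scale is a free choice, not a way out of the paradox.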

Comment author: outlawpoet 11 August 2009 03:00:30AM 2 points [-]

I seem to have missed some context for this, I understand that once you've gone down the road of drawing the cards, you have no decision-theoretic reason to stop, but why would I ever draw the first card?

A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker's bet. I haven't even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. And that's quite aside from the fact that I'm certain my utility function includes terms for living a long time and experiencing certain anticipated future events.
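One hedged reading of this argument (my framing, not the commenter's): suppose the card doubles only the utilons already accrued (U), while death forfeits both U and all expected future utilons (F). Then the first draw is worthwhile only when F < 8U, so anyone whose expected future dwarfs their past should refuse it:

```python
def draw_is_worthwhile(U, F, p_survive=0.9):
    """EU of drawing vs. standing pat, under the assumption that the
    card doubles accrued utilons U but death forfeits U and F alike."""
    eu_draw = p_survive * (2 * U + F)  # survive: doubled past, future intact
    eu_stop = U + F                    # decline: keep past and future
    return eu_draw > eu_stop

# Algebraically: 0.9*(2U + F) > U + F  iff  0.8U > 0.1F  iff  F < 8U.
print(draw_is_worthwhile(U=1.0, F=20.0))  # False: a sucker's bet
print(draw_is_worthwhile(U=1.0, F=5.0))   # True only when the future is small
```

This is only a sketch of one way to cash out "eliminating all possible future utilons"; the original paradox presumably stipulates that the doubling covers total utility, in which case the standard analysis reasserts itself.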

Comment author: asciilifeform 14 June 2009 05:26:01AM 5 points [-]

Well, Lenat did. Whether or in what capacity a computer program was involved is an open question.

Comment author: outlawpoet 15 June 2009 12:43:22AM 7 points [-]

It's useful evidence that EURISKO was doing something. There were some extremely dedicated and obsessive people involved in Traveller back then. The idea that someone unused to starship combat design of that type could come in and develop fleets that won decisively two years in a row seems very unlikely.

It might be that EURISKO acted merely as a generic simulator of strategy and design, and Lenat did all the evaluating, and no one else in the contest had access to simulations of similar utility, which would negate much of the interest in EURISKO, I think.

Comment author: RichardKennaway 14 June 2009 12:05:36PM *  4 points [-]

I have found Haase's thesis online. Would it be irresponsible of me to post the link here? (It is not actually hard to find.)

ETA: How concerned should we be that DARPA is going full steam ahead for strong AI? Perhaps not very much, given the failure of at least two of their projects along these lines:

Comment author: outlawpoet 15 June 2009 12:35:45AM 2 points [-]

There are a number of DARPA and IARPA projects we pay attention to, but I'd largely agree that their approaches and basic organization make them much less worrying.

They tend towards large, bureaucratically hamstrung projects, like PAL, which the last time I looked included work and funding for teams at seven different universities, or they suffer from extremely narrow focus, like their intelligent communication initiatives, which went from being about adaptive routing via deep introspection of multimedia communication and intelligent networks, to just being software radios and error correction.

They're worth keeping an eye on mostly because they have the money to fund any number of approaches, often over long periods. But the biggest danger isn't their funded, stated goals; it's the possibility of someone going off-target and working on generic AI in the hopes of increasing their funding or scope at the next evaluation, which could be a year or more away.

Comment author: phane 15 May 2009 04:38:30AM 0 points [-]

From what anecdotal evidence I have, I'd say it doesn't have much to do with argument. People who discard their religious beliefs do so after feeling emotional alienation. The antagonistic context of a "my side versus their side" debate isn't amenable to that.

It's one thing to be told some (presumably good) reason to reject the God hypothesis. It's another to be honestly forced to reconcile it with events in your life story. Maybe they just don't "feel it" anymore; God's presence in their life isn't what it used to be. Or maybe they're forced to wrestle with the problem of evil, because something bad happened to a loved one. Maybe they have a spiritual-but-secular experience that makes it seem like the whole God idea is small-minded. Whatever the case, it takes a kind of emotional punch and not just a line of reasoning.

At least, that's what I would think.

Comment author: outlawpoet 18 May 2009 06:06:44PM 2 points [-]

I jumped the theist fence after reading a book whose intellectual force was too great to be denied outright, and too difficult to refute point by point. I hate being wrong, and feeling stupid, and the arguments from the book stayed in my thoughts for a long time.

I didn't formalize my thoughts until later, but if my atheism had a cause, it was The Case Against God by George H. Smith. I was very emotionally satisfied with my religion and its community beforehand.

Comment author: hrishimittal 27 April 2009 04:14:06PM 1 point [-]

You can measure time per day on OB/LW or any other app/site using Rescuetime.

http://www.rescuetime.com/

Comment author: outlawpoet 27 April 2009 04:41:58PM 1 point [-]

I use ManicTime, myself.

http://www.manictime.com/

Comment author: JulianMorrison 27 April 2009 12:37:33AM -1 points [-]

Those won't divide the parties outside the US. Every political party in Britain aside from the extreme fringe is for the availability of abortion and government provision of free healthcare, for example.

And things that do divide the parties here, like compulsory ID cards, don't divide the parties in the US.

Comment author: outlawpoet 27 April 2009 12:46:28AM 2 points [-]

I'm not really interested in actual party divisions so much as I am interested in a survey of beliefs.

Affiliation seems like much less useful information, if we're going to use Aumann-like agreement processes on this survey stuff.

Comment author: michael 26 April 2009 10:27:51PM *  2 points [-]

Why ask for political parties? Political views are complicated; if all you can do is pick a party, this complexity is lost. All those not from the US (like myself) might additionally have a hard time picking a party.

Those are not easy problems to solve, and it is certainly questionable whether thinking up some more specific questions about political views and throwing them all together will get you meaningful, reliable, and valid results. As long as you cannot do better than that, asking just for preferred political parties is certainly good enough.

Comment author: outlawpoet 26 April 2009 10:40:55PM 1 point [-]

Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.

Comment author: pjeby 26 April 2009 10:12:52PM 4 points [-]

If a man's prestige in the seduction community depends on his reports of how many women he has seduced, then, in the absence of non-gameable standards of observational evidence, this potentially invalidates everything they have ever concluded about anything.

As I understand it, gurus usually compete in the field, with students watching. It's not how many you picked up in the past; it's how many you can pick up today, with what degree of elegance and speed, and with how "hot" the girls are, as judged by the watching students. Such a rating method may not be objective, and may lead to debates over who "won" a showdown, but it keeps the claims from devolving into complete non-usefulness.

By the way, in-field trainers and coaches are routinely expected to demonstrate for their students in the field, usually when, like Luke with Yoda, the student says, "But that's impossible!" (Trainers sometimes remark that this is the most pressure-filled part of their job, not because they need validation from the woman or fear rejection, but because they'll be embarrassed in front of several students if they can't show some kind of positive result on cue.)

Comment author: outlawpoet 26 April 2009 10:21:12PM 0 points [-]

Doesn't that make the problem worse, though?

If the feedback is the esteem of students in the field, then you're rewarding the mentor who picks his battles carefully, who can sell what happened in any encounter in a positive and understandable light. The honest mentors and 'researchers' who approach a varied population, analyze their performance without upselling, and accrete performance over time (as you'd expect with a real, generic skill) will lose out.

Comment author: outlawpoet 26 April 2009 10:00:03PM 0 points [-]

I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?

Comment author: outlawpoet 26 April 2009 09:05:32PM 6 points [-]

What is the time-urgency, if you don't mind my asking? Other than Vassar's ascension, the Summer of Code projects, and LessWrong, I wasn't aware of anything going on at SingInst with any kind of schedule.

My first attempt at volunteering for Eliezer ended badly, for outside and personal reasons, and I haven't seriously considered it since, mostly because I didn't really understand the short-term goals of SingInst (or I didn't agree with what I did understand of them).

Also, to be honest, the last thing that I found useful (in terms of my Singularitarian goals) to come out of it was CEV, which was quite a while ago now. Are there new projects, or private projects coming to public view? Why now?
