Well, Lenat did. Whether, or in what capacity, a computer program was involved is an open question.
It's useful evidence that EURISKO was doing something. There were some extremely dedicated and obsessive people involved in Traveller back then. The idea that someone unused to starship combat design of that type could come in and develop fleets that won decisively two years in a row seems very unlikely.
It might be that EURISKO acted merely as a generic simulator of strategy and design, that Lenat did all the evaluating, and that no one else in the contest had access to simulations of similar utility. That would negate much of the interest in EURISKO, I think.
I have found Haase's thesis online. Would it be irresponsible of me to post the link here? (It is not actually hard to find.)
ETA: How concerned should we be that DARPA is going full steam ahead for strong AI? Perhaps not very much, given the failure of at least two of their projects along these lines:
High Yield Cognitive Systems. The Wikipedia article (itself defunct) includes the grandiose claim that it failed because human-level AI was not ambitious enough.
Physical intelligence. Current.
There are a number of DARPA and IARPA projects we pay attention to, but I'd largely agree that their approaches and basic organization make them much less worrying.
They tend towards large, bureaucratically hamstrung projects, like PAL, which the last time I looked included work and funding for teams at seven different universities; or they suffer from extremely narrow focus, like their intelligent communication initiatives, which went from being about adaptive routing via deep introspection of multimedia communication and intelligent networks to just being software radios and error correction.
They're worth keeping an eye on mostly because they have the money to fund any number of approaches, often over long periods. But the biggest danger isn't their funded, stated goals; it's the possibility of someone going off-target and working on generic AI in the hopes of increasing their funding or scope at the next evaluation, which could be a year or more later.
From what anecdotal evidence I have, I'd say it doesn't have much to do with argument. People who discard their religious beliefs do so after feeling emotional alienation. The antagonistic context of a "my side versus their side" debate isn't amenable to that.
It's one thing to be told some (presumably good) reason to reject the God hypothesis. It's another to be honestly forced to reconcile it with events in your life story. Maybe they just don't "feel it" anymore; God's presence in their life isn't what it used to be. Or maybe they're forced to wrestle with the problem of evil, because something bad happened to a loved one. Maybe they have a spiritual-but-secular experience that makes it seem like the whole God idea is small-minded. Whatever the case, it takes a kind of emotional punch and not just a line of reasoning.
At least, that's what I would think.
I jumped the theist fence after reading a book whose intellectual force was too great to be denied outright, and too difficult to refute point by point. I hate being wrong, and feeling stupid, and the arguments from the book stayed in my thoughts for a long time.
I didn't formalize my thoughts until later, but if my atheism had a cause, it was THE CASE AGAINST GOD by George H. Smith. I was very emotionally satisfied with my religion and its community beforehand.
You can measure time per day on OB/LW or any other app/site using RescueTime.
I use ManicTime, myself.
Those won't divide the parties outside the US. Every political party in Britain, aside from the extreme fringe, is for the availability of abortion and government provision of free healthcare, for example.
And things that do divide the parties here, like compulsory ID cards, don't divide the parties in the US.
I'm not really interested in actual party divisions so much as I am interested in a survey of beliefs.
Affiliation seems like much less useful information, if we're going to use Aumann-like agreement processes on this survey stuff.
Why ask for political parties? Political views are complicated; if all you can do is pick a party, that complexity is lost. Those not from the US (like myself) might additionally have a hard time picking a party.
Those are not easy problems to solve, and it is questionable whether thinking up some more specific questions about political views and throwing them all together will get you meaningful, reliable, and valid results. As long as you cannot do better than that, asking just for preferred political parties is good enough.
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
If a man's prestige in the seduction community depends on his reports of how many women he has seduced, then, in the absence of non-gameable standards of observational evidence, this potentially invalidates everything they have ever concluded about anything.
As I understand it, gurus usually compete in the field, with students watching. It's not how many you picked up in the past; it's how many you can pick up today, with what degree of elegance and speed, and with how "hot" the girls are, as judged by the watching students. Such a rating method may not be objective, and may lead to debates over who "won" a showdown, but it keeps things from devolving into complete non-usefulness.
By the way, in-field trainers and coaches are routinely expected to demonstrate for their students in the field, usually when, like Luke with Yoda, a student says, "But that's impossible!" (Trainers sometimes remark that this is the most pressure-filled part of their job, not because they need validation from the woman or fear rejection, but because they'll be embarrassed in front of several students if they can't show some kind of positive result on cue.)
Doesn't that make the problem worse, though?
If the feedback is the esteem of students in the field, then you're rewarding the mentor who picks his battles carefully, who can sell whatever happened in any encounter in a positive and understandable light. The honest mentors and "researchers" who approach a varied population, analyze their performance without upselling, and accrete performance over time (as you'd expect with a real, generic skill) will lose out.
I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?
What is the time-urgency, if you don't mind my asking? Other than Vassar's ascension, the Summer of Code projects, and LessWrong, I wasn't aware of anything going on at SingInst with any kind of schedule.
My first attempt at volunteering for Eliezer ended badly, for outside and personal reasons, and I haven't seriously considered it since, mostly because I didn't really understand the short-term goals of SingInst (or I didn't agree with what I did understand of them).
Also, to be honest, the last thing that I found useful (in terms of my Singularitarian goals) to come out of it was CEV, which was quite a while ago now. Are there new projects, or private projects coming to public view? Why now?
I have the sense that much of this was written as a response to this paradox in which maximizing expected utility tells you to draw cards until you die.
Psychohistorian wrote:
The paradox is stated in utilons, not hedons. But if your hedons were measured properly, your inability to imagine them now is not an argument. This is Omega we're talking about. Perhaps it will augment your mind to help you reach each doubling. Whatever. It's stipulated in the problem that Omega will double whatever the proper metric is. Futurists should never accept "but I can't imagine that" as an argument.
We need to look at it purely in terms of numbers if we are rationalists, or let us say "ratio-ists". Is your argument really that numeric analysis is the wrong thing to do?
Changing the value you assign life vs. death doesn't sidestep the paradox. We can rescale the problem by an affine transformation so that your present utility is 1 and the utility of death is 0. That will not change the results of expected utility maximization.
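To make the affine-invariance point concrete, here is a minimal sketch (the function names and the specific numbers 0.9, 2x, and the rescaling constants are my own illustration, not from the original discussion): applying u' = a·u + b with a > 0 never changes which option expected-utility maximization prefers, because E[aU + b] = a·E[U] + b preserves ordering.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in outcomes)

def rescale(outcomes, a, b):
    """Apply the affine transformation u -> a*u + b to every outcome."""
    return [(p, a * u + b) for p, u in outcomes]

# Two options: keep your current utility, or draw a card that doubles it
# with probability 0.9 and kills you (utility 0) with probability 0.1.
current = 1.0
stop = [(1.0, current)]
draw = [(0.9, 2 * current), (0.1, 0.0)]

# On the original scale, drawing wins: 1.8 > 1.0.
assert expected_utility(draw) > expected_utility(stop)

# Any affine rescaling with a > 0 preserves that preference.
a, b = 37.0, -5.0
assert expected_utility(rescale(draw, a, b)) > expected_utility(rescale(stop, a, b))
```

Since probabilities sum to 1, the rescaled expectation is exactly a·E[U] + b (here 37·1.8 − 5 = 61.6 versus 37·1.0 − 5 = 32.0), so the comparison comes out the same.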
I seem to have missed some context for this. I understand that once you've gone down the road of drawing the cards, you have no decision-theoretic reason to stop, but why would I ever draw the first card?
A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker's bet. I haven't even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. That's quite aside from the fact that I'm certain my utility function includes terms for living a long time and for experiencing certain anticipated future events.
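The paradox the thread is circling can be sketched in a few lines (the 0.9 survival probability and the doubling are the numbers the paradox is usually stated with; the function is my own illustration): every individual draw has positive expected value (0.9 × 2u = 1.8u > u), yet the "always draw" policy survives n draws only with probability 0.9^n, which goes to zero.

```python
import random

def always_draw(n_draws, p_survive=0.9, rng=random.random):
    """Final utility after following the 'always draw' policy for n_draws cards.

    Each card doubles utility with probability p_survive and otherwise
    kills you, setting utility to 0 forever.
    """
    utility = 1.0
    for _ in range(n_draws):
        if rng() < p_survive:
            utility *= 2
        else:
            return 0.0  # drew the death card
    return utility

# Chance of surviving 100 draws: 0.9**100, about 2.7e-5.
print(0.9 ** 100)
```

So expected-utility maximization endorses each draw in isolation, while the policy it generates ends in near-certain death; that tension is what the grandparent comments are arguing over.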