Comment author: David_J._Balan 26 June 2008 03:21:44AM 0 points [-]

Along the lines of some of the commenters above, it surely isn't telling Eliezer anything he doesn't already know to say that there are lots of reasons to fear that a super-smart AI would start doing things we wouldn't like, even without believing that an AI is necessarily a fundamentally malevolent ghost that will wriggle out of whatever restraints we place on it.

Comment author: David_J._Balan 01 February 2008 06:27:11PM 0 points [-]

Thursday, February 21st at 7:00 pm? Why bother? Surely the singularity thing will have happened by then. :)

Comment author: David_J._Balan 12 December 2007 07:38:00PM 4 points [-]

Eliezer's reminder that even rationalists are human, and so are subject to human failings such as turning a community into a cult, is welcome. But it's a big mistake to dismiss explanations such as "Perhaps only people with innately totalitarian tendencies would try to become the world's authority on everything." There is a huge degree of heterogeneity across people in every relevant metric, including a tendency toward totalitarianism. I can't imagine that anyone disputes this. And if the selection process for being in a certain position tends to advantage people with those tendencies, so that they are selected into them, that might well explain a large part of how people in those positions behave.

Comment author: David_J._Balan 10 December 2007 07:03:27PM 0 points [-]

Why the hating on summer camp? The good ones are wonderful.

In response to Applause Lights
Comment author: David_J._Balan 11 September 2007 06:53:06PM 3 points [-]

The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time.

Comment author: David_J._Balan 27 August 2007 02:13:34PM 0 points [-]

I'm pretty out of my depth here, but I'll echo what some people have said above. Before people started scientifically studying either one, would it have been obvious that a simple model would be very successful at predicting the behavior of, say, subatomic particles but very unsuccessful at predicting the weather? That is, there really do seem to be some phenomena for which predictions can generally and successfully be made using straightforward intuitive models, and others for which they cannot. It seems like "emergent" is just a (useful) label for the stuff where this can't be done.

In response to Semantic Stopsigns
Comment author: David_J._Balan 25 August 2007 09:57:26PM 0 points [-]

I think some theists would say that the "who made God" question is a semantic stop sign, but that this is OK. That is, they would say that *they* are not capable of probing the question any further, but that the leaders of their religion (with the help of the sacred texts) are, and that those leaders bring back from the other side the answer that the religion is true and everything is OK.

As for liberal democracy, it's clearly an error to assert without further argument that it will solve all future problems. But it is not a mistake to say that it is far and away the most successful thing humans have ever come up with, and so it is the best framework in which to try to address future problems.

In response to Hindsight bias
Comment author: David_J._Balan 23 August 2007 05:23:02PM 0 points [-]

Of course it's always hard to know what truth is in situations like this, but there appears to be evidence that the people who were actually in charge of preventing terrorism were actively worried about something much like what actually happened, and were ignored by their superiors.

Comment author: David_J._Balan 08 August 2007 03:54:39PM 1 point [-]

Eliezer, you shouldn't have chased Anna away.

Comment author: David_J._Balan 13 July 2007 12:35:57AM 1 point [-]

I'm very sympathetic to the idea contained in the post. In fact, I used to say something similar in my graduate student gig as a freshman orientation guy. But teaching and learning real thought are hard. Can everybody teach it and learn it? Is there any sense in which what goes on now is not optimal, but is (or at least is not too far from) constrained optimal?
