Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: denis_bider 27 April 2009 04:46:06PM 1 point [-]

Well. I finally got around to reading The Unwilling Warlord, and I must say that, despite the world of Ethshar being mildly interesting, the book is a disappointment. It builds up nicely in the first 2/3 of the book, but in the last 1/3, when you expect it to unfold and flourish in some interesting, surprising, revealing manner, Watt-Evans instead decides to pursue the lamest, most boring plot possible, all the while insulting the reader's intelligence.

For the last 1/3 of the book, Watt-Evans attempts to make the eventual reasons for Vond's undoing a "mystery". He suggests that Sterren knows the answer, but the reader is not told what it is. When the end finally arrives, it is a disappointing anti-climax, as Watt-Evans chooses the most uneventful possible outcome, one that has been blatantly obvious all along.

He employs an exceedingly lame plot device where Vond is so stupid he just doesn't see it coming. The author neither takes the opportunity to explain what the Calling is, nor does he have Sterren take Vond down in a more interesting manner, such as having Sterren go to the Towers of Lumeth and turn them off, or something.

Yes, the writing has some positive traits such as Eliezer described, but overall it's much lamer and more amateurish than I expected. Given the recommendation, I would have expected this to be much better fiction than it turns out to be.

Comment author: denis_bider 12 February 2009 07:54:00PM 3 points [-]

Neh. Eliezer, I'm kind of disappointed by how you write the tragic ending ("saving" humans) as if it's the happy one, and the happy ending (civilization melting pot) as if it's the tragic one. I'm not sure what to make of that.

Do you really, actually believe that, in this fictional scenario, the human race is better off sacrificing a part of itself in order to avoid blending with the super-happies?

It just blows my mind that you can write an intriguing story like this, and yet draw that kind of conclusion.

Comment author: denis_bider 31 January 2009 08:50:14PM 0 points [-]

Excellent. I was reluctant to start reading at first, but when I did, I found it entertaining. This should be a TV series. :)

In response to Complex Novelty
Comment author: denis_bider 22 December 2008 01:14:07AM 0 points [-]

Eliezer: This post is an example of how all your goals and everything you're doing is affected by your existing preferences and biases.

For some reason, you see Peer's existence as described by Greg Egan as horrible. You propose an insight-driven alternative, but this seems no more convincing to me than Peer's leg carving. I think Peer's existence is totally acceptable, and might even be delightful. If Peer wires himself to get ultimate satisfaction from leg carving, then by definition, he is getting ultimate satisfaction from leg carving. There's nothing wrong with that.

More importantly - no alternative you might propose is more meaningful!

There's also nothing wrong with being a blob lying down on a pillow having a permanent fantastic orgasm.

The one argument I do have against these preoccupations is that they provide no progress towards avoiding threats to one's existence. In this respect, the most sensible preoccupation to wire yourself for would be something that involves preserving life, and other creatures' lives as well, if you care for that as the designer.

Satisfying that, the options are open. What's really wrong with leg carving?

Comment author: denis_bider 06 September 2008 08:18:12PM 2 points [-]

I stumbled over the same quote. What "gift"? From whom? What "responsibility"? And just how is being "lucky" at odds with being "superior"?

To see the nonsense, let me paraphrase:

"Because giftedness is not to be talked about, no one tells human children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior animals, but lucky ones. That the gift brings with it obligations to other animals on Earth to be worthy of it."

The few people who honestly believe that are called a lunatic fringe. And yet, it is the same statement as Murray's, merely in a wider context.

Comment author: denis_bider 04 September 2008 06:58:01PM 0 points [-]

What Kevin Dick said.

The benefit to each player from mutual cooperation in a majority of the rounds is much more than the benefit from mutual defection in all rounds. Therefore it makes sense for both players to invest at the beginning, and cooperate, in order to establish each other's trustworthiness.

Tit-for-tat seems like it might be a good strategy in the very early rounds, but as the game goes on, the best reaction to defection might become two defections in response, and in the last rounds, when the other party defects, the best response might be all defections until the end.
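The dynamic described above can be sketched with a toy iterated prisoner's dilemma. The payoff values here (3/3 for mutual cooperation, 1/1 for mutual defection, 0/5 for the exploited/exploiting pair) and the strategy functions are illustrative assumptions, not anything from the original discussion:

```python
# Toy iterated prisoner's dilemma: (my_payoff, their_payoff) per move pair.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 100))   # (300, 300): cooperation pays
print(play(tit_for_tat, always_defect, 100)) # (99, 104): exploited once, then mutual defection
```

Over 100 rounds, two cooperators far outscore the defector's one-round gain, which is the point of the comment above: investing early to establish trustworthiness beats all-out defection, even though defection dominates any single round.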

Comment author: denis_bider 03 September 2008 10:33:44PM 3 points [-]

An excellent way to pose the problem.

Obviously, if you know that the other party cares nothing about your outcome, then you know that they're more likely to defect.

And if you know that the other party knows that you care nothing about their outcome, then it's even more likely that they'll defect.

Since the way you posed the problem precludes an iteration of this dilemma, it follows that we must defect.

Comment author: denis_bider 02 September 2008 06:02:55PM 0 points [-]

Eliezer: what I proposed is not a superintelligence, it's a tool. Intelligence is composed of multiple factors, and what I'm proposing is stripping away the active, dynamic, live factor - the factor that has any motivations at all - and leaving just the computational part; that is, leaving the part which can navigate vast networks of data and help the user make sense of them and come to conclusions that he would not be able to on his own. Effectively, what I'm proposing is an intelligence tool that can be used as a supplement by the brains of its users.

How is that different from Google, or data mining? It isn't. It's conceptually the same thing, just with better algorithms. Algorithms don't care how they're used.

This bit of technology is something that will have to be developed to put together the first iteration of an AI anyway. By definition, this "making sense of things" technology needs to be strong enough that it allows a user to improve the technology itself; that is what an iterative, self-improving AI would be doing. So why let the AI improve itself, which more likely than not will run amok, despite the designers' efforts and best intentions? Why not use the same technology that the AI would use to improve itself, to improve _your_self? Indeed, it seems ridiculous not to do so.

To build an AI, you need all the same skills that you would need to improve yourself. So why create an external entity, when _you_ can be that entity?

Comment author: denis_bider 02 September 2008 05:44:48PM 9 points [-]

Looks like the soldier quote is gonna be big in comments. I think it's out of place too, and as opposed to most other quotes that Eliezer comes up with, it doesn't make a lot of sense. In the same way as: "It is the scalpel, not the surgeon, or the nurse, that fixed your wounds!"

Soldiers are tools wielded by the structure in power, and it is the structure in power that determines whether the soldiers are going to protect your rights or take them away.

Perhaps, "The One" might argue, it is a different kind of person who becomes a soldier in an army that "protects freedom" rather than an army that oppresses its countrymen. There are probably more such idealists among the soldiers in the US army, than among troops commanded by the Burmese generals.

Even so, though, the idealist soldier does what he's commanded to do, and whether what he does actually protects freedom is largely determined by the structure of power, not by the idealist soldier. He remains a tool, a hammer wielded by someone else's will.

Comment author: denis_bider 02 September 2008 12:48:52AM 0 points [-]

Kaj makes the efficiency argument in favor of full-fledged AI, but what good is efficiency when you have fully surrendered your power?

What good is being the president of a corporation any more, when you've just pressed a button that makes a full-fledged AI run it?

Forget any leadership role in a situation where an AI comes to life, except in the case where it is completely uninterested in us and manages to depart into outer space without totally destroying us in the process.
