denis_bider

Well. I finally got around to reading The Unwilling Warlord, and I must say that, despite the world of Ethshar being mildly interesting, the book is a disappointment. It builds up nicely in the first 2/3 of the book, but in the last 1/3, when you expect it to unfold and flourish in some interesting, surprising, revealing manner, Watt-Evans instead decides to pursue the lamest, most boring plot possible, all the while insulting the reader's intelligence.

For the last 1/3 of the book, Watt-Evans attempts to make the eventual reasons for Vond's undoing a "mystery". He suggests that Sterren knows the answer, but the reader is not told what it is. When the end finally arrives, it is a disappointing anti-climax, as Watt-Evans chooses the most uneventful outcome possible, one that has been blatantly obvious all along.

He employs an exceedingly lame plot device in which Vond is simply too stupid to see it coming. The author neither takes the opportunity to explain what the Calling is, nor does he have Sterren take Vond down in a more interesting manner, such as going to the Towers of Lumeth and turning them off, or something.

Yes, the writing has some positive traits, such as those Eliezer described, but overall it's much lamer and more amateurish than I expected. Given the recommendation, I would have expected this to be much better fiction than it turns out to be.

Neh. Eliezer, I'm kind of disappointed by how you write the tragic ending ("saving" humans) as if it's the happy one, and the happy ending (civilization melting pot) as if it's the tragic one. I'm not sure what to make of that.

Do you really, actually believe that, in this fictional scenario, the human race is better off sacrificing a part of itself in order to avoid blending with the super-happies?

It just blows my mind that you can write an intriguing story like this, and yet draw that kind of conclusion.

Excellent. I was reluctant to start reading at first, but when I did, I found it entertaining. This should be a TV series. :)

Eliezer: This post is an example of how all your goals and everything you're doing are affected by your existing preferences and biases.

For some reason, you see Peer's existence as described by Greg Egan as horrible. You propose an insight-driven alternative, but this seems no more convincing to me than Peer's leg carving. I think Peer's existence is totally acceptable, and might even be delightful. If Peer wires himself to get ultimate satisfaction from leg carving, then by definition, he is getting ultimate satisfaction from leg carving. There's nothing wrong with that.

More importantly - no alternative you might propose is more meaningful!

There's also nothing wrong with being a blob lying down on a pillow having a permanent fantastic orgasm.

The one argument I do have against these preoccupations is that they make no progress towards avoiding threats to one's existence. In this respect, the most sensible preoccupation to wire yourself for would be something that involves preserving your own life, and other creatures' lives as well, if you, as the designer, care about that.

With that satisfied, the options are open. What's really wrong with leg carving?

Eliezer: all these posts seem to take an awful lot of your time, as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones its eventual arrival. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness, etc.) is a good investment, or that it has any impact on when and whether they will change their minds.

I think your time would be better spent working on, or writing about, the actual details of the problems that need to be solved. Alternately, instead of adding to the already enormous cumulative volume of your posts, perhaps you might try writing something clearer and shorter.

But just piling more on top of what's already been written doesn't seem like it will have an influence.

I stumbled over the same quote. What "gift"? From whom? What "responsibility"? And just how is being "lucky" at odds with being "superior"?

To see the nonsense, let me paraphrase:

"Because giftedness is not to be talked about, no one tells human children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior animals, but lucky ones. That the gift brings with it obligations to other animals on Earth to be worthy of it."

The few people who honestly believe that are called a lunatic fringe. And yet, it is the same statement as Murray's, merely in a wider context.

What Kevin Dick said.

The benefit to each player from mutual cooperation in a majority of the rounds is much more than the benefit from mutual defection in all rounds. Therefore it makes sense for both players to invest at the beginning, and cooperate, in order to establish each other's trustworthiness.

Tit-for-tat seems like it might be a good strategy in the very early rounds, but as the game goes on, the best reaction to a defection might become two defections in response; and in the final rounds, once the other party defects, the best response might be to defect until the end.
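For concreteness, here is a minimal sketch of the dynamic this comment describes, assuming the standard Axelrod payoff values (the comment itself names no numbers): mutual cooperation pays 3 to each player, mutual defection 1 each, and unilateral defection pays 5 to the defector and 0 to the cooperator.

```python
# Iterated prisoner's dilemma sketch. Payoffs are the standard Axelrod
# values, which are an assumption here; the comment gives no numbers.
PAYOFF = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players cooperate throughout and earn 300 each over
# 100 rounds, versus 100 each for two unconditional defectors -- the
# gap that makes the early investment in trust worthwhile.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

The last line shows the cost of trusting an unconditional defector: tit-for-tat loses only the first round before matching defection for the rest of the game.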

An excellent way to pose the problem.

Obviously, if you know that the other party cares nothing about your outcome, then you know that they're more likely to defect.

And if you know that the other party knows that you care nothing about their outcome, then it's even more likely that they'll defect.

Since the way you posed the problem precludes an iteration of this dilemma, it follows that we must defect.
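To make the one-shot logic explicit, here is a quick check, under the same assumed Axelrod payoffs as the sketch above, that defection strictly dominates when no further rounds are at stake:

```python
# One-shot dominance check, again under assumed standard payoffs:
# whatever the other party plays, defecting scores strictly higher,
# so with no future rounds to protect, defection is the dominant move.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,  # cooperate: 3 vs a cooperator, 0 vs a defector
    ('D', 'C'): 5, ('D', 'D'): 1,  # defect:    5 vs a cooperator, 1 vs a defector
}

for their_move in ('C', 'D'):
    # Defecting beats cooperating against both a cooperator (5 > 3)
    # and a defector (1 > 0).
    assert PAYOFF[('D', their_move)] > PAYOFF[('C', their_move)]
print("Defection strictly dominates in a single round.")
```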

Eliezer: what I proposed is not a superintelligence; it's a tool. Intelligence is composed of multiple factors, and what I'm proposing is stripping away the active, dynamic, live factor - the factor that has any motivations at all - and leaving just the computational part; that is, leaving the part which can navigate vast networks of data and help the user make sense of them and reach conclusions he would not be able to reach on his own. Effectively, what I'm proposing is an intelligence tool that its users' brains can employ as a supplement.

How is that different from Google, or data mining? It isn't. It's conceptually the same thing, just with better algorithms. Algorithms don't care how they're used.

This bit of technology is something that will have to be developed to put together the first iteration of an AI anyway. By definition, this "making sense of things" technology needs to be strong enough that it allows a user to improve the technology itself; that is what an iterative, self-improving AI would be doing. So why let the AI improve itself, a process which, more likely than not, will run amok despite the designers' efforts and best intentions? Why not use the same technology that the AI would use to improve itself, to improve _your_self? Indeed, it seems ridiculous not to do so.

To build an AI, you need all the same skills that you would need to improve yourself. So why create an external entity, when you can be that entity?

Looks like the soldier quote is gonna be big in comments. I think it's out of place too, and unlike most other quotes that Eliezer comes up with, it doesn't make a lot of sense. It's rather like saying: "It is the scalpel, not the surgeon or the nurse, that fixed your wounds!"

Soldiers are tools wielded by the structure in power, and it is the structure in power that determines whether the soldiers are going to protect your rights or take them away.

Perhaps, "The One" might argue, it is a different kind of person who becomes a soldier in an army that "protects freedom" rather than an army that oppresses its countrymen. There are probably more such idealists among the soldiers in the US army than among troops commanded by the Burmese generals.

Even so, the idealist soldier does what he's commanded to do, and whether what he does actually protects freedom is largely determined by the structure of power, not by the idealist soldier. He remains a tool, a hammer wielded by someone else's will.
