simon2

simon2

Psy-Kosh: I was using the example of pure baby eater values and conscious babies to illustrate the post Nick Tarleton linked to rather than apply it to this one.

Michael: if it's "inevitable" that they will encounter aliens then it's inevitable that each fragment will in turn encounter aliens, unless they do some ongoing pre-emptive fragmentation, no? But even then, if exponential growth is the norm among even some alien species (which one would expect) the universe should eventually become saturated with civilizations. In the long run, the only escape is opening every possible line from a chosen star and blowing up all the stars at the other ends of the lines.

Hmm. I guess that's an argument in favour of cooperating with the superhappies. Though I wonder if they would still want to adopt babyeater values if the babyeaters were cut off, and if the ship would be capable of doing that against babyeater resistance.

simon2

Nick, note that he treats the pebblesorters in parallel with the humans. The pebblesorters' values lead them to seek primeness and Eliezer optimistically supposes that human values lead humans to seek an analogous rightness.

What Eliezer is trying to say in that post, I think, is that he would not consider it right to eat babies even conditional on humanity being changed by the babyeaters to have their values.

But the choice to seek rightness instead of rightness' depends on humans having values that lead to rightness instead of rightness'.

simon2

It's evidence of my values, which are evidence of typical human values. Also, I invite other people to really consider whether they are so different.

Eliezer tries to derive his morality from human values, rather than simply assuming that it is an objective morality, or asserting it as an arbitrary personal choice. It can therefore be undermined in principle by evidence of actual human values.

simon2

Also, I think I would prefer blowing up the nova instead. The babyeaters' children's suffering is unfortunate, no doubt, but hey, I spend money on ice cream instead of saving starving children in Africa. The superhappies' degrading of their own, more important, civilization is another consideration.

(you may correctly protest about the ineffectiveness of aid - but would you really avoid ice cream to spend on aid, if it were effective and somehow they weren't saved already?)

simon2

If blowing up Huygens could be effective, why did it even occur to you to blow up Earth before you thought of this?

simon2

Sure it's a story, but one with an implicit idea of human terminal values and such.

I'm actually inclined to agree with Faré that they should weigh the desire to avoid a few relatively minor modifications more heavily than the eternal holocaust and suffering of the baby-eater children.

I originally thought Eliezer was a utilitarian, but changed my mind due to his morality series.

(Though I still thought he was defending something fairly similar to utilitarianism. But he wasn't taking additivity as a given; he was attempting to derive it from human terminal values themselves. So if human terminal values don't say that we should apply equal additivity to baby-eater children, and I think they don't, then Eliezer's morality, I would have thought, would not apply additivity to them.)

This story however seems to show suspiciously utilitarian-like characteristics in his moral thinking. Or maybe he just has a different idea of human terminal values.

simon2

...with babyeater values.

simon2

Actually, I'm not sure if that's what I thought about their intentions towards the babyeaters, but I at least didn't originally expect them to still intend to modify themselves and humanity.

simon2

John Maxwell:

No, they are simply implementing the original plan by force.

When I originally read part 5, I jumped to the same conclusion you did, based presumably on my prior expectations of what a reasonable being would do. But then I read nyu2's comment which assumed the opposite and went back to look at what the text actually said, and it seemed to support that interpretation.

simon2

It seems we are at a disadvantage relative to Eliezer in thinking of alternative endings, since he has a background notion of what things are possible and what aren't, and we have to guess from the story.

Things like:

How quickly can you go from star to star?
Does the greater advancement of the superhappies translate into higher travel speed, or is this constrained by physics?
Can information be sent from star to star without couriering it with a ship, and arrive in a reasonable time?
How long will the lines connected to the novaing star remain open?
Can information be left in the system in a way that it would likely be found by a human ship coming later?
Is it likely that there are multiple stars that connect the nova to one, two, or all three Alderson networks?

And also about behaviour:

Will the superhappies have the system they use to connect with the nova under guard?
How long will it be before the babyeaters send in another ship? Before the humans do, if no information is received?
How soon will the superhappies send in their ships to begin modifying the babyeaters?

Here's another option with different ways to implement it depending on the situation (possibly already mentioned by others, if so, sorry):

Cut off the superhappy connection, leaving or sending info for other humans to discover, so they deal with the babyeaters at their leisure.
Go back to give info to humans at Huygens, then cut off the superhappy connection.
Go back to get reinforcements, then quickly destroy the babyeater civilization (suicidally if necessary) and the novaing star (immediately after the fleet goes from it to the babyeater star(s), if necessary).

In all cases, I assume the superhappies will be able to guess what happened in retrospect. If not, send them an explicit message if possible.
