Comment author: Nisan 02 February 2010 10:20:47PM *  1 point [-]

As long as the simulations which involve terrible suffering constitute a tiny proportion of the simulations, your response ought to be the same as if there is only one copy of you and it has a tiny probability of suffering terribly – which is just like real life.

ETA: What you ought to worry about is what will happen to you after the AI is done with the simulation.

Comment author: Bugle 02 February 2010 11:28:47PM *  0 points [-]

Indeed: if many worlds is correct, then for every second we are alive, everything terrible that can possibly happen to us does in fact happen in some branching path.

In a universe that just spun off ours five minutes ago, every single one of us has been afflicted with sudden irreversible incontinence.

The many worlds theory has endless black comedy possibilities, I find.

edit: this actually reminds me of Granny Weatherwax in Lords and Ladies. When the Elf Queen threatens to strike her blind, deaf and dumb, she replies, "You threaten me with this, I who is growing old?" Similarly, if many worlds is true, then every single time I have crossed a road, some version of me has been run over by a speeding car and is living in varying amounts of agony, making the AI's threat redundant.

Comment author: Bugle 02 February 2010 08:17:38PM 0 points [-]

I had thought of a similar scenario for a comic I was considering making. The character arrives in a society whose perfected friendly AI caters to their every whim, yet the people are listless and jumpy. It turns out their "friendly AI" is constantly making perfect simulations of everyone and running multiple scenarios, ostensibly to determine their ideal wishes; the outlier scenarios, however, often involve terrible suffering and torture.

Comment author: blogospheroid 01 February 2010 11:52:47AM 1 point [-]

What kind of useful information or ideas can one extract from a superintelligent AI kept confined in a virtual world, without giving it any clues about how to contact us on the outside?

I'm asking because a flaw I see in the AI-box experiment is that the prisoner and the guard share a language by which they can communicate. If the AI is tested in a virtual world without being given any clues about how to signal back to humans, then it has no way of learning our language and persuading someone to let it loose.

Comment author: Bugle 01 February 2010 05:52:13PM 0 points [-]

I guess if you have the technology for it, the "AI box" could itself be a simulation with uploaded humans. If the AI does something nasty to them, then you pull the plug.

(After broadcasting "neener neener" at it)

This is pretty much the plot of Grant Morrison's Zenith (sorry for the spoilers, but it is a comic from the 80s after all).

In response to Logical Rudeness
Comment author: PlaidX 29 January 2010 11:22:22AM 9 points [-]

A good way to begin an argument is by asking questions about the other person's position, to get it nailed down.

In response to comment by PlaidX on Logical Rudeness
Comment author: Bugle 29 January 2010 01:30:35PM 5 points [-]

This is true; not only is it practical, it also makes a good rhetorical hammer. For example, I once started an argument with a truther friend by asking him what exactly he believed: "for instance, do you believe all the Jews were evacuated before the planes hit?" Forcing someone defending an irrational belief to first dissociate himself from all the really nutty stuff hanging on to his position works wonders.

Comment author: Bugle 22 January 2010 12:23:34AM 8 points [-]

Last night I was reading through your "coming of age" articles and stopped right before this one, which neatly summarizes why I was physically terrified. I've never before experienced sheer existential terror, just from considering reality.

Comment author: Bugle 20 January 2010 01:26:45AM 0 points [-]

My grasp of statistics is atrocious, something I hope to improve this year with an open university maths course, so apologies if this is a dumb question:

Do the figures change if you take "playing the lottery" over the whole of your lifespan? Most of the people I know who play the lottery commit to playing regularly. Is the calculation affected in any meaningful way? At the very least, the cost of playing weekly over, say, 20 years no longer looks trivial.
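A rough sketch of the arithmetic behind this question (the ticket price, odds, and jackpot below are made-up illustrative numbers, not figures from the original post):

```python
# Hypothetical figures: £2 ticket, 1-in-14-million jackpot odds, £5M prize.
ticket_price = 2.0
p_jackpot = 1 / 14_000_000
jackpot = 5_000_000.0

# Expected profit per single play (negative = expected loss).
ev_per_ticket = p_jackpot * jackpot - ticket_price

# Weekly play over 20 years.
weeks = 52 * 20
total_spent = ticket_price * weeks
expected_loss = -ev_per_ticket * weeks
p_never_win = (1 - p_jackpot) ** weeks

print(f"EV per ticket: £{ev_per_ticket:.2f}")
print(f"Spent over 20 years: £{total_spent:.2f}")
print(f"Expected loss over 20 years: £{expected_loss:.2f}")
print(f"Chance of never hitting the jackpot: {p_never_win:.6f}")
```

On these assumptions the per-ticket expectation is unchanged by playing longer; repetition only scales the expected loss linearly. So the figures don't change in kind, but the cumulative sum does stop looking trivial.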

Comment author: DanArmak 10 January 2010 07:33:25PM 0 points [-]

A Singularity doesn't necessarily mean change too fast for us to comprehend. It just means change we can't comprehend, period - not even if it's local and we sit and stare at it from the outside for 100 years. That would still be a Singularity.

Comment author: Bugle 12 January 2010 03:04:02PM 0 points [-]

I think we're saying the same thing: the singularity has happened inside the box, but not outside. Staring at stuff we can't understand for centuries is not at all new in our history; it's more like business as usual...

Comment author: DanArmak 08 January 2010 11:44:06AM *  -1 points [-]

The Singularity is not about such slow processes; it's a belief in the sudden coming of a new world, and as far as I can tell such beliefs have never been correct.

If a Singularity occurs over 50 years, it'll still be a Singularity.

E.g., it could take a Singularity's effects 50 years to spread across the globe because the governing AI is constrained to wait for humans' agreement before advancing. Or an AI could spend 50 years introducing changes into human society because it had to wait on our political approval processes.

Comment author: Bugle 10 January 2010 04:54:44PM -1 points [-]

But that's not an actual singularity, since by definition a singularity involves change happening faster than humans can comprehend. It's more of a contained singularity, with the AI playing genie, doling out advances and advice at a rate we can handle.

That raises the idea of a singularity that happens so fast it "evaporates" like a tiny black hole would: maybe every time a motherboard shorts out, it's because the PC attained sentience and transcended within nanoseconds.

Comment author: DaveInNYC 24 October 2009 06:53:52PM 31 points [-]

I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, "Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?" But surely the reason we do not execute witches is that we do not believe there are such things. If we did - if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather - surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.

-C.S. Lewis

Comment author: Bugle 28 October 2009 10:51:29AM 4 points [-]

Incidentally, the Spanish Inquisition did not believe in witches either, dismissing the whole thing as "female humours".

Comment author: Neil 27 September 2009 01:45:31PM 2 points [-]

In the long term (and I mean the very long term) people will evolve to get around the obstacles that stop them producing the children they could.

If contraception decouples sex from reproduction, people will evolve to be less interested in sex and more directly interested in babies.

If entertainment proves more compelling than having kids, people will evolve to be less entertainable.

If being a responsible, well adjusted person is limiting family size, people will evolve to be irresponsible, poorly adjusted people.

Comment author: Bugle 30 September 2009 01:06:13PM 0 points [-]

The fact is that we, as large complex mammals, are already locked into a low rate of reproduction. Sure, given the right evolutionary pressures we could end up like shrews again, but that would take an asteroid strike or a nuclear war; the scenario you're describing assumes long-term evolution within a very long-lasting, stable society essentially like ours. In those circumstances genes for successful reproduction will spread through the population, but that's largely meaningless: if I have the gene for super attractiveness and manage to have 100 kids with 100 women, we're still below replacement rate. The way women maximize their reproduction is by having sons who are alpha males, but in these circumstances an alpha is someone who is good at seduction, rather than the old-style coercion and ownership of multiple wives.

tl;dr: the bottleneck for overpopulation is individual women's fertility, and women maximize their reproduction by having high-quality sons rather than by popping out babies nonstop. So you can still have high-reproduction strategies without actual overpopulation.

In any case, it's hard to think in these terms; my feeling is that memetics will always overshadow any purely instinctual drives.
