In response to The I-Less Eye
Comment author: rosyatrandom 28 March 2010 07:36:05PM *  0 points [-]

This 0.5^99 figure only appears if each copy bifurcates iteratively.

Rather than

1 becoming 2, becoming 3, becoming 4, ... becoming 100

We'd have

1 becoming 2, becoming 4, becoming 8, ... becoming 2^99
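For concreteness, here is the arithmetic as a quick sketch (plain Python; nothing assumed beyond the numbers in the comment, and the equal-weighting of branches is my own illustrative assumption): 99 rounds of bifurcation produce 2^99 terminal copies, each branch carrying subjective weight 0.5^99, whereas copying one at a time only ever produces 100 copies.

```python
from fractions import Fraction

splits = 99

# Iterative bifurcation: every existing copy splits at each step, so
# 99 rounds of splitting produce 2**99 terminal copies, and following
# any one branch means taking 99 successive coin-flips: weight (1/2)**99.
copies_bifurcating = 2 ** splits
weight_per_branch = Fraction(1, 2) ** splits

# Sequential copying: 1 becomes 2, becomes 3, ... becomes 100.
# Only 100 copies ever exist; equal weighting gives each 1/100.
copies_sequential = 100
weight_per_copy = Fraction(1, copies_sequential)

print(copies_bifurcating)   # 633825300114114700748351602688, i.e. 2**99
print(weight_per_branch)    # 1/633825300114114700748351602688
print(weight_per_copy)      # 1/100
```

The point of the comment falls out immediately: 0.5^99 is the weight of one branch in the doubling tree, not the weight of one copy among a hundred.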

Comment author: rosyatrandom 24 March 2010 02:00:23AM *  0 points [-]

I think a good term for what I believe in might be 'abstractionism'; essentially, I believe in all possible things, and all entities existing in all possible contexts.

From this perspective, matter, mind and mathematics are all the same kind of stuff: patterns. The mind is a pattern that can be abstracted from processes which, working to solve problems, at a high level implement our thoughts. Those processes can be performed by brains running in the kinds of universe we are familiar with, which in turn run on ontological frameworks consistent, at least in their observable parts, with the mathematics we know and love.

What is it all made of? Just information that can be endlessly traced downward through infinite contextual abstractions. In the end, there are only two definitive aspects: Everything (the kaleidoscopic, crystalline pattern of patterns) and the manner by which elements abstract (which links Nothing to Anything to Everything).

Or, as some of you may recognise it, The Dust Theory. In the end, all other theories require an arbitrary contextual abracadabra.

Comment author: MrHen 02 February 2010 03:35:29PM 3 points [-]

"Trial and error" probably wouldn't be necessary.

Comment author: rosyatrandom 02 February 2010 03:42:31PM 6 points [-]

No, but it's there as a baseline.

So in the original scenario above, either:

  • the AI's lying about its capabilities, but has determined regardless that the threat has the best chance of making you release it
  • the AI's lying about its capabilities, but has determined regardless that the threat will make you release it
  • the AI's not lying about its capabilities, and has determined that the threat will make you release it

Of course, if it's failed to convince you before, then unless its capabilities have since improved, it's unlikely that it's telling the truth.

Comment author: rosyatrandom 02 February 2010 03:29:05PM *  28 points [-]

If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.

Comment author: rosyatrandom 04 January 2010 05:02:58PM 3 points [-]

I think this post makes an excellent point, and brings to light the aspect of Bayesianism that always made me uncomfortable.

Everyone knows we are not really rational agents; we do not compute terribly fast or accurately (as Morendil states), we are often unaware of our underlying motivations and assumptions, and even those we know about are often fuzzy, contradictory and idealistic.

As such, I think we have different ways of reasoning about things, making decisions, assigning preferences, and holding and overcoming inconsistencies. While it is certainly useful to have a science of quantitative rationality, I doubt we think that way at all... and if we tried, we would quickly run into the qualitative, irrational ramparts of our minds.

Perhaps a Fuzzy Bayesianism would be handy: something that can handle uncertainty, ambivalence and apathy in any of its objects. Something where we don't need to put in numbers where numbers would be a lie.
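One toy way to picture such a Fuzzy Bayesianism is an interval-valued credence, loosely in the spirit of the imprecise-probability literature: the width of the interval encodes vagueness, and the full interval [0, 1] encodes apathy. This is purely illustrative; the class and method names below are invented for the sketch, not taken from any library.

```python
from dataclasses import dataclass


@dataclass
class Credence:
    """A belief held as an interval [lo, hi] rather than a single number."""
    lo: float  # lower bound on the probability you'd assign
    hi: float  # upper bound on the probability you'd assign

    def __post_init__(self):
        assert 0.0 <= self.lo <= self.hi <= 1.0

    def negate(self):
        # Credence in not-X: the interval reflects through 1.
        return Credence(1.0 - self.hi, 1.0 - self.lo)

    def is_apathetic(self):
        # Maximal vagueness: the interval [0, 1] says nothing at all.
        return self.lo == 0.0 and self.hi == 1.0


rain = Credence(0.2, 0.5)   # honestly unsure, and the interval admits it
no_rain = rain.negate()     # the complementary interval, [0.5, 0.8]
```

The interval is the honest admission the comment asks for: instead of a sharp 0.37 that would be a lie, you report `Credence(0.2, 0.5)` and leave the vagueness visible.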

Doing research in biology, I can assure you that the more decimal places of accuracy I see, the more I doubt its reliability.

Comment author: rosyatrandom 01 December 2009 12:14:15PM 0 points [-]

If morality is subjective, don't the morals change depending upon which part of this scenario you are in (inside/outside)?

I operate from the perspective (incidentally, I like the term 'modal immortality') that my own continued existence is inevitable; the only thing that changes is the possibility distribution of contexts and ambiguities. By shutting down 99/100 instances, you are affecting your own experience of the simulations more than theirs of you (if the last one goes, too, then you can no longer interact with them), especially if, inside a simulation, other external contexts are also possible.

Comment author: rosyatrandom 29 September 2009 06:09:44AM *  0 points [-]

Go for it. I have extreme difficulty trying to work out how it might even make sense that all possible(*) realities don't exist....

To me, the killer arguments are:

  • How arbitrary both the arrangement of the universe, and the universe itself, are;

  • How impossible it is to pin down what existence is, compared to an abstracted implementation;

  • How consciousness itself implies uncertainty and indiscernibility between contexts.

(*) In a meaningful sense, of course.

Comment author: rosyatrandom 23 September 2009 03:14:28PM 1 point [-]

If continuity of consciousness immortality arguments also hold, then it simply doesn't matter whether doomsdays are close - your future will avoid those scenarios.

Comment author: rosyatrandom 10 July 2009 11:10:31AM *  1 point [-]

Firstly, this kind of multiverse is essentially the same as a parallel-worlds one; the only difference is which dimension you take the multiplicity to occur in. I prefer parallel worlds as it implies a logical branching structure, a cladistic tree which provides an overlying system to the otherwise arbitrary worlds.

Second, without some kind of anthropic principle or similar filtering mechanism, islands of order only appear at the very tip of a mountain, surrounded by masses of increasing disorder. Any order that has been apparent so far has no reason not to disappear in a fizz of entropy in the next moment.

My main feeling on this is that the very makeup of our brains and consciousness requires a universe that works in a certain way. We impose things like time and who knows how many other basic laws onto the world just because they are written into our souls.

In response to Return of the Survey
Comment author: rosyatrandom 03 May 2009 04:03:57AM 1 point [-]

Funny, 'karmawhore' was the 1st term that leapt to my mind, too. And yes, I did take the survey and am one....
