Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is only quantitatively different from any other processing. Just as cryopreservation is, legally, only a form of burial.

Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.

edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.

With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any sort of FDA-style stringent functionality testing on animals. Not that such testing would help much for a brain that is far bigger, with different neuron sizes and with failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: a scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and preventing it would be like mercy-killing accident victims who have a good prospect of full recovery, to spare them the mere discomfort of being sick.

Gradual progress on humans is pretty much a certainty, if one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and it is not human experimentation - it is mere data processing; it might only become human experimentation, legally, decades after the first functional uploads.

Heh. Well, it's not radioactive; radon is. Xenon is inert, but it dissolves in membranes, changing their electrical properties.

That was more a note on Dr_Manhattan's comment.

With regard to 'economic advantage': the advantage has to outgrow the overall growth for the state of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.

> If we have an AGI, it will figure out what problems we need solved and solve them.

Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else would, for no particular reason, add this whole 'valuing the real world' thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing something useful.

Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary for a team competent enough to build a full AGI not to kill everyone, and therefore you should donate. (Previously, the position was that you should donate so we build FAI before someone builds UFAI, but Luke Muehlhauser has been generalizing to non-FAI solutions.) That notion is rendered highly implausible when you pin down the meaning of AGI, as we did in this discourse. For UFAI to happen and kill everyone, a team potentially vastly more competent and intelligent than SI has to fail spectacularly.

Only if his own thing isn't also your own thing.

That will require a simulation of me, or a brain implant that effectively makes it an extension of me. I do not want the former, and the latter is IA (intelligence amplification).

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Oh, wait, that's reality.

sidenote: I doubt mind uploads scale all the way up, and it appears quite likely that amoral mind uploads would be unable to get along with their own copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work to get that upload to be conscious at all and not simply go into a simulated seizure). From that you might progress to sane but stupefied uploads, with a very significant IQ drop - get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to. It will take a lot of gradual improvement before there are well-working uploads, and even then I am pretty sure that nearly anyone would be utterly unable to massively self-improve in any meaningful way without supervision, rather than just screw themselves into insanity; a sane person shouldn't even attempt it, because if your improvement makes things worse then the next improvement will make things worse still, and one needs external verification.

The 'predicted effects on external reality' are a function of prior input and internal state.

The idea of external reality is not incoherent. The idea of valuing external reality with a mathematical function is.

Note, by the way, that valuing the 'wire in the head' is also a type of 'valuing external reality' - not in the sense of the wire being outside the box that runs the AI, but external in the sense of the wire being outside the algorithm of the AI. When that point is discussed here, SI seems to magically acquire an understanding of the distinction between outside an algorithm and inside an algorithm, in order to argue that wireheading won't happen. The confusion between model and reality appears and disappears at the most convenient moments.
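To make that model-versus-reality point concrete, here is a minimal sketch (the toy agent and all names are hypothetical, not anyone's proposed design): whatever the agent is said to 'value about external reality', the only quantity it can actually compute is a function of its prior inputs and internal state, so a wire feeding it the right observations satisfies that function just as well as reality would.

```python
# Minimal sketch (hypothetical toy agent, not any real design): the agent's
# "value of external reality" can only ever be computed from its own model,
# which is itself built from prior input and internal state.

class ToyAgent:
    def __init__(self):
        self.model = {}  # internal world-model

    def observe(self, observation):
        # The model is updated purely from input plus previous internal state.
        self.model["world"] = observation

    def predicted_effects(self, action):
        # "Predicted effects on external reality" come from the model,
        # never from reality directly.
        return (action, self.model.get("world"))

    def utility(self, action):
        # Any mathematical valuation the agent can apply is a function
        # over these internal predictions.
        _, predicted_world = self.predicted_effects(action)
        return 1.0 if predicted_world == "goal reached" else 0.0


agent = ToyAgent()
agent.observe("goal reached")   # a sensor - or a wire - reports success
print(agent.utility("idle"))    # 1.0: the model, not reality, decides the value
```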

> nor highly useful (in a singularity-inducing sense).

I'm not clear on what we mean by 'singularity' here. If we had an algorithm that works on well-defined problems, we could solve practical problems. edit: like improving that algorithm, mind uploading, etc.

> Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI,

Effective at what? Would it cure cancer sooner? I doubt it. An "AGI" with a goal of its own, resisting any control, is a much narrower AI than the AI that basically solves systems of equations. Whom would I rather hire: an impartial math genius who solves the tasks you specify for him, or a brilliant murderous sociopath hell-bent on doing his own thing? The latter's usefulness (to me, that is) is incredibly narrow.
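As a rough illustration of that 'impartial math genius' kind of tool (a sketch only - the function name and numbers are made up for the example): you hand it a fully specified problem and it hands back the answer, with no goals of its own anywhere in the loop.

```python
# Rough illustration (hypothetical example, not anyone's actual system):
# a narrow solver that does exactly the well-specified task it is given.
import numpy as np

def solve_specified_task(A, b):
    """Solve the linear system A x = b, exactly as specified by the user."""
    return np.linalg.solve(A, b)

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
print(solve_specified_task(A, b))   # [2. 3.]
```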

> and far more dangerous.

Besides being effective at being worse than useless?

> That's why it's primarily what SIAI is worried about.

I'm not quite sure that there's a 'why' or a 'what' in that 'worried'.
