With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any FDA-style stringent functionality testing on animals. Not that such testing would help for a brain that is much bigger, has different neuron sizes, and fails in ways that are highly non-obvious in animals. Nor is such regulation even necessary, as the scanned upload of a dead person, functional enough to ...
Heh. Well, it's not radioactive; radon is. It is inert, but it dissolves in membranes, changing their electrical properties.
That was more a note on Dr_Manhattan's comment.
With regard to 'economic advantage': the advantage has to outpace overall growth for the state of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.
If we have an AGI, it will figure out what problems we need solved and solve them.
Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else will, for no particular reason, add this whole 'valuing the real world' thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing anything useful.
Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary f...
Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Oh, wait, that's the reality.
Sidenote: I doubt mind uploads scale all the way up, and it appears quite likely that amoral mind uploads would be unable to get along with their own copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work to get this upload to be conscious and not...
The 'predicted effects on external reality' are a function of prior input and internal state.
The idea of external reality is not incoherent. The idea of valuing external reality with a mathematical function is.
Note, by the way, that valuing the 'wire in the head' is also a type of 'valuing external reality': not 'external' in the sense of the wire being outside the box that runs the AI, but external in the sense of the wire being outside the algorithm of the AI. When that point is being discussed here, SI seems to magically acquire an understanding of the distinction between...
nor highly useful (in a singularity-inducing sense).
I'm not clear on what we mean by 'singularity' here. If we had an algorithm that works on well-defined problems, we could solve practical problems. edit: Like improving that algorithm, mind uploading, etc.
Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI,
Effective at what? Would it cure cancer sooner? I doubt it. An "AGI" with a goal it wants to pursue, resisting any control, is a much narrower AI than the AI that basically solves systems of equations...
Actually, this is an example of something incredibly irritating about this entire singularity topic: verbal sophistry of no consequence. What you call 'powerful' has absolutely zero relation to anything. A powerful drill doesn't tend to do something significant regardless of how you try to stop it. Neither does a powerful computer. Nor should a powerful intelligence.
What probability do you think I should reasonably assign to the proposition, made by a bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are uniquely adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed it wouldn't still be a vastly better explanation for SI's behavior...
the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.
The prevalence of X is defined how?
...And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
Be specific about what the input domain of the 'function' in question is.
And yes, there is a difference: one is well defined and is what AI research works towards, while the other is part of an extensive AI-fear rationalization framework, where it is confused with the notion of generality of intelligence, so as to presume that practical AIs will maximize the "somethings", followed by the notion that pretty much all "somethings" would be dangerous to maximize. The utility is a purely descriptive notion; the AI that decides on actions is a...
It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading.
Mathematically, any value that the AI can calculate from anything external is a function of its sensory input.
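A minimal formalization of that point, in notation I'm introducing purely for illustration (s_{1:t} for the sensory history, m_t for the internal state, a for a candidate action, W for the external world state):

\[ \text{intended: } U_{\mathrm{ext}}(W) \qquad \text{actually computable: } \hat{U}(a) = f\!\left(s_{1:t},\, m_t,\, a\right) \]

Whatever 'external' quantity is meant, the number the agent actually computes and maximizes is some such f of its inputs and internal state; the 'wire in the head' and the supposedly genuine valuation differ only in which f gets implemented.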
'Vague' presumes a level of precision that is not present here. It is not even vague. It's incoherent.
The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and to failure of the evaluation of that reasoning; extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, yet do not seem to accomplish better than mediocre performance at fairly trivial but quantifiable things.
...Seriously. In no area of research, medicine, engineering, or whatever has the first group to tackle a problem ever succeeded? Such a world would be far poorer and stil
Yes. I am sure Holden is being very polite, which is generally good, but I've been getting the impression that the point he was making did not fully carry across the same barrier that produced the above-mentioned high opinion of their own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The 'resistance to feedback' is an even stronger point, suggesting that the belief in their own rationality is, at least to some extent, combined with an expectation that it won't pass a test, and hence with avoidance (rather than seeking) of tests; as when psychics genuinely believe in their powers but still avoid any reliable test.
SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way.
With regard to the 'message', I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regard to "rationally discussing", what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalization and not enough rationality to even have had an accountant through its first 10 years and its first two-million-plus dollars of other people's money.
All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal's original wager, Thor and the other deities are to be ignored by omission.
On how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. Resistance to feedback loops is an extremely strong point of his.
On the rationality movement, here's a quote from Holden.
...Apparent poorly grounded belief
It is not just their chances of success. For the donations to matter, you need SI to succeed where, without SI, there would be failure. You need to take a basket of eggs and have all the good-looking eggs be rotten inside while one fairly rotten-looking egg is fresh. Even if a rotten-looking egg is more likely to be fresh inside than one would believe, that is a highly unlikely situation.
It also has to be probable that their work averts those risks, which seems incredibly improbable by any reasonable estimate. If an alternative Earth were to adopt a strategy of ignoring prophetic groups of 'idea guys' similar to SI, and of ignoring their pleas for donations to hire competent researchers to pursue their ideas, I do not think that such a decision would increase the risk by more than a minuscule amount.
But are you sure that you are not now falling for the typical mind fallacy?
The very premise of your original post is that it is not all signaling: that there is a substantial number of honest-and-naive folk who not only don't steal but assume that others don't either.
I do not see why you are even interested in asking that sort of question if you hold such a view - surely, under that view, you would steal if you were sure you would get away with it, just as you would, e.g., try to manipulate and lie. edit: I.e. the premise seems incoherent to me. You need honesty to exist for your method to be of any value, and you need honesty not to exist for your method to be harmless and neutral. If language is all signaling, the most your method will do is weed you out at the selection process - you have yourself already sent the wrong signal that you believe language to be all deception and signaling.
It too closely approximates the way the unfriendly AI proposed here would reason - pick a goal (predicting on-the-job theft, for example) and then proceed to solve it, oblivious to the notion of fairness. I've seen several other posts over time that rub me exactly the same wrong way, but I do not remember the exact titles. By the way, what if I were to apply the same sort of reasoning as in this post to the people who have a rather odd mental model of artificial minds (or of mind-uploaded fellow humans)?
Good point. More intricate questions like this, with 'nobody could resist' wording, are also much fairer. Questions about what the person believes the natural human state to be are more dubious.
I'm not quite sure what the essence of your disagreement is, or what relation the honest people already being harmed has to the argument I made.
I'm not sure what you think my disagreement should have focused on - the technique outlined in the article can be effective and is used in practice, and there is no point to be made that it is bad for the people employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, this technique depletes the common good that is normal human communication; it is an exam...
However, imagine if the typical mind fallacy was correct. The employers could instead ask "what do you think the percentage of employees who have stolen from their job is?"
To be honest, this is a perfect example of what is so off-putting about this community. This method is simply socially wrong - it works against both the people who stole and the people who had something stolen from them, who get penalized for the honest answer and, if such methods become more widely employed, are now inclined to second-guess and say "no, I don't think anyo...
Real-life employment personality questionnaires are more subtle than this. They might ask things like, "Nobody could resist buying a stolen item they wanted if the price was low enough: Agree/Disagree", or "Wanting to steal something is a natural human reaction if a person is treated unfairly: Agree/Disagree".
That is, they test for thieves' typical rationalizations, rather than asking straight-up factual questions. Xachariah's example question isn't a good use of typical-mind fallacy, because it doesn't ask a purely theory-of-mind que...
I'm not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people you are a good artist.
This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.
Indeed, but you are asking me to assume that the skills you display in writing your articles are the same as the skills relevant to directing the AI effort.
edit: Furthermore, when it comes to works on rationality as 'applied math of optimization',...
I think it is fair to say that Earth was doing the "AI math" before computers existed. Extending that to today - there is a lot of mathematics to be done for a good, safe AI - but how are we to know that SI has the actionable effort-planning skills required to correctly identify and fund research in such mathematics?
I know that you believe you have the required skills; but note that in my model such a belief results both from the presence of extraordinary effort-planning skill and from the absence of effort-planning skills. The prior probability of ex...
If my writings (on FAI, on decision theory, and on the form of applied-math-of-optimization called human rationality) so far haven't convinced you that I stand a sufficient chance of identifying good math problems to solve to maintain the strength of an input into existential risk, you should probably fund CFAR instead. This is not, in any way shape or form, the same skill as the ability to manage a nonprofit. I have not ever, ever claimed to be good at managing people, which is why I kept trying to have other people doing it.
Great article. However, there is a third important option, which is 'request proof, then, if it passes, donate' (Holden seems to have opted for this in his review of S.I., but it is broadly applicable in general).
For example, suppose there is a charity promising to save 10 million people using a method X that is not very likely to work but is very cheap - a Pascal's Wager-like situation. In this situation, even if this charity is presently the best in terms of expected payoff, it may still be a better option to, rather than paying the full sum, pay only enough for a b...
It seems to me that 100 years ago (or more) you would have had to consider pretty much any philosophy and mathematics to be relevant to AI risk reduction, as well as to the reduction of other potential risks, and attempts to select the work particularly conducive to AI risk reduction would not have been able to succeed. Effort planning is the key to success.
On a somewhat unrelated note: reading the publications and this thread, there is a point of definition that I do not understand: what exactly does S.I. mean when it speaks of a "utility function" in the context ...
It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that otherwise would be underfunded or wrongly-purposed.
But then SI has to have a dramatically better idea of what research has to be funded to protect mankind than every other group of people capable of either performing such research or employing people to perform it.
Muehlhauser has stated that SI should be compared to alternatives in the form of organizations working on AI risk mitigati...
Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is not qualitatively different from any other processing. Just as cryopreservation is legally only a form of burial.
Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.
edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or of any other not-completely-stupid mammal. Consent matters.