All of JaneQ's Comments + Replies

JaneQ-30

Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is only quantitatively different from any other data processing. Just as cryopreservation is only a form of burial.

Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.

edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.

0NancyLebovitz
This is an area that hasn't been addressed by the law, for the very good reason that it isn't close to being a problem yet. I don't know whether people outside LW have been looking at the ethical status of uploads. I agree with you that there's no way to have uploading without making mistakes first. And possibly no way to have FAI without it having excellent simulations of people so that it can estimate what to do. That's a good point about consent.
JaneQ-20

With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any FDA-style stringent functionality testing on animals; not that such testing would help for a brain that is much bigger, has different neuron sizes, and has failure modes that are highly non-obvious in animals. Nor is such regulation even necessary, as the scanned upload of a dead person, functional enough to ... (read more)

0NancyLebovitz
There's a consensus here that conscious computer programs have the same moral weight as people, so getting uploading moderately wrong in some directions is worse than getting it completely wrong.
JaneQ20

Heh. Well, it's not radioactive; the radon is. It is inert, but it dissolves in membranes, changing their electrical properties.

JaneQ-20

That was more a note on Dr_Manhattan's comment.

With regard to 'economic advantage': the advantage has to outgrow overall growth for the condition of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.

0knb
There is no reason why this would be true. The economy can grow enormously while per-capita income and standard of living fall. This has happened before: the global economy and population grew enormously after the transition to agriculture, but living standards probably actually fell, and farmers were shorter, poorer, harder-working, and more malnourished than their forager ancestors. It is not inevitable (or even very likely, IMO) that the economy will perpetually outgrow population. I read it years ago, and wasn't impressed. Why is that relevant?
JaneQ-40

If we have an AGI, it will figure out what problems we need solved and solve them.

Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else would, for no particular reason, add this whole 'valuing the real world' thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing something useful.

Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary f... (read more)

JaneQ20

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Ohh, wait, that's reality.

sidenote: I doubt mind uploads scale all the way up, and it appears quite likely that amoral mind uploads would be unable to get along with their copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work to get this upload to be conscious and not... (read more)

4JenniferRM
My initial reaction was shock that a heavier-than-air radioactive gas might go into someone's lungs on purpose. It triggers a lot of my "scary danger" heuristics for gases. Googling turned up a bunch of fascinating stuff. Thanks for the surprise! For anyone else interested, educational content includes:
* Since its 1951 report of use in humans, xenon has been viewed as the closest candidate for an ideal anesthetic gas for its superior hemodynamic stability and swift recovery period
* Neuroprotective and neurotoxic properties of the 'inert' gas, xenon
* First baby given xenon gas to prevent brain injury
* Sulfur hexafluoride has a similar speed of sound (and hence effect on the voice) but isn't mind-altering
Neat!
0knb
Well, yes I am aware that my scenario is not literally descriptive of the world right now. The purpose is to inspire an intuitive understanding of why the economic reality of a society with strong upload technology would encourage destroying carbon copies of people who have been uploaded. I am not worried either. Nothing I said assumes a first-mover advantage or hard takeoff from the first mind upload. I'm describing society after upload technology has matured. I'm certainly not assuming uploads will be self-improving, so it seems you are pretty comprehensively misunderstanding my point. I do assume uploads will become faster, due to hardware improvements. After some time, the ease and low cost of copying uploads will likely make them far more numerous than physical humans, and their economic advantages (being able to do orders of magnitude more work per year than physical humans) will drive wages far below human subsistence standards (even if the wages allow a great lifestyle for the uploads).
2NancyLebovitz
Some of the basic problems will presumably be (partially?) solved with animal research before uploading is tried with humans. One of the challenges of uploading would be including not just the current record, but also the ability to learn and heal.
JaneQ10

The 'predicted effects on external reality' is a function of prior input and internal state.

The idea of external reality is not incoherent. The idea of valuing external reality with a mathematical function is.

Note, by the way, that valuing a 'wire in the head' is also a type of 'valuing external reality': not 'external' in the sense of the wire being outside the box that runs the AI, but external in the sense of the wire being outside the algorithm of the AI. When that point is discussed here, SI seem to magically acquire an understanding of the distinction between... (read more)

5Wei Dai
I think I'm getting a better idea of where our disagreement is coming from. You think of external reality as some particular universe, and since we don't have direct knowledge of what that universe is, we can only apply our utility function to models of it that we build using sensory input, and not to external reality itself. Is this close to what you're thinking? If so, I suggest that "valuing external reality" makes more sense if you instead think of external reality as the collection of all possible universes. I described this idea in more detail in my post introducing UDT.
JaneQ10

nor highly useful (in a singularity-inducing sense).

I'm not clear on what we mean by 'singularity' here. If we had an algorithm that works on well-defined problems, we could solve practical problems. edit: like improving that algorithm, mind uploading, etc.

Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI,

Effective at what? Would it cure cancer sooner? I doubt it. An "AGI" with a goal it wants to pursue, resisting any control, is a much narrower AI than the AI that basically solves systems of equations... (read more)

0DanielLC
If we have an AGI, it will figure out what problems we need solved and solve them. It may not beat a narrow AI (ANI) in the latter, but it will beat you in the former. You can thus save on the massive losses due to not knowing what you want, politics, not knowing how to best optimize something, etc. I doubt we'd be able to do 1% as well without an FAI as with one. That's still a lot, but that means that a 0.1% chance of producing an FAI and a 99.9% chance of producing a UAI is better than a 100% chance of producing a whole lot of ANIs. Only if his own thing isn't also your own thing.
4gwern
I hope you don't mind if I don't reply to you any further until it's clear whether you're a Dmytry sockpuppet.
1gwern
I'll rephrase: your argument from alternatives is as much bullshit as invoking Dunning-Kruger. Both an argument and its opposite cannot lead to the same conclusion unless the argument is completely irrelevant to the conclusion. If alternatives matter at all, there must be some number of alternatives which reflect better on SI than the other numbers.
0[anonymous]
Let's avoid inflationary use of Pascal's wager.
JaneQ30

Actually, this is an example of something incredibly irritating about this entire singularity topic: verbal sophistry of no consequence. What you call 'powerful' has absolutely zero relation to anything. A powerful drill doesn't tend to do something significant regardless of how you stop it. Neither does a powerful computer. Nor should a powerful intelligence.

1DanielLC
In this case, I'm defining a powerful intelligence differently. An AI that is powerful in your sense is not much of a risk. It's basically the kind of AI we have now. It's neither highly dangerous, nor highly useful (in a singularity-inducing sense). Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI, and far more dangerous. That's why it's primarily what SIAI is worried about.
JaneQ-40

What reasonable probability do you think I should assign to the proposition, from some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed that it wouldn't be a vastly better explanation for SI's behavior... (read more)

3gwern
What is this, the second coming of C.S. Lewis and his trilemma? SI must either be completely right and demi-gods who will save us all or they must be deluded fools who suffer from some psychological bias - can you really think of no intermediates between 'saviors of humanity' and 'deluded fools who cannot possibly do any good', which might apply? I just wanted to point out that invoking DK is an incredible abuse of psychological research and does not reflect well on either you or Dmytry, and now you want me to justify SI entirely... Alternatives would also be evidence against donating, too, since what makes you think they are the best one out of all the alternatives? Curious how either way, one should not donate!
JaneQ30

the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

The prevalence of X is defined how?

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.

... (read more)
-3TheOtherDave
OK. Thanks for your time.
JaneQ40

Be specific as to what the input domain of the 'function' in question is.

And yes, there is a difference: one is well defined and is what AI research works towards, and the other is part of an extensive AI-fear rationalization framework, where it is confused with the notion of generality of intelligence, so as to presume that practical AIs will maximize the "somethings", followed by the notion that pretty much all "somethings" would be dangerous to maximize. Utility is a purely descriptive notion; the AI that decides on actions is a... (read more)

3TheOtherDave
(shrug) It seems to me that even if I ignore everything SI has to say about AI and existential risk and so on, ignore all the fear-mongering and etc., the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea. And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X. If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.
JaneQ30

It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading.

Mathematically, any value that an AI can calculate from anything external is a function of its sensory input.

'Vague' presumes a level of precision that is not present here. It is not even vague. It's incoherent.

5Wei Dai
Given the same stream of sensory inputs, external reality may be different depending on the AI's outputs, and the AI can prefer one output to another based on their predicted effects on external reality even if they make no difference to its future sensory inputs. Even if you were right that valuing external reality is equivalent to valuing sensory input, how would that make it incoherent? Or are you saying that the idea of "external reality" is inherently incoherent?
0DanielLC
It can only be said to be powerful if it will tend to do something significant regardless of how you stop it. If what it does has anything in common, even if it's nothing beyond "significant", it can be said to value that.
2TheOtherDave
Sure, but the kind of function matters for our purposes. That is, there's a difference between an optimizing system that is designed to optimize for sensory input of a particular type, and a system that is designed to optimize for something that it currently treats sensory input of a particular type as evidence of, and that's a difference I care about if I want that system to maximize the "something" rather than just rewire its own perceptions.
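For concreteness, the distinction TheOtherDave draws can be put into a toy model. The sketch below is purely illustrative (the agent, world, and sensor here are invented for the example, not taken from anything SI has published): it contrasts a system A scored on an external variable X with a system B scored on the sensor reading that is supposed to track X, and shows that only B prefers tampering with its own measuring subsystem.

```python
# Illustrative toy model only: all names and numbers are invented for this sketch.
# World state X is the thing we "really" care about; the sensor reports X plus
# whatever bias the agent has introduced by tampering (the "wire in the head").

def sensor_reading(world_x: float, tamper: float) -> float:
    return world_x + tamper  # tampering inflates the reading, not the world

def score_A(world_x: float, tamper: float) -> float:
    """System A: utility is defined over the external variable X itself."""
    return world_x  # tampering with the sensor buys it nothing

def score_B(world_x: float, tamper: float) -> float:
    """System B: utility is defined over the sensory input."""
    return sensor_reading(world_x, tamper)  # wireheading is optimal here

if __name__ == "__main__":
    # Two candidate actions: actually increase X, or just tamper with the sensor.
    act_on_world = {"world_x": 2.0, "tamper": 0.0}
    wirehead = {"world_x": 0.0, "tamper": 10.0}

    for name, score in [("A (values external X)", score_A),
                        ("B (values sensor input)", score_B)]:
        best = max([act_on_world, wirehead], key=lambda action: score(**action))
        choice = "wireheading" if best is wirehead else "acting on the world"
        print(f"System {name} prefers {choice}")
```

Whether a real AI's utility can be specified over X rather than over its percepts is exactly what is in dispute above; the toy model only shows that the two specifications are not equivalent.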
JaneQ50

The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of the reasoning itself and to failure of the evaluation of that reasoning; it is extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, but who do not seem to accomplish above-mediocre performance at fairly trivial yet quantifiable things.

Seriously. In no area of research, medicine, engineering, or whatever did the first group to tackle a problem succeed? Such a world would be far poorer and stil

... (read more)
9VincentYu
Speaking of yourself in the third person? Dmytry, you are abusing sockpuppet accounts. Your use of private_messaging was questionable, but at least you declared it as an alias early on. Right now you are using a sockpuppet with the intent to deceive.
3gwern
That seems unlikely. Leading both? Mediocrity is sufficient to push them entirely out of the DK gap; your thinking DK applies is just another example of what I mean by these being fragile easily over-interpreted results. (Besides blatant misapplication, please keep in mind that even if DK had been verified by meta-analysis of dozens of laboratory studies, which it has not, that still only gives a roughly 75% chance that the effect applies outside the lab.) Without specifics, one cannot argue against that. So you're just engaged in reference class tennis. ('No, you're wrong because the right reference class is magicians!')
JaneQ60

Yes. I am sure Holden is being very polite, which is generally good, but I've been getting the impression that the point he was making did not fully carry across the same barrier that has resulted in the above-mentioned high opinion of their own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The 'resistance to feedback' is an even stronger point, suggesting that the belief in their own rationality is, at least to some extent, combined with an expectation that it won't pass the test, and hence with avoidance (rather than seeking) of tests; as when psychics believe in their powers but avoid any reliable test.

JaneQ30

SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way.

With regard to the 'message', I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regard to "rationally discussing", what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalizations and not enough rationality even to have had an accountant through its first 10 years and first two-million-plus dollars of other people's money.

3David_Gerard
Note that that second paragraph is one of Holden Karnofsky's objections to SIAI: a high opinion of its own rationality that is not so far substantiable from the outside view.
JaneQ40

All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal's original wager, Thor and the other deities are to be ignored by omission.

As for how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. Resistance to feedback loops is an extremely strong point of his.

On the rationality movement, here's a quote from Holden.

Apparent poorly grounded belief

... (read more)
1Viliam_Bur
Could you give me some examples of other people and organizations trying to prevent the risk of an Unfriendly AI? Because for me, it's not like I believe that SI has a great chance to develop the theory and prevent the danger, but rather like they are the only people who even care about this specific risk (which I believe to be real). As soon as the message becomes widely known, and smart people and organizations start rationally discussing the dangers of Unfriendly AI and how to make a Friendly AI (avoiding some obvious errors, such as "a smart AI simply must develop a human-compatible morality, because it would be too horrible to think otherwise"), then there is a pretty good chance that some of those organizations will be more capable than SI of reaching that goal: more smart people, better funding, etc. But at this moment, SI seems to be the only one paying attention to this topic.
JaneQ30

It is not just their chances of success. For the donations to matter, you need SI to succeed where without SI there would be failure. You need to get a basket of eggs and have all the good-looking eggs be rotten inside but one fairly rotten-looking egg be fresh. Even if a rotten-looking egg is nonetheless more likely to be fresh inside than one would believe, it is a highly unlikely situation.

0Paul Crowley
I'm afraid I'm not getting your meaning. Could you fill out what corresponds to what in the analogy? What are all the other eggs? In what way do they look good compared to SI?
JaneQ20

It also has to be probable that their work averts those risks, which seems incredibly improbable by any reasonable estimate. If an alternative Earth were to adopt a strategy of ignoring prophetic groups of 'idea guys' similar to SI and ignoring their pleas for donations to hire competent researchers to pursue their ideas, I do not think that such a decision would increase the risk by more than a minuscule amount.

0JenniferRM
This kind of seems like political slander to me. Maybe I'm miscalibrated? But it seems like you're thinking of "reasonable estimates" as things produced by groups or factions, treating SI as a single "estimate" in this sense, and lumping them with a vaguely negative but non-specified reference class of "prophetic groups". The packaged claims function to reduce SI's organizational credibility, and yet it references no external evidence and makes no testable claims. For your "prophetic groups" reference class, does it include 1930's nuclear activists, 1950's environmentalists, or 1970's nanotechnology activists? Those examples come from the socio-political reference class I generally think of SI as belonging to, and I think of them in a mostly positive way. Personally, I prefer to think of "estimates" as specific predictions produced by specific processes at specific times, and they seem like they should be classified as "reasonable" or not on the basis of their mechanisms and grounding in observables in the past and the future. The politics and social dynamics surrounding an issue can give you hints about what's worth thinking about, but ultimately you have to deal with the object level issues, and the object level issues will screen off the politics and social dynamics once you process them. The most reasonable tool for extracting a "coherent opinion" from someone on the subject of AGI that is available to the public that I'm aware of is the uncertain future. (Endgame: Singularity is a more interesting tool in some respects. It's interesting for building intuitions about certain kinds of reality/observable correlations because it has you play as a weak but essentially benevolent AGI rather than as humanity, but (1) it is ridiculously over-specific as a prediction tool, and (2) seems to give the AGI certain unrealistic advantages and disadvantages for the sake of making it more fun as a game. I've had a vague thought to fork it, try to change it to be more realis
4Vladimir_Nesov
People currently understand the physical world sufficiently to see that supernatural claims are bogus, and so there is certainty about impossibility of developments predicated on supernatural. People know robust and general laws of physics that imply impossibility of perpetual motion, and so we can conclude in advance with great certainty that any perpetual motion engineering project is going to fail. Some long-standing problems in mathematics were attacked unsuccessfully for a long time, and so we know that making further progress on them is hard. In all these cases, there are specific pieces of positive knowledge that enable the inference of impossibility or futility of certain endeavors. In contrast, a lot of questions concerning Friendly AI remain confusing and unexplored. It might turn out to be impossibly difficult to make progress on them, or else a simple matter of figuring out how to apply standard tools of mainstream mathematics. We don't know, but neither do we have positive knowledge that implies impossibility or extreme difficulty of progress on these questions. In particular, the enormity of consequences does not imply extreme improbability of influencing those consequences. It looks plausible that the problem can be solved.
0Paul Crowley
I think your estimate of their chances of success is low. But even given that estimate, I don't think it's Pascalian. To me, it's Pascalian when you say "my model says the chances of this are zero, but I have to give it non-zero odds because there may be an unknown failing in my model". I think Heaven and Hell are actually impossible, I'm just not 100% confident of that. By contrast, it would be a bit odd if your model of the world said "there is this risk to us all, but the odds of a group of people causing a change that averts that risk are actually zero".
JaneQ10

But are you sure that you are not now falling for the typical mind fallacy?

The very premise of your original post is that it is not all signaling; that there is a substantial number of honest-and-naive folk who not only don't steal but assume that others don't either.

1Xachariah
Um... yes? The typical mind fallacy refers to thinking people act similarly to me, and I just mentioned I haven't used that particular type of signaling nor noticed the signaling until jimmy pointed it out. In fact, I can't even imagine why someone would use that form of signaling. The second part of your statement seems to betray a misunderstanding of signaling. Honest folks signal. If you don't steal, you also need to successfully signal that you don't steal, otherwise nobody will know you don't steal. Signaling isn't just a special word for lie.
JaneQ40

I do not see why you are even interested in asking that sort of question if you hold such a view; surely, under such a view, you would steal if you were sure you would get away with it, just as you would e.g. try to manipulate and lie. edit: i.e. the premise seems incoherent to me. You need honesty to exist for your method to be of any value, and you need honesty not to exist for your method to be harmless and neutral. If language is all signaling, the most your method will do is weed you out at the selection process; you have yourself already sent the wrong signal that you believe language to be all deception and signaling.

0TheOtherDave
Well, there's two different questions here... the first is what is in fact true about human communication, and the second is what's right and wrong. I might believe, as Xachariah does, that language is fundamentally a mechanism for manipulating the behavior of others rather than for making true statements about the world... and still endorse using language to make true statements about the world, and reject using language to manipulate the behavior of others. Indeed, if I were in that state, I might even find it useful to assert that language is fundamentally a mechanism for making true statements about the world, if I believed that doing so would cause others to use it that way, even though such an assertion would (by my hypothetical view) be a violation of moral principles I endorse (since it would be using language to manipulate the behavior of others).
JaneQ10

It too closely approximates the way the herein-proposed unfriendly AI would reason: get itself a goal (predicting theft on the job, for example) and then proceed to solve it, oblivious to any notion of fairness. I've seen several other posts over time that rub me exactly the same wrong way, but I do not remember the exact titles. By the way, what if I were to use the same sort of reasoning as in this post on the people who have a rather odd mental model of artificial minds (or mind-uploaded fellow humans)?

2prase
Can you clarify? I am not sure what odd mental models and, more generally, situations you have in mind.
JaneQ10

Good point. More intricate questions like these, with the 'nobody could resist' wording, are also much fairer. Questions about what the person believes to be the natural human state are more dubious.

JaneQ60

I'm not quite sure what the essence of your disagreement is, or what relation the honest people already being harmed has to the argument I made.

I'm not sure what you think my disagreement should have focused on: the technique outlined in the article can be effective and is used in practice, and there is no point to be made that it is bad for the persons employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, this technique depletes the common good that is normal human communication; it is an exam... (read more)

0TheOtherDave
Thinking about this, I'm curious about the last part. Naively, it seems to me that if I'm being evaluated by a system and I know that the system penalizes respondents who have non-standard interests and do not answer questions in the standard ways, but I don't have access to the model being used, then if I want to improve my score what I ought to do is pretend to have standard interests and to answer questions in standard ways (even when such standard answers don't reflect my actual thoughts about the question). I might or might not be able to do that, depending on how good my own model of standard behavior is, but it doesn't seem that I would need to know about the model being used by the evaluators. What am I missing?
6Xachariah
I think the heart of the disagreement is that your beliefs about communication are wildly divergent from my beliefs about communication. You mention communication as some sort of innately good thing that can be corrupted by speakers. I'm of the school of thought that all language is about deception and signaling. Our current language is part of a never ending arms race between lies, status relationships, and the need to get some useful information across. The whole idea that you could have a tragedy of the commons seems odd to me. A new lying or lie detection method is invented, we use it until people learn the signal or how to signal around it, then we adopt some other clever technique. It's been this way since Ogg first learned he could trick his opponent by saying "Ogg no smash!" If you hold that language is sacred, then this technique is bad. If you hold that language is a perpetual arms race of deception and counter-deception, this is just another technique that will be used until it's no longer useful.
JaneQ180

However, imagine if the typical mind fallacy was correct. The employers could instead ask "what do you think the percentage of employees who have stolen from their job is?"

To be honest, this is a perfect example of what is so off-putting about this community. This method is simply socially wrong: it works against both the people who stole and the people who had something stolen from them, who get penalized for the honest answer and, if such methods are to be more widely employed, are now inclined to second-guess and say "no, I don't think anyo... (read more)

pjeby130

Real-life employment personality questionnaires are more subtle than this. They might ask things like, "Nobody could resist buying a stolen item they wanted if the price was low enough: Agree/Disagree", or "Wanting to steal something is a natural human reaction if a person is treated unfairly: Agree/Disagree".

That is, they test for thieves' typical rationalizations, rather than asking straight-up factual questions. Xachariah's example question isn't a good use of typical-mind fallacy, because it doesn't ask a purely theory-of-mind que... (read more)

6prase
Do you think majority of this community approves of the described method? Or are you put off just by having a discussion about the method, even if people mostly rejected it in the end?
8gwern
First, working against people who stole is an unalloyed good. It eliminates the deadweight loss of the thieves not valuing the goods as much as the customers would have, the reduced investment & profit from theft (which directly affects honest employees' employment as it turns hiring into more of a lemon market), and redistributes money towards the honest employees who do not help themselves to a five-finger discount. Reducing theft is ceteris paribus a good thing. This is nowhere close to being an argument that this is a bad thing because it hurts the honest people, because you have not shown that the harm is disproportionate: the honest people are already being harmed. Even if you had any sort of evidence for that, which of course you don't, this would hardly be a 'perfect' example. Second, these methods - particularly the Bayesian truth serum - work only over groups, so no individual can be reliably identified as a liar or truth-teller in real-world contexts since one expects (in the limit) all answers to be represented at some frequency. This leads to some inherent limits to how much truth can be extracted. (Example: how do you know if someone is a habitual thief, or just watches a lot of cynical shows like The Wire?)
1TheOtherDave
Can you expand on what you consider the terms of the social contract that is human language?
2badger
The basic point is still worthwhile: predictions about other people also reveal something about yourself. The naive implementation could end up punishing honest participants, but slightly more sophisticated methods promote honesty. Bayesian truth serum (mentioned a couple times already) does this through payments to and from survey participants. Payments can also be socially inappropriate, so finding more practical and acceptable methods is an active area of research.
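Since the Bayesian truth serum keeps coming up, here is a minimal sketch of the scoring rule as I understand it from Prelec's published method (the formulas and the alpha weighting are my reading of that paper, not anything stated in this thread; the survey options and numbers are made up). Each respondent gives an answer plus a prediction of how the population will answer, and is rewarded for answers that turn out to be 'surprisingly common' relative to the collective prediction, plus for predicting the population accurately.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Sketch of Bayesian-truth-serum scoring for K discrete options.

    answers[r]     -- index of the option respondent r endorsed
    predictions[r] -- respondent r's predicted population frequencies (length K)
    Returns one score per respondent: information score + alpha * prediction score.
    """
    n = len(answers)
    k = len(predictions[0])
    eps = 1e-9  # guard against log(0) in this toy implementation

    # Actual endorsement frequencies.
    x_bar = [sum(1 for a in answers if a == j) / n for j in range(k)]
    # Log of the geometric mean of the predicted frequencies.
    log_y_bar = [sum(math.log(p[j] + eps) for p in predictions) / n for j in range(k)]

    scores = []
    for r in range(n):
        # Information score: reward answers that are more common than collectively predicted.
        info = math.log(x_bar[answers[r]] + eps) - log_y_bar[answers[r]]
        # Prediction score: penalty for predictions far from the actual frequencies.
        pred = sum(x_bar[j] * math.log((predictions[r][j] + eps) / (x_bar[j] + eps))
                   for j in range(k) if x_bar[j] > 0)
        scores.append(info + alpha * pred)
    return scores

# Hypothetical survey: option 0 = "never stolen at work", option 1 = "have stolen".
answers = [0, 0, 1, 0]
predictions = [[0.7, 0.3], [0.6, 0.4], [0.5, 0.5], [0.8, 0.2]]
print(bts_scores(answers, predictions))
```

As gwern notes above, the scores are only meaningful over a group; nothing here identifies any single respondent as a liar.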
-1drethelin
People lie. Getting them to reveal their lies is wrong? It seems like the only bad outcome is people just lying in a different way.
JaneQ70

I'm not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people you are a good artist.

This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.

Indeed, but you are asking me to assume that the skills you display in writing your articles are the same as the skills relevant to directing the AI effort.

edit: Furthermore, when it comes to works on rationality as 'applied math of optimization',... (read more)

4AlexanderD
It seems to me that the most obvious way to demonstrate the brilliance and excellent outcomes of the applied math of optimization would be to generate large sums of money, rather than seeking endorsements. The Singularity Institute could begin this at no cost (beyond opportunity cost of staff time) by employing the techniques of rationality in a fake market, for example, if stock opportunities were the chosen venue. After a few months of fake profits, SI could set them up with $1,000. If that kept growing, then a larger investment could be considered. This has been done, very recently. Someone on Overcoming Bias recently wrote of how they and some friends made about $500 each with a small investment by identifying an opportunity for arbitrage between the markets on InTrade and another prediction market, without any loss. Money can be made, according to proverb, by being faster, luckier, or smarter. It's impossible to create luck in the market, and in the era of microsecond purchases by Goldman Sachs it's very nearly impossible to be faster, but an organization (or perhaps associated organizations?) devoted to defeating internal biases and mathematically assessing the best choices in the world should be striving to be smarter. While it seems very interesting and worthwhile to work on existential risk from UFAI directly, it seems like the smarter thing to do might be to devote a decade to making an immense pile of money for the institute and developing the associated infrastructure (hiring money managers, socking a bunch away into Berkshire Hathaway for safety, etc.) Then hire a thousand engineers and mathematicians. And what's more, you'll raise awareness of UFAI an incredibly greater amount than you would have otherwise, plugging along as another $1-2m charity. I'm sure this must have been addressed somewhere, of course - there is simply way too much written in too many places by too many smart people. But it is odd to me that SI's page on Strategic Insight doe
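As a side note on the prediction-market anecdote: the arbitrage mechanics are simple enough to spell out. The prices below are invented purely for illustration (they are not the prices from the Overcoming Bias story), and the sketch ignores fees, spreads, and counterparty risk.

```python
# Hypothetical prices for one binary contract quoted on two markets.
# If the implied probabilities disagree enough, buying YES on the cheaper market
# and NO on the other locks in a $1 payout per contract pair no matter the outcome.
yes_price_market_a = 0.40   # cost of a YES share on market A
no_price_market_b = 0.55    # cost of a NO share on market B (YES trades at 0.45 there)

cost_per_pair = yes_price_market_a + no_price_market_b
guaranteed_payout = 1.00    # exactly one of the two shares pays $1 at resolution

profit_per_pair = guaranteed_payout - cost_per_pair
print(f"risk-free profit per contract pair: ${profit_per_pair:.2f}")  # $0.05
```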
JaneQ10

I think it is fair to say Earth was doing the "AI math" before computers. Extending that to today: there is a lot of mathematics to be done for a good, safe AI, but how are we to know that SI has the actionable effort-planning skills required to correctly identify and fund research in such mathematics?

I know that you believe that you have the required skills; but note that in my model such a belief results both from the presence of extraordinary effort-planning skill and from the absence of effort-planning skills. The prior probability of ex... (read more)

If my writings (on FAI, on decision theory, and on the form of applied-math-of-optimization called human rationality) so far haven't convinced you that I stand a sufficient chance of identifying good math problems to solve to maintain the strength of an input into existential risk, you should probably fund CFAR instead. This is not, in any way shape or form, the same skill as the ability to manage a nonprofit. I have not ever, ever claimed to be good at managing people, which is why I kept trying to have other people doing it.

JaneQ50

Great article; however, there is a third important option, which is 'request proof, then, if it passes, donate' (Holden seems to have opted for this in his review of S.I., but it is broadly applicable in general).

For example, suppose there is a charity promising to save 10 million people using a method X that is not very likely to work but is very cheap: a Pascal's-Wager-like situation. In this situation, even if this charity is presently the best in terms of expected payoff, it may be a better option still to, rather than paying the full sum, pay only enough for a b... (read more)

4gwern
That's true; the 'Value of Information' is often overlooked. This is one reason I am keen on the Brain Preservation Prize: unlike SENS or SIAI or cryonics, here we have cheap tests which can be carried out soon and will give us decent information on how well it's working.
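The 'pay only enough for a test first' option in the grandparent can be put in simple expected-value terms. The numbers below are made up solely to illustrate the shape of the value-of-information calculation, and the test is assumed to be perfectly informative, which real tests are not.

```python
# Made-up numbers, purely to illustrate the "request proof, then donate" strategy.
p_works = 0.01               # prior probability that method X actually works
lives_if_works = 10_000_000  # lives saved if it does
full_donation = 1_000_000    # cost of funding the method outright
test_cost = 50_000           # cost of a test that (we assume) reveals whether X works

# Option 1: donate the full sum blindly.
ev_blind = p_works * lives_if_works              # expected lives saved
cost_blind = full_donation

# Option 2: fund the test, and donate only if it passes.
ev_tested = p_works * lives_if_works               # same expected lives saved
cost_tested = test_cost + p_works * full_donation  # full sum paid only when X works

print(f"blind:  {ev_blind:,.0f} expected lives for ${cost_blind:,.0f}")
print(f"tested: {ev_tested:,.0f} expected lives for ${cost_tested:,.0f}")
# Same expected benefit, far lower expected cost; the difference can fund other charities.
```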
JaneQ20

It seems to me that 100 years ago (or more) you would have had to consider pretty much any philosophy and mathematics relevant to AI risk reduction, as well as to the reduction of other potential risks, and attempts to select the work particularly conducive to AI risk reduction would not have been able to succeed. Effort planning is the key to success.

On a somewhat unrelated note: reading the publications and this thread, there is a point of definitions that I do not understand: what exactly does S.I. mean when it speaks of a "utility function" in the context ... (read more)

0johnlawrenceaspden
Surely "Effort planning is a key to success"? Also, and not just wanting to flash academic applause lights but also genuinely curious, which mathematical successes have been due to effort planning? Even in my own mundane commercial programming experiences, the company which won the biggest was more "This is what we'd like, go away and do it and get back to us when it's done..." than "We have this Gantt chart...".
JaneQ50

It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that otherwise would be underfunded or wrongly-purposed.

But then SI has to have a dramatically better idea of what research has to be funded to protect mankind than every other group of people capable of either performing such research or employing people to perform it.

Muehlhauser has stated that SI should be compared to alternatives in the form of the organizations working on AI risk mitigati... (read more)

9Vladimir_Nesov
Disagree. There are many remaining theoretical (philosophical and mathematical) difficulties whose investigation doesn't depend on the current level of technology. It would've been better to start working on the problem 300 years ago, when AI risk was still far away. Value of information on this problem is high, and we don't (didn't) know that there is nothing to be discovered, it wouldn't be surprising if some kind of progress is made.
1Jonathan_Graehl
Hilarious, and an unfairly effective argument. I'd like to know such people, who can entertain an idea that will still be tantalizing yet unresolved a century out. Yes. I agree with everything else, too, with the caveat that SI is not the first organization to draw attention to AI risk - not that you said so.