Yes, this is what is meant by "assert social dominance". The suggestion was to do less of it, though, not more.
What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.
I know this. I am not making an argument here (or rather, trying not to). I'm stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. It is deliberately this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).
value states of the world instead of states of their minds
Easier said than done. Valuing the state of the world is hard; you have to rely on senses.
Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it a utility function in the mathematical sense. Furthermore, just because something sounds in English like a modification of a utility function does not mean that it is mathematically a modification of a utility function. Real-world intentionality seems to be a separate problem from making a system that can figure out how to solve problems (mathematically defined problems), and likely a very hard problem (in the sense of being very difficult to define mathematically).
With all of them? How so?
If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.
I think you discarded one of the conditionals. I read Bruce Schneier's blog. Or Paul Graham's. Furthermore, it is not about disagreement with the notion of AI risk. It's about keeping the data non-cherry-picked, or at least less cherry-picked.
Thanks. Glad you like it. I did put some work into it. I also have a habit of maintaining epistemic hygiene by not generating a hypothesis first and then cherry-picking examples in support of it later, but that gets a lot of flak outside scientific or engineering circles.
To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.
I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself... it does actually make a lot more sense in the context of self-preservation.
Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences. Future humans, who may share very few moral values with me, are given nonzero moral weight. The AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can't trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I ...
Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences.
Not something done here. If someone else is interested they can find the places this has been discussed previously (or you could do some background research yourself). For my part I'll just explicitly deny that this represents any sort of consensus Less Wrong position, lest the casual reader be misled.
What if you were to assume I am a philosophical zombie?
That would be troubling indeed. It would mean I have become a rather confused and incompetent philosopher.
It's not irrational, it's just weak evidence.
Why is it necessarily weak? I have found it very instrumentally useful to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights; there is much lower pollution from privileged hypotheses given wrong priors. I am a computationally bounded agent; I can't process everything.
This is another example of a method of thinking I dislike: thinking by very loaded analogies, and implicit framing in terms of a zero-sum problem. We are stuck on a mud ball with severe resource competition, so we are very biased to see everything as a zero- or negative-sum game by default. One could easily imagine a scenario where we expand more slowly than the AI, so that our demands are always less than its charity, set at a constant percentage of its resources. Someone else winning doesn't imply you are losing.
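A toy illustration of that non-zero-sum case, with made-up growth numbers (2% per year for our demand, 10% per year for the AI, a fixed 1% of its resources handed over), just to show the shape of it:

    # Toy sketch (illustrative numbers only): our resource demand grows slower
    # than the AI's total resources, and the AI gives away a fixed 1% of them.
    our_demand = 1.0          # arbitrary starting units
    ai_resources = 100.0
    charity_fraction = 0.01   # constant percentage the AI sets aside

    for year in range(0, 101, 20):
        charity = charity_fraction * ai_resources
        print(year, round(our_demand, 2), round(charity, 2), charity >= our_demand)
        our_demand *= 1.02 ** 20      # we grow 2% per year
        ai_resources *= 1.10 ** 20    # the AI grows 10% per year

Under those assumptions the fixed-percentage handout outgrows our demand instead of competing with it; nothing about faster AI growth forces a zero-sum outcome.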
Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.
Not if there is self-selection for coincidence of their biases with Eliezer's. Even worse if the reasoning you outlined is employed to lower the risk estimates.
e.g. the lone genius point basically amounts to ad hominem
But why is it irrational, exactly?
but empirically, people trying to do things seems to make it more likely that they get done.
As long as they don't lean on this heuristic too hard when choosing which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while the FAIs are a higher-risk case with a chance at a slightly better payoff. Are you sure that the latter is what has to be chosen?
The hyper-foom is the worst. The cherry-picked filtering of what to advertise is also pretty bad.
For every one of those people you can have one, or ten, or a hundred, or a thousand, who dismissed your cause. Don't go down this road for confirmation; that's how self-reinforcing cults are made.
The issue is that it is a doomsday cult if one is to expect an extreme outlier (on doom belief), who had never done anything notable beyond being a popular blogger, to be the best person to listen to. That is an incredibly unlikely situation for a genuine risk. Bonus cultism points for knowing Bayesian inference but not applying it here. Regardless of how real the AI risk is, and regardless of how truly qualified that one outlier may be, it is an incredibly unlikely world-state in which the best thinking on the AI risk comes from someone like that. No matter how fucked up the scientific review process is, it is incredibly unlikely that the world's best talk on AI is someone's first notable contribution.
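To spell out the Bayesian point as a sketch with deliberately made-up numbers (they only illustrate the shape of the update, nothing more):

    # H = "given a genuine risk, the single person most worth listening to is an
    #      outlier blogger with no other notable technical track record".
    prior_odds = 0.01 / 0.99     # assumed: such a world-state is rare a priori
    likelihood_ratio = 2.0       # assumed: a popular blog plus confident claims are
                                 # cheap to produce, so they only weakly favour H
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)        # ~0.02 : 1 -- the odds stay heavily against H

The evidence actually on offer just isn't strong enough to move odds like that very far.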
How is intelligence well specified compared to space travel? We know physics well enough; we know we want to get from point A to point B. With intelligence, we don't even quite know what exactly we want from it. We know of some ridiculously slow towers-of-exponents method, which means precisely nothing.
but brings forward the date by which we must solve it
Does it really? I already explained that if someone makes an automated engineering tool, all users of that tool are at least as powerful as some (U)FAI based upon that engineering tool. Adding an independent will to a tank doesn't make it suddenly win the war against a much larger force of tanks with no independent will.
You are rationalizing the position here. If you actually reason forwards, it is clear that the creation of such tools may, instead, be the life-saver when someone who thought he solved mor...
Less Wrong has discussed the meme of "SIAI agrees on ideas that most people don't take seriously? They must be a cult!"
Awesome, it has discussed this particular 'meme', to the viral transmission of which your words seem to imply it attributes its identification as a cult. Has it, however, discussed good Bayesian reasoning and understood the impact of the statistical fact that even when there is a genuine risk (if there is such a risk), it is incredibly unlikely that the person most worth listening to will be lacking both academic credenti...
It is unclear to me that artificial intelligence adds any risk there, though, that isn't present from natural stupidity.
Right now, look: so many plastics around us, food additives, and other novel substances. Rising cancer rates even after controlling for age. With all the testing, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.
edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say, reproductive drive, secondary sex characteristics, yadda yadda; end result, cosmetic implants. Desire to sell more product; end result, overconsumption. Etc., etc.
Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.
Actually, your scenario already happened... the Fukushima reactor failure: they used computer modelling to simulate the tsunami; it was the 1960s, computers were science woo, and if the computer said so, then it was true.
For more subtle cases, though - see, the problem is the substitution of 'intellectually omnipotent omniscient entity' for 'AI'. If the AI tells you to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via butterfly effects, and that's pretty much intractable.
There are machine learning techniques like genetic programming that can result in black-box models.
Which are even more prone to outputting crap solutions even without being superintelligent.
I'm assuming that the modelling portion is a black box so you can't look inside and see why that solution is expected to lead to a reduction in global temperatures.
Let's just assume that Mister President sits on the nuclear launch button by accident, shall we?
It isn't an amazing novel philosophical insight that type-1 agents 'love' to solve problems in the wrong way. It is a fact of life apparent even in the simplest automated software of that kind. You, of course, also have some pretty visualization of the scenario in which the parameter was minimized o...
See, that's what is so incredibly irritating about dealing with people who lack any domain-specific knowledge. You can't ask it "how can we reduce global temperatures?" in the real world.
You can ask it how to make a model out of data; you can ask it what to do to the model so that such-and-such function decreases; it may try nuking this model (inside the model) and generate such a solution. You have to actually put in a lot of effort, like mindlessly replicating its in-model actions in the real world, for this nuking to happen in the real world. (And you'll also have the model visualization to examine, by the way.)
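A minimal sketch of the kind of thing I mean, with an entirely made-up toy model and a naive random search (not anyone's actual system): the search happily drives the model's knobs to their degenerate extreme, and all of the 'damage' stays inside the model.

    import random

    # Toy "climate" model, entirely made up for illustration: modelled temperature
    # rises with emissions and with a crude "industrial activity" knob.
    def model_temperature(emissions, industry):
        return 14.0 + 0.02 * emissions + 0.01 * industry

    # Naive black-box search over the model's inputs, minimizing modelled temperature.
    best = None
    for _ in range(10000):
        emissions = random.uniform(0.0, 100.0)
        industry = random.uniform(0.0, 100.0)
        temp = model_temperature(emissions, industry)
        if best is None or temp < best[0]:
            best = (temp, emissions, industry)

    # The "solution" is simply to drive everything toward zero inside the model;
    # nothing happens in the real world unless someone mindlessly enacts it.
    print(best)

The wrong-way solution is found inside the model, and it stays there unless a human goes and reproduces it outside.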
I think the problem is conflating different aspects of intelligence into one variable. The three major groups of aspects are:
1: thought/engineering/problem-solving/etc.; it can work entirely within a mathematical model. This we are making steady progress at.
2: real-world volition, especially the will to form the most accurate beliefs about the world. This we don't know how to solve, and don't even need to automate. We ourselves aren't even a shining example of 2, but we generally don't care so much about that. 2 is a hard philosophical problem.
3: Morals.
Even strongly ...
Pretty ordinary meaning: a bunch of people trusting extraordinary claims not backed by any evidence or expert consensus, originating from a charismatic leader who is earning a living off the cultists. Subtype: doomsday. Now, I don't give any plus or minus points for the leader-earning-a-living-off-cultists part, but the general lack of expert concern about the issue is a killer. Experts being people with expertise on a relevant subject (but no doomsday experts allowed; it has to be something practically useful, or at least not all about the doomsday itself. Else you start count...
If it starts worrying more than astronomers do, sure. The 'few' is as in percentile, at the same level of worry.
More generally, if the degree of belief is negatively correlated with achievements in relevant areas of expertise, then the extreme forms of the belief are very likely false. (And just in case: comparing to Galileo is cherry-picking. For each Galileo there's a ton of cranks.)
Yep. The majorly awesome scenario degrades into ads vs. adblock when you consider everything in the future, not just the self-willed robot. As a matter of fact, a lot of work is already put into constructing convincing strings of audio and visual stimuli, and into ignoring those strings.
You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?
Well, let's say in 2022 we have a bunch of tools along the lines of automatic problem solving, unburdened by their own will (not because they were so designed but by simple omission of the immense counterproductive effort). Someone with a bad idea comes around, downloads some open source software, cobbles together so...
Well, there's this implied assumption that a super-intelligence that 'does not share our values' shares our domain of definition of the values. I can make a fairly intelligent proof generator, far beyond human capability if given enough CPU time; it won't share any values with me, not even the domain of applicability; the lack of shared values with it is so profound as to make it not do anything whatsoever in the 'real world' that I am concerned with. Even if it were meta-strategic to the point of potentially e.g. searching for ways to hack into a mainframe ...
I'm kind of dubious that you needed 'beware of destroying mankind' in a physics textbook to get Teller to check whether a nuke could cause thermonuclear ignition of the atmosphere or seawater, but if it is there, I guess it won't hurt.
Here's another reason why I don't like "AI risk": it brings to mind analogies like physics catastrophes or astronomical disasters, and lets AI researchers think that their work is ok as long as they have little chance of immediately destroying Earth. But the real problem is how do we build or become a superintelligence that shares our values, and given this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't ...
Choosing between mathematically equivalent interpretations adds 1 bit of complexity that doesn't need to be added. Now, if EY had derived the Born probabilities from first principles, that'd be quite interesting.
Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff?
What about the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear-ignition-of-the-atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years in advance, beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.
You read fiction; some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.
You assume zero bias? See, the issue is that I don't think you have a whole lot of signal getting through the graph of unknown blocks. Consequently, any residual biases could win the battle.
My point was that when introducing a new idea, the initial examples ought to be optimized to clearly illustrate the idea, not for "important to discuss".
Not a new idea. Basic planning of effort. Suppose I am to try to predict how much income a new software project will bring, knowing that I have bounded time for making this prediction, much shorter than the time to produce the software itself that is to make the income. Ultimately, this rules out a direct rigorous estimate, leaving you with 'look at available examples of similar projects, d...
A very short summary, which is also sort of insulting, so I am having second thoughts about posting it:
Math homework takes time.
See, one thing I never really got about LW. You have some blacklist of biases, which is weird because logic is known to work via a whitelist and rigour in using just the whitelisted reasoning. So you supposedly get rid of biases (opinions on this really vary). You still haven't gained some ultra-powers that would instantly get you through the enormous math homework that is predicting anything to any extent whatsoever...
Empirical data needed (ideally the success rate on non-self-administered metrics).
I'd heard so too; then I followed the news on Fukushima, and the cleanup workers were treated worse than the Chernobyl cleanup workers, complete with a lack of dosimeters, food, and (guessing with a prior from the above) replacement respirators - you need to replace that stuff a lot, but unlike food you can just reuse it and pretend all is fine. (And the tsunami is no excuse.)
I think the issue is that our IQ is all too often just like the engine in a car to climb hills with. You can go wherever, including downhill.
Still a ton better than most other places I've been to, though.
You need to keep in mind that we are stuck on this planet, and the super-intelligence is not. I'm not assuming that the super-intelligence will be any more benign than us; on the contrary, the AI can go and burn resources left and right and eat Jupiter, which is pretty big and dense (dense means low lag if you somehow build computers inside of it). It's just that for the AI to keep us is easier than for all of mankind to keep one bonsai tree.
Also, we mankind-as-meta-organism are pretty damn short-sighted.
Latency is the propagation delay. Until you have propagated through the hard path at all, the shorter paths are the only paths you could have propagated through. There is no magical way of skipping multiple unknown nodes in a circuit and still obtaining useful values. It'd be very easy to explain in terms of electrical engineering (the calculation of the propagation of beliefs through inference graphs is homologous to the calculation of signal propagation through a network of electrical components; one can construct an equivalent circuit for specific reas...
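A minimal sketch of the point, with toy delays rather than any real inference graph: the output reflects a path only after that path's total delay has elapsed, so early on the shallow paths are all you have.

    # Each path from evidence to conclusion has a delay equal to the sum of its
    # nodes' processing delays (toy numbers for illustration only).
    paths = {
        "shallow heuristic": [1, 1],           # two quick nodes
        "careful derivation": [5, 7, 9, 12],   # long chain of hard steps
    }

    def arrived_by(t):
        # A path contributes to the output only once its total delay has elapsed.
        return [name for name, delays in paths.items() if sum(delays) <= t]

    for t in (2, 10, 40):
        print(t, arrived_by(t))
    # At t=2 or t=10 only the shallow path has arrived; the careful one needs t >= 33.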
Why would those correlations invalidate it, assuming we have controlled for origin and education, and are sampling a society with low disparity (e.g. western Europe)?
Don't forget we have a direct causal mechanism at work: failure to predict. And we are not concerned with the feelings so much as with the regrettable actions themselves (and thus don't need to care if intelligent people e.g. regret for longer, or notice more often that they could have done better, which can easily result in more intelligent people experiencing the feeling o...
Quoting from
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
I had been thinking: could it be that a respected computer vision expert indeed believes that world intentionality will just emerge in the system? That'd be pretty odd. Then I see it is his definition of AI here; it already presumes a robust implementation of world i...