All of Dmytry's Comments + Replies

Dmytry
20

"You might wish to read someone who disagrees with you:"

Quoting from

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.

I had been thinking: could it be that a respected computer vision expert indeed believes that the system will just spontaneously acquire world intentionality? That'd be pretty odd. Then I see it is his definition of AI here; it already presumes a robust implementation of world i... (read more)

0John_Maxwell
I'm not sure there is a firm boundary between goals respecting events inside the computer and those respecting events outside. Who has made this optimizing-compiler claim that you attack? My impression was that AI paranoia advocates were concerned with efficient cross-domain optimization, and optimizing compilers would seem to have a limited domain of optimization. By the way, what do you think the most compelling argument in favor of AI paranoia is? Since I've been presenting arguments in favor of AI paranoia, here are the points against this position I can think of offhand that seem most compelling:
  • The simplest generally intelligent reasoning architectures could be so complicated as to be very difficult to achieve and improve, so that uploads would come first and even future supercomputers running them would improve them slowly: http://www.overcomingbias.com/2010/02/is-the-city-ularity-near.html
  • I'm not sure that a "good enough" implementation of human values, trained on a huge barrage of moral dilemmas and the solutions humans say they want implemented, somehow sampled from a semi-rigorously defined sample space of moral dilemmas, would be that terrible. Our current universe certainly wasn't optimized for human existence, but we're doing all right at this point.
  • It seems possible that seed AI is very difficult to create by accident, in the same way an engineered virus would be very difficult to create by accident, so that for a long time it is possible for researchers to build a seed AI but they don't do it because of rudimentary awareness of risks (which aren't present today, though).
[+]Dmytry
-110

Yes, this is what is meant by "assert social dominance". The suggestion was to do less of it, though, not more.

Dmytry
-20

What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.

I know this. I am not making an argument here (or rather, trying not to). I'm stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. It is deliberately this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).

7[anonymous]
In that case, you might want to consider rewriting your post. Right now, the crazy messiah vibe is coming through very strongly. Either back it up and stop wasting our time, or rewrite it to assert less social dominance. If you do the latter without the former, people get cranky.
Dmytry
20

value states of the world instead of states of their minds

Easier said than done. Valuing the state of the world is hard; you have to rely on senses.

4[anonymous]
Well, yes, but behind the scenes you need a sensible symbolic representation of the world, with explicitly demarcated levels of abstraction. So, when the system is pathing between 'the world now' and 'the world it wants to get to,' the worlds in which it believes there are a lot of paperclips are in very different parts of state space from the worlds which contain the most paperclips, which is what it's aiming for. Being unable to differentiate would be a bug in the seed AI, one which would not occur later if it did not originally exist.
[+]Dmytry
-80
5[anonymous]
Er... what? I'm a software developer too (in training, anyway). Sometimes I'm wrong about things. It's not unusual, or the fault of the material I was reading when I made the mistake. I'm not even certain you're wrong. What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis. If you want to change my mind, then give me something to work with. EDIT: You're right about one thing -- Less Wrong has a huge image problem; but that's entirely tangential to the question at issue.
Dmytry
20

Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it a utility function in the mathematical sense. Furthermore, just because in English it sounds like a modification of a utility function does not mean that it is mathematically a modification of a utility function. Real-world intentionality seems to be a separate problem from making a system that would figure out how to solve problems (mathematically defined problems), and likely a very hard problem (in the sense of being very difficult to mathematically define).

7CarlShulman
I think I disagree with you, depending on what you mean here. Limited "intentionality" (as in Dennett's intentional stance) shows up as soon as you have a system that selects the best of several actions using prediction algorithms and an evaluation function: a chess engine like Rybka in the context of a game can be modeled well as selecting good moves. That intentionality is limited because the system has a tightly constrained set of actions and only evaluates consequences using a very limited model of the world, but these things can be scaled up. Robust problem-solving and prediction algorithms capable of solving arbitrary problems would be terribly hard, but intentionality would not be much of a further problem.

On the other hand, if we talk about very narrowly defined problems, then systems capable of doing well on those will not be able to address the very economically and scientifically important mass of ill-specified problems. Also, the separability of action and analysis is limited: Rybka can evaluate opening moves, looking ahead a fair way, but it cannot provide a comprehensive strategy to win a game (carrying on to the end) without the later moves. You could put a "human in the loop" who would use Rybka to evaluate particular moves, and then make the actual move, but at the cost of adding a bottleneck (humans are slow, and cannot follow thousands or millions of decisions at once). The more experimentation and interactive learning are important, the less viable the detached analytical algorithm.
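A minimal sketch of the action-selection pattern described above: "intentionality" as nothing more than enumerating actions, predicting their consequences with a (very limited) model, and scoring them with an evaluation function. The names and the toy world below are invented for illustration; this is not code from Rybka or any real engine.

```python
# Minimal sketch of "intentionality" as action selection:
# enumerate candidate actions, predict the resulting state with a
# constrained world model, score it, and pick the best.

def select_action(state, candidate_actions, predict, evaluate):
    """Return the action whose predicted outcome scores highest."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        predicted_state = predict(state, action)   # limited world model
        score = evaluate(predicted_state)          # evaluation function
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage: the "world" is a single number, actions add to it,
# and the evaluator prefers values close to 10.
if __name__ == "__main__":
    chosen = select_action(
        state=7,
        candidate_actions=[-2, -1, 0, 1, 2],
        predict=lambda s, a: s + a,
        evaluate=lambda s: -abs(10 - s),
    )
    print(chosen)  # 2: moves the toy "world" as close to 10 as it can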
Dmytry
00

With all of them? How so?

-1ShardPhoenix
There are two main challenges: complexity of human values and safe self-modification. In order to correctly define the "charity percentage" so that what the AI leaves us is actually desirable, you need to be able to define human values about as well as a full FAI. Self-modification safety is needed so that it doesn't just change the charity value to 0 (which with a sufficiently general optimizer can't be prevented by simple measures like just "hard-coding" it), or otherwise screw up its own (explicit or implicit) utility function. If you are capable of doing all that, you may as well make a proper FAI.
Dmytry
30

If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.

I think you discarded one of the conditionals. I do read Bruce Schneier's blog, or Paul Graham's. Furthermore, it is not about disagreement with the notion of AI risk. It's about keeping the data non-cherry-picked, or at least less cherry-picked.

Dmytry
-30

Thanks. Glad you like it. I did put some work into it. I also have a habit of maintaining epistemic hygiene by not generating a hypothesis first and then cherry-picking examples in support of it later, but that gets a lot of flak outside scientific or engineering circles.

Dmytry
70

To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.

I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself... it does actually make a lot more sense in the context of self-preservation.

[+]Dmytry
-50
3wedrifid
This doesn't make sense as a reply to the context. I'm not sure it makes any sense as a matter of English grammar either.
Dmytry
-10

Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences. The future humans, who may share very few moral values with me, are given nonzero moral utility. The AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can't trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I ... (read more)

0timtyler
That's not so much an assumption as an initial action plan. Many of the denizens here don't want to build artificial people initially. They do want an artificial moral agent - but not one whose experiences are regarded as being intrinsically valuable - at least not straight away. Of course you could build agents with valued experiences - the issue is more whether it is a good idea to do so initially. If you start with a non-person, you could still wind up building synthetic people eventually - if it was agreed that doing so was a good idea. If you look at something like the I, Robot movie, those robots weren't valued much there either. Machines will probably start out being enslaved by humans, not valued as peers.
0Normal_Anomaly
Who said they did this and where? Assuming that's what they meant to say, I would like to go chew them out. More likely you and they got hit by illusion of transparency.
7Viliam_Bur
This seems like you are talking about some existing AI that already has a mechanism for having and evaluating its experiences. But this is not the case. We are discussing how to build an AI, and it seems like a good idea to make an AI without experiences (if such words make sense), so it can't be hurt by doing what we value. And if this were not possible, I assume we would try to make an AI that has goals compatible with ours, so that what makes us happy makes the AI happy too.

AI values will be created by us, just like our values were created by nature. We don't suffer because we don't have a different set of values. (Actually, we do suffer because we have conflicting values, so we are often not able to satisfy all of them, but that's another topic.)

For example, I would feel bad about a future without any form of art. But I would not feel bad about a future without any form of paperclips. Clippy would probably be horrified and, assuming a sympathetic view, ze would feel sorry that the blind gods of evolution have crippled me by denying me the ability to value paperclips. However, I don't feel harmed by the lack of this value, I don't suffer, I am perfectly OK with this situation. So by analogy, if we manage to create the AI with the right set of values, it will be perfectly OK with that situation too.
wedrifid
130

Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences.

Not something done here. If someone else is interested they can find the places this has been discussed previously (or you could do some background research yourself). For my part I'll just explicitly deny that this represents any sort of consensus Less Wrong position, lest the casual reader be misled.

What if you were to assume I am philosophical zombie?

That would be troubling indeed. It would mean I have become a rather confused and incompetent philosopher.

Dmytry
30

It's not irrational, it's just weak evidence.

Why is it necessarily weak? I have found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights; there is much less pollution from privileged hypotheses given wrong priors. I am a computationally bounded agent. I can't process everything.

5Viliam_Bur
It's perfectly OK to give low priors to strange beliefs, like: "Here is EY, a guy from the internet who found a way to save the world, because all scientists are wrong. Everybody listen to him, take him seriously despite his lack of credentials, give him your money and spread his words." However, low does not mean infinitely low. A hypothesis with a low prior can still be saved by sufficient evidence.

For example, the hypothesis that "Washington is the capital city of the USA" also has a very low prior, since there are over 30,000 towns in the USA, and only one of them can be the capital, so why exactly should I privilege the Washington hypothesis? But there happens to be more than enough evidence to override the initially weak prior. So basically the question is how much evidence EY needs before it becomes rational to consider his thoughts seriously (which does not yet mean he is right); how low exactly is this prior?

So... How many people on this planet are putting a comparable amount of time and study into the topic of the values of artificial intelligence? Is he able to convince seemingly rational people, or is he followed by a bunch of morons? Is his criticism of scientific processes just unsubstantiated school-dropout envy, or is he proven right? Etc. I don't pretend to do a Bayesian calculation; it just seems to me that the prior is not that low, and there is enough evidence.

(And by the way, Dmytry, your presence at this website is also weak evidence, isn't it? I guess there are millions of web pages that you do not read and comment on regularly. There are even many computer-related or AI-related pages you don't read, but you do read this one -- why?)
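A worked toy example of the "low prior saved by evidence" point. The prior and likelihood ratios below are invented purely for illustration; they are not estimates of anything about EY or Washington.

```python
# Toy Bayesian update: a hypothesis starts with a low prior
# (roughly 1 in 30,000, like "this particular town is the capital"),
# but a few pieces of moderately strong evidence can overwhelm it.

def update(prior, likelihood_ratio):
    """Update odds with one piece of evidence; return the new probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

p = 1.0 / 30000.0                   # low prior
for lr in (100.0, 100.0, 100.0):    # three independent 100:1 pieces of evidence
    p = update(p, lr)
print(round(p, 4))  # ~0.9709: the weak prior is "saved" by the evidence
```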
4John_Maxwell
Oh please. There's a difference between what makes a useful heuristic for you to decide what to spend time considering and what makes for a persuasive argument in a large debate where participants are willing to spend time hashing out specifics. http://paulgraham.com/disagree.html If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone. No one is expecting you to adopt their priors... Just read and make arguments about ideas instead of people, if you're trying to make an inference about ideas.
Dmytry
20

This is another example of a method of thinking I dislike - thinking in very loaded analogies, and implicit framing as a zero-sum problem. We are stuck on a mud ball with severe resource competition. We are very biased to see everything as a zero- or negative-sum game by default. One could easily imagine a scenario where we expand more slowly than the AI, so that our demands are always less than its charity, which is set at a constant percentage. Someone else winning doesn't imply you are losing.

3ShardPhoenix
What you describe is arguably already a (mediocre) FAI, with all the attendant challenges.
[+]Dmytry
-80
Dmytry
40

Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Not if there is self-selection for coincidence of their biases with Eliezer's. Even worse if the reasoning you outlined is employed to lower risk estimates.

Dmytry
30

e.g. the lone genius point basically amounts to ad hominem

But why is it irrational, exactly?

but empirically, people trying to do things seems to make it more likely that they get done.

As long as they don't lean on this heuristic too hard to choose which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is a case of higher risk but a chance at a slightly better payoff. Are you sure that the latter is what has to be chosen?

0Normal_Anomaly
This line of conversation could benefit from some specificity and from defining what you mean by "explicitly friendly" and "safe". I think of FAI as AI for which there is good evidence that it is safe, so for me, whenever there are multiple AGIs to choose between, the Friendliest one is the lowest-risk one, regardless of who developed it and why. Do you agree? If not, what specifically are you saying?
0John_Maxwell
It's not irrational, it's just weak evidence. I'm not sure exactly what you're asking with the second paragraph. In any case, I don't think the Singularity Institute is dogmatically in favor of friendliness; they've collaborated with Nick Bostrom on thinking about Oracle AI.
Dmytry
00

The hyper-foom is the worst. The cherry-picked filtering of what to advertise is also pretty bad.

2fubarobfusco
I haven't been able to find the expression "hyper-foom" on this site, so I'm not sure what it is that you are picking out for criticism there.
[+]Dmytry
-100
Dmytry
10

For every one of those people you can find one, or ten, or a hundred, or a thousand, who dismissed your cause. Don't go down this road for confirmation; that's how self-reinforcing cults are made.

4wedrifid
I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.
[+]Dmytry
-50
Dmytry
00

The issue is that it is a doomsday cult if one is expected to treat an extreme outlier (on doom belief), who has never done anything notable beyond being a popular blogger, as the best person to listen to. That is an incredibly unlikely situation for a genuine risk. Bonus cultism points for knowing Bayesian inference but not applying it here. Regardless of how real the AI risk is. Regardless of how truly qualified that one outlier may be. It is an incredibly unlikely world-state in which the best case for AI risk would come from someone like that. No matter how fucked up the scientific review process is, it is incredibly unlikely that the world's best AI argument is someone's first notable contribution.

Dmytry
00

How is intelligence well-specified compared to space travel? We know physics well enough; we know we want to get from point A to point B. With intelligence, we don't even quite know what exactly we want from it. We know of some ridiculously slow tower-of-exponents method, which means precisely nothing.

0timtyler
The claim was: inductive inference is just a math problem. If we know how to build a good quality, general-purpose stream compressor, the problem would be solved.
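A rough sketch of what "inductive inference as stream compression" might look like in its crudest form, using zlib as a weak stand-in for the "good quality, general-purpose stream compressor" in question. This is purely illustrative; a serious attempt would need a far better compressor.

```python
# Crude compression-based prediction: guess the next symbol as the one
# that makes the stream-so-far compress best.
import zlib

def predict_next(history: bytes, alphabet: bytes) -> int:
    """Return the candidate byte giving the shortest compressed stream."""
    def cost(candidate: int) -> int:
        return len(zlib.compress(history + bytes([candidate]), 9))
    return min(alphabet, key=cost)

stream = b"ab" * 50
print(chr(predict_next(stream, b"ab")))
# 'a': continues the pattern (ties resolve to the first candidate)
```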
Dmytry
-30

but brings forward the date by which we must solve it

Does it really? I already explained that if someone makes an automated engineering tool, all users of that tool are at least as powerful as some (U)FAI based upon this engineering tool. Adding independent will to a tank doesn't suddenly make it win the war against a much larger force of tanks with no independent will.

You are rationalizing the position here. If you actually reason forwards, it is clear that the creation of such tools may, instead, be the life-saver when someone who thought he solved mor... (read more)

0[anonymous]
Think of the tool and its human user as a single system. As long as the system is limited by the human's intelligence then it will not be as powerful as a system consisting of the same tool driven by a superhuman intelligence. And if the system isn't limited by the human's intelligence then the tool is making decisions, it is an AI, and we're facing the problem of making it follow the operator's will. (And didn't you mean to say "as powerful as any (U)FAI"?) In general, it doesn't make much sense to draw a sharp distinction between tools and wills that use them. How do you draw the line in the case of a self-modifying AI? Reasoning by cooked anecdote? Why speak of tanks and not, for example, automated biochemistry labs? I can imagine such existing in the future. And one of those could win the war against all the other biochemistry labs in the world and the rest of the biosphere too, if it were driven by a superior intelligence.
Dmytry
00

Less Wrong has discussed the meme of "SIAI agrees on ideas that most people don't take seriously? They must be a cult!"

Awesome, so it has discussed this particular 'meme', to whose viral transmission your words seem to imply it attributes its identification as a cult. Has it, however, discussed good Bayesian reasoning, and understood the impact of the statistical fact that even when there is a genuine risk (if there is such a risk), it is incredibly unlikely that the person most worth listening to will be lacking both academic credenti... (read more)

Dmytry
00

It is unclear to me that artificial intelligence adds any risk there, though, that isn't present from natural stupidity.

Right now, look: so many plastics around us, food additives, and other novel substances. Rising cancer rates even after controlling for age. With all the testing, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.

edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say, reproductive drive, secondary sex characteristics, yadda yadda; end result, cosmetic implants. Desire to sell more product; end result, overconsumption. Etc., etc.

Dmytry
10

Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.

Actually, your scenario already happened... the Fukushima reactor failure: they used computer modelling to simulate the tsunami; it was the 1960s, computers were science woo, and if the computer said so, then it was true.

For more subtle cases, though - see, the problem is the substitution of an 'intellectually omnipotent, omniscient entity' for the AI. If the AI tells someone to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via butterfly effect, and that's pretty much intractable.

0Incorrect
I would prefer our only line of defense not be "most stupid solutions are going to look stupid". It's harder to recognize stupid solutions in say, medicine (although there we can verify with empirical data).
Dmytry
00

There are machine learning techniques like genetic programming that can result in black-box models.

Which are even more prone to outputting crap solutions even without being superintelligent.

1Incorrect
Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous. Sorry you're being downvoted. It's not me.
Dmytry
-40

I'm assuming that the modelling portion is a black box so you can't look inside and see why that solution is expected to lead to a reduction in global temperatures.

Let's just assume that mister president sits on the nuclear launch button by accident, shall we?

It isn't an amazing novel philosophical insight that type-1 agents 'love' to solve problems in the wrong way. It is a fact of life, apparent even in the simplest automated software of that kind. You, of course, also have some pretty visualization of what the scenario is where the parameter was minimized o... (read more)

0Incorrect
Of course it isn't. There are machine learning techniques like genetic programming that can result in black-box models. As I stated earlier, I'm not sure humans will ever combine black-box problem solving techniques with self-optimization and attempt to use the product to solve practical problems; I just think it is dangerous to do so once the techniques become powerful enough.
Dmytry
00

See, that's what is so incredibly irritating about dealing with people who lack any domain-specific knowledge. You can't ask it "how can we reduce global temperatures" in the real world.

You can ask it how to make a model out of data; you can ask it what to do to the model so that such-and-such function decreases; it may try nuking the model (inside the model) and generate such a solution. You have to actually put in a lot of effort, like mindlessly replicating its in-model actions in the real world, for this nuking to happen in the real world. (And you'll also have the model visualization to examine, by the way.)

0Incorrect
What if instead of giving the solution "cause nuclear war" it simply returns a seemingly innocuous solution expected to cause nuclear war? I'm assuming that the modelling portion is a black box so you can't look inside and see why that solution is expected to lead to a reduction in global temperatures. If the software is using models we can understand and check ourselves then it isn't nearly so dangerous.
Dmytry
-20

I think the problem is conflating different aspects of intelligence into one variable. The three major groups of aspects are:

1: thought/engineering/problem-solving/etc.; it can work entirely within a mathematical model. This we are making steady progress at.

2: real-world volition, especially the will to form the most accurate beliefs about the world. This we don't know how to solve, and don't even need to automate. We ourselves aren't even a shining example of 2, but we generally don't care so much about that. 2 is a hard philosophical problem.

3: Morals.

Even strongly ... (read more)

0Incorrect
Type 1 intelligence is dangerous as soon as you try to use it for anything practical simply because it is powerful. If you ask it "how can we reduce global temperatures" and "causing a nuclear winter" is in its solution space, it may return that. Powerful tools must be wielded precisely.
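A toy illustration of that failure mode: a blind optimizer over a model whose solution space happens to include a catastrophic lever will pick whichever lever minimizes the objective. The model, actions, and numbers below are entirely made up.

```python
# Toy model of specification gaming: minimize "global temperature"
# over a solution space that includes a catastrophic action.

ACTIONS = {
    "plant_forests":        {"temp_change": -0.3, "civilization_intact": True},
    "solar_geoengineering": {"temp_change": -0.8, "civilization_intact": True},
    "nuclear_winter":       {"temp_change": -5.0, "civilization_intact": False},
}

def naive_optimizer(actions):
    """Pick the action minimizing temperature, ignoring everything else."""
    return min(actions, key=lambda a: actions[a]["temp_change"])

print(naive_optimizer(ACTIONS))
# 'nuclear_winter' -- the objective said nothing about anything else
```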
Dmytry
-10

Pretty ordinary meaning: a bunch of people trusting extraordinary claims, not backed by any evidence or expert consensus, originating from a charismatic leader who earns a living off the cultists. Subtype: doomsday. Now, I don't give any plus or minus points for the leader-earning-a-living-off-cultists part, but the general lack of expert concern about the issue is a killer. Experts being people with expertise on a relevant subject (but no doomsday experts allowed; it has to be something practically useful, or at least not all about the doomsday itself, else you start count... (read more)

0Incorrect
There are two claims the conjunction of which must be true in order for a doomsday scenario to be likely:
  1. self-improving human-level AI is dangerous enough
  2. humans are likely to create human-level AI
I am unsure of 2 but believe 1. Do you disagree with 1?
Dmytry
20

If it starts worrying more than astronomers do, sure. The "few" is as in percentile: at the same level of worry.

More generally, if the degree of belief is negatively correlated with achievements in the relevant areas of expertise, then the extreme forms of the belief are very likely false. (And just in case: comparing to Galileo is cherry-picking. For each Galileo there's a ton of cranks.)

Dmytry
20

Yep. The majorly awesome scenario degrades into ads vs. adblock when you consider everything in the future, not just the self-willed robot. As a matter of fact, a lot of work is already put into constructing convincing strings of audio and visual stimuli, and into ignoring those strings.

3David_Gerard
Superstimuli and the Collapse of Western Civilization. Using such skills to manipulate other humans appears to be what we grew intelligence for, of course. As I note, western civilisation is already basically made of the most virulent toxic memes we can come up with. In the noble causes of selling toothpaste and car insurance and, of course, getting laid. It seems to be what we do now we've more or less solved the food and shelter problems.
Dmytry
10

You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?

Well, let's say in 2022 we have a bunch of tools along the lines of automatic problem solving, unburdened by their own will (not because they were so designed, but by simple omission of an immense, counterproductive effort). Someone with a bad idea comes around, downloads some open-source software, cobbles together so... (read more)

3XiXiDu
This is actually one of Greg Egan's major objections: that superhuman tools come first, and that artificial agency won't make those tools competitive against augmented humans. Further, you can't apply to augmented humans any work done to ensure that an artificial agent is friendly.
8Wei Dai
Well, one thing the self-willed superintelligent AI could do is read your writings, form a model of you, and figure out a string of arguments designed to persuade you to give up your own goals in favor of its goals (or just trick you into doing things that further its goals without realizing it). (Or another human with superintelligent tools could do this as well.) Can you ask your "automatic problem solving tools" to solve the problem of defending against this, while not freezing your mind so that you can no longer make genuine moral/philosophical progress? If you can do this, then you've pretty much already solved the FAI problem, and you might as well ask the "tools" to tell you how to build an FAI.
Dmytry
10

Well, there's this implied assumption that a super-intelligence that 'does not share our values' shares our domain of definition of those values. I can make a fairly intelligent proof generator, far beyond human capability if given enough CPU time; it won't share any values with me, not even the domain of applicability; the lack of shared values with it is so profound as to make it not do anything whatsoever in the 'real world' that I am concerned with. Even if it were meta-strategic to the point of potentially e.g. searching for ways to hack into a mainframe ... (read more)

6Wei Dai
You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?
Dmytry
10

I'm kind of dubious that you needed a 'beware of destroying mankind' note in a physics textbook to get Teller to check whether a nuke could cause thermonuclear ignition of the atmosphere or seawater, but if it is there, I guess it won't hurt.

Wei Dai
110

Here's another reason why I don't like "AI risk": it brings to mind analogies like physics catastrophes or astronomical disasters, and lets AI researchers think that their work is ok as long as they have little chance of immediately destroying Earth. But the real problem is how do we build or become a superintelligence that shares our values, and given this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't ... (read more)

Dmytry
00

Choosing between mathematically equivalent interpretations adds one bit of complexity that doesn't need to be added. Now, if EY had derived the Born probabilities from first principles, that would be quite interesting.
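For what it's worth, the "one bit" figure is just the usual description-length accounting, a sketch under standard minimum-description-length conventions and assuming the interpretations are otherwise equally simple:

```latex
% Specifying one of N otherwise equally simple, predictively equivalent
% interpretations costs \log_2 N extra bits of description length:
L(\text{theory} + \text{choice of interpretation}) = L(\text{theory}) + \log_2 N,
\qquad N = 2 \;\Rightarrow\; \log_2 2 = 1 \text{ bit}.
```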

Dmytry
30

Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff?

What of the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear-ignition-of-the-atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years in advance, beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.

2Wei Dai
Have you seen Singularity and Friendly AI in the dominant AI textbook?
2orthonormal
It's worth discussing an issue as important as cultishness every so often, but as you might expect, this isn't the first time Less Wrong has discussed the meme of "SIAI agrees on ideas that most people don't take seriously? They must be a cult!" ETA: That is, I'm not dismissing your impression, just saying that the last time this was discussed is relevant.
3Incorrect
What complete definition of "cult" are you using here so that I can replace every occurrence of the word by its definition and get a better understanding of your paragraph? That would be helpful to me as many people use this word in different ways and I don't know precisely how you use it.
-1thomblake
Seems more obviously a doomsday non-cult.
-2Emile
So if the US government worries about meteorites hitting earth, it's a doomsday cult?
Dmytry
20
  • Innate fears - Explained here why I'm not too afraid about AI risks.

You read fiction, and some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.

  • Political orientation - Used to be libertarian, now not very political. Don't see how either would bias me on AI risks.

You assume zero bias? See, the issue is that I don't think you have a whole lot of signal getting through a graph of unknown blocks. Consequently, any residual biases could win the battle.

  • Religion - Never
... (read more)
Dmytry
30

My point was that when introducing a new idea, the initial examples ought to be optimized to clearly illustrate the idea, not for "important to discuss".

Not a new idea. Basic planning of effort. Suppose I am to try to predict how much income a new software project will bring, knowing that I have bounded time for making this prediction - much shorter than the time to produce the software itself that is to make the income. That ultimately rules out a direct rigorous estimate, leaving you with 'look at available examples of similar projects, d... (read more)

Dmytry
00

In very short summary (which is also sort of insulting, so I am having second thoughts about posting it):

Math homework takes time.

See, one thing I never really got about LW: so you have some blacklist of biases, which is weird, because logic is known to work via a whitelist and rigour in using just the whitelisted reasoning. So you supposedly get rid of biases (opinions on this really vary). You still haven't gained some ultra-powers that would instantly get you through the enormous math homework which is prediction of anything to any extent whatsoever... (read more)

7Luke_A_Somers
You can't use pure logic to derive the inputs to your purely logical system. That's where identifying biases comes in.
Dmytry
00

Empirical data needed (ideally the success rate on non-self-administered metrics).

2Luke_A_Somers
I hate to say it, but that's kind of the definition of rationality - applying your intelligence to do things you want done instead of screwing yourself over. Note the absence of a claim of being rational.
Dmytry
20

I've heard so too; then I followed the news on Fukushima, and the cleanup workers were treated worse than the Chernobyl cleanup workers, complete with a lack of dosimeters, food, and (guessing with the prior from above) replacement respirators - you need to replace that stuff a lot, but unlike food you can just reuse it and pretend all is fine. (And the tsunami is no excuse.)

Dmytry
00

I think the issue is that our IQ is all too often just like the engine in a car for climbing hills with. You can go wherever, including downhill.

3Viliam_Bur
If IQ is like an engine, then rationality is like a compass. IQ makes one move faster on the mental landscape, and rationality is guiding them towards victory.
Dmytry
20

Still a ton better than most other places I've been to, though.

0A1987dM
I've never been there, but I've read that Japan has much lower disparity.
Dmytry
30

You need to keep in mind that we are stuck on this planet, and the super-intelligence is not. I'm not assuming that the super-intelligence will be any more benign than us; on the contrary, the AI can go and burn resources left and right and eat Jupiter, which is pretty big and dense (dense means low lag if you somehow build computers inside it). It's just that for the AI to keep us is easier than for all of mankind to keep one bonsai tree.

Also, we - mankind as a meta-organism - are pretty damn short-sighted.

Dmytry
20

Latency is the propagation delay. Until you have propagated through the hard path at all, the shorter paths are the only paths you could have propagated through. There is no magical way of skipping multiple unknown nodes in a circuit and still obtaining useful values. It'd be very easy to explain in terms of electrical engineering (the calculation of the propagation of beliefs through an inference graph is homologous to the calculation of signal propagation through a network of electrical components; one can construct an equivalent circuit for specific reas... (read more)
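A minimal sketch of the circuit analogy (my own construction, not a reconstruction of Dmytry's actual model): each edge in the inference graph gets a delay, and a conclusion node's value only becomes available once its slowest required input has arrived, just like signal propagation delay in a circuit. The graph and delay values below are invented.

```python
# Sketch of "belief propagation with latencies": a node's value is only
# available once its slowest required input has arrived -- the analogue
# of propagation delay in an electrical network.

def arrival_times(graph, delays, sources):
    """Earliest time each node has all its inputs (longest-path latency)."""
    times = {node: 0.0 for node in sources}

    def arrival(node):
        if node not in times:
            times[node] = max(
                arrival(parent) + delays[(parent, node)]
                for parent in graph[node]
            )
        return times[node]

    for node in graph:
        arrival(node)
    return times

# A -> C is the fast path; A -> B -> C is the slow, "hard" path.
graph = {"A": [], "B": ["A"], "C": ["A", "B"]}   # node -> list of parents
delays = {("A", "B"): 5.0, ("A", "C"): 1.0, ("B", "C"): 1.0}
print(arrival_times(graph, delays, sources=["A"]))
# {'A': 0.0, 'B': 5.0, 'C': 6.0}: C's value isn't meaningful before t = 6
```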

3Jonathan_Graehl
I still have no idea what your model is ("belief propagation graph with latencies"). It's worth spelling out rigorously, perhaps aided by a simpler example. If we're to talk about your model, then we'll need you to teach it to us.
Dmytry
00

Why would those correlations invalidate it, assuming we have controlled for origin and education, and are sampling a society with low disparity (e.g. western Europe)?

Don't forget we have a direct causal mechanism at work: failure to predict. And we are not concerned with the feelings so much as with the regrettable actions themselves (and thus don't need to care if intelligent people e.g. regret for longer, or notice more often that they could have done better, which can easily result in more intelligent people experiencing the feeling o... (read more)

3teageegeepea
The poor also commit significantly more non-lucrative crime. I found your top-level post hard to understand at first. You may want to add a clearer introduction. When I saw "The issue in brief", I expected a full sentence/thesis to follow and had to recheck to see if I overlooked a verb.
0A1987dM
I wouldn't call present-day western Europe a society with low disparity. Fifteen years ago, maybe.