I think that's a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means.
More a legacy kind of consideration, really - I do not imagine any meaningful part of myself, other than genes (which frankly I was just borrowing), will live on. But - if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I...
Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is before I have read it - it could be amusing and humility-inducing if I read it and it makes me change my mind on the below (and I will surely report back if that happens).
As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do no...
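For what it's worth, here is a toy version of the SSA arithmetic as I understand it (every number is invented for illustration, and this sketch is mine, not taken from the wiki or the PDF):

```python
# Toy Doomsday-argument arithmetic under the self-sampling assumption (SSA).
# All numbers are illustrative, not claims about the real world.

MY_BIRTH_RANK = 6e10   # very roughly, humans born before me (~60 billion)
N_DOOM_SOON   = 1e11   # total humans ever, if doom comes relatively soon
N_DOOM_LATE   = 1e14   # total humans ever, if humanity lasts far longer

assert MY_BIRTH_RANK <= N_DOOM_SOON  # my rank must be possible under both

# Under SSA, P(my birth rank | N humans total) = 1/N, for any rank <= N.
like_soon = 1 / N_DOOM_SOON
like_late = 1 / N_DOOM_LATE

# With a 50/50 prior over the two hypotheses, Bayes' rule gives:
posterior_soon = like_soon / (like_soon + like_late)
print(f"P(doom soon | my birth rank) = {posterior_soon:.3f}")  # ~0.999
```

The point being that merely observing your own (low) birth rank is supposed to shift the odds heavily toward the "fewer total observers" hypothesis.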
I have an intellectual issue with using "probably" before an event that has never happened before, in the history of the universe (so far as I can tell).
And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo), and a dice throw that results in either paradise or extinction - I'll stick with slow and steady, thanks, unless the odds were overwhelmingly positive. And - I suspect they are overwhelming, but in the opposite direction, because there are far more ways to screw up than to succeed, and once ...
The techniques are useful, in and of themselves, without having to think about utility in creating a friendly AI.
So, yes, by all means, work on better skills.
But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects that would likely be responsible for such are not amenable to being planned for ahead of time.
It seems to me your original proposal is the logical equivalent to "Hey, if we can figure out how to be...
So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or somehow superior to the rest of the population.
But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that indifference turns out to have been genetically rooted, then your genes will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other st...
Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out and sees few alternatives to going back to a life of crime.
I actually think training and New Deal-type employment guarantees for those in poverty are a good idea aside from the whole prison thing - in that attempts to raise people from poverty would likely reduce crime to begin with.
The real issue here - running a prison being a profit-making business - has already been pointed out.
Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.
I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge you may be no more educated in than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about th...
I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end, when I realized what was going on - I left. I am simply saddened by the opportunity cost. FWIW - this is less about the "...
Perhaps instead of the prison, the ex-prisoner should be given the financial incentive to avoid recidivism. Reward good behavior, rather than punish bad.
We could do this by providing training, and giving them reasonable jobs. HA HA! I make myself laugh. Sigh.
It seems to me the issue is less one of recidivism, and more one of the prison-for-profit machine. Rather than address it by trying to make them profit either way (they get paid if the prisoner returns already - this is proposing they get paid if they stay out) - it seems simpler to remove profit as a...
Fair question.
My point is that if improving techniques could take you from (arbitrarily chosen percentages here) a 50% chance that an unfriendly AI would cause an existential crisis, to 25% chance that it would - you really didn't gain all that much, and the wiser course of action is still not to make the AI.
The actual percentages are wildly debatable, of course, but I would say that if you think there is any chance - no matter how small - of triggering ye olde existential crisis, you don't do it - and I do not believe that technique alone could get us a...
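For concreteness, here's the toy arithmetic behind that point (the probabilities are invented, as I said):

```python
# Toy comparison of outcomes; all probabilities are invented for illustration.
P_DOOM_BASELINE = 0.50  # chance an unfriendly AI causes existential catastrophe
P_DOOM_IMPROVED = 0.25  # same chance, after better development techniques
P_DOOM_NO_AI    = 0.00  # don't build it: no AI-caused catastrophe at all

for label, p in [("build it, baseline techniques", P_DOOM_BASELINE),
                 ("build it, improved techniques", P_DOOM_IMPROVED),
                 ("don't build it",                P_DOOM_NO_AI)]:
    print(f"{label:<30} P(extinction) = {p:.2f}")
# Halving the risk sounds impressive, but 0.25 is still enormous next to
# 0.00 - that is the sense in which "you really didn't gain all that much".
```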
The article supports that agricultural diets were worse - but the hunter-gatherers' diets were bad as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important part about agriculture is not that it might be healthier - far from it.
Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have - a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization, and all of the ben...
It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking someone with experience). The concept of "Well - you certainly can narrow it down in some way" is lovely - but you still don't actually know. The incorrect statement would be "I know nothing (about your number)" - but nobody actually says that.
I kinda flip it - we know nothing for sure ...
Actually - I took a closer look. The explanation is perhaps simpler.
Tide doesn't make a stand-alone fabric softener. Or if they do - Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that's not detergent-plus.
So - no point in differentiating. The little ad-man in my head says "We don't sell mere laundry detergent - we sell Tide!"
To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So - the concern is perhaps unfounded.
While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:
http://wiki.lesswrong.com/wiki/I_don%27t_know
The talk page does not exist, and I have no rights to create it, so I will ask here: If I say "I am thinking of a number - what is it?" - would "I don't know" be not only a valid answer, but the only answer, for anyone other than myself?
The assertion the page makes is that "I don't know" is "Something that can't be entirely true if you can even formulate a question." ...
Ah - that's much clearer than your OP.
FWIW - I suspect it violates causality under nearly everyone's standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".
So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so - the flaw lies in orders of infinity. For every wa...
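To be explicit about the quantity argument as I've restated it: with N high-fidelity simulations and one prime reality, a randomly chosen observer is simulated with probability N/(N+1), which approaches 1 as N grows (a sketch, assuming all observers are weighted equally):

```python
# The "sheer quantity" argument in one line: N simulations plus 1 prime
# reality means a randomly selected observer is simulated with
# probability N / (N + 1), which tends to 1 as N grows.
for n_sims in (1, 10, 1_000, 1_000_000):
    p_sim = n_sims / (n_sims + 1)
    print(f"{n_sims:>9} sims -> P(observer is simulated) = {p_sim:.6f}")
```

The "orders of infinity" objection is about what happens when both counts are taken to infinity, where that simple ratio no longer behaves.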
No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.
Then perhaps I simply do not understand the proposal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argu...
Fair enough. I should mention my "Why" was more nutsy-and-boltsy than asking about motive; it would perhaps more accurately have been asked as "What do you observe about lesswrong, as it stands, that makes you believe it can or should be improved?". I am willing to take the desire for it as a given.
The goal of the why, fwiw, was to encourage self-examination, to help perhaps ensure that the "improvement" is just that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I g...
The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design - but I think it is overly optimistic to think that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs that are sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is) - but it still blew up, because not all of shuttle design is software. In the same way, the issue could ...
Thank you. The human element struck me as the "weak link" as well, which is why I am attempting to 'formally prove' (for a pretty sketchy definition of 'formal') that the AI should be left in the box no matter what it says or does - presumably to steel resolve in the face of likely manipulation attempts, and ideally to ensure that if such a situation ever actually happened, "let it out of the box" isn't actually designed to be a viable option. I do see the chance that a human might be subverted via non-logical means - sympathy, or a des...
Case in point - I, for one, would likely not have posted anything whatsoever were it not for Stupid Questions. There is enough jargon here that asking something reasonable can still be intimidating - what if it turns out to be common knowledge? Once you break the ice, it's easier, but count this as a sample of 1 supporting it.
Setting a goal helps clarify thought process and planning; it forces you to step back a bit and look at the work to be done, and the outcome, from a slightly different viewpoint. It also helps you maintain focus on driving toward a result, and gives you the satisfaction of accomplishment when (if) you reach the goal.
So - I'm not sure I want to get along with those who are totally wrong (or who I think are). More power to altruism, you rock, but I wonder sometimes if we do not bring some of this stupidity on ourselves by tolerating and giving voice to idiocy.
I look, for example, at the vaccination situation; I live in Southern California, a hotbed of clueless celebrity bozos who think for some reason they know more about disease and epidemiology than freaking doctors, and who cause real harm - actual loss of human life - to their community, of which my kids are a part...
I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good.
To your own cognition, or just to that of others?
I just got here. I have no experience with the issues you cite, but it strikes me that disengagement does not, in general, change society. If you think ideas, as presented, are wrong - show the evidence, debate, fight the good fight. This is probably one of the few places it might actually be acceptable - you can't lurk on religious boards and try to convince them of things; they mostly canno...
I am new here - and so do not have enough experience to make a judgement call, but I do have a question:
Why do you want to "improve" it? What are the aspects of its current operation that you think are sub-optimal, and why?
I see a lot of interesting suggestions for changes, and a wishlist for features - but I have no inkling if they might "improve" anything at all. I tend to be of the "simpler is better" school, and from the sound of things, it seems things are already pretty good, or at least pretty non-bad?
STORYTIME!
I used...
Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities" - it seems de facto you are the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e. a "fork bomb", in Unix parlance).
I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simul...
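To illustrate the self-limiting point, a minimal sketch (a deliberately tame analogue of a fork bomb, with Python's call stack standing in for the parent reality's compute):

```python
import sys

# Each simulated world that runs its own simulation consumes resources of
# the level above it, so naive nesting exhausts the "prime" budget and
# collapses at some finite depth - it self-limits.

def simulate_world(depth=0):
    try:
        return simulate_world(depth + 1)  # this world runs its own sim
    except RecursionError:
        return depth                      # the level where resources ran out

sys.setrecursionlimit(10_000)             # the prime reality's resource budget
print(f"nesting collapsed after {simulate_world()} levels")
```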
I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long.
I suspect that's the issue, and I suspect AI will not be the panacea you expect it to be; or rather, if AI gets to the point of making human-run research in any field irrelevant - it may well do so in all fields shortly thereafter, so you're right back where you started.
I rather doubt it will happen that way at all; it seems to me that in the foreseeable future, the most likely scenario of compute...
I think there may be an unfounded assumption here - that an unfriendly AI would be the result of some sort of bug, or coding errors, that could be identified ahead of time and fixed.
I rather suspect those sorts of errors would not result in "unfriendly"; they would result in a crashing, nonsensical, or non-functional AI.
Presumably part of the reason the whole friendly/non-friendly thing is an issue is that our models of cognition are crude, and a ton of complex high-order behavior results from emergent properties in a system, not from explicit coding. I ...
Is a 3-minute song worse, somehow, than a 10-minute song? Or a song that plays forever, on a loop, like the soundtrack at Pirates of the Caribbean - is that somehow even better?
The value of a life is more about quality than quantity, although presumably if quality is high, longer is more desirable, at least to a point.
You could argue that with current overpopulation it is unethical to have any children. In which case your genes will be deselected from the gene pool, in favor of those of my children, so it's maybe not a good argument to make.
There's a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes - you are expected to know it's detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the "laundry detergents" section, likely with labels as well, so it's not necessary on the front label.
I am hoping this is not stupid - but there is a large corpus of work on AI, and it is probably faster for those who have already digested it to point out fallacies than it is for me to try to find them. So - here goes:
BOOM. Maybe it's a bad sign when your first post to a new forum gets a "Comment Too Long" error.
I put the full content here - https://gist.github.com/bortels/28f3787e4762aa3870b3#file-aiboxguide-md - what follows is a teaser, intended to get those interested to look at the whole thing
TL;DR - it seems evident to me that the "ke...
Strawman?
"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.
OK - valid point. Not the same.
I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."
They are certainly of value in normal software development, but it seems increasingly likely as time passes without a proper general AI actually being created that such a tas...