Comment author: bortels 03 June 2015 06:42:31AM -1 points [-]

Perhaps instead of the prison, it's the ex-prisoner who should be given the financial incentive to avoid recidivism. Reward good behavior, rather than punish bad.

We could do this by providing training, and giving them reasonable jobs. HA HA! I make myself laugh. Sigh.

It seems to me the issue is less one of recidivism, and more one of the prison-for-profit machine. Rather than address it by trying to make them profit either way (they already get paid if the prisoner returns - this proposes paying them if the prisoner stays out), it seems simpler to remove profit as a motive (i.e. if the state is gonna lock you up, the state has to deal with it; nobody should be doing it as a business). Not that the state is likely to have a better record here.

My take is that we would be better off spending that money on education and health care for the poor, as an effort to avoid having crime be seen as an easy way out of poverty. Ooh, my inner hippie is showing. Time to go hug a tree.

Comment author: owencb 01 June 2015 08:51:15PM 0 points [-]

I'm not sure quite what point you're trying to make:

  • If you're arguing that even with the best attempt in the world we might still get it wrong, I agree.
  • If you're arguing that greater diligence and better techniques won't increase our chances, I disagree.
  • If you're arguing something else, I've missed the point.
Comment author: bortels 03 June 2015 06:26:36AM 0 points [-]

Fair question.

My point is that if improving techniques could take you from (arbitrarily chosen percentages here) a 50% chance that an unfriendly AI would cause an existential crisis, to 25% chance that it would - you really didn't gain all that much, and the wiser course of action is still not to make the AI.

The actual percentages are wildly debatable, of course, but I would say that if you think there is any chance - no matter how small - of triggering ye olde existential crisis, you don't do it - and I do not believe that technique alone could get us anywhere close to that.

The ideas you propose in the OP seem wise, and good for society - and wholly ineffective in actually stopping us from creating an unfriendly AI. The reason is simply that the complexity defies analysis, at least by human beings. The fear is that the unfriendliness arises from unintended design consequences - from unanticipated system effects rather than bugs in code or faulty intent.

It's a consequence of entropy - there are simply far, far more ways for something to get screwed up than for it to be right. So unexpected effects arising from complexity are far, far more likely to cause issues than be beneficial unless you can somehow correct for them - planning ahead only will get you so far.
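
(A toy way to put numbers on that counting argument - a minimal sketch, with the 64-bit state size an arbitrary assumption for illustration:)

    # Of all possible n-bit states, exactly one matches a given "correct"
    # state, so a random perturbation is overwhelmingly likely to be wrong.
    n = 64
    print(1 / 2 ** n)  # ~5.4e-20: the odds a random state happens to be right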

Your OP suggests that we might be more successful if we got more of it right "the first time". But things this complex are not created finished, de novo - they are an iterative, evolutionary task. The training could well be helpful, but I suspect not for the reasons you suggested. The real trick is to design things so that when they go wrong, the system still works correctly. You have to plan for and expect failure, or that inevitable failure is the end of the line.

Comment author: Lumifer 02 June 2015 05:20:34PM 1 point [-]

Why are agricultural diets assumed to always be better than the wide range of possible hunter-gatherer diets that our species has spent megayears on?

Agricultural diets are actually worse and led to a documented decrease in health -- see e.g. here.

Comment author: bortels 03 June 2015 06:09:58AM 1 point [-]

The article supports the claim that agricultural diets were worse - but hunter-gatherer diets were poor as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important thing about agriculture is not that it might be healthier - far from it.

Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have: a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization and all the benefits derived from it - the freed-up workers could spend their time on other things, like permanent structures, research into new technologies, trade, and exploration, that were simply impossible in a hunter-gatherer society. You can afford to be sickish, as a society, if you can have more babies and support a higher population, at least temporarily. (I suspect that beyond this, adapting to the diet was probably a big issue, and continues to be - look at how many people are still lactose intolerant...)

Over time, that allowed agrarian culture to become far better nourished - to the point where sheer abundance causes a whole new set of health issues. I would suggest that today the issues with diet are those of abundance, not agricultural versus hunter-gatherer types of food choices. And, today, with the information we have - you can indeed have a vegan diet, and avoid all or nearly all of the issues the article cites. Technology rocks.

Comment author: Manfred 02 June 2015 08:48:52PM *  2 points [-]

If you, as a human, are thinking of a number, I can narrow it down a great deal from a uniform improper prior. I don't really like that wiki entry, though - if you ask me to guess a number and I say "I don't know," it's sure as heck not because either of us believes or is attempting to imply that I have literally no information about the problem.

I think the way in which that wiki entry is important is that "I don't know" cannot be your only possible answer to a question. If there was a gun to my head, I could give guessing your number a pretty good try. But as triplets of words go, "I don't know" serves a noble and practical purpose.
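
(For concreteness, a minimal sketch of what "narrowing it down" might look like - the candidate range and weights below are invented purely for illustration, not drawn from any data:)

    # An assumed, made-up prior over "think of a number" answers: small
    # positive integers weighted up, 7 weighted most. Even this crude
    # guess beats a uniform improper prior.
    import random

    candidates = list(range(1, 11))
    weights = [1, 1, 2, 1, 1, 1, 4, 1, 1, 2]  # invented, not measured
    print(random.choices(candidates, weights=weights)[0])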

Comment author: bortels 03 June 2015 05:49:14AM 0 points [-]

It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking out someone with experience). The concept of "Well - you certainly can narrow it down in some way" is lovely - but you still don't actually know. The incorrect statement would be "I know nothing (about your number)" - but nobody actually says that.

I kinda flip it - we know nothing for sure (you could be hallucinating or mistaken) - but we are pretty confident about a great many things, and can become more confident. So long as we follow up "I don't know" with "... but I can think of some ways to try to find out", it strikes me as simple humility.

Amusingly - "I am thinking of a number" - was a lie. So - there's a good chance that however you narrowed it down, you were wrong. Fair's fair - you were given false information you based that on, but still thought you might know more than you actually did. Just something to ponder.

Comment author: ThisSpaceAvailable 02 June 2015 03:43:26AM -1 points [-]

There's a laundry section, with detergent, fabric softeners, and other laundry-related products. I don't think the backs generally say what the product is, and even if they do, that's not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the "detergent" label, which means that they don't notice when they're buying fabric softener or another product.

Comment author: bortels 03 June 2015 05:39:13AM 0 points [-]

Actually - I took a closer look. The explanation is perhaps simpler.

Tide doesn't make a stand-alone fabric softener. Or if they do, Amazon doesn't seem to have it. There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that isn't detergent plus something.

So - no point in differentiating. The little ad-man in my head says: "We don't sell mere laundry detergent - we sell Tide!"

To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So the concern is perhaps unfounded.

Comment author: bortels 01 June 2015 11:09:43PM 1 point [-]

While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:

http://wiki.lesswrong.com/wiki/I_don%27t_know

The talk page does not exist, and I have no rights to create it, so I will ask here: If I say "I am thinking of a number - what is it?" - would "I don't know" be not only a valid answer, but the only answer, for anyone other than myself?

The assertion the page makes is that "I don't know" is "Something that can't be entirely true if you can even formulate a question" - but this seems a counterexample.

I understand the point that is trying to be made - that "I don't know" is often said even when you actually could narrow down your guess a great deal - but the assertion given is only partially correct, and if you base arguments on a string of mostly correct things, you can still end up wildly off-course in the end.

Am I perhaps applying rigor where it is inappropriate? Perhaps this is taken out of context?

Comment author: Eitan_Zohar 25 May 2015 11:23:53AM *  1 point [-]

Look, I explained the details in the OP. Create a lot of Earths and hope that yours turns out to be one of them. That already violates causality, according to your standards. I don't see much of a way to make it clearer.

Comment author: bortels 01 June 2015 09:34:53PM 0 points [-]

Ah - that's much clearer than your OP.

FWIW - I suspect it violates causality under nearly everyone's standards.

You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".

So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
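
(The arithmetic behind that intuition, as a minimal sketch - N here is just a stand-in for however many indistinguishable simulations get run:)

    # With one base reality and N indistinguishable simulations, a randomly
    # chosen observer is in the base reality with probability 1/(N+1),
    # which vanishes as N grows.
    for N in (10, 1_000, 1_000_000):
        print(N, 1 / (N + 1))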

If so - the flaw lies in orders of infinity. For every way you can simulate a world correctly, you can incorrectly simulate it an infinite number of other ways. So if you are in a sim, the chance approaches unity that you are NOT in a simulation of the higher-level reality simulating you. And if it's not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.

The whole thing seems a bit silly anyway - not your argument, but the sim argument - from a physics POV. Unless we are actually in a sim right now and our understanding of physics is fundamentally broken, doing what's suggested would take more time and energy than has ever existed or ever will, and is still mathematically impossible (another orders-of-infinity thing).

Comment author: Eitan_Zohar 01 June 2015 01:25:15PM *  0 points [-]

> Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities" - it seems de facto you are the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e. a "fork bomb" in Unix parlance).

No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.

> I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simulations would simply lack fidelity, or the +1 society running us would go "whoops, that one is spinning up exponentially" and shut us down. If you really think you are in a simulated society, things like this would be tantamount to suicide...

Why? The terminal point is creation of FAI. But they wouldn't shut down the humans of the simulation; that would defeat the whole point of the thing.

> I don't find the Doomsday argument compelling, simply because it assumes something is not the case ("we are in the first few percent of humans born") just because it is improbable.

...so you are arguing that probability doesn't mean anything? Something that will happen in 99.99% of universes can be safely assumed to occur in ours.

Comment author: bortels 01 June 2015 09:17:41PM -1 points [-]

> No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.

Then perhaps I simply do not understand the proposal.

> The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.

This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard? (A citation is fine; you do not need to repeat a well-known argument I am simply ignorant of.)

I am also foggy on terminology - DA and FAI and so on. I don't suppose there's a glossary around. Ok - DA is "Doomsday Argument" from the thread context (which seems silly to me - the SSA seems to be wrong on the face of it, which then invalidates DA).

Comment author: adamzerner 01 June 2015 06:14:10AM *  0 points [-]

My thinking stems from the belief that improving this community would be a high level action that would do a lot of good. Improving the quantity and quality of conversation could 1) help spread rationality and 2) improve the experience for current members. And I think that it could lead to 3) intellectual progress and 4) progress stemming from people working together on projects.

The explanation above is incomplete, but hopefully it communicates the big picture - I see a lot of untapped potential for this community to make important progress in discovering things and achieving things.

Why do I think this? It'll take me a good amount of time to answer that properly and now isn't the time for me to do so, sorry. I plan on posting again with that answer at some point though.

Comment author: bortels 01 June 2015 08:56:33PM 1 point [-]

Fair enough. I should mention my "Why" was more nuts-and-bolts than asking about motive; it would perhaps more accurately have been asked as "What do you observe about LessWrong, as it stands, that makes you believe it can or should be improved?" I am willing to take the desire for it as a given.

The goal of the why, fwiw, was to encourage self-examination - to help ensure that the "improvement" is actually that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I get older I think more and more that most human attempts to "fix" complex things just tend to screw 'em up more.

Imagine an "improvement" where your picture was added as part of your post. There are perhaps some who would consider that an improvement - I, emphatically, would not. Not that you are suggesting that - just that the actual improvements should ideally be agreed upon (or at least tolerable to) most or all of the community, and sometimes that sort of consensus is just impossible.

Comment author: owencb 01 June 2015 12:40:01PM 0 points [-]

I'm not suggesting that the problems would come from what we normally think of as software bugs (though see the suggestion in this comment). I'm suggesting that they would come from a failure to specify the right things in a complex scenario -- and that this problem bears enough similarities to software bugs that they could be a good test bed for working out how to approach such problems.

Comment author: bortels 01 June 2015 08:46:15PM *  0 points [-]

The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design - but I think it is overly optimistic to believe that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is) - but it still blew up, because not all of shuttle design is software. In the same way, the issue could arise from some environmental factor that alters the AI in such a way that it becomes unpredictable - power fluctuations, a bit flip, who knows. The world is a horribly non-deterministic place, from a human POV.

By way of analogy, consider weather prediction. We have worked on it for all of history, we have satellites and supercomputers - and we are still only capable of accurate predictions for a few days or a week, getting less and less accurate as we go. This isn't a case of making a mistake - it is a case of a very complex end state arising from simple beginnings, and of lacking the ability to make perfectly accurate predictions about some things. To put it another way - it may simply be that the problem is not computable, now or with any foreseeable technology.
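
(A minimal sketch of that point, using the logistic map - a standard toy model of chaos; the parameter and starting values here are arbitrary choices for illustration:)

    # A deterministic one-line system whose forecasts still break down:
    # two starting points differing by one part in a million become
    # completely decorrelated within ~50 iterations.
    def logistic(x, r=3.9):  # r = 3.9 puts the map in its chaotic regime
        return r * x * (1 - x)

    a, b = 0.500000, 0.500001
    for _ in range(50):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))  # typically order-1: the forecast has broken down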
