All of bortels's Comments + Replies

bortels20

Strawman?

"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.

Ok. Valid point. Not the same.

I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."

They are certainly of value in normal software development, but it seems increasingly likely as time passes without a proper general AI actually being created that such a tas... (read more)

bortels00

I think that's a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means.

More a legacy kind of consideration, really - I do not imagine any meaningful part of myself other than genes (which frankly I was just borrowing) living on. But - if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.

bortels00

So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.

The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.

The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?

For a moment, I will assume I... (read more)

bortels00

Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is written before I have read it - it could be amusing and humility-inducing if I read it and it makes me change my mind on the below (and I will surely report back if that happens).

As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do no... (read more)

0gjm
I think you're wrong about "backwards probability". Probabilities describe your state of knowledge (or someone else's, or some hypothetical idealized observer's, etc.). It is perfectly true that "your" probability for some past event known to you will be 1 (or rather something very close to 1 but allowing for the various errors you might be making), but that isn't because there's something wrong with probabilities of past events.

Now, it often happens that you need to consider probabilities that ignore bits of knowledge you now have. Here's a simple example. I have a 6-sided die. I am going to roll the die, flip a number of coins equal to the number that comes up, and tell you how many heads I get. Let's say the number is 2. Now I ask you: how likely is it that I rolled each possible number on the die? To answer that question (beyond the trivial observation that clearly I didn't roll a 1) one part of the calculation you need to do is: how likely was it, given a particular die roll but not the further information you've gained since then, that I would get 2 heads? You will get completely wrong answers if you answer all those questions with "the probability is 1 because I know it was 2 heads".

(Here's how the actual calculation goes. If the result of the die roll was k, then Pr(exactly 2 heads) was (k choose 2) / 2^k, which for k=1..6 goes 0, 1/4, 3/8, 6/16, 10/32, 15/64; since all six die rolls were equiprobable to start with, your odds after learning how many heads are proportional to these or (taking a common denominator) to 0 : 16 : 24 : 24 : 20 : 15, so e.g. Pr(roll was 6 | two heads) is 15/99 = 5/33. Assuming I didn't make any mistakes in the calculations, anyway.)

The SSA-based calculations work in a similar way.
* Consider the possible different numbers of humans there could ever have been (like considering all the possible die rolls).
* For each, see how probable it is that you'd have been human # 70 billion, or whatever the figure is (like consideri
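A minimal Python sketch of gjm's calculation (assuming, as the comment does, a fair die, fair coins, and a uniform prior over rolls) reproduces the 5/33 figure:

```python
# Posterior over die rolls after observing exactly two heads.
from math import comb
from fractions import Fraction

# Pr(exactly 2 heads | roll was k) for k = 1..6
likelihood = {k: Fraction(comb(k, 2), 2**k) for k in range(1, 7)}

# With a uniform prior over rolls, the posterior is just the
# normalized likelihood.
total = sum(likelihood.values())
posterior = {k: p / total for k, p in likelihood.items()}

print(posterior[6])  # 5/33, matching the hand calculation above
```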
bortels00

I have an intellectual issue with using "probably" before an event that has never happened before, in the history of the universe (so far as I can tell).

And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo), and a dice throw that results in either paradise, or extinction - I'll stick with slow steady, thanks, unless the odds were overwhelmingly positive. And - I suspect they are, but in the opposite direction, because there are far more ways to screw up than to succeed, and once ... (read more)

bortels00

The techniques are useful, in and of themselves, without having to think about utility in creating a friendly AI.

So, yes, by all means, work on better skills.

But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects that would likely be responsible for such are not amenable to planning ahead of time.

It seems to me your original proposal is the logical equivalent to "Hey, if we can figure out how to be... (read more)

1ChristianKl
Basically you attack a strawman. Unfortunately, I don't think anybody has proposed an idea of how to solve FAI that's as straightforward as building lightning rods. In computer security there's the idea of "defense in depth": you try to get every layer right and as secure as possible.
bortels20

So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or somehow superior in some way to the rest of the population.

But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that turns out to have been genetically related, then yours will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other st... (read more)

0MrMind
Ahah, no, no particular reason; to the contrary, they're not especially good, and I am in favor of eugenics (applied to those who do not exist yet, not those who are already alive!). Yes, I understand the argument, and probably that's exactly what will happen. On the other side, I feel no special loss pondering that the human gene pool of the future will be composed of this or that sequence of adenine and cytosine. I think that's a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means. I understand that having kids, as unethical as I think it is (that is, mildly), still generates, for the way some people are built, some very strong good emotions, and those are not at all unethical. Everyone has to balance the two, I guess.
bortels00

Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out and sees few alternatives but to go back to a life of crime.

I actually think training and new-deal type employment guarantees for those in poverty is a good idea aside from the whole prison thing - in that attempts to raise people from poverty would likely reduce crime to begin with.

The real issue here - running a prison being a profit-making business - has already been pointed out.

bortels00

Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.

I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge you may be no more educated in than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about th... (read more)

bortels150

I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end when I realized what was going on - I left. I am simply saddened by the lost opportunity cost. FWIW - this is less about the "... (read more)

0Creutzer
On the other hand, you would not and quite possibly could not have spent all that time on learning valuable things. At least some of it would have been used up by some other sort of relaxation.
bortels-10

Perhaps instead of the prison, the ex-prisoner should be given the financial incentive to avoid recidivism. Reward good behavior, rather than punish bad.

We could do this by providing training, and giving them reasonable jobs. HA HA! I make myself laugh. Sigh.

It seems to me the issue is less one of recidivism, and more one of the prison-for-profit machine. Rather than address it by trying to make them profit either way (they get paid if the prisoner returns already - this is proposing they get paid if they stay out) - it seems simpler to remove profit as a... (read more)

0lululu
Fully agreed that this incentive would also be well spent on programs directly for the prisoner. Unfortunately, there is no way that you could convince law makers to consider this. Imagine the headlines: "My Rapist Is Paid More than Me," "Go Directly to Jail, Collect $200," "Pennsylvania Begins New Steal-to-Earn Program," "Don't Qualify for Student Loans? Steal a Car!" People are more comfortable if the money goes to some intermediary.

I would expect prisons are the best group to incentivize because they have the captive audience. If job training works, prisons can earn money by providing job training. If prisoners need reasonable jobs, it would be in the prison's interest to make ties with recruiters or hire a full-time job seeker on the prisoner's behalf.

For the record, also agreed that education and health care are great preventative expenditures, but that is a different system to reform and one with a lot of partisan lines in the sand. I think it would be disproportionately difficult to use incentives to reform those areas because facts don't matter when partisanship starts happening.
6ChristianKl
Giving people who commit crimes money that you don't give the rest of the population seems like a bad idea for all sorts of reasons. Lack of getting caught for a crime also isn't proof of good behavior.
bortels00

Fair question.

My point is that if improving techniques could take you from (arbitrarily chosen percentages here) a 50% chance that an unfriendly AI would cause an existential crisis, to 25% chance that it would - you really didn't gain all that much, and the wiser course of action is still not to make the AI.

The actual percentages are wildly debatable, of course, but I would say that if you think there is any chance - no matter how small - of triggering ye olde existential crisis, you don't do it - and I do not believe that technique alone could get us a... (read more)

0ChristianKl
That's not the choice we are making. The choice we are making is to decide to develop those techniques.
0owencb
I disagree that "you really didn't gain all that much" in your example. There are possible numbers such that it's better to avoid producing AI, but (a) that may not be a lever which is available to us, and (b) AI done right would probably represent an existential eucatastrophe, greatly improving our ability to avoid or deal with future threats.
bortels20

The article supports the claim that agricultural diets were worse - but the hunter-gatherers' diets were poor as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important part about agriculture is not that it might be healthier - far from it.

Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have - a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization, and all of the ben... (read more)

0Lumifer
I don't think this is true. Contemporary hunter-gatherers leading traditional lifestyles are not malnourished or permanently hungry. They certainly have problems (like the parasite load or an occasional famine), but I have a strong impression that their quality and amount of food is fine.

Yes, of course -- the agriculture people did win and take over the world :-) My understanding is that the primary way they won was through breeding faster: nomads have to space their kids because the mother can't carry many infants with her, but settled people don't have that problem; their women could (and did) pop out children every year and basically overwhelmed the nomads. Though what you are saying about the food surplus allowing luxuries like specialized craftsmen, research, etc. is certainly true as well.

One can, but that doesn't mean that all vegans do.
bortels00

It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking someone with experience). The concept of "Well - you certainly can narrow it down in some way" is lovely - but you still don't actually know. The incorrect statement would be "I know nothing (about your number)" - but nobody actually says that.

I kinda flip it - we know nothing for sure ... (read more)

bortels00

Actually - I took a closer look. The explanation is perhaps simpler.

Tide doesn't make a stand-alone fabric softener. Or if they do - Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that's not detergent plus.

So - no point in differentiating. The little Ad-man in my head says "We don't sell mere laundry detergent - we sell Tide!"

To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So - the concern is perhaps unfounded.

bortels20

While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:

http://wiki.lesswrong.com/wiki/I_don%27t_know

The talk page does not exist, and I have no rights to create it, so I will ask here: If I say "I am thinking of a number - what is it?" - would "I don't know" be not only a valid answer, but the only answer, for anyone other than myself?

The assertion the page makes is that "I don't know" is "Something that can't be entirely true if you can even formulate a question." ... (read more)

4Manfred
If you, as a human, are thinking of a number, I can narrow it down a great deal from a uniform improper prior. I don't really like that wiki entry, though - if you ask me to guess a number and I say "I don't know," it's sure as heck not because either of us believes or is attempting to imply that I have literally no information about the problem. I think the way in which that wiki entry is important is that "I don't know" cannot be your only possible answer to a question. If there was a gun to my head, I could give guessing your number a pretty good try. But as triplets of words go, "I don't know" serves a noble and practical purpose.
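A purely illustrative Python sketch of Manfred's point - every weight below is invented for the example - showing how a guesser's "I don't know" can still encode a far-from-uniform prior over the numbers a human is likely to pick:

```python
# Hypothetical prior over a human-chosen number from 1 to 100.
# The weights are made up for illustration: small numbers and
# "favorite" numbers like 7 tend to be over-represented in practice.
import random

candidates = list(range(1, 101))
weights = [5 if n == 7 else 3 if n <= 10 else 1 for n in candidates]

# Even with no further information, guesses drawn from this prior
# will beat a uniform guesser at "what number am I thinking of?".
guess = random.choices(candidates, weights=weights)[0]
print(guess)
```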
bortels00

Ah - that's much clearer than your OP.

FWIW - I suspect it violates causality under nearly everyone's standards.

You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".

So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?

If so - the flaw lies in orders of infinity. For every wa... (read more)

0Fivehundred
Oh god damn it, Lesswrong is responsible for every single premise of my argument. I'm just the first to make it! As for the rest of your post: I have to admit I did not consider this, but I still don't see why they wouldn't just create a less complex physical universe for the simulation. Or maybe I'm misunderstanding you. My brain is feeling more than usually fried at the moment.
bortels-20

No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.

Then perhaps I simply do not understand the proposal.

The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.

This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argu... (read more)

2Fivehundred
I'm not sure that you can avoid picking it up, just by being on this site. http://www.yudkowsky.net/singularity/ai-risk/ You clearly know something I don't.
bortels20

Fair enough. I should mention my "Why" was more nutsy-and-boltsy than asking about motive; it would perhaps more accurately have been asked as "What do you observe about lesswrong, as it stands, that makes you believe it can or should be improved?" I am willing to take the desire for it as a given.

The goal of the why, fwiw, was to encourage self-examination, to help perhaps ensure that the "improvement" is just that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I g... (read more)

bortels00

The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design - but I think it is overly optimistic to think that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs that are sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is) - but it still blew up, because not all of shuttle design is software. In the same way, the issue could ... (read more)

0owencb
I'm not sure quite what point you're trying to make:
* If you're arguing that with the best attempt in the world it might be we still get it wrong, I agree.
* If you're arguing that greater diligence and better techniques won't increase our chances, I disagree.
* If you're arguing something else, I've missed the point.
bortels00

Thank you. The human element struck me as the "weak link" as well, which is why I am attempting to 'formally prove' (for a pretty sketchy definition of 'formal') that the AI should be left in the box no matter what it says or does - presumably to steel resolve in the face of likely manipulation attempts, and ideally to ensure that if such a situation ever actually happened, "let it out of the box" isn't actually designed to be a viable option. I do see the chance that a human might be subverted via non-logical means - sympathy, or a des... (read more)

bortels10

Case in point - I, for one, would likely not have posted anything whatsoever were it not for Stupid Questions. There is enough jargon here that asking something reasonable can still be intimidating - what if it turns out to be common knowledge? Once you break the ice, it's easier, but count this as a sample of 1 supporting it.

bortels00

Setting a goal helps clarify thought process and planning; it forces you to step back a bit and look at the work to be done, and the outcome, from a slightly different viewpoint. It also helps you maintain focus on driving toward a result, and gives you the satisfaction of accomplishment when (if) you reach the goal.

bortels-10

So - I'm not sure I want to get along with those who are totally wrong (or who I think are). More power to altruism, you rock, but I wonder sometimes if we do not bring some of this stupidity on ourselves by tolerating and giving voice to idiocy.

I look, for example, at the vaccination situation; I live in Southern California, a hotbed of clueless celebrity bozos who think for some reason they know more about disease and epidemiology than freaking doctors, and who cause real harm - actual loss of human life - to their community, of which my kids are a part... (read more)

bortels-20

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good.

To your own cognition, or just to that of others?

I just got here. I have no experience with the issues you cite, but it strikes me that disengagement does not, in general, change society. If you think ideas, as presented, are wrong - show the evidence, debate, fight the good fight. This is probably one of the few places it might actually be acceptable - you can't lurk on religious boards and try to convince them of things, they mostly canno... (read more)

bortels00

I am new here - and so do not have enough experience to make a judgement call, but I do have a question:

Why do you want to "improve" it? What are the aspects of its current operation that you think are sub-optimal, and why?

I see a lot of interesting suggestions for changes, and a wishlist for features - but I have no inkling if they might "improve" anything at all. I tend to be of the "simpler is better" school, and from the sound of things, it seems things are already pretty good, or at least pretty non-bad?

STORYTIME!

I used... (read more)

0Adam Zerner
My thinking stems from the belief that improving this community would be a high-level action that would do a lot of good. Improving the quantity and quality of conversation could 1) help spread rationality and 2) improve the experience for current members. And I think that it could lead to 3) intellectual progress and 4) progress stemming from people working together on projects.

The explanation above is incomplete, but hopefully it communicates the big picture - I see a lot of untapped potential for this community to make important progress in discovering things and achieving things. Why do I think this? It'll take me a good amount of time to answer that properly and now isn't the time for me to do so, sorry. I plan on posting again with that answer at some point though.
5VoiceOfRa
The fact that we appear to be selecting against traits like intelligence.
-1passive_fist
Given that most males in our current society (and, indeed, a significant fraction of females) seem to try to delay or indefinitely postpone reproduction - sometimes failing to do so - it doesn't seem that failure to reproduce is a driver of behavioral modification.
bortels-20

Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities" - it seems de facto you are the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e. a "fork bomb" in the Unix parlance).

I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simul... (read more)
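A toy Python sketch of that self-limiting "fork bomb" intuition - the resource fraction is invented purely for illustration - treating each nested simulation as a process that can only ever receive a slice of its parent's capacity:

```python
# Illustrative analogy: nested simulations, like unchecked recursion,
# exhaust the host's resources. Assume (arbitrarily) that each child
# universe gets a tenth of its parent's compute budget.
def simulate(depth, budget):
    if budget < 1:
        return depth  # the nesting bottoms out here
    return simulate(depth + 1, budget // 10)

print(simulate(0, 10**12))  # prints 13: nesting depth is sharply limited
```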

0Fivehundred
No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.

Why? The terminal point is creation of FAI. But they wouldn't shut down the humans of the simulation; that would defeat the whole point of the thing.

...so you are arguing that probability doesn't mean anything? Something that will happen in 99.99% of universes can be safely assumed to occur in ours.
bortels00

I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long.

I suspect that's the issue, and I suspect AI will not be the panacea you expect it to be; or rather, if AI gets to the point of making human-run research in any field irrelevant - it may well do so in all fields shortly thereafter, so you're right back where you started.

I rather doubt it will happen that way at all; it seems to me that in the foreseeable future, the most likely scenario of compute... (read more)

0zslastman
I should be clearer on that score. It's not that I see a high likelihood of a singularity happening in the next 50 years, with Skynet waltzing in and solving everything. Rather I see new methods in Biology happening that render what I'm doing irrelevant, and my training not very useful. An example: lots of people in the 90s spent their entire PhDs sequencing single genes by hand. I feel like what I'm doing is the equivalent.
bortels00

I think there may be an unfounded assumption here - that an unfriendly AI would be the results of some sort of bug, or coding errors that could be identified ahead of time and fixed.

I rather suspect those sorts of flaws would not result in "unfriendly"; they would result in a crash/nonsense/non-functional AI.

Presumably part of the reason the whole friendly/non-friendly thing is an issue is because our models of cognition are crude, and a ton of complex high-order behavior is a result of emergent properties in a system, not from explicit coding. I ... (read more)

0owencb
I'm not suggesting that the problems would come from what we normally think of as software bugs (though see the suggestion in this comment). I'm suggesting that they would come from a failure to specify the right things in a complex scenario -- and that this problem bears enough similarities to software bugs that they could be a good test bed for working out how to approach such problems.
bortels00

Is a 3-minute song worse, somehow, than a 10-minute song? Or a song that plays forever, on a loop, like the soundtrack at Pirates of the Caribbean - is that somehow even better?

The value of a life is more about quality than quantity, although presumably if quality is high, longer is more desirable, at least to a point.

You could argue that with current overpopulation it is unethical to have any children. In which case your genes will be deselected from the gene pool, in favor of those of my children, so it's maybe not a good argument to make.

3MrMind
Meh, I don't have any special attachment to my genes, and I think that those who do should reconsider. After all, why should we? It's not an upload or anything like that, just a special set of DNA code which resembles me only very vaguely. What's the good if they are transmitted instead of some other set of genes?
bortels20

There's a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes - you are expected to know it's detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the "laundry detergents" section, likely with labels as well, so it's not necessary on the front label.

-2ThisSpaceAvailable
There's a laundry section, with detergent, fabric softeners, and other laundry-related products. I don't think the backs generally say what the product is, and even if they do, that's not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the "detergent" label, which means that they don't notice when they're buying fabric softener or another product.
bortels00

I am hoping this is not stupid - but there is a large corpus of work on AI, and it is probably faster for those who have already digested it to point out fallacies than it is for me to try to find them. So - here goes:

BOOM. Maybe it's a bad sign when your first post to a new forum gets a "Comment Too Long" error.

I put the full content here - https://gist.github.com/bortels/28f3787e4762aa3870b3#file-aiboxguide-md - what follows is a teaser, intended to get those interested to look at the whole thing

TL;DR - it seems evident to me that the "ke... (read more)

0gjm
The first thing that's commonly held to be difficult is exploiting it in the box without accidentally letting it out. E.g., it says "if you do X you will solve all the world's hunger problems, and here's why", and you follow its advice, and indeed it does solve the world's hunger problems -- but it also does other things that you didn't anticipate but the AI did. (So exploiting it in the box is not an unproblematic option.)

The second thing that may be difficult in some cases is exploiting it in the box without being persuaded to let it out. This may be true even if you have a perfectly correct reasoned argument showing that it should be exploited in the box but not let out -- because it may be able to play on the emotions of the person or people who have the ability to let it out. (So saying "here is an argument for not letting it out" doesn't mean that there isn't a risk that it will get let out on purpose; someone might be persuaded by that argument, but later counter-persuaded by the AI.)