All of Lapsed_Lurker's Comments + Replies

Surely if you provably know what the ideal FAI would do in many situations, a giant step forward has been made in FAI theory?

1Algernoq
Keeping the tone positive and conducive to discussion... It's not at all clear to me what a representative friendly AI would do in any situation.
0ThisSpaceAvailable
Since when has provability been considered a necessary condition for decision making? For instance, before you posted your comment, did you prove that your comment would show up on the discussion board, or did you just find it likely enough to justify the effort? Do you not know what the word "heuristic" means?

BBC Radio: Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.

Drat. I just came here to post that. Still, at least this time I only missed by hours.

You need a different definition for 'blackmail' then. Action X might be beneficial to the blackmailer rather than negative in value and still be blackmail.

2Emile
The whole point of this post is to find a formal definition of "something like blackmail" (maybe "threat" instead of "blackmail" would have been a bit better, but "threat" also means a lot of other different things). I agree that starting with an informal definition of what is meant by "blackmail" here might have been better.

Why not taboo 'blackmail'? That word already has a bunch of different meanings in law and common usage.

4Stuart_Armstrong
I'm trying to define threat/blackmail or similar concepts in decision theory. In the two examples above, one seems clearly negative and the other doesn't, and I can't figure out what the difference is.

Omega gives you a choice of either $1 or $X, where X is either 2 or 100?

It seems like you must have meant something else, but I can't figure it out.

0solipsist
Yes, that's what I mean. I'd like to know what, if anything, is wrong with this argument that no decision theory can be optimal. Suppose that there were a computable decision theory T that was at least as good as all other theories. In any fair problem, no other decision theory could recommend actions with better expected outcomes than the expected outcomes of T's recommended actions.

1. We can construct a computable agent, BestDecisionAgent, using theory T.
2. For any fair problem, no computable agent can perform better (on average) than BestDecisionAgent.
3. Call the problem presented in the grandfather post the Prejudiced Omega Problem. In the Prejudiced Omega Problem, BestDecisionAgent will almost assuredly collect $2.
4. In the Prejudiced Omega Problem, another agent can almost assuredly collect $100.
5. The Prejudiced Omega Problem does not involve an Omega inspecting the source code of the agent.
6. The Prejudiced Omega Problem, like Newcomb's problem, is fair.
7. Contradiction.

I'm not asserting this argument is correct -- I just want to know where people disagree with it. Qiaochu_Yuan's post is related.
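
To make the payoff structure in steps 3 and 4 concrete, here is a minimal sketch in Python. It assumes (the exact mechanism is not spelled out in the thread) that Omega offers a choice of $1 or $X, setting X = 2 if it predicts the chooser runs BestDecisionAgent and X = 100 otherwise:

```python
# Hypothetical sketch of the Prejudiced Omega Problem's payoffs.
# Assumption (mine, not the thread's): Omega offers $1 or $X, with
# X = 2 if it predicts the chooser runs BestDecisionAgent, else X = 100.

def omega_x(predicted_best_decision_agent: bool) -> int:
    """Omega's prejudiced second option."""
    return 2 if predicted_best_decision_agent else 100

def payoff(predicted_best_decision_agent: bool) -> int:
    """Either agent simply takes max($1, $X), so the payoff is fixed
    by Omega's prediction rather than by anything the agent does."""
    return max(1, omega_x(predicted_best_decision_agent))

assert payoff(True) == 2     # step 3: BestDecisionAgent collects $2
assert payoff(False) == 100  # step 4: another agent collects $100
```

If this captures the setup, the disagreement presumably lands on step 6: whether a problem whose payoffs depend only on Omega's prediction of which theory you run, and not on any choice you make, still counts as fair.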

Isn't that steel-man, rather than strong-man?

3prase
The historical Steelman was also a strongman, at least according to Wikipedia.
6Kawoomba
That's the spirit!

Reading that, I thought: "I bet people asking questions like that is why 'Original Sin' got invented".

Of course, the next step is to ask: "Why doesn't the priest drown the baby in the baptismal font, now that its Original Sin is forgiven?"

2MugaSofer
My first thought on reading that was "murder is a sin", which makes the priest seem unwilling to risk hell to save the children. (Incidentally, I have seen actual attempts at answering that question, mostly revolving around theories as to why God didn't simply have us be born directly into heaven.)

I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in

Are there lists like this about? I think I'd like to read about that sort of stuff.

I remember seeing a few AI debates (and debates on other things, sometimes), mostly on YouTube, where they'd just be getting to the point of clarifying what each one actually believes, and then you get: 'agree to disagree'. The end.

Just when the really interesting part seemed to be approaching! :(

For text-based discussions that fail to go anywhere, that brings to mind the 'talking past each other' you mention, or 'appears to be deliberately misinterpreting the other person'.

Has there been any evolution in either of their positions since 2008, or is that the latest we have?

edit Credit to XiXiDu for sending me this OB link, which contains in the comments this YouTube video of a Hanson-Yudkowsky AI debate in 2011. Boiling it down to one sentence, I'd say it amounts to Hanson thinking that a singleton Foom is a lot less likely than Yudkowsky thinks.

Is that more or less what it was in 2008?

0MinibearRex
I think so, but truth be told I've actually never read through all of it myself. All of the bits of it I've seen seem to indicate that they hold similar positions in those debates to their positions in the original argument.

I find it is the downsides of those things that I generally blame for not doing them, though I do own a Bon Jovi CD.

…powers such as precognition (knowledge of the future), telepathy or psychokinesis…

Sounds like a description of magic to me. They could have written it differently if they'd wanted to evoke the impression of super-advanced technologies.

-2Thomas
Yes, I wasn't clear enough. By "this site" I mean Lesswrong, not the NWT. On LW (and on any related site), the NWT could not have gotten the idea that an AI might become magical: only very advanced, or very very very advanced. For an outside observer it may be hard to tell where the difference is, but there always is, and always will be, a fundamental difference. The message "there is no magic" is the loudest message here on Lesswrong, as I see it. The second one, "AI may LOOK LIKE magic", is... well, subordinate to the first. And it is the NWT who doesn't understand this hierarchy.
2MugaSofer
I somehow doubt that meant super-advanced technology - remember, this AI is trapped in a box. I changed it to "psychic powers", since that seems more accurate - high intelligence leading to "psychic powers" is a well-established sci-fi trope.

I hope that happens quick. There are systems in my body that need some re-engineering, lest I die even sooner than the average Englishman.

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for making cheesecake.

-4Luke_A_Somers
... The subtlety of your humor is beyond me, or you are taking the parent way too seriously.

Several comments on the original thread seem to be making a comparison between "I found a complicated machine-thing, something must have made it" and the classic anti-evolution "This looks complicated, therefore God".

I can't quite see how they can leap from one to the other.

3Luke_A_Somers
That was Caledonian. What do you expect?
3Viliam_Bur
Probably they don't find the algorithm for determining whether something is a product of evolution or a product of intelligence satisfactory. Especially when the algorithm is not written explicitly; there is mostly just a suggestion that after some exploring, we would know the difference.

Pieces of metal, gears, doing some work... seems like obvious evidence of intelligent design. Unless we consider the possibility that they somehow evolved in this strange alien biology. Or are they something like a bee hive or a beaver dam -- a product of a life form, but not very intelligently designed? Perhaps the alien "bees" create these metallic gears and assemble the machines instinctively. In which case I would expect all such machines to be rather similar to each other.

The article does not explore this interesting topic deeply enough. It just suggests that it can be done.
4advancedatheist
The Intelligent Design theorists don't seem to understand that (1) their effort to describe biological structures in strictly engineering terms capitulates to what materialists have said for generations, namely, that life operates according to nonspooky mechanical principles; and (2) their influence in propagandizing the view of the human body as a machine will probably help to erode resistance to proposals for re-engineering human biology. In other words, the Intelligent Design idea has the unintentional effect of desacralizing the human body.

So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.

I am worried about "a belief/fact in its class": the class chosen could have an extreme effect on the outcome.

2Endovior
As presented, the 'class' involved is 'the class of facts which fits the stated criteria'. So, the only true facts which Omega is entitled to present to you are those which are demonstrably true, which are not misleading as specified, which Omega can find evidence to prove to you, and which you could verify yourself with a month's work. The only falsehoods Omega can inflict upon you are those which are demonstrably false (a simple test would show they are false), which you do not currently believe, and which you would disbelieve if presented openly. Those are fairly weak classes, so Omega has a lot of room to work with.

OpenOffice file, I think. edit OpenDocument Presentation. You ought to be able to view it with more recent versions of MS Office, it seems.

I was under the impression from reading stuff Gwern wrote that Intrade was a bit expensive unless you were using it a lot. Also, even assuming I made money on it, wouldn't I be liable for tax? I intend to give owning shares via a self-select ISA a go.

3[anonymous]
If Intrade were an efficient market that made use of all of the information in the world, that would be true. People make enough bad bets often enough that it's not too hard to find predictions that are obviously priced wrong.

As a non-USian, my main interest in the election is watching the numbers go up and down on Nate Silver's blog.

8[anonymous]
May I suggest Intrade as a pastime?

Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.

Using Opera Mini, I just delete the cookies (which then requires me to re-login to LW). It was much less annoying when the count-to-nag was 20, rather than 10.

Is this pretty much what gets called 'signalling' on LW? Anything you do in whole or in part to look good to people or because doing otherwise would make people think badly of you?

0evand
No, though they are related. A signal is a costly behavior that predictably correlates with a more difficult to observe attribute. In particular, the cost of performing the behavior normally depends on the attribute(s) in question. For example, it's cheap to tell an interviewer that I'm interested in the job, and can act in a professional manner on the job. Showing up to the interview early and dressed appropriately signals that interest much more effectively, and the interviewer is far more likely to believe the actions than the words as a result.

(While you can fake signaling behaviors, it's usually easier to do the behavior when the underlying attribute is present, so they constitute Bayesian evidence. As in all human things, sometimes this works better than others and the details rapidly get complicated.)

A rational astrology is a behavior you do for purposes of societal approval. In general, that behavior would signal a belief in the common beliefs that underlie that behavior. However, the behavior is not done purely for signaling purposes: it's also done to gain the societal safe harbor protection. If you use the normal medical treatment rather than the quack one, society won't blame you if it fails. This reason is often sufficient to justify the behavior, even ignoring all signaling concerns. It also signals belief in the standard medical establishment. The two effects can be somewhat difficult to disentangle.

(You could also take a more signaling-centric explanation of rational astrologies than I did here. You can explain the decision to visit the regular doctor as a signal of caring about your health, as a signal of non-quack-conspiracy-theorist status, as a signal of general social skills, and as a signal of general scientific knowledge. You can then explain the resultant reaction of society as based on reading those signals, not on the rational astrology conformance directly. However, I think this misses something: the character of my condemnation of a

I'm not sure it counts as an origin story, but after I noticed a lot of discussions/arguments seemed to devolve into arguments about what words meant, or similar, I got the idea this was because we didn't 'agree on our axioms' (I'd studied some maths). Sadly, trying to get agreement on what we each meant by the things we disagreed on didn't seem to work - I think that the other party mostly considered it an underhanded trick and gave up. :(

"One death is a tragedy. One million deaths is a statistic."

If you want to remind people that death is bad, agreed: the death of individuals you know, or feel like you know, is worse than the deaths of lots of people you never met or even saw.

Eulogies on arbitrary people might help with motivation, and if you're doing that you might as well choose one with a minor advantage, like not needing a long introduction to make the reader empathize, rather than choosing purely at random.

Are you suggesting that putting eulogies of famous people on LessWrong is a good idea? That sort of sounds like justifying something you've already decided.

5Armok_GoB
Not quite. I'm saying that GIVEN you want to spend a post reminding people that death is bad, talking about a single death might be more motivating than many. And that GIVEN you want to talk about the death of an arbitrary individual, you might as well choose one likely known to the reader rather than one that is not.

~150,000 other people died today, too. Okay, Armstrong was hugely more famous than any of them, probably the most famous person to die this year, but what did he do for rationality, or AI, or other LessWrong interests? (Which I figure do include space travel, admittedly. Presumably he wasn't signed up for cryogenic preservation.) The post doesn't say.

Yes, death is bad, and Armstrong is/was famous, possibly uniquely famous, but I don't think eulogies of famous people are on-topic.

3Armok_GoB
Eulogies on arbitrary people might help with motivation, and if you're doing that you might as well choose one with a minor advantage, like not needing a long introduction to make the reader empathize, rather than choosing purely at random.
0[anonymous]
Whoops - FIXED - thanks for pointing that out!

Holden Karnofsky thinks superintelligences with utility functions are made out of programs that list options by rank without making any sort of value judgement (basically answering a question), and then pick the one with the most utility.

Isn't 'listing by rank' 'making a (value) judgement'?
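
To illustrate the question, a minimal hypothetical sketch (the option names and utilities below are made up, not anything from Karnofsky): ranking the options already applies the utility function, so the final "pick the top one" step adds no further judgement.

```python
# Hypothetical sketch: "listing options by rank" versus "picking the best".
# The names and numbers below are illustrative only.

options = ["option_a", "option_b", "option_c"]
utility = {"option_a": 3.0, "option_b": 7.5, "option_c": 1.2}

# The "tool" step: rank options by utility. This already uses the
# utility function, i.e. it already makes the value judgement.
ranked = sorted(options, key=lambda o: utility[o], reverse=True)

# The "agent" step: take the top-ranked option. Mechanical, no new judgement.
best = ranked[0]
print(ranked)  # ['option_b', 'option_a', 'option_c']
print(best)    # option_b
```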

In my recollection of just about any place I have eaten in the UK, there is no choice. They only ever have one cola or the other. Is this different in other parts of the world?

I thought that sensitivity might be the answer. Not that hearing that fairly sensitive perception of magnetic fields is possible makes me want the ability enough to stick magnets in my fingers. Yet.

I've heard about other superhuman sensory devices, like the compass-sense belt, though, and the more I hear about this stuff, the cooler it sounds. Perhaps sometime the rising interest and falling cost/inconvenience curves will cross for me. :)

I can see X-ray or terahertz scanners missing a tiny lump of metal, but aren't there a fair number of magnetic scanners in use looking for larger lumps of metal, which I'd think the magnet would interact fairly strongly with?

1drethelin
I don't know about that, but I've been through multiple security checkpoints since getting them and they've never been noticed.

Judging by previous instances, you ought to put in more than just a link and also put [LINK] in the title, or else you are liable to get a bunch of downvotes.

[edit] OK, watched the first video, with people getting little rare-earth magnets put in their fingers so they can feel magnetic fields... Why not just get a magnetic ring? That way you can feel magnetic fields and don't risk medical complications and you don't have to stop for several minutes and explain every time you fly or go through one of those scanners I hear are relatively common in the US. [/edit]

6Bakkot
0drethelin
Security things don't detect them.
7[anonymous]
Not to mention your finger would fry if you ever found yourself in an MRI, though the implant might just as well burst out of your finger before you even lie down.

Well, they say that now. We have something that works better than what we had before. I commend Asimov's essay The Relativity Of Wrong.

Good to read that again. Thanks.

I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.

I'd think that until even one of those little worms with only a couple hundred neurons is uploaded (or maybe a lobster), all evidence of the effectiveness of uploading is theory or fiction.

If computing continues to get cheaper at Moore's Law rates for another few decades, then maybe...
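
As rough arithmetic (assuming, hypothetically, a doubling in computing power per dollar every two years, which is one common statement of the trend):

```python
# Back-of-envelope for "Moore's Law rates for another few decades".
# Assumes a hypothetical 2x improvement per dollar every 2 years.
years = 30
doublings = years / 2
improvement = 2 ** doublings
print(f"~{improvement:,.0f}x cheaper compute after {years} years")  # ~32,768x
```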

More generally, what would folks here consider to be good enough evidence that uploading was worth doing?

Good enough evidence that (properly done) uploading would be a good thing, as opposed to the status quo of tens of thousands of people dying every day, you mean?

[edit] If you want to compare working SENS to uploading, then I'd have to think a lot harder.

2NancyLebovitz
I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.

Wasn't that trick tried with Windows Vista, and people were so annoyed by continually being asked trivial "can I do this?" questions that they turned off the security?

I think that the intention is to make forgetting your password as hard as forgetting how to ride a bicycle, although I only remember the figure of '2 weeks' from reading about this yesterday.

0Decius
It's only as valid as identifying someone by how they ride their bicycle. Any number of neurological factors, including fatigue, could change how someone enters the 'password' provided.

If you mostly solve the 'Ageing' and 'Unnecessary Unhappiness' problems, the youthful, happy populace will probably give a lot more weight to 'Things That Might Kill Everyone'.

I don't know about putting these things into proper categories, but I'm sure I'd be a lot more worried about the (more distant than a few decades) future if I had a stronger expectation of living to see it and I spent less time being depressed.

Just reading the title of this post, TVTropes came to mind, and there it was when I read it. That made me feel both good about making a successful prediction, and worried that I was probably being biased by not remembering all the fleeting predictions that don't come true.

I can't help you there. Not enough detail has survived the years.

It has been more than a decade since then. All I have left are the less-reliable memories-of-memories of the dream. Having said that, I recall the dream being of text coloured like the MUD I was playing, but I am pretty sure that there was only the text. I don't even recall anything that happened in the dream or if I previously did and have forgotten.

I very rarely recall any dreams, but I do remember one time, during a summer I spent playing a lot of MUD (Internet text-based game, primitive ancestor to World of Warcraft), that I had a dream in text.

2Rain
I've dreamed in scrolling text before due to extensive MUD playing. I even wrote an essay about it, though it's a bit embarrassing to read now.
0DanielLC
Isn't it normally hard to read in dreams? How did that work out?
1DataPacRat
May I ask for more details? For example, do you recall whether you were observing text on a screen, or if you were looking at a visual field with just text, or some combination of imagining typing while visualizing what was being described, or the like?

Why would anyone build tools for which the failure mode of the tool just wireheading itself was common?

-2private_messaging
Ugh, because we don't know any other way to do it? You don't need to implement 'real world intentionality' (even if you can, which may well not be the case) to prevent any particular tool from wireheading. You just make it non-general-purpose enough, which simultaneously prevents foom; but if you don't believe in foom, what do you lose?

Well, I realize that personal health is a personal choice in most cases.

You might want to rethink your wording on that one. Perhaps 'personal health status is a consequence of previous choices in many cases' or something. As written it sounds a bit overstated.

0MaoShan
True, I was trying not to step on any more toes at that point.

And yet, several high-status Less Wrongers continue to affirm utilitarianism with equal weight for each person in the social welfare function. I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.

I'm not sure how that answers my question, or follows from it. Can you clarify?

0Jayson_Virissimo
It wasn't meant as an attempt to answer your question. I was pointing out that this isn't only a problem for Danaher.

I am not sure what 'accurate moral beliefs' means. By analogy with 'accurate scientific beliefs', it seems as if Mr Danaher is saying there are true morals out there in reality, which I had not thought to be the case, so I am probably confused. Can anyone clarify my understanding with a brief explanation of what he means?

5JohnD
Well, I suppose I had in mind the fact that any cognitivist metaethics holds that moral propositions have truth values, i.e. are capable of being true or false. And if cognitivism is correct, then it would be possible for one's moral beliefs to be more or less accurate (i.e. to be more or less representative of the actual truth values of sets of moral propositions). While moral cognitivism is most at home with moral realism - the view that moral facts are observer-independent - it is also compatible with some versions of anti-realism, such as the constructivist views I occasionally endorse. The majority of moral philosophers (a biased sample) are cognitivists, as are most non-moral philosophers that I speak to (pure anecdotal evidence). If one is not a moral cognitivist, then the discussion on my blog post will of course be unpersuasive. But in that case, one might incline towards moral nihilism, which could, as I pointed out, provide some support for the orthogonality thesis.
-9Jayson_Virissimo

Not very sure. I've heard all sorts of assertions. I'm pretty sure that sugar and other carbs are a bad idea, since I've been diagnosed as diabetic. Also that too much animal fat and salt are bad - but thinking that things are bad doesn't always stop me indulging :(

The UK government recommends five portions (handful-sized) of different fruit and vegetables per day, but I don't even manage to do that, most days.

Sadly, the last time I got an appointment to talk about my diet, the nurse I had an appointment with turned out to be fatter than I am, and absolute...

0RomeoStevens
When trying to form a dietary habit, it may be useful to eat (close to) the same foods every day for a week or two. Or add a food to eat every day, one at a time, slowly replacing bad foods with better ones.