Comment author: Furcas 13 September 2016 07:02:34PM *  0 points [-]

I changed the link to the audio, should work now.

Comment author: Manfred 13 September 2016 07:36:52PM *  0 points [-]

No dice.

Edit: either it works from my phone only, or it works now. Yay!

Comment author: Furcas 13 September 2016 03:03:27PM *  4 points [-]

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment author: Manfred 13 September 2016 06:48:56PM 0 points [-]

Thanks for the pointer, though I can't open the audio file either.

Comment author: Manfred 10 September 2016 11:48:43PM 4 points [-]

I'd blame the MIT press release organ for being clickbait, but the paper isn't much better. It's almost entirely flash with very little substance. This is not to say there's no math - the math just doesn't much apply to the real world. For example, the idea that deep neural networks work well because they recreate the hierarchical generative process for the data is a common misconception.

And then from this starting point you want to start speculating?

Comment author: Houshalter 05 September 2016 09:15:44AM 4 points [-]

I wrote a thing that turned out to be too long for a comment: The Doomsday Argument is even Worse than Thought

Comment author: Manfred 06 September 2016 06:47:09PM 1 point [-]

The problem with the doomsday argument is that it is a correct assignment of probabilities only if you have the very small amount of information specified in the argument. More information can change your predictions - the prediction you would make if you had less information gets overridden by the prediction that uses all your information.

Let's use the example of picking a random tree. Suppose you know about the existence of tree-diseases that make trees sick and more likely to die, and you know that some trees are sick and some are healthy. You pick a random tree and it is ten years old and sick. You now should update your prediction of the average tree age toward 10 years, but you cannot expect that you have picked a point near the middle of this tree's life. Because you know it is sick, you can expect it to die sooner than that.
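As a toy illustration (a hypothetical model with invented numbers, not anything from the thread), a small Monte Carlo sketch can show how the extra observation "the tree is sick" overrides the age-only prediction of remaining lifespan:

```python
import random

# Toy model, all numbers invented: trees have random lifespans, we
# observe a random tree at a random moment of its life, and sickness
# is much more common near the end of life. We then compare the
# expected remaining life given "age ~10" alone vs. "age ~10 and sick".

random.seed(0)

remaining_all, remaining_sick = [], []
for _ in range(100_000):
    lifespan = random.uniform(5, 60)       # total lifespan of this tree
    age = random.uniform(0, lifespan)      # observed at a random age
    if not (9 <= age <= 11):               # condition on "about 10 years old"
        continue
    remaining = lifespan - age
    # Sick trees are assumed far more likely when little life remains.
    sick = random.random() < (0.8 if remaining < 5 else 0.1)
    remaining_all.append(remaining)
    if sick:
        remaining_sick.append(remaining)

avg_all = sum(remaining_all) / len(remaining_all)
avg_sick = sum(remaining_sick) / len(remaining_sick)
print(round(avg_all, 1), round(avg_sick, 1))
```

Under these assumptions the sick subpopulation has a noticeably shorter expected remaining life, even though both groups are the same age: the less-informed prediction gets overridden once the sickness information is used.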

Comment author: ignoranceprior 05 September 2016 02:05:10AM *  2 points [-]

Has anyone here had success with the method of loci (memory palace)? I've seen it mentioned a few times on LW but I'm not sure where to start, or whether it's worth investing time into.

Comment author: Manfred 05 September 2016 05:53:37AM 2 points [-]

Brienne has; see an example blog post here. She'd probably recommend it.

I personally am satisfied with some much simpler memory techniques, like trying to remember context when I remember something (e.g. trying to recall the sight and feel of sitting in a certain classroom to remember the content of a lecture), and using repetition more judiciously (remembering to use people's names right after I hear them is the biggest use, but this is also good for shopping lists, etc.).

I also suspect that practice using any sort of deliberate memorization at all will improve some sort of general deliberate memorization skill, so you might find that practicing mnemonics or method of loci improves your memory in a general way.

Comment author: Manfred 31 August 2016 03:54:52AM *  1 point [-]

I still basically agree with my retracted comment, I'd just like to note that taken at face value, your two equations for B given A really are the same.

The counterfactual difference comes from an implied random variable that decides which branch of the equation we're "using" (in the implied causal process that goes from A to B), and which can remember this information during counterfactual reasoning. But of course it is a simple thing to make this implied random variable an explicit node in your causal graph. This is probably the best resolution.
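To make that concrete, here is a minimal sketch (with invented branch functions, purely for illustration) of promoting the implied branch-selector to an explicit node, so counterfactual reasoning can hold it fixed while varying A:

```python
import random

# Hypothetical two-branch relationship between A and B. The switch S,
# which selects the branch, is made an explicit exogenous node rather
# than being left implicit in a piecewise equation.

def sample_S():
    return random.choice([0, 1])  # exogenous branch-selector node

def B_given(A, S):
    # The same piecewise equation, but the branch choice is now an
    # explicit argument instead of a hidden part of the mechanism.
    return A + 1 if S == 0 else 2 * A

# Factual world: sample the switch once, then compute B from A = 3.
S = sample_S()
B_factual = B_given(3, S)

# Counterfactual "what if A had been 5?": hold S fixed (it "remembers"
# which branch was used) and vary only A.
B_counterfactual = B_given(5, S)
```

The point of the sketch is only that once S is a node of its own, the counterfactual intervention on A leaves S untouched, which is exactly the "memory" described above.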

Comment author: Manfred 30 August 2016 11:38:26PM 0 points [-]

This sounds like a question of how you're choosing to define a causal node. Is it something that's a fixed function of its parents? In which case your hypotheses about the function from A to B are hypotheses over different causal graphs. Or should the function from parents to node be a parameter that you represent inside a causal graph? In which case you need some representation of this distribution.

Either way, I agree that you need more than what you started with to capture the counterfactuals you're thinking of here.

Comment author: turchin 23 August 2016 08:49:05PM *  -1 points [-]

(memetic hazard) ˙sƃuıɹǝɟɟns lɐuɹǝʇǝ ɯnɯıxɐɯ ǝʇɐǝɹɔ oʇ pǝsıɯıʇdo ɹǝʇʇɐɯ sı ɯnıuoɹʇǝɹnʇɹoʇ

Update: added a full description of the idea on my Facebook: https://www.facebook.com/turchin.alexei/posts/10210360736765739?comment_id=10210360769286552&notif_t=feed_comment&notif_id=1472326132186571

Comment author: Manfred 26 August 2016 06:22:03PM 1 point [-]

I find this surprisingly unmotivating. Maybe it's because the only purpose this could possibly have is as blackmail material, and I am pretty good at not responding to blackmail.

Comment author: WhySpace 23 August 2016 06:26:08PM *  2 points [-]

(1) Given: AI risk comes primarily from AI optimizing for things besides human values.

(2) Given: humans already are optimizing for things besides human values. (or, at least besides our Coherent Extrapolated Volition)

(3) Given: Our world is okay.^[CITATION NEEDED!]

(4) Therefore, imperfect value loading can still result in an okay outcome.

This is, of course, not necessarily always the case for any given imperfect value loading. However, our world serves as a single counterexample to the rule that all imperfect optimization will be disastrous.

(5) Given: A maxipok strategy is optimal. ("Maximize the probability of an okay outcome.")

(6) Given: Partial optimization for human values is easier than total optimization. (Where "partial optimization" is at least close enough to achieve an okay outcome.)

(7) ∴ MIRI should focus on imperfect value loading.

Note that I'm not convinced of several of the givens, so I'm not certain of the conclusion. However, the argument itself looks convincing to me. I've also chosen to leave assumptions like "imperfect value loading results in partial optimization" unstated, as part of the definitions of those two terms. However, I'll try to add details to any specific areas, if questioned.

Comment author: Manfred 23 August 2016 11:29:08PM 0 points [-]

1) Sure.
2) Okay.
3) Yup.
4) This is weaselly. Sure, 1-3 are enough to establish that an okay outcome is possible, but they don't really say anything about its probability. You also don't say how powerful the optimization process trying to optimize these values is.

5) Willing to assume for the sake of argument.
6) Certainly true but not certainly useful.
7) Doesn't follow, unless you read 6 in a way that makes it potentially untrue.

All of this would make more sense if you tried to put probabilities to how likely you think certain outcomes are.

In response to comment by Manfred on Identity map
Comment author: turchin 15 August 2016 09:18:01PM *  1 point [-]

In fact, identity is a technical term which should help us solve several new problems that will appear once our older, intuitive ideas of identity stop working: the problems of uploading, human modification, and the creation of copies.

So the problems:

1) Should I agree to be uploaded into a digital computer? Should we use gradual uploading tech? Should I sign up for cryonics?

2) Should I collect data for digital immortality, hoping that a future AI will reconstruct me? Which data is most important? How much? What if my future copy is not exact?

3) Should I agree to the creation of several copies of myself?

4) What about my copies in the multiverse? Should I count them at all? Should I include causally disconnected copies in other universes in my expectation of quantum immortality?

5) How do quantum immortality and destructive uploading interact?

6) Will I die in the case of a deep coma, since my stream of consciousness will be interrupted? If so, should I prefer only local anesthesia for surgery?

7) Am I responsible for the things I did 20 years ago?

8) Should I act now on things which will pay off only in 20 years, like life extension?

So there are many things in my decision making that depend on my ideas of personal identity, and some of them, like digital immortality and taking life-extension drugs, should be implemented now. Some people refuse to record everything about themselves, or to sign up for cryonics, because of their ideas about identity.


The problem with the hope that AI will solve all our problems is that it has a bit of circularity: to create a good AI, we need to know exactly what "good" is. I mean that if we can't verbalize our concept of identity, we will also be poor at verbalizing any other complex idea, including friendliness and CEV.

So I suggest we try our best to create really good definitions of what is important to us, hoping that a future AI will be able to grasp the idea much better from these attempts.

In response to comment by turchin on Identity map
Comment author: Manfred 16 August 2016 03:10:09AM 0 points [-]

Right. I think that one can use one's own concept of identity to solve these problems, but that which you use is very difficult to put into words. Much like your functional definition of "hand," or "heap." I expect that no person is going to write a verbal definition of "hand" that satisfies me, and yet I am willing to accept peoples' judgments on handiness as evidence.

On the other hand, we can use good philosophy about identity-as-concept to avoid making mistakes, much like how we can avoid certain mistaken arguments about morality merely by knowing that morality is something we have, not something imposed upon us, without using any particular facts about our morality.
