Comment author: Dagon 26 July 2016 09:47:37PM 3 points [-]

What is actually the problem?

There are lots of problems. If I had to pick only one, it would be that you seem to think there is a single, simple problem that can be identified from this transcript.

Comment author: Strangeattractor 26 July 2016 09:19:46PM 3 points [-]

If he kills himself, he hurts only himself. If he's violent toward other people, he can end up doing a lot more damage than that. He mentioned that one incident, but given his casual attitude toward it, there are probably more. It wouldn't surprise me if he was beating his girlfriend. Domestic assault (I call it domestic because it was against someone he lived with, even though a housemate is not as common a target as a partner or child) is a huge, huge, huge warning flag. He had a bad day and trouble sleeping, and suddenly someone else has to deal with the consequences of a broken nose for the rest of their life. The consequences for each of them are disproportionate, asymmetric. If he has another bad day, what next?

His girlfriend's life might be in danger.

Comment author: Dagon 26 July 2016 09:46:20PM 2 points [-]

The combination of suicidal thoughts and violence toward others is worse than either alone. There are lots of ways to commit suicide that hurt more people more seriously than one broken nose.

Comment author: Arielgenesis 25 July 2016 05:22:56PM 1 point [-]

Thank you, that was a very nice extension to the story. I should have included the scenario to make her belief relevant. I agree with you that assigning 100% probability is irrational in her case. But if she is not rationally literate enough to express herself in a fuzzy, non-binary way, I think she could maintain rationality by saying "Ceteris paribus, I prefer not to be locked in the same room with Cain, because I believe he is a murderer, because I believe Adam was innocent" (ignoring the ad hominem).

I was under the impression that the gold standard for rationality is falsifiability. However, I now understand that Eve is rational despite unfalsifiability, because she remained Bayesian.

Comment author: Dagon 25 July 2016 09:20:11PM 1 point [-]

I'm still deeply troubled by the focus on the labels "rational" and now "Bayesian", rather than on "winning", "predicting", or "correct".

For epistemic rationality, focus on truth rather than rationality: do these beliefs map to actual contingent states of the universe? Especially for human-granularity beliefs, Bayesian reasoning is really difficult, because you're unlikely to know your priors in any precise way.

For instrumental rationality, focus on decisions: are the actions I'm taking based on these beliefs likely to improve my future experiences?

Comment author: Arielgenesis 25 July 2016 04:01:22AM 1 point [-]

What if we take one step back and suppose Adam didn't die? Eve claims that her belief pays rent because it could be falsified if Adam changed in character. In this scenario, I suppose you would agree that Eve is still rational.

Now, I cannot formulate my arguments properly at the moment, but I think it is weird that Adam's death makes Eve's belief irrational, as per:

So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.

http://lesswrong.com/lw/ss/no_logical_positivist_i/

Comment author: Dagon 25 July 2016 04:08:52PM 2 points [-]

I think you're focusing too much on the label "rational", and not enough on the actual effect of beliefs.

I'll admit I'm closer to logical positivism than Eliezer is, but even if you make the argument (which you haven't) that the model of the universe is simpler (in the Kolmogorov-complexity sense) if you believe Adam killed Abel, it's still not important. Unless you're making predictions and taking actions based on a belief (or on beliefs influenced by that belief), it's neither rational nor irrational; it's irrelevant.

Now, in a somewhat more complicated example, where Eve has to judge Cain's likelihood of murdering her, and thinks the circumstances of the locked room in the past are relevant to her future, there are definite predictions she should be making. Her confidence in Adam's innocence implies Cain's guilt, and she should be concerned.

It's still the case that she cannot possibly have enough evidence for her confidence to be 1.00.

Comment author: Dagon 25 July 2016 03:11:36AM 1 point [-]

This belief pays no rent. It's unfalsifiable precisely because it's irrelevant - there is no prediction that Eve can make which would give different outcomes based on Adam's past behavior. The belief just doesn't matter.

Separately, if she assigns 0.0 probability to anything, she's probably not actually as rational as she claims.
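To spell out why (a standard Bayesian point, not anything specific to this thread): by Bayes' rule,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If the prior $P(H)$ is exactly 0, the numerator is 0, so the posterior is 0 no matter what evidence $E$ shows up; symmetrically, a prior of exactly 1 can never be revised downward. An agent committed to 0.0 or 1.0 has opted out of updating entirely.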

Comment author: rmoehn 21 July 2016 04:53:52AM 0 points [-]

So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?

Also, AI safety research benefits AI research in general and AI research in general benefits humanity. Again only marginal contributions?

Comment author: Dagon 21 July 2016 02:52:40PM 1 point [-]

Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.

Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn't be your primary drive.

Comment author: rmoehn 20 July 2016 06:58:06AM *  0 points [-]

So you think there's not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?

Oh, and why do you consider AI safety a "theoretical [or] unlikely" problem?

Comment author: Dagon 20 July 2016 04:26:03PM 0 points [-]

I think that there's not much more most individuals can do about x-risk as a full-time pursuit than they can as aware and interested civilians.

I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I generally lean toward the higher end of that range) to filter us than a single AI entity, or a small number of them, becoming powerful enough to do so.

Comment author: ChristianKl 19 July 2016 03:44:09PM -1 points [-]

Perhaps you'd do a lot more good with a slight reduction in shipping costs or tiny improvements in safety or enjoyment of some consumer product.

Perhaps you would also do more good by working on a slight increase in shipping costs.

Comment author: Dagon 19 July 2016 09:50:21PM 0 points [-]

Quite. Whatever you consider an improvement to be. Just don't completely discount small, likely improvements in favor of large (existential) unlikely ones.

Comment author: rmoehn 19 July 2016 01:17:15AM 2 points [-]

In the likely case that your marginal contribution to x-risk doesn't save the world

So you think that other people could contribute much more to x-risk, so I should go into areas where I can have a lot of impact? Otherwise, if everyone says »I'll only have a small impact on x-risk. I'll do something else.«, nobody would work on x-risk. Are you trying to get a better justification for work on x-risk out of me? At the moment I only have this: x-risk is pretty important, because we don't want to go extinct (I don't want humanity to go extinct or into some worse state than today). Not many people are working on x-risk. Therefore I do work on x-risk, so that there are more people working on it. Now you will tell me that I should start using numbers.

the fact that you won't consider leaving Kagoshima is an indication that you aren't as fully committed as you claim

What did I claim about my degree of commitment? And yes, I know that I would be more effective at improving the state of humanity if I didn't have certain preferences about family and such.

Anyway, thanks for pushing me towards quantitative reasoning.

Comment author: Dagon 19 July 2016 01:52:53PM 0 points [-]

So you think that other people could contribute much more to x-risk

"marginal" in that sentence was meant literally - the additional contribution to the cause that you're considering. Actually, I think there's not much room for anybody to contribute large amounts to x-risk mitigation. Most people (and since I know nothing of you, I put you in that class) will do more good for humanity by working at something that improves near-term situations than by working on theoretical and unlikely problems.

Comment author: AstraSequi 18 July 2016 06:05:42PM *  1 point [-]

I have some questions on discounting. There are a lot, so I'm fine with comments that don't answer everything (although I'd appreciate it if they do!). I'm also interested in recommendations for a detailed intuitive discussion of discounting, à la EY on Bayes' Theorem.

  • Why do people focus on hyperbolic and exponential discounting? Aren't there other options?
  • Is the primary difference between them the time consistency?
  • Are any types of non-exponential discounting time-consistent?
  • What would it mean to be an exponential discounter? Is it achievable, and if so how?
  • What about different values for the exponent? Is there any way to distinguish between them? What would affect the choice?
  • Does it make sense to have different discounting functions in different circumstances?
  • Why should we discount in the first place?

On a personal level, my intuition is not to discount at all, i.e. my happiness in 50 years is worth exactly the same as my happiness in the present. I'll take $50 right now over $60 next year because I'm accounting for the possibility that I won't receive it, and because I won't have to plan for receiving it either. But if the choice is between receiving it in the mail tomorrow or in 50 years (assuming it's adjusted for inflation, I believe I'm equally likely to receive it in both cases, I don't need the money to survive, there are no opportunity costs, etc), then I don't see much of a difference.

  • Is this irrational?
  • Or is the purpose of discounting to reflect the fact that those assumptions I made won't generally hold?
  • The strongest counterargument I can think of is that I might die and not be able to receive the benefits. My response is that if I die I won't be around to care (anthropic principle). Does that make sense? (The discussions I've seen seem to assume that the person will be alive at both timepoints in any case, so it's also possible this should just be put with the other assumptions.)
  • If given the choice between something bad happening now and in 10 years, I'd rather go through it now (assume there are no permanent effects, I'll be equally prepared, I'll forget about the choice so anticipation doesn't play a role, etc). Does that mean I'm "negative discounting"? Is that irrational?
  • I find that increasing the length of time I anticipate something (like buying a book I really want, and then deliberately not reading it for a year) usually increases the amount of happiness I can get from it. Is that a common experience? Could that explain any of my preferences?

Comment author: Dagon 18 July 2016 11:24:30PM 2 points [-]

my intuition is not to discount at all, i.e. my happiness in 50 years is worth exactly the same as my happiness in the present.

If you separate the utility discount into uncertainty (which isn't actually a discount of a world state; it's a weighting across world-states, and should be separately calculated by any rational agent anyway) and pure time preference, it's pretty reasonable to have no utility discount rate at all.
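One way to write that separation (my notation, not anything committed to above):

$$D(t) = \underbrace{S(t)}_{\text{probability the payoff is actually realized at } t} \times \underbrace{\delta^{t}}_{\text{pure time preference}}$$

Setting $\delta = 1$ (no pure time preference) still leaves $S(t)$ doing real work: that delivery-probability term is exactly what drives the choice of $50 now over $60 next year described above, without discounting the future world state itself.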

It's also reasonable to discount a bit based on diffusion of identity. The thing that calls itself me next year is slightly less me than the thing that calls itself me next week. I do, in fact, care more about near-future me than about far-future me, in the same way that I care a bit more about my brother than I do about a stranger in a faraway land. Somewhat counteracting this is that I expect further-future me to be smarter and more self-aware, so his desires are probably better, in some sense. Depending on your theory of ego value, you can justify a relatively steep discount rate or a negative one.

Hyperbolic discounting is still irrational, as it's self-inconsistent: the same pair of outcomes gets ranked one way from a distance and the opposite way up close.
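A minimal sketch of that inconsistency (the payoffs, delays, and parameters below are my own illustration, not numbers from the thread): an exponential discounter ranks "smaller-sooner vs. larger-later, 30 days apart" the same way no matter how far off the pair sits, while a hyperbolic discounter's ranking flips as the dates approach.

```python
# Sketch: exponential discounting is time-consistent; hyperbolic is not.
# All payoffs, delays, and parameters are illustrative assumptions.

def exponential(delay_days, rate=0.05):
    """Exponential discount factor: (1 - rate) ** delay."""
    return (1 - rate) ** delay_days

def hyperbolic(delay_days, k=1.0):
    """Hyperbolic discount factor: 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay_days)

def prefers_larger_later(discount, small, t_small, large, t_large):
    """True if the discounted larger-later reward beats the smaller-sooner one."""
    return large * discount(t_large) > small * discount(t_small)

# The same 30-day gap, viewed from far away and from up close:
#   far:  $50 in 365 days vs. $60 in 395 days
#   near: $50 tomorrow    vs. $60 in 31 days
for label, fn in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    far = prefers_larger_later(fn, 50, 365, 60, 395)
    near = prefers_larger_later(fn, 50, 1, 60, 31)
    print(f"{label}: larger-later wins from afar? {far}; up close? {near}")

# exponential: False, False -- the ranking never changes, because the ratio of
# the two discount factors depends only on the 30-day gap.
# hyperbolic:  True, False  -- a preference reversal: the plan made from afar
# gets abandoned once the sooner reward is imminent. That is the inconsistency.
```

The exponential ranking can't reverse because the ratio of discount factors depends only on the gap between the two dates; the hyperbolic ratio also depends on how far away the pair is, which is exactly what lets today's plan and tomorrow's impulse disagree.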
