
Comment author: MrMind 11 October 2016 01:06:33PM 1 point [-]

Is there a good answer to the question of why we don't donate 100% of our income to charity? I mean, tribalism and near/far thinking are OK as explanations, but is there a good post-hoc justification?

Comment author: gjm 11 October 2016 03:10:30PM -1 points [-]

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

Comment author: Jacobian 11 October 2016 01:11:07PM 0 points [-]

How about: doing evil (even inadvertently) requires coercion. Slavery, Nazis, tying a witch to a stake, you name it. Nothing effective altruists currently do is coercive (except to mosquitoes), so we're probably good. However, if we come up with a world improvement plan that requires coercing somebody, we should A) hear their take on it and B) empathize with them for a bit. This isn't a 100% perfect plan, but it seems to be a decent framework.

Comment author: gjm 11 October 2016 03:09:28PM -1 points [-]

Some argument along these lines may work; but I don't believe that doing evil requires coercion.

Suppose that for some reason I am filled with malice against you and wish to do you harm. Here are some things I can do that involve no coercion.

I know that you enjoy boating. I drill a small hole in your boat, and the next time you go out on the lake your boat sinks and you die.

I know that you are an alcoholic. I leave bottles of whisky around places you go, in the hope that it will inspire you to get drunk and get your life into a mess.

The law where we live is (as in many places) rather overstrict and I know that you -- like almost everyone in the area -- have committed a number of minor offences. I watch you carefully, make notes, and file a report with the police.

I get to know your wife, treat her really nicely, try to give her the impression that I have long been nursing a secret yearning for her. I hope that some day if your marriage hits an otherwise-navigable rocky patch, she will come to me for comfort and (entirely consensually) leave you for me.

I discover your political preferences and make a point of voting for candidates whose values and policies are opposed to them.

I put up posters near where you live, accusing you of horrible things that you haven't in fact done.

I put up posters near where you live, accusing you of horrible things that you have in fact done.

None of these involves coercion unless you interpret that word very broadly. Several of them don't, so far as I can see, involve coercion no matter how broadly you interpret it.

So if you want to be assured of not doing evil, you probably need more firebreaks besides "no coercion".

Comment author: Lumifer 06 October 2016 06:55:08PM *  1 point [-]

But, wait, once you've decided on a course of action.

You are misreading Jacobian. Let me quote (emphasis mine):

whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

but it's not at all clear that it's actually a bad idea.

Such people are commonly called "fanatics".

Comment author: gjm 06 October 2016 11:58:35PM -1 points [-]

You are misreading Jacobian

Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.

Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.

(But I should make clear that you misrepresented Jacobian's position by making it look less unreasonable and less easy for you to attack, so there's something highly creditable about that.)

Comment author: Lumifer 06 October 2016 03:11:50PM 3 points [-]

So, if emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

Comment author: gjm 06 October 2016 06:42:39PM -1 points [-]

The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.

So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".

I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.

Comment author: Lumifer 06 October 2016 05:34:49PM *  2 points [-]

With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.

This required first deciding, basically, that something which looks like a person is actually not one, and so is not worthy of empathy. That is not a trivial barrier to overcome. Without empathy to start with, burning witches is much easier.

Moral progress is the progress of knowledge.

This is a very... contentious statement. There are a lot of interesting implications.

All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

And that is what I'm strongly disagreeing with.

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

Comment author: gjm 06 October 2016 06:21:53PM -1 points [-]

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

Sounds terrible! But, wait, once you've decided on a course of action. The main problem with sociopaths is that they do horrible things and do them very effectively, right? Someone who chooses what to do like a non-sociopath and then executes those plans like a sociopath may sound scary and creepy and all, but it's not at all clear that it's actually a bad idea.

(I am not convinced that Jacobian is actually arguing that you decide on a course of action and then turn yourself into a sociopath. But even that strawman version of what he's saying is, I think, much less terrible than you obviously want readers to think it is.)

Comment author: Houshalter 06 October 2016 06:06:13PM *  5 points [-]

I think it's well within the realm of possibility that it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started. It was only in the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.

I recall notable researchers like Hinton making predictions that "X will take 5 years" and it being accomplished within 5 months. Go is a good example: even a year ago, I think many experts thought it would take another 10 years to beat, and few thought it would be beaten by 2016. In 2010 machine vision was so primitive that it was a running joke about how far AI still had to come:

[embedded image]

In 2015 the best machine vision systems exceeded human performance at object recognition by a significant margin.

Google recently announced a neural-net chip that is 7 years ahead of Moore's law. Granted, that's only in terms of power consumption, and it only runs already-trained models. But it is nevertheless an example of the kind of sudden leap forward in capability. Before that, Google started using farms of GPUs hundreds of times larger than what university researchers have access to.

That's just hardware, though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms: tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution could develop human brains by just some stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advancement, like Google's recent synthetic-gradients paper or hypernetworks.

I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements being made don't generalize to other tasks. That is starting to change, though, as I mentioned in my comment above. DeepMind is focusing on playing games like StarCraft, which requires more attention to planning, recurrence, and reinforcement learning. There is also more focus now on natural language processing, which involves a lot of general-intelligence features.

Comment author: gjm 06 October 2016 06:17:54PM -1 points [-]

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was nearly 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that were generally too computationally expensive until recently and that have become practical as a result of advances in hardware. (That's not quite fair: there has also been plenty of progress in the design and training of these NNs, as a result of having hardware fast enough for them to be worth experimenting with.)

Comment author: ImmortalRationalist 30 September 2016 04:54:12AM 0 points [-]

But why should the probability for lower-complexity hypotheses be any lower?

Comment author: gjm 01 October 2016 01:02:35AM -1 points [-]

But why should the probability for lower-complexity hypotheses be any lower?

It shouldn't, it should be higher.

If you just meant "... be any higher?" then the answer is that if the probabilities of the higher-complexity hypotheses tend to zero, then for any particular low-complexity hypothesis H all but finitely many of the higher-complexity hypotheses have lower probability. (That's just part of what "tending to zero" means.)
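To make that concrete, here's a toy numerical sketch (my own illustration, with an assumed prior, not anything forced by the argument): suppose the hypotheses of complexity n get total prior mass 2^-n, shared equally among, say, 2^n hypotheses of that complexity. Then any fixed low-complexity hypothesis H is eventually more probable than every individual higher-complexity hypothesis:

    # Assumed toy prior: total mass 2^-n for complexity n, shared equally
    # among 2^n hypotheses, so each individual hypothesis of complexity n
    # has probability (2^-n) / (2^n) = 4^-n.
    def prior_of_one_hypothesis(n):
        return 2.0 ** -n / 2 ** n

    p_H = prior_of_one_hypothesis(3)   # a fixed hypothesis H of complexity 3
    for n in range(4, 30):
        # every single hypothesis of complexity n > 3 is less probable than H
        assert prior_of_one_hypothesis(n) < p_H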

Comment author: DataPacRat 29 September 2016 03:52:24PM 0 points [-]

How seriously are you going to take this letter?

Language is a many-splendored thing. Even a simple shopping list contains more information than a mere list of goods; a full letter is far more valuable still. As one fictional character once put it, it's worth looking for the "underneath the underneath"; as another one put it, it's possible to deduce much of modern civilization from a cigarette butt. If you need a specific reason to pay attention to such a letter spelled out for you: it could be examined for clues as to how likely it is that the reanimated fellow would need to spend time in an asylum before being deemed competent to handle his own affairs and released into modern society, or whether it's safe to plan on just letting him crash on my couch for a few days.

And that's without even touching the minor detail that, if a Faerie Queen is running around, then the Devil may not be far behind her, and the resurrectee's concerns may, in fact, be completely justified. :)

PS: I like this scenario on multiple levels. Is there any chance I could convince you to submit it to /r/WritingPrompts, or otherwise do more with it on a fictional level? ;)

Comment author: gjm 29 September 2016 10:00:36PM -2 points [-]

It looks like you've changed the subject a bit -- from whether the letter should be taken seriously in the sense of doing what it requests, to whether it should be taken seriously in the sense of reading it carefully.

Comment author: ImmortalRationalist 29 September 2016 08:20:27PM 0 points [-]

But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, as opposed to each consecutive hypothesis having an arbitrary complexity level?

Comment author: gjm 29 September 2016 09:58:37PM -1 points [-]

Almost all hypotheses have high complexity, but the probabilities of all hypotheses together can only sum to 1, so almost all hypotheses must have low probability. Therefore most high-complexity hypotheses must have low probability.

(To put it differently: let p(n) be the total probability of all hypotheses with complexity n, where I assume we've defined complexity in some way that makes it always a positive integer. Then the sum of the p(n) converges, which implies that the p(n) tend to 0. So for large n the total probability of all hypotheses of complexity n must be small, never mind the probability of any particular one.)

Note: all this tells you only about what happens in the limit. It's all consistent with there being some particular high-complexity hypotheses with high probability.
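For completeness, here is a small sketch of the counting version of that argument (my own illustration, not part of the original): because all the probabilities sum to 1, no more than 1/eps hypotheses can each have probability at least eps, whatever their complexities.

    # If k hypotheses each had probability >= eps, then k * eps <= 1,
    # so k <= 1/eps: only finitely many hypotheses clear any threshold.
    def max_hypotheses_with_probability_at_least(eps):
        return int(1 / eps)

    for eps in (0.1, 0.01, 0.001):
        print(eps, max_hypotheses_with_probability_at_least(eps))
    # 0.1 -> 10, 0.01 -> 100, 0.001 -> 1000; the infinitely many
    # remaining hypotheses all have probability below eps.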

Comment author: 2587 19 September 2016 01:39:52PM 0 points [-]

What does Less Wrong think of this video? "What Is God? - Leo Becomes Absolute Infinity (Aka God) - All Of Reality Explained": https://www.youtube.com/watch?v=4VNoe5tn3tg

I also wonder: What do you think of subjective experience?

Comment author: gjm 21 September 2016 02:12:24PM *  -1 points [-]

I think this is the second time within a week or two that someone who's never posted to LW before has come along with a video from this same person, asking "what do you think about this?", and the first time, the person in question turned out to be here not to inquire but to proselytize.

[EDITED because what I initially wrote in the first paragraph wasn't quite what I intended.]

And I think what I've watched of this video (roughly the first 1/3, at double speed) is incredibly unimpressive: this guy took mind-altering drugs and had an experience that made a big impression on him, as people who take mind-altering drugs often do, and now he wants to tell us what an incredible enlightenment he's had. (And he keeps telling us that it's something we won't be able to understand ... and then goes on to try to explain it.)
