All of Ben_Welchner's Comments + Replies

In the above examples, there may well be more net harm than gain from staying in an unpleasant relationship or firing a problematic employee. It's pretty case-by-case in nature, and you're not required to ignore your own feelings entirely. If not, yes, utilitarianism would say you'd be "wrong" for indulging yourself at the expense of others.

The same reason fat people can derail trolleys and businesspeople have lifeguard abilities, I'd imagine.

6Luke_A_Somers
New problem: should you spring for the train-derailing-self-destruct-with-an-ejection-seat option on your new car?

You pretty much got it. Eliezer's predicting that response and saying, no, they're really not the same thing. (Tu quoque)

EDIT: Never mind, I thought it was a literal question.

2Ben Pace
I see. Could you articulate how exactly they're not the same thing please?

We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification. -LW Wiki Deletion Policy

Folks are encouraged to downvote liberally on LW, but the flip-side of that is that people will downvote where they might otherwise just move on for fear of offending someone or getting into an argument that doesn't interest them. You might want to be less sensitive if someone brings one of your posts to -1 - it's not really an act of aggression.

0Eliut
This is fun! To tell you the truth (my truth, not the absolute one) I don't care. I am having a blast trying to unravel what (and how) most people write here. Cheers!

I sympathize. One of my professors jokes about having discovered a new optical illusion, then going to the literature and having the incredible good luck that for once nobody else discovered it first.

This all seems to have more to do with rule consequentialism than deontology. This isn't necessarily a bad thing, and rule consequentialism has indeed been considered a halfway point between deontology and act consequentialism, but it's worth noting.

0RogerS
By my understanding, rule consequentialism means choosing rules according to the utility of the expected consequences, whereas deontology argues for a duty to follow a rule for reasons which may have nothing to do with the consequences. Kant's "treat another person as an end in him/herself, not as a means to an end" doesn't mention consequences and the argument for it isn't based on assessment of consequences. Admittedly both sorts of rule may lead to the same outcome in most cases, but in totally unprecedented moral dilemmas it helps to have an idea where the rule comes from. My prejudice is that rule consequentialism is the best basis for public policy, but deontology sometimes better captures the essence of what matters in cases of private morality.

Note to self: it's much more difficult to have original thought than I think it is.

3buybuydandavis
The thing is, to deal with any real problem, one has to work out the details. In practical application, consequentialists will have to deal with all the same facts of reality that deontologists do, and vice versa. From our last go around, I was increasingly wondering whether the difference between consequentialists and deontologists goes to zero in application, and whether they're just arguing over structural commitments in their language model. I am aware that they often come to different and stereotyped conclusions, but that could have more to do with differences in underlying moral preferences, and a failure to truly drive down into the details. My guess is we all have a bit of each in us, but prefer one or the other according to how well it facilitates our preferred conclusions. Rule consequentialism becomes the natural way for someone insisting on consequentialism to try to take care of deontological concerns. They get farther by recognizing that acts are events too. What is a consequence, but an event? If you can have preferences over events, you can have preferences over acts, and you can get to have all the preferences a deontologist does, and still call yourself a consequentialist.

Disliking meetings and reading in a crowded environment doesn't seem like much evidence that you're neither introverted nor extroverted (except that you're not one of Those Nasty Extraverts who supposedly keep fawning over meetings), which doesn't seem like much evidence that the introvert/extrovert split isn't helpful. I can't enjoy parties or meetings, and I prefer to read in silence and work alone.

In accordance with ancient tradition, I took the survey.

If I unpacked "disbelieves in God" to "has not encountered a concept of God they both believed ("did not disbelieve", if you prefer) and did not consider a silly conception of God", would atheism still be meaningless? Would that be a horrible misconception of atheism?

Are you sure there's nothing bundled in with "God is Reality" beyond what you state? Let's say I said "God is Reality. Reality is not sapient and has never given explicit instructions on anything." Would you consider that consistent with your belief that God equals Reality?

I'm not trying for Socratic Method arguing here, I'm just not quite sure where you're coming from.

As a psychology student, I can say with some certainty that Watson is a behaviorist poster boy.

I figured it was because it was a surprising and more-or-less unsupported statement of fact (that turned out to be, according to the only authority anyone cited, false). When I read 'poor people are better long-term planners than rich people due to necessity' I kind of expect the writer to back it up. I would have considered downvoting if it wasn't already downvoted, and my preferences are much closer to socialist than libertarian.

I don't have an explanation for the parent getting upvoted beyond a 'planning is important' moral and some ideological wiggle...

0Gastogh
It paraphrases the bottom line of the metaethics sequence - or what I took to be the bottom line of those posts, anyway. Namely, that one can have values and a naturalistic worldview at the same time.

Caledonian hasn't posted anything since 2009, if you said that in hopes of him responding.

Depends on if you're hallucinating everything or your vision has at least some bearing in the real world. I mean, I'd rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.

0Kindly
Some things (for instance, eating) would definitely be more enjoyable while blind rather than while hallucinating spiders.

It was grammar nitpicking. "The authors where wrong".

-1Alicorn
Also "this papers".
0A1987dM
I had guessed it must be something like that, but I failed to see the typo in the grandparent and changed my mind to the parent being some different joke I didn't get or something. (I've retracted the downvote to the parent.)

Unless you expect some factual, objective truth to arise about how one should define oneself, it seems fair game for defining in the most beneficial way. It's physics all the way down, so I don't see a factual reason not to define yourself down to nothing, nor do I see a factual reason to do so.

0MarkusRamikin
Why yes, when I ask who I am, I am indeed interested in objective truth, or whatever objective truth of the matter may or may not exist. What the relation actually is, between our sense of self, and the-stuff-out-there-in-reality. I don't understand why this seems so outlandish. If identity really were up for grabs like that, then that just seems to me to mean that there really ain't no such critter in the first place, no natural joint of reality at which it would make most sense to carve. In that case that would be what I'd want to believe, rather than invent some illusion that's pleasing or supposedly beneficial.

I'm not talking about SI (which I've never donated money to), I'm talking about you. And you're starting to repeat yourself.

-1private_messaging
I can talk about you too. The statement "That's why outsiders think it's a circlejerk" does not have a 'sizable majority', 'significant minority', 'all', or 'some' qualifier, nor does it have any kind of implied qualifier, nor does it need qualifying with a vague "some"; that is entirely needless verbosity (as the 'some' can range from 0.00001% to 99.999%), and the request to add "some" is clearly rhetorical, which we both realize equally well. (It is the case, though, that I think the most likely case is "significant majority of rational people", i.e. I expect greater than 50% chance of strong negative opinion of SI if it is presented to a rational person). The other day someone told me my argument was shifting like wind.
-1wedrifid
Does that mean it is time to stop feeding him? I had decided when I finished my hiatus recently that the account in question had already crossed the threshold where I could reply to him without predicting that I was just causing more noise.

You guys are only being supposedly 'accurate' when it feels good. I have not said, 'all outsiders', that's your interpretation which you can subsequently disagree with.

You're misusing language by not realizing that most people treat "members of group A think X" as "a sizable majority of members of group A think X", or not caring and blaming the reader when they parse it the standard way. We don't say "LWers are religious" or even "US citizens vote Democrat", even though there's certainly more than one religious p...

-3private_messaging
I do think that the 'sizable majority' hypothesis has not been ruled out, to say the least. SI is working to help build a benevolent ruler bot, to save the world from a malevolent bot. That sounds as crazy as things can be. Prior track record doing anything relevant? None. Reasons for SI to think they can make any progress? None. I think most sceptically minded people do see that kind of stuff in a pretty negative light, but of course that's my opinion, you can disagree. Actually, who cares; SI should just go on, 'fix' what Holden pointed out, increase visibility, and get listed on crackpot/pseudoscience pages.

If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.

0JulianMorrison
Sorry for the slow reply. Hmm. I may be a bit biased because I don't really have a high valuation on being alive as such (which is to say utility[X] is nearly the same as utility[X and Julian is alive] for me, all other things being equal - it's why I am not signed up for cryonics). However I think that any utility calculus that negatively values the fun you're not going to have when inevitably dead is as silly as negatively valuing the fun you didn't get to have because said events preceded your birth, and you inevitably can't extend your life into the past. You get more chance to fulfil your values in the real world by making use of your 2 minutes than by anticipating values that are not going to happen. And I do very much place utility on my values being fulfilled in a real, rather than self deceptive way.

I know I'll probably trigger a flamewar...

Nitpick: LW doesn't actually have a large proportion of cryonicists, so you're not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) 'considering it', but comparing that to the current proportion makes me skeptical they'll sign up.

A decision tree (the entirety of my game theory experience has been a few online videos, so I likely have the terminology wrong), with decision 1 at the top and the end outcomes at the bottom. The sections marked 'max' have the decider trying to pick the highest-value end outcome, and the sections marked 'min' have the decider trying to pick the lowest-value end outcome. The numbers at every level except the bottom propagate up depending on which option will be picked by whoever is currently doing the picking, so if Max and Min maximize and minimize properly, the tree's value is 6. I don't quite remember how the pruning of the three branches works.
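(The propagation described above is the minimax rule, and the branch-cutting is alpha-beta pruning. Below is a minimal sketch in Python, using a hypothetical toy tree rather than the one from the diagram; the node values are made up for illustration.)

```python
# Minimax with alpha-beta pruning on a toy game tree.
# Internal nodes are lists of child subtrees; leaves are end-outcome values.
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):  # leaf: just return its value
        return node
    if maximizing:                  # Max picks the highest child value
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:       # remaining siblings can't change the result: prune
                break
        return best
    else:                           # Min picks the lowest child value
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Hypothetical tree: Max chooses a branch, then Min chooses a leaf.
tree = [[6, 8], [3, 9], [2, 7]]
print(minimax(tree, True))  # -> 6; the leaves 9 and 7 are never examined (pruned)
```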

I'm pretty sure we do see everyone doing it. Randomly selecting a few posts, in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments and the Rationally Irrational comments also have more upvoted than not.

It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.

EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comm...

2AspiringKnitter
Though among LW members, people probably don't need to be encouraged to use basic rationality. If we could just upvote and downvote people's arguments in real life... I'm also considering the possibility that MHD was asking why we don't see everyone using Rationality 101.

He also notes that the experts who'd made failed predictions and employed strong defenses tended to update their confidence, while the experts who'd made failed predictions but didn't employ strong defenses did update.

I assume there's a 'not' missing in one of those.

3Kaj_Sotala
Fixed, thanks.

Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche for weak evidence. Having glided through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt". The bit about Eliezer sighing and saying the null-actor did the right thing?

(No, I wasn't the one who downvoted)

And would newer readers know what "EY" meant?

Given it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.

0ahartell
Probably. When I first started reading LW, it took me a while, I think, to figure out EY, though it is a pretty obvious connection. Anyway, I don't really think it's a big deal, just that it might be sub-optimal.

That's one hell of a grant proposal/foundation.

0Dorikka
That's one hell of a goal.

Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn't a very hive-mindey community, unless you count atheism.

(On the singularity, yes, you're very much in the minority, with the most skeptical quartile expecting it in 2150)

1Bugmaster
Regarding cryonics, you're right and I was wrong, so thanks! But in the interest of pedantry I should point out that among those 96% who did not sign up, many did not sign up simply due to a lack of funds, and not because of any misgivings they have about the process.

In other words, why didn't the story mention its (wealthy, permissive, libertarian) society having other arrangements in such a contentious matter - including, with statistical near-certainty, one of the half-dozen characters on the bridge of the Impossible Possible World?

It was such a contentious issue centuries (if I'm reading properly) ago, when ancients were still numerous enough to hold a lot of political power and the culture was different enough that Akon can't even wrap his head around the question. That's plenty of time for cultural drift to p...

On a similar note, what should be 13.9's solution links to 13.8's solution.

I'm also finding this really interesting and approachable. Thanks very much.

1[anonymous]
Fixed, thank you.

I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it's a lighthearted reference to that, but I can't turn it up by searching. I'm not even sure if it came before this comment.

(Richard_Hollerith2 hasn't commented for over 2.5 years, so you're not likely to get a response from him)

0Normal_Anomaly
I noticed this right after I commented. Oops.

Take for example an agent that is facing the Prisoner’s dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?

The agent's goals aren't changing due to increased rationality, but just because the agent confused him/herself. Even if this is a payment-in-ut...