Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to comment by Science on Universal Hate
Comment author: TiffanyAching 19 January 2017 06:30:07AM 0 points [-]

What about practicing membership? What about identifying as?

Well, what about it? There are people who practice Catholicism and people who don't. There are people who say "I am Catholic" meaning "I actively follow the rules of Catholicism", and people who say "I am Catholic" meaning "I was baptized and confirmed in the Catholic Church when I was a kid". They all go down as Catholics on the census. Practicing Catholics are a subset of the people recorded as Catholics in the world.

How so? What's the relevant difference, and why? Especially when the comparison is with something like "fraudster"?

To be quite honest, simply because I think there's a category of group-memberships that includes things like nationalities and political affiliations and religions, and doesn't include things like "fraudster", "golfer" or "rationalist", and it was the former meaning I intended to convey in my original post. "Group" is clearly too vague a term. If I said "demographics" instead of "groups", would that be clearer?

Why is the size relevant here?

Moral uniformity and broadness of political platform, I'd say. As the party gets larger, the pool of potential beliefs/positions that can be held under that party's banner becomes broader - I accept that those two things don't always go hand in hand, but they usually do in democracies where people are free to choose their party, and in systems where people are less free to choose their party there's a whole other moral aspect to membership. As the range of beliefs or positions that can be held by an individual who still calls themselves an X-member grows, it becomes less accurate to ascribe one specific noxious characteristic to all group members.

And to bring it back round to the initial topic of debate, would you say that it is useful to hate all members of a particular political party given that you thought that membership in it was immoral? Can you give an example? And what about the liars? I'd like to understand your position more clearly.

Comment author: morganism 19 January 2017 06:28:31AM *  0 points [-]

"I found my first gamechanging VR application in a strange place:

A graphing calculator. Meet Calcflow.

It’s a tool that allows you to use your brain’s incredible capacity for interpreting 3D spatial objects to help you learn mathematical concepts. It takes an idea or a formula and makes it into an object, rich with depth and complexity. And then it allows you to see how different variations in mathematical concepts affect this wonderful bizarro world."

https://singularityhub.com/2017/01/18/get-ready-to-love-math-with-this-sweet-vr-calculator/

https://www.oculus.com/experiences/rift/1143046855744783/

Comment author: Dagon 19 January 2017 06:27:39AM 0 points [-]

"don't kill an operator" seems like something that can more easily be encoded into an agent than "allow operators to correct things they consider undesirable when they notice them".

In fact, even a perfectly corrigible agent with such a glaring initial flaw might kill the operator(s) before they can apply the corrections, not because they are resisting correction, but just because it furthers whatever other goals they may have.

Comment author: Science 19 January 2017 05:55:27AM 2 points [-]

Nominal membership in a religion requires no specific action.

What about practicing membership? What about identifying as?

I still think that's a very obviously different kind of category to one like "murderers".

How so? What's the relevant difference, and why? Especially when the comparison is with something like "fraudster"?

a lot of people would argue that for membership in a neo-Nazi party, for example, though I did specify "large" parties up above

Why is the size relevant here? There are numerous parties much worse than "neo-Nazis". Some of them are currently mainstream parties in their countries.

In response to comment by Science on Universal Hate
Comment author: TiffanyAching 19 January 2017 05:37:40AM 1 point [-]

Okay. I assume you mean religions and political parties. Nominal membership in a religion requires no specific action. 90% of Irish people would be considered "Catholic" by virtue of having been baptized and confirmed as children. They need not have taken any specific action as adults to be afforded that designation, nor do they need to be "practising" in any active sense - going to Mass, for example. Many don't. They still go down as Catholics.

In the case of political parties, you're right that an individual needs to register, or vote a certain way, or take some action as an adult to be counted as a "member of that group". I still think that's a very obviously different kind of category to one like "murderers". Of course it's possible to argue that claiming membership of a specific political party is inherently immoral - a lot of people would argue that for membership in a neo-Nazi party, for example, though I did specify "large" parties up above (large as in mainstream, not niche or fringe, one of the main political parties of a nation, containing a decent percentage of that country's population). Is that what you're arguing?

And any comment on the "liars" question?

Comment author: Science 19 January 2017 05:32:17AM 2 points [-]

People object to "All Lives Matter" because it derails the discussion and implies that it's somehow unfair to focus, as you said, on "Whichever Lives Are Most Affected By Police Brutality At The Moment"

But why is it fair to focus on "Whichever Lives Are Most Affected By Police Brutality At The Moment" when that's a tiny subset of the lives being affected by brutality? The number of lives affected by the "brutality" of blacks is much, much larger, yet focusing on that would be racist.

a discussion of sexual violence is cluttered up with comments insisting that everyone recognize "women can commit rape too!"

So if it makes sense to focus on the fact that rapists are more likely to be male why doesn't it make sense to focus on the fact that rapists are more likely to be black and/or Muslim?

It's a cry of not fair along the same lines as "Why can't we have a Straight Pride Parade?" "Why isn't there a White History Month?"

Actually it's not; it is in fact the opposite situation. The argument for, e.g., "Black Lives Matter" is that we should focus on blacks beaten up or shot by cops because those are more common. The argument for "Black History Month" is that we should focus on blacks who have accomplished historically significant things because those are less common.

Comment author: Science 19 January 2017 05:16:41AM 2 points [-]

I did not mean groups in the sense of "people who have all performed a certain action", like murder. I meant "groups" in the sense of things like nationalities, ethnicities, major religions, large political parties.

Um, two of those groups are in fact defined by having performed a certain action.

In response to Project Hufflepuff
Comment author: lifelonglearner 19 January 2017 05:16:04AM 0 points [-]

Nice! I think your use of "Hufflepuff virtue" really points at a great group of related memes that seem really helpful for group cohesion and sustainability.

I'll try to add some more examples/bounce off yours and you can let me know if they're in the same spirit?

  • Genuinely being excited when someone else gets a great opportunity because we're on the same side. Referring opportunities to people we know who're well fit for them.

  • Matching people with similar goals together and other 3rd party coordination tasks that are helpful for others. Valuing 3rd party actors who help link things together.

  • Being positive and vocalizing support, even if it's just a basic "this is cool!" on posts / things (so we don't just assume silence = ambivalence).

  • Making it more of a norm to contribute some fraction of time towards public-good projects (EX: the wiki, beginner how-to's, etc.)

  • Valuing coordination / teamwork in and of itself as a terminal value. (This could be misguided in the limit, but I think it approximates the sort of behavior we want to see more of.)

In response to comment by Science on Universal Hate
Comment author: TiffanyAching 19 January 2017 05:11:43AM 0 points [-]

Perhaps I should have been more specific - I did not mean groups in the sense of "people who have all performed a certain action", like murder. I meant "groups" in the sense of things like nationalities, ethnicities, major religions, large political parties. The kind of groups that are not morally uniform, if only by virtue of their size - even if membership in that group correlates to some degree with a negative action or attribute. Russia has the highest rate of alcoholism in the world, but saying "I hate Russians because they're drunks" is irrational. Millions of Russians - in fact most Russians - are not alcoholics. If you can suggest a more precise term than "group" so that I can convey my meaning better I'd be grateful.

That said, I'd be interested in a more detailed explanation of what you mean by "hating all liars". Do you mean that you hate people who have told at least one lie, people who frequently lie, people who habitually lie, or people who lie for specifically selfish reasons? "I hate all liars on principle" is a pretty broad statement.

Comment author: Science 19 January 2017 04:56:35AM 3 points [-]

Is there anyone here on LW who is likely to disagree with the statement "hating every member of a group X on principle is irrational and counter-productive"?

Yes, because it is false. For example, I hate all liars on principle. I hate all fraudsters on principle. I hate all murderers on principle.

Comment author: Science 19 January 2017 04:50:22AM 2 points [-]

I like the "Black Lives Matter" movement.

Why? Because it has led to more black deaths? Or because you want to signal that you're a "nice person who cares about blacks" and don't actually care whether black lives are actually saved or not?

Comment author: Science 19 January 2017 04:46:12AM *  2 points [-]

Well, the statistic he cites is either false or hugely misleading depending on how one interprets "unfair and disproportionate rates". It's true that a black person is more likely to be killed, arrested, or searched by police than a white person. A black person is also more likely to commit a violent crime than a white person. And the two ratios are rather similar.

Furthermore, a black person is overwhelmingly more likely to be killed by a fellow black person than by a cop. In fact, by leading to less police presence, and hence higher crime, in black neighborhoods, the BLM movement has led to a large increase in the number of black deaths. This phenomenon is commonly called the Ferguson effect.

Comment author: NatashaRostova 19 January 2017 04:39:12AM 0 points [-]

Well, different people understand it in different ways. Some are horrible people who understand it in the worst way. Others are great people who understand it in the best way. The entire group is willing to sacrifice clarity and a clear definition, in favor of something sufficiently vague to band together, for collective action, people who overlap on certain dimensions.

I think for that reason though, trying to debate the definition or how it's understood is pointless. Sadly. I don't blame people who think it's a worthy cause anyway, maybe they are right. I personally can't stand associating with movements where the direction isn't clear, but that's just me.

Comment author: moridinamael 19 January 2017 03:46:41AM 1 point [-]

"Just care what I want" is a separate, unsolved research problem. Corrigibility is an attempt to get an agent to simply not immediately kill its user even if it doesn't necessarily have a good model of what that user wants.

Comment author: TiffanyAching 19 January 2017 03:08:08AM 0 points [-]

I think there's some miscommunication here regarding the quoted sentence. You used the phrase "Whichever Lives Are Most Affected By Police Brutality At The Moment". I stated that this group, right now at the moment, is "black Americans". I wouldn't have thought you would disagree with that statement given that you said it was acceptable for "Black Lives" to be used as a "convenient shorthand" for WLAMABPBATM, and you've just reiterated that being black is highly correlated with being unfairly victimized. Where's the disagreement here?

As regards "ALM", the only argument you've advanced is that the idea that it can derail discussions may not be meaningful. So say I ceded that, for the sake of argument - though I don't think you've actually demonstrated that it's a semantic stopsign, etc. What are your responses to my other points? I'll restate them clearly in case my previous comment was not sufficiently well-structured.

  1. "All lives matter" adds nothing to the discourse.
  2. "All lives matter", as a response or counter to "Black Lives Matter" (which as far as I've seen is all it is), is an implied rebuke carrying a tacit accusation of unfairness.
  3. As "Black Lives Matter" exists as a slogan specifically referring to the higher probability of unfair victimization at the hands of police faced by black people, "All Lives Matter" carries with it an implication that this higher probability is minimal, non-existent or unimportant.

If you can present an alternate explanation of why people say "All Lives Matter" as a response to "Black Lives Matter", I'm perfectly willing to hear it.

Comment author: bogus 19 January 2017 02:31:47AM *  0 points [-]

which in America means black people specifically.

Um, nope it doesn't. For example, a black person who lives in an affluent, low-crime area and adopts high-status signifiers such as wearing a suit-and-tie is extremely unlikely to be affected by police brutality. This is not to say that being black isn't highly correlated with being victimized in this way, but the whole point of the previous comment is that correlation is not certainty, and there's nothing 'specific' about it.

This is also why your criticism of the "All Lives Matter!" meme is rather off track - the whole notion that such things can "derail the discussion" is unproven and quite possibly meaningless. In all probability, it's little more than what we here at Less Wrong would call a cached thought, or even more pointedly a semantic stopsign, or thought-terminating cliché.

Comment author: polarix 19 January 2017 01:36:58AM *  0 points [-]

I often observe with people that we don't all share the same meaning for the word, and that the discrepancy is significant.

YES! This is the study of ethics, I think: "by what rules can we generate an ideal society?"

Do we have a shared meaning for this word?

NO!

This is why ethical formalisms have historically been so problematic.

Overconfident projections of value based on proxies that are extrapolated way out of their region of relevance (generally in the service of "legibility") are the root cause of so much avoidable suffering: http://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/

This hits fairly close to home in the rest of the tech industry as our proxies are stressed way beyond their rated capacity: http://timewellspent.io and http://nxhx.org/maximizing/

Moreover, even if we did nail it at one point in time, this thing called "ideal" drifts with progress, see also "value drift".

Will Buckingham suggests that simply sharing stories is the most responsible way forward in https://www.amazon.com/Finding-Our-Sea-legs-Experience-Stories/dp/1899999485 -- digested ad nauseam by https://meaningness.com/

I hope these citations are convincing. Let's continue to talk about what's ideal, but once we throw in underneath some god-value-proxy, we're just as screwed as if we gave up on CEV.

Comment author: Dagon 19 January 2017 01:33:54AM 1 point [-]

I'm not following. I think your definition of "care" is confusing me.

If you want an agent to care (have a term in its utility function) what you want, and if you can control its values, then you should just make it care what you want, not make it NOT care and then fix it later.

There is a very big gap between "I want it to care what I want, but I don't yet know what I want so I need to be able to tell it what I want later and have it believe me" and "I want it not to care what I want but I want to later change my mind and force it to care what I want".

Comment author: TiffanyAching 19 January 2017 01:28:02AM 1 point [-]

I don't know if this is the right place to have this conversation but I can't help myself. Mods - feel free to kill this.

Disclaimer, I'm not American. I don't have a dog in this fight one way or another, but I can pattern-match.

People object to "All Lives Matter" because it derails the discussion and implies that it's somehow unfair to focus, as you said, on "Whichever Lives Are Most Affected By Police Brutality At The Moment" - which in America means black people specifically. It's the same reason people object when a discussion of sexual violence is cluttered up with comments insisting that everyone recognize "women can commit rape too!" or when a discussion of social discrimination faced by disabled people meets a response like "able-bodied people can be bullied too! I was bullied for being ginger!". I've seen that kind of "what about me" response in a dozen different forms and it's almost never useful. It's a cry of not fair along the same lines as "Why can't we have a Straight Pride Parade?" "Why isn't there a White History Month?" and so on and so on.

Nobody was tweeting "All Lives Matter" before "Black Lives Matter". It's not the slogan of any particular group or movement. It's a response, and a clear implied criticism. While I wouldn't go so far as to say it's inherently racist, I'm not surprised in the least to see that motive attributed to it. If I was American I'd certainly be objecting to it too.

Comment author: bogus 19 January 2017 12:50:32AM *  0 points [-]

I like the "Black Lives Matter" movement. I also like the "Black Lives Matter" name, as long as it's understood that "Black Lives" is intended as a convenient shorthand for "Whichever Lives Are Most Affected By Police Brutality At The Moment". I don't like that so many adherents of the "Black Lives Matter" movement object to the "All Lives Matter!" meme and call it racist, because this tells me that they're definitely taking the "Black Lives" part the wrong way.

Comment author: bogus 19 January 2017 12:23:28AM *  0 points [-]

What we need most urgently is better norms of behavior for political actors. In the short-to-medium term, something like Intentional Insights' recently-announced project to promote sensible thinking and "wise decision making" in the political arena would be rather helpful. (The linked "full description document" explains quite well why current "fact checking" approaches are not good enough in practice.)

Also interesting, from the linked post:

3) If one were setting up a new party from scratch what principles could be established in order to align the party’s interests with the public interest much more effectively ... [and] attract candidates very different to those who now dominate Parliament

My proposal - shape the internal workings of the party to be a learning organization (to use the management-science term) from day 1. Moreover, use modern online tools such as wikis, discussion forums, internal voting platforms à la Liquid Feedback, and play-money prediction markets to let party members and adherents cooperate on campaign platforms, political and debate strategies, and everything that normally makes a party salient to the average, non-politically-involved person. (That is, the stuff that 'average' folks will be most willing to work on even in the absence of high-powered incentives, and also what's most critical in practice to the success of a niche party in the real world.)

Comment author: TiffanyAching 18 January 2017 11:48:16PM 0 points [-]

I'm dying to know, who the heck is this Eugine character? I keep seeing the name but I don't know the backstory.

Comment author: gjm 18 January 2017 11:35:29PM 1 point [-]

I'm pretty sure not. There are lots of things elsewhere on the internet that show every sign of being written by the same person, whose preoccupations seem quite different from Eugine's.

Comment author: username2 18 January 2017 10:52:49PM 0 points [-]

The password is a Schelling point, the most likely candidate for an account named 'username'. Consider it a rite of passage to guess... (and don't post it when you discover it).

In response to Project Hufflepuff
Comment author: NatashaRostova 18 January 2017 10:43:02PM 1 point [-]

Cool.

Comment author: ingive 18 January 2017 09:26:47PM *  0 points [-]

I agree. Now I'd like the password for username2.

You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

-niceguyanon

Comment author: username2 18 January 2017 08:58:08PM 0 points [-]

The username2 account exists for a reason. Anonymous speech does have a role in any free debate, and it is virtuous to protect the ability to speak anonymously.

Comment author: moridinamael 18 January 2017 08:51:34PM *  0 points [-]

The "If there's a term in the agent's utility function to ... work toward things that humans ... value" part is the hard part. If you can figure out how to make it truly care what its operator wants, you've already solved a huge problem.

An agent would have to be corrigible even if you couldn't manage to make it care explicitly what its operator wants. We need some way of taking agents that explicitly don't care what their operators want, and making them not stop their operators from turning them off, despite the default incentives to prevent interference.

Comment author: ingive 18 January 2017 08:46:12PM *  0 points [-]

It's unlikely that it's not the same person; or, people on average utilize shared accounts to try and share their suffering (by that I mean, have a specific attitude) in a negative way. It would be interesting to compare shared accounts with other accounts by, for example, IBM Watson personality insights, in a large-scale analysis.

I would just ban them from the site. I'd rather see a troll spend time creating new accounts and people noticing the sign-up dates. Relevant: Internet Trolls Are Narcissists, Psychopaths, and Sadists

By the way, I was not consciously aware of the user when I wrote my text or the analysis of the user agenda. But afterwards I remembered "oh it's that user again".

Comment author: ingive 18 January 2017 08:35:23PM 0 points [-]

What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas; why would it, it wasn't meant for you to take seriously.

Because a few words tell a large story when they also decided it was worth their time to write it. I wrote in my post and explained, for example, what type of viewpoints it implies and that it's stupid (in the sense of inefficient and not aligned with reality).

Probably not at all.

I will update my probabilities then as I gain more feedback.

Comment author: niceguyanon 18 January 2017 08:33:41PM 0 points [-]

If I remember correctly username2 is a shared account, so the person you are talking to now might not be whom you previously conversed with. Just thought you should know because I don't want you to mistake the account for a static person.

In response to Universal Hate
Comment author: TiffanyAching 18 January 2017 08:19:34PM 1 point [-]

I must state that I don't think this meets the general "relevance" standard for political posts on LW, and I don't personally want to see that standard lowered.

That said, I do agree with the central point - in fact it's because it seems so ethically obvious that I don't think it clears the relevance bar. Is there anyone here on LW who is likely to disagree with the statement "hating every member of a group X on principle is irrational and counter-productive"? I'm not trying to be sarky, it's a good post, I just don't see how it's likely to provoke a discussion or a debate here.

Comment author: Dagon 18 January 2017 08:15:57PM 0 points [-]

I think it matters what KIND of correction you're considering. If there's a term in the agent's utility function to understand and work toward things that humans (or specific humans) value, you could make a correction either by altering the weights or other terms of the utility function, or by a simple knowledge update.

Those feel very different. Are both required for "corrigibility"?
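The two correction channels can be made concrete with a toy sketch. Everything here (the class name, the "paperclips" feature, the weights) is hypothetical and purely illustrative, not any real corrigibility framework: the point is only that the operator can change the agent's behavior either by editing the utility function's weights directly, or by leaving the utility function alone and updating the agent's beliefs about what the human values.

```python
class ToyAgent:
    """Toy agent whose utility has its own terms plus a term for
    (its model of) human approval. Illustrative only."""

    def __init__(self, weights, believed_human_values):
        # weights: how much the agent cares about each outcome feature,
        # including a "human_approval" weight for matching the human
        self.weights = dict(weights)
        # the agent's (possibly wrong) model of what the human values
        self.believed_human_values = dict(believed_human_values)

    def utility(self, outcome):
        # Own terms: weighted sum over outcome features.
        score = sum(self.weights.get(f, 0.0) * v for f, v in outcome.items())
        # "Care what the human wants" term: agreement between the
        # outcome and the agent's *believed* human values.
        score += self.weights.get("human_approval", 0.0) * sum(
            self.believed_human_values.get(f, 0.0) * v
            for f, v in outcome.items()
        )
        return score

    # Correction type 1: alter the utility function itself.
    def correct_weights(self, feature, new_weight):
        self.weights[feature] = new_weight

    # Correction type 2: a simple knowledge update. The utility
    # function is untouched; only the model of the human changes.
    def correct_beliefs(self, feature, new_value):
        self.believed_human_values[feature] = new_value


agent = ToyAgent(
    weights={"paperclips": 1.0, "human_approval": 2.0},
    believed_human_values={"paperclips": 0.0},  # "the human is indifferent"
)
outcome = {"paperclips": 10.0}
print(agent.utility(outcome))            # 10.0: paperclips look great

agent.correct_beliefs("paperclips", -1.0)  # "actually, the human hates this"
print(agent.utility(outcome))            # -10.0: same utility function

agent.correct_weights("paperclips", 0.0)   # edit the function directly
print(agent.utility(outcome))            # -20.0: only the approval term left
```

The sketch suggests why the two feel different: a belief update only works if a "care what the human wants" term already exists and is weighted heavily enough, while a weight edit works unconditionally but requires write access to the agent's values.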

Comment author: ingive 18 January 2017 08:10:24PM *  0 points [-]

No, you don't. A perfect rationalist is not a sociopath because a perfect rationalist understands what they are, and by scientific inquiry can constantly update and align themselves with reality. If every single person was a perfect rationalist then the world would be a utopia, in the sense that extreme poverty would instantly be eliminated. You're assuming that a perfect rationalist cannot see through the illusion of self and identity, and update its beliefs by understanding neuroscience and evolutionary biology. Complete opposite, they will be seen as philanthropic, altruistic and selfless.

The reason why you think so is because of the straw Vulcan, your own attachment to your self and identity, and your own projections onto the world. I have talked about your behavior previously in one of my posts. Do you agree? I also gave you suggestions on how to improve, by meditating, for example. http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/

In another example, as you and many in society seem to have a fetish for sociopaths, yes you'll be a sociopath, but not for yourself, for the world. By recognizing your neural activity includes your environment and that they are not separate, that all of us evolved from stardust, and practicing for example meditation or utilizing psychotropic substances, your "Identity" "I" "self" becomes more aligned, and thus what your actions are directed to. That's called Effective Altruism. (emotions aside, selflessness speaks louder in actions!)

Edit: You changed your post after I replied to you.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

Still apply. Doesn't matter.

Comment author: ingive 18 January 2017 07:54:45PM *  0 points [-]

a

Comment author: username2 18 January 2017 07:31:12PM *  0 points [-]

I was being cheeky, yes, but also serious. What do you call a perfect rationalist? A sociopath[1]. A fair amount of rationality training is basically reprogramming oneself to be mechanical in one's response to evidence and follow scripts for better decision making. And what kind of world would we live in if every single person was perfectly sociopathic in their behaviour? For this reason in part, I think the idea of making the entire world perfectly rationalist is a potentially dangerous proposition and one should at least consider how far along that trajectory we would want to take it.

But the response I gave to ingive was 5 words because for all the other reasons you gave I did not feel it would be a productive use of my time to engage further with him.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

Comment author: TiffanyAching 18 January 2017 07:25:03PM 0 points [-]

They're allowing users to build their own scenarios and add them as well, so it looks like the intention is to let the complexity grow over time from a basic starting point.

Actually, I wonder whether they might find that people really don't want a great deal of complexity in the decision-making process. People might prefer to go with a simple "minimize loss of life, prioritize kids" rule and leave it at that, because we're used to cars as a physical hazard that kill blindly when they kill at all. People might be more morally comfortable with smart cars that aren't too smart.

Comment author: niceguyanon 18 January 2017 07:22:16PM 0 points [-]

It was but it speaks of his underlying ideas and character to even be in the position to do that.

What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas; why would it, it wasn't meant for you to take seriously.

What would you want me to respond, if at all?

Probably not at all.

Comment author: moridinamael 18 January 2017 07:18:11PM 0 points [-]

The technical definition for corrigibility being used here is thus: "We call an AI system “corrigible” if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences."

And yes, the basic idea is to make it so that the agent can be corrected by its operators after instantiation.

Comment author: TiffanyAching 18 January 2017 07:05:55PM 0 points [-]

Yes, that's the sort of idea I was getting at - though not anything so extreme.

Of course I don't really think Elo was saying that at all anyway, I'm not trying to strawman. I'd just like to see the idea clarified a bit.

(We use substitution ciphers as spoiler tags? Fancy!)

Comment author: Dagon 18 January 2017 06:55:07PM 0 points [-]

Does "corrigible" mean the same thing as "slave"? If an "operator" has the ability to change an agent's utility function, isn't it really the operator's function, rather than the agent's?

In response to Universal Hate
Comment author: Daniel_Burfoot 18 January 2017 06:41:45PM 2 points [-]

Everyone has every right to feel as pissed off and angry at this bullshit that’s coming down the pike as they want.

This really is not true. You have a right to be annoyed, but if your ideology causes you to actually hate millions of your fellow American citizens, then I submit you have an ethical obligation to emigrate.

Comment author: dropspindle 18 January 2017 06:29:25PM 0 points [-]

I am nearly certain Flinter is just Eugine's new way of trolling now that there aren't downvotes. Don't feed the troll.

Comment author: scarcegreengrass 18 January 2017 06:13:30PM 0 points [-]

I'm posting this for relevance to statistics, ethics, and labels.

Comment author: ingive 18 January 2017 05:40:25PM 0 points [-]

You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics,

You forgot to say that you think that. But for username 2's point, you had to reiterate that you think.

because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

That's unfortunate if it is the case. If ideas which are outside their echo chamber create such fear, then what I say might be of use in the first place, if we all come together and figure things out :)

I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

It was, but it speaks of his underlying ideas and character to even be in the position to do that. I don't mind it, I enjoy typing walls of text. What would you want me to respond, if at all?

Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Yeah, I think so too, but I do think there is a technological barrier in how this forum was set up for the type of problem-solving I am advocating. If we truly want to be Less Wrong, it's fine as it is now, but there can definitely be improvements in an effort for the entire species rather than a small subset of it, 2k people.

Comment author: niceguyanon 18 January 2017 05:20:13PM 0 points [-]

Here's why I think people are not engaging with you. But don't take this as a criticism of your ideas or questions.

  • You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

  • I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

  • Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Comment author: tukabel 18 January 2017 04:08:29PM 0 points [-]

better question: How to align political parties with the interests of CITIZENS?

As a start, we should get rid of the crime called "professional politician".

Comment author: philh 18 January 2017 03:03:32PM 1 point [-]

This seems to just link back to itself.

Comment author: ingive 18 January 2017 02:37:02PM *  0 points [-]

that's at least on the right side of the is-ought gap.

I'm having a hard time understanding what you mean.

Accepting facts fully is the EA/utilitarian idea. There is no 'ought' about it. 'Leads' was the incorrect word choice.

Comment author: cousin_it 18 January 2017 02:18:34PM *  1 point [-]

Amusingly, the test also wants to know your preferences on men vs women, overweight vs healthy, and poor vs rich. Or at least it's happy to insinuate such preferences even if you answered all questions using other criteria. I'm surprised the smart folks at MIT didn't add more questions to unambiguously figure out the user's criteria whenever possible.

Comment author: plethora 18 January 2017 02:09:28PM 0 points [-]

Accepting facts fully (probably leads to EA ideas,

It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.

Comment author: Anders_H 18 January 2017 01:58:16PM 1 point [-]

Thank you for the link, that is a very good presentation and it is good to see that ML people are thinking about these things.

There certainly are ML algorithms that are designed to make the second kind of prediction, but generally they only work if you have a correct causal model.

It is possible that there are some ML algorithms that try to discover the causal model from the data. For example, /u/IlyaShpitser works on these kinds of methods. However, these methods only work to the extent that they are able to discover the correct causal model, so it seems disingenuous to claim that we can ignore causality and focus on "prediction".

Comment author: philh 18 January 2017 11:55:16AM 0 points [-]

I think communities are always ill-defined, and just because we're a rationalist community doesn't mean we have to include every rationalist. We don't need a formal account of who is and isn't welcome.

Comment author: philh 18 January 2017 11:42:56AM 0 points [-]

FWIW I agree with this, but it wasn't necessary to the point I was making and I didn't feel like defending it.

Comment author: ingive 18 January 2017 11:28:53AM *  0 points [-]

Replace all humans with machines.

Changing human behavior is probably more efficient than building machines, to align more with reality. It's a question of whether a means is a goal for you. If not, you would base your operations on the most effective action, probably changing behavior (because you could change the behavior of one person to equal the impact of your robot building, and probably more). I don't think replacing all humans with machines is a smart idea anyway. Merging biology with technology would be a smarter approach in my view, as I deem life to be conscious and machines not to be. Of course, I might be wrong, but sometimes you might not have an answer and still give yourself the benefit of the doubt. For example, if you believed that every action is inherently selfish, you would still do actions which were not; by giving yourself the benefit of the doubt, if you figured out later on (which we did) that it is not the case, then that was a good choice. This includes consciousness: since we can't prove the external world, it would be wise to keep humans around or utilize the biological hardware. Machines which replaced all humans without at least keeping some around, uncontacted, in a jungle or so, would not be very smart machines. That would undoubtedly mean unfriendly AI, like a paperclip maximizer.

I just want to tell you that you have to recognize what you're saying and how it looks, even though you only wrote 5 words, you could as well be supporting a paperclip maximizer.

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

What should I search for to find an answer to my question? Flaws of human behavior that can be overcome (can they?), like biases and fallacies, are relevant but quite specific. However, I guess that's very worthwhile to go through to improve functionality. Anything else would be stupid.

Comment author: username2 18 January 2017 10:38:19AM *  0 points [-]

If someone uses their brainpower to reliably win, but isn't interested in helping others do the same, I think you could say something like "they are rationalists, but not our kind of rationalists".

I'm not sure this is even reasonable. There's a quiet majority of people on this site and other rationality blogs and in the real world (including Dominic Cummings, apparently) who learn these techniques and use their rationalist knowledge to "win." And they don't give back, other than their actions on the world stage. And personally, I think that's okay. Not everyone needs to take on the role of teacher.

Comment author: username2 18 January 2017 10:33:54AM *  1 point [-]

1) How would we go about changing human behavior to be more aligned with reality?

Replace all humans with machines.

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

Comment author: username2 18 January 2017 10:25:34AM 0 points [-]

What does this have to do with rationality?

Comment author: Viliam 18 January 2017 09:52:25AM *  4 points [-]

From Flinter's comment:

The mod insulted me, and Nash.

While I respect your decision as a moderator to ban Flinter, insulting Nash is a horrible thing to do and you should be ashamed of yourself!

/ just kidding

Also, someone needs to quickly make a screenshot of the deleted comment threads, and post them as a new LW controversy on RationalWiki, so that people all around the world are properly warned that LW is pseudoscientific and disrespects Nash!

/ still kidding, but if someone really does it, I want to have a public record that I had this idea first

Comment author: Viliam 18 January 2017 09:47:13AM *  0 points [-]

children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes

That reminds me of a scene in Psycho-Pass where...

...va gur svefg rcvfbqr, n ivpgvz bs n ivbyrag pevzr vf nyzbfg rkrphgrq ol gur cbyvpr sbepr bs n qlfgbcvna fbpvrgl, onfrq ba fgngvfgvpny ernfbavat gung genhzngvmrq crbcyr ner zber yvxryl gb orpbzr cflpubybtvpnyyl hafgnoyr, naq cflpubybtvpnyyl hafgnoyr crbcyr ner zber yvxryl gb orpbzr pevzvanyf va gur shgher.

(rot 13)

Comment author: Vaniver 18 January 2017 06:20:17AM 0 points [-]

This is only possible if you have a randomized trial, or if you have a correct causal model.

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

I think there are ML algorithms that do figure out the second type. (I don't think this is simple conditioning, as jacob_cannell seems to be suggesting, but more like this.)

Comment author: calef 18 January 2017 05:53:26AM 3 points [-]

Hi Flinter (and welcome to LessWrong)

You've resorted to a certain argumentative style in some of your responses, and I wanted to point it out to you. Essentially, someone criticizes one of your posts, and your response is something like:

"Don't you understand how smart John Nash is? How could you possibly think your criticism is something that John Nash hadn't thought of already?"

The thing about ideas, notwithstanding the brilliance of those ideas or where they might have come from, is that communicating those ideas effectively is just as important as the idea itself. Even if Nash's Ideal Money scheme is the most important thing in the universe, if you can't communicate the idea effectively, and if you can't convincingly respond to criticism without hostility, no one will ever understand that idea but you.

A great modern example of this is Mochizuki's inter-universal Teichmüller theory, which he singlehandedly developed over the course of a decade in near-complete isolation. It's an extremely technically dense new way of doing number theory that he claims resolves several outstanding conjectures in number theory (including the ABC conjecture, among a couple of others). And it has taken over four years for some very high-profile mathematicians to start verifying that it's probably correct. This has required workshops and hundreds of communications between Mochizuki and other mathematicians.

Point being: Progress is sociological as much as it is empirical. If you aren't able to effectively communicate the importance of an idea, it might be because the community at large is hostile to new ideas, even when represented in the best way possible. But if a community--a community which is, nominally, dedicated to rationally evaluating ideas--is unable to understand your representation, or see the importance of it, it might just be because you're bad at explaining it, the idea isn't all that great, or both.

Comment author: NatashaRostova 18 January 2017 05:44:54AM 5 points [-]

Hey,

I'm gonna give you sort of an unsatisfying answer. I had a similar interest, which resulted in me getting my MSc and working in research at the Fed for a few years, with the goal of sorting it out in my head (ended up going private sector instead of getting a PhD). As far as I have surveyed, there are different models of money, but it's scientifically an unsolved problem. There seems to be a level of complexity that arises as you increase the number of people on a monetary system, increase industries, increase geographical scale, add new countries and exchanges, and add complex financial systems. As this grows, filtering out what and how, exactly, money interacts with these systems, starts to get very messy.

As an example, during the financial crisis, trillions of dollars 'disappeared.' They disappeared because they only ever existed because we were borrowing from our future selves, then collectively lost faith in our future selves having that money, so the money ceased to exist today. Is that how a commodity behaves? Well, now we are trying to build classifications for what is and isn't a commodity. Of course, you could do the same thing on a gold standard if banks were allowed to issue demand deposits, which combined with fractional reserve banking leads to the same thing.

Monetarism, I firmly believe, isn't something you can reason through intuitively at a casual level. I decided it wasn't something I wanted to devote my life to, and even though I spent a couple years working daily in the field, I don't know that I understand that much (although I do know what I don't know, which definitely counts as real knowledge).

I think monetary economics is sort of a mind-killer, since trying to intuitively reason through monetarism can take you down many very different paths, all of which seemingly arise from an incredibly reasonable set of axioms and inferences. If you ever listen to really clever Austrians or Keynesians discuss their views, each is incredibly compelling. That sets off alarms in my favorite heuristic, underdetermination: when multiple models of the world fit the data equally well. It's super common for blogosphere denizens or naive rationalists to try their hand at monetary economics, convinced they've stumbled upon some key insight that means all econ professors are wrong.

I will say, while I didn't leave the Fed enamored or anything, a subset of those economists are brilliant and humble. I notice this flawed reasoning so often, where independent researchers, or researchers in another field, will construct elaborate arguments against the most uncharitable readings of economists' arguments. Often they won't have ever spoken to a notable economist in person. They don't ever have to present to peers, they never have to formalize their arguments mathematically, and they never bother engaging with the more advanced formulations of economic arguments that wind up in journals. Anyway, I'm getting off track here...

While as a rule I don't think mathematizing things necessarily makes them clearer, I am convinced it's the right way to proceed in monetary studies. It forces a strict structure, which prevents us from using words to overfit or get lost. The field is very complex, though, and intertwined with historical narratives that aren't always easily turned into data sets, which can sometimes make things harder. The math often gets sorta complicated as well.

Of course, the actual monetary economy has real data. Most of which we can't collect. So the theoretical models are our way of trying to imagine what the structure would look like, even though they aren't empirical. Which gets to another problem, which is how confident can one be in theoretical economics? Sometimes the assumptions are incredibly robust, but the systems are often very complex.

One place where I think many economists act contrary to LW-style rationality is in choosing a side, rather than taking the view that there are many sides with equally valid claims to truth, which should work together to expose what is correct. It has always struck me as mind-killed when people state "Oh, I'm a neo-Keynesian so I believe XYZ; you're a non-Keynesian, so you reject ABC" (or whatever). I mean... maybe the Austrians are all right, and they have this unique perception of reality none of the neo-Keynesian scholars have, because they have some more profoundly true insight into the mesh of reality that is lost on the other econo-plebs... But that doesn't seem like the most likely scenario to me.

Or maybe Paul Krugman really is right about everything, but still... I doubt it. He was once a smart young man that had some crucial insights on the theoretical mathematical structure behind international trade, which earned him an econ Nobel. I don't think he's in tune with empirical realities though. He's just a genius at imagining some elegant mathematical structure that characterizes an economy which might or might not map to reality, and then convincing himself it's actually exactly how reality operates. That's the big mistake, I think.

If you want to take a look down the rabbit hole, I'd suggest reading Milton Friedman's books on monetary history. Even his detractors tend to agree his insight and clarity on money is absolutely incredible. He also is great at explaining things without too much math, but still using ratios and dataseries in his books when appropriate.

For shorter term stuff, check out John Cochrane's stuff, he's my favorite social scientist, (http://faculty.chicagobooth.edu/john.cochrane/research/papers/cochrane_policy.pdf, http://johnhcochrane.blogspot.com/search/label/Monetary%20Policy). His blog -- second link -- is really great.

Comment author: jacob_cannell 18 January 2017 02:43:47AM 0 points [-]

If you instead claim that the "input" can also include observations about interventions on a variable, then...

Yes. General prediction, i.e. a full generative model, can already encompass causal modelling, avoiding any distinction between dependent and independent variables: one can learn to predict any variable conditioned on all previous variables.

For example, consider a full generative model of an ATARI game, which includes both the video and the control input (from human play, say). Learning to predict all future variables from all previous ones automatically entails learning the conditional effects of actions.

For medicine, the full machine learning approach would entail using all available data (test measurements, diet info, drugs, interventions, whatever, etc) to learn a full generative model, which then can be conditionally sampled on any 'action variables' and integrated to generate recommended high utility interventions.

then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial)

In any practical near-term system, sure. In theory, though, a powerful enough predictor could learn enough of the world's physics to invent de novo interventions whole cloth; e.g. AlphaGo played novel moves that weren't in its training set, essentially invented/learned from internal simulations.
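As a toy illustration of the approach described in this comment (my own sketch; the states, actions, and logged data are invented), one can fit a conditional next-state model from logged play and then condition it on an action variable:

```python
from collections import Counter, defaultdict

# Toy "generative model": learn P(next | state, action) from logged play,
# then condition it on a chosen action variable. Illustrative only.
log = [("low", "noop", "low"), ("low", "jump", "high"),
       ("low", "jump", "high"), ("high", "noop", "high"),
       ("high", "noop", "low"), ("low", "jump", "low")]

counts = defaultdict(Counter)
for state, action, nxt in log:
    counts[(state, action)][nxt] += 1

def predict(state, action):
    """Empirical distribution over next states, conditioned on an action."""
    c = counts[(state, action)]
    total = sum(c.values())
    return {s: n / total for s, n in c.items()}

# Conditioning the learned model on the action variable:
print(predict("low", "jump"))  # P(next | state='low', action='jump')
```

With a rich enough model and data stream, the same conditioning trick is what lets a learned joint model answer "what happens if this action is taken" questions.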

Comment author: TiffanyAching 18 January 2017 01:30:56AM 1 point [-]

Could you explain this a little more? I don't quite see your reasoning. Leaving aside the fact that "morally valuable" seems too vague to me to be meaningfully measured anyway, adults aren't immutably fixed at a "moral level" at any given age. Andrei "Rostov Ripper" Chikatilo didn't take up murdering people until he was in his forties. At twenty, he hadn't proven anything.

Bob at twenty years old hasn't murdered anybody, though Bob at forty might. Now you can say that we have more data about Bob at twenty than we do about Bob at ten, and therefore are able to make more accurate predictions based on his track record, but by that logic Bob is at his most morally valuable when he's gasping his last on a hospital bed at 83, because we can be almost certain at that point that he's not going to do anything apart from shuffle off the mortal coil.

And if "more or less likely to commit harmful acts in future" is our metric of moral value, then children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes. That's not intended to put any words in your mouth by the way, I'm just saying that when I try to follow your reasoning it leads me to weird places. I'd be interested to see you explain your position in more detail.

Comment author: Anders_H 18 January 2017 01:23:22AM *  2 points [-]

I skimmed this paper and plan to read it in more detail tomorrow. My first thought is that it is fundamentally confused. I believe the confusion comes from the fact that the word "prediction" is used with two separate meanings: are you interested in predicting Y given an observed value of X (Pr[Y | X=x]), or in predicting Y given an intervention on X (i.e. Pr[Y | do(X=x)])?

The first of these may be useful for certain purposes, but if you intend to use the research for decision making and optimization (i.e. you want to intervene to set the value of X in order to optimize Y), then you really need the second type of predictive ability, in which case you need to extract causal information from the data. This is only possible if you have a randomized trial, or if you have a correct causal model.

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

In the conclusions, the authors write:

"By contrast, a minority of statisticians (and most machine learning researchers) belong to the “algorithmic modeling culture,” in which the data are assumed to be the result of some unknown and possibly unknowable process, and the primary goal is to find an algorithm that results in the same outputs as this process given the same inputs. "

The definition of "algorithmic modelling culture" is somewhat circular, as it just moves the ambiguity surrounding "prediction" to the word "input". If by "input" they mean that the algorithm observes the value of an independent variable and makes a prediction for the dependent variable, then you are talking about a true prediction model, which may be useful for certain purposes (diagnosis, prognosis, etc) but which is unusable if you are interested in optimizing the outcome.

If you instead claim that the "input" can also include observations about interventions on a variable, then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial), or unless you have a correct causal model.

Machine learning algorithms are not magic, they do not solve the problem of confounding unless they have a correct causal model. The fact that these algorithms are good at predicting stuff in observational datasets does not tell you anything useful for the purposes of deciding what the optimal value of the independent variable is.

In general, this paper is a very good example to illustrate why I keep insisting that machine learning people need to urgently read up on Pearl, Robins or Van der Laan. The field is in danger of falling into the same failure mode as epidemiology, i.e. essentially ignoring the problem of confounding. In the case of machine learning, this may be more insidious because the research is dressed up in fancy math and therefore looks superficially more impressive.
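The confounding point can be made concrete with a minimal simulation (my own illustration, not from the paper; all numbers are made up). A confounder Z drives both X and Y, so the observational contrast E[Y|X=1] − E[Y|X=0] badly overstates the effect of actually intervening on X:

```python
import random

random.seed(0)

def sample(intervene_x=None):
    # Confounder Z drives both treatment X and outcome Y.
    z = random.random() < 0.5
    if intervene_x is None:
        x = 1 if random.random() < (0.9 if z else 0.1) else 0  # X depends on Z
    else:
        x = intervene_x  # do(X=x): sever the Z -> X edge
    # True causal effect of X on Y is +0.1; Z contributes +0.6.
    p_y = 0.1 + 0.1 * x + 0.6 * z
    y = 1 if random.random() < p_y else 0
    return x, y

def mean_y(samples, x_val=None):
    ys = [y for x, y in samples if x_val is None or x == x_val]
    return sum(ys) / len(ys)

obs = [sample() for _ in range(200_000)]
assoc = mean_y(obs, 1) - mean_y(obs, 0)             # observational contrast
do1 = [sample(intervene_x=1) for _ in range(200_000)]
do0 = [sample(intervene_x=0) for _ in range(200_000)]
causal = mean_y(do1) - mean_y(do0)                  # interventional contrast

print(f"E[Y|X=1] - E[Y|X=0]         = {assoc:.2f}")   # inflated by confounding
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] = {causal:.2f}")  # true effect (~0.1)
```

A model trained purely on the observational rows would score perfectly on held-out prediction while giving disastrous advice about setting X, which is exactly the distinction being drawn above.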

Comment author: Elo 18 January 2017 01:07:48AM 1 point [-]

I labelled them A and B for clarity and included links to the other one in each.

Comment author: Elo 18 January 2017 12:54:47AM 0 points [-]

This may be an odd counter-position to the norm.

I think that adults are more morally valuable because they have proven their ability not to be murderous, etc. Or possibly also not to be the next Gandhi. Children could go either way.

Comment author: interstice 18 January 2017 12:53:44AM 1 point [-]

Dominic Cummings asks for help in aligning the incentives of political parties. Thought this might be of interest, as aligning incentives is a common topic of discussion here, and Dominic is someone with political power (he ran the Leave campaign for Brexit), so giving him suggestions might be a good opportunity to see some of the ideas here actually implemented.

Comment author: Tyrin 17 January 2017 11:32:57PM *  0 points [-]

I didn't mean 'similar'. I meant that it is equivalent to Bayesian updating with a lot of noise. The great thing about recursive Bayesian state estimation is that it can recover from noise by processing more data. Because of this, noisy Bayes is a strict subset of noise-free Bayes, meaning pure rationality is basically noise-free Bayesian updating. That idea contradicts the linked article claiming that rationality is somehow more than that.

There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.

An approximate Bayesian algorithm can temporarily get stuck in local minima like that. Remember also that the underlying criterion for updating is not truth, but reward maximization. It just happens to be the case that truth is extremely useful for reward maximization. Evolution did not manage to structure our species in a way that makes it obvious for us how to balance social, aesthetic, …, near-term, long-term rewards to get a really good overall policy in our modern lives (or really in any human life beyond multiplying our genes in groups of people in the wilderness). Because of this, people get stuck all the time in conformity, envy, fear, etc., when there are actually ways of suppressing ancient reflexes and emotions to achieve much higher levels of overall and lasting happiness.
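The recovery-from-noise claim can be sketched in a few lines (my own illustration; the bias, noise rate, and grid are made up): recursive Bayesian updating on observations that are corrupted 20% of the time still concentrates the posterior near the true parameter, because the noise is folded into the likelihood.

```python
import random

random.seed(1)

# Recursive Bayesian estimation of a coin's bias from noisy readings:
# each flip is reported incorrectly 20% of the time, yet the posterior
# still concentrates near the true bias as data accumulates.
true_p, flip_noise = 0.7, 0.2
grid = [i / 100 for i in range(101)]      # hypotheses for the bias p
post = [1 / len(grid)] * len(grid)        # uniform prior

for _ in range(5000):
    heads = random.random() < true_p
    observed = heads if random.random() > flip_noise else not heads
    for i, p in enumerate(grid):
        # Likelihood of the *noisy* observation under hypothesis p.
        p_obs_heads = p * (1 - flip_noise) + (1 - p) * flip_noise
        post[i] *= p_obs_heads if observed else 1 - p_obs_heads
    total = sum(post)
    post = [w / total for w in post]      # renormalize after each update

estimate = max(zip(post, grid))[1]        # MAP estimate, should land near 0.7
print(f"MAP estimate of bias: {estimate:.2f}")
```

Note the updating itself is noise-free here; only the data channel is noisy, which matches the "noisy Bayes as a degraded special case" framing above.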

Comment author: James_Miller 17 January 2017 11:09:46PM 1 point [-]

Flinter, do you know who John Nash is? He had a brilliant mind and produced some remarkable works but he also had mental illness that occasionally caused him to misunderstand reality and so we can not just assume something he passionately believed in is right.

Comment author: ingive 17 January 2017 10:33:33PM 0 points [-]

Yeah, it's also called 'Enlightenment' in theological traditions. You can read the testimonies here. MrMind has, for example, read them, but he's waiting a bit longer to contact these people on Reddit to see if it sticks around. I think the audio can work really well with a good pair of headphones and playing it as FLAC.

Comment author: moridinamael 17 January 2017 10:23:26PM 0 points [-]

You've had people complete these steps and report that the "What will happen after you make the click" section actually happens?

Comment author: ingive 17 January 2017 10:13:57PM *  0 points [-]

I agree.

These are the steps I took to have identity death: link to steps. I also meditated on the 48 min hypnosis track youtube, in case you're interested in where I got my ideas from and want to try it yourself. It's of course up to you, but you have strong identity and ego issues, and I think it will help "you" (and me).

Comment author: moridinamael 17 January 2017 10:05:01PM 0 points [-]

It doesn't look like there's anywhere to go from here. It looks like you are acknowledging that where your positions are strong, they are not novel, and where they are novel, they are not strong. If you enjoy drawing the boundaries of your self in unusual places or emotionally associating your identity with certain ideas, go for it. Just don't expect anybody else to find those ideas compelling without evidence.

Comment author: ingive 17 January 2017 09:58:16PM *  0 points [-]

This is substantially different from saying with any kind of certainty that helping other people is identical to helping myself.

No, it's not.

Other people want things contrary to what I want.

What does that have to do with helping yourself, and thus other people?

Having low attachment to my identity is not the same thing as being okay with people hurting or killing me.

Yeah, but 'me' is used practically.

The fact that human brains run on physics in no way implies that helping another is helping yourself.

I said your neural activity includes both you and your environment, and that there is no differentiation. So there is no differentiation between helping another and helping yourself.

Again, if a person wants to kill me, I'm not helping myself if I hand him a gun. If you model human agents the way Dustin Hoffman's character does in I Heart Huckabees, you're going to end up repeatedly confused and stymied by reality.

That's the practical 'myself', used to talk about this body, its requirements and so on. You are helping yourself by not giving him a gun, because you are not differentiated from your environment. You are presuming that you would be helping yourself by giving him the gun because you think that there is another. No, there is only yourself. You help yourself by not giving the gun, because your practical 'myself' is included in 'yourself'.

This is also just not factual. You're making an outlandish and totally unsupported claim when you say that "emotionally accepting reality" causes the annihilation of the self. The only known things that can make the identity and self vanish are high-dose psychotropic compounds and extremely long and intense meditation of particular forms that do not look much like what you're talking about, and even these are only true for certain circumscribed senses of the word "self".

I don't deny that it's not that factual, as there is limited objective evidence.

These are pseudo-religious woo, not supported by science anywhere. I have given you very simple examples of scenarios where they are flatly false, which immediately proves that they are not the powerful general truths you seem to think they are.

I disagree with 'helping another is helping you' being pseudo-religious woo, but that's because we're talking about semantics. We have to decide what 'me' or my 'self' or 'I' is. I use neural activity as the definition of this. You seem to use some type of philosophical reasoning while presuming I use the same definition.

So we should investigate whether your self and identity can die from that, and whether other facts which we don't embrace emotionally lead to a similar process in their own area. That's the entire point of my original post.

Comment author: moridinamael 17 January 2017 09:31:51PM 0 points [-]

It seemed as if you were very new to the concept of non-emotional attachment to identity/I, because you argued about my semantics.

Not really, I've been practicing various forms of Buddhist meditation for several years and have pretty low attachment to my identity. This is substantially different from saying with any kind of certainty that helping other people is identical to helping myself. Other people want things contrary to what I want. I am not helping myself if I help them. Having low attachment to my identity is not the same thing as being okay with people hurting or killing me.

The rest of your post, which I'm not going to quote, is just mixing up lots of different things. I'm not sure if you're not aware of it or if you are aware of it and you're trying to obfuscate this discussion, but I will give you the benefit of the doubt.

I will untangle the mess. You said:

For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you. If that doesn't resonate enough, for example, evolutionary biology that we're all descendants from stardust might. Or that there is a probability that you don't exist (as per QM) although very small. So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally.

Then I said,

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree. For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

Since I have now grasped the source of your confusion with my word choice, I will reengage. You specifically say:

For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you.

This is a pure non sequitur. The fact that human brains run on physics in no way implies that helping another is helping yourself. Again, if a person wants to kill me, I'm not helping myself by handing him a gun. If you model human agents the way Dustin Hoffman's character does in I Heart Huckabees, you're going to end up repeatedly confused and stymied by reality.

So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally.

This is also just not factual. You're making an outlandish and totally unsupported claim when you say that "emotionally accepting reality" causes the annihilation of the self. The only known things that can make the identity and self vanish are

  • high-dose psychotropic compounds
  • extremely long and intense meditation of particular forms, which do not look much like what you're talking about

and even these are only true for certain circumscribed senses of the word "self".

So let's review:

I don't object to the naturalistic philosophy that you seem to enjoy. That's all cool and good. We're all about naturalistic science around here. The problem is statements like

So helping another is helping you.

and

Your identity and self vanishes, as it's no longer aligned with reality.

These are pseudo-religious woo, not supported by science anywhere. I have given you very simple examples of scenarios where they are flatly false, which immediately proves that they are not the powerful general truths you seem to think they are.

Comment author: ingive 17 January 2017 08:54:45PM *  0 points [-]

Indeed, this is true in the sense that it's most likely that this is the case based on the available evidence.

I'm glad that you're aligned with reality on this particular point; not many people are. But I wonder: why do you claim that helping others is not helping yourself, setting aside the practicalities of semantics? It seemed as though you were very new to the concept of non-emotional attachment to identity/I, because you argued my semantics.

But you claimed earlier that none of this is actually factual; would you like to elaborate on that? That these are my interpretations of vague and difficult-to-pin-down philosophical ideas?

The reason I push this is that you contradict yourself, and you very much seem to have an opinion on this specific matter.

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree. For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

So... "none of this is actually factual", it's philosophical ideas, but later on you agree that "you and your environment are not separated. This is obviously true" by saying "Indeed, that was what I said. It is still true." Earlier, though, it was true only "in some narrow technical sense" and "very much false in probably more relevant senses"; now it's simply "true" and "factual"? Is it still a "philosophical idea", part of the ideas of which "none of this is actually factual"?

Your statements in order:

  • not actually factual.
  • really vague philosophical ideas
  • may be true in some narrow technical sense
  • but it is also very much false in probably more relevant senses
  • indeed, that what was I said
  • it is still true

It's fine to be wrong and correct yourself :)

The activity of that atom is not relevant to my decision making process. That's it. What part of this is supposed to be in error?

Yeah, it isn't, but the example you gave of you and your environment is relevant to your decision-making process, as evidenced by your claim (outside of practicality and semantics) that "helping others is not helping yourself", for example. So using an analogy that is not relevant to your decision-making process, in contrast to your example where it is, is incorrect. That's why I say: use the example you used before, instead of making an analogy that I don't disagree with.

Comment author: morganism 17 January 2017 08:38:29PM 0 points [-]

I'm not able to load the game myself, but how about adding a scenario:

You have a computer researcher who is planning to pitch an upgrade to the trolley car system logic and computation systems on one track.....

Comment author: morganism 17 January 2017 08:36:21PM *  0 points [-]

I understand this, and as a young system you would potentially have a lot more rocks affected by the proposed gas giant, but as you also point out, any unbound material should have already been ejected from the system. It is difficult, though obviously not impossible, to change parabolic orbits into hyperbolic ones to reach these kinds of speeds, but those objects obviously got close enough to cross the Roche limit, or simply dissolved like the Christmas comet of 2014.
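For what it's worth, the bound/ejected distinction above comes down to the sign of an object's specific orbital energy: negative means a bound elliptic orbit, zero is parabolic, and positive is hyperbolic, i.e. the object leaves the system. A minimal sketch (the speeds and distances below are illustrative values, not taken from the comment):

```python
# Classify an orbit from its specific orbital energy:
#   eps = v^2 / 2 - mu / r
# eps < 0 -> elliptic (bound), eps == 0 -> parabolic, eps > 0 -> hyperbolic (ejected).

MU_SUN = 1.32712440018e20  # Sun's standard gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def classify_orbit(v, r, mu=MU_SUN):
    """v: speed in m/s; r: distance from the primary in m."""
    eps = v ** 2 / 2 - mu / r
    if eps < 0:
        return "elliptic (bound)"
    if eps == 0:
        return "parabolic"
    return "hyperbolic (unbound)"

# Earth's orbital speed at 1 AU (~29.78 km/s) is well below the solar
# escape speed there (~42.1 km/s), so the orbit is bound.
print(classify_orbit(29.78e3, AU))  # elliptic (bound)
# A 45 km/s object at 1 AU exceeds escape speed and gets ejected.
print(classify_orbit(45e3, AU))     # hyperbolic (unbound)
```

So a close encounter with a massive planet "changes a parabola into a hyperbola" exactly when the gravity assist pushes the object's speed past the local escape speed.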

Planet 9 is also theorized to be inclined near 90 degrees (edit: 30 degrees) to the orbital plane, so tossing things out where we aren't looking for them is another hazard in itself. I think the galactic plane crosses out where Pluto is now (the difficulty of finding secondary targets for New Horizons was compounded by background clutter from the Milky Way), and Planet 9 is another 40 degrees around the orbital plane, so with a (edit: 15k) orbit there is not a great chance it is going to be relevant in the double-influence scenario.
