If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


My "RECENT ON RATIONALITY BLOGS" section on the right sidebar is blank.

If this isn't just me, and it remains this way for long, I predict LW traffic will drop markedly. I primarily use LW habitually as a way to access SSC, and I'd bet my experience is not unique in this way.

Maybe you're just not rational enough to be shown that content? I see like 10 posts there.

MIRI has invented a proprietary algorithm that uses the third derivative of your mouse cursor position and click speed to predict your calibration curve, IQ and whether you would one-box on Newcomb's problem with a correlation of 95%. LW mods have recently combined those into an overall rationality quotient which the site uses to decide what level of secret rationality knowledge you are permitted to see.

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

EDIT: Some people seem to be missing that this is intended as humor...

2Manfred7y
it's a shame downvoting is temporarily disabled.
2The_Jaded_One7y
Why does everyone want to downvote everything, ever!? Seriously, lighten up!!!
0Elo7y
no, some things would benefit from being voted down out of existence.
6The_Jaded_One7y
Yes, I totally agree. In the last few weeks, I have seen some totally legit targets for being on -10 and not visible unless you click on them, such as the 'click' posts, repetitive spam about that other website, and probably the weird guy who just got banned from the open thread too. However, I have also seen people advocate using mass downvoting on an OK-but-not-great article on cults that they just disagree with, and now someone wants to downvote to oblivion a joke in the open thread. Why? Is humor banned? There is a legitimate middle ground between toxicity and brilliance.
0Elo7y
Agreed. I think humour is a mixed bag. Sometimes good and sometimes bad. In my ideal situation there would be a place for humour to happen where people can choose to go, or choose not to go. Humour should exist but mixing it in with everything else is not always great.
1Brillyant7y
Perhaps you are looking at the "RECENT POSTS" section rather than the section I mentioned? I'll work on this. Maybe you could work on reading?
0The_Jaded_One7y
No, it's definitely the "RECENT ON RATIONALITY BLOGS" section ;)
2Vaniver7y
It looks that way to me as well, and I don't think that should be the case. I'll investigate what's up.
0Vaniver7y
On an initial pass, the code hasn't been updated in a month, so I doubt that's the cause. If you look at the list of feedbox URLs here, two of them seem to be broken (the GiveWell one and the CFAR one). It's not clear to me yet how Google's feed object, made here, works; it looks like we feed it a URL, then try to load it in a way that handles errors. But if it checks the URL ahead of the load, that might error out in a way that breaks the feedbox. (The page also has an "Uncaught Error: Module: 'feeds' not found!" which I'm not sure how to interpret yet, but it makes me more suspicious of that region.)
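(A quick way to test that hypothesis from outside the site is to fetch each feed URL directly and see which ones fail. A minimal Python sketch, assuming the requests library is available; the URLs below are hypothetical stand-ins for the actual feedbox list.)

    import requests

    # Hypothetical stand-ins for the sidebar's feedbox URLs.
    feed_urls = [
        "https://blog.givewell.org/feed/",
        "http://rationality.org/feed/",
    ]

    for url in feed_urls:
        try:
            r = requests.get(url, timeout=10)
            print(url, "->", r.status_code)
        except requests.RequestException as e:
            print(url, "-> failed:", e)

Any URL that errors out here is a candidate for whatever is making the feed object's load path fall over.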
1morganism7y
Both NoScript and Disconnect block those. I still have to whitelist VigLink every time I come here, and I can't see lots of features and editing handles if I haven't gone to Reddit and whitelisted it before visiting here...
1Vaniver7y
So, we use google.feeds.Feed(url) to manage this. If you go to the docs page for that, you find:

Flinter has been banned after a private warning. I'm deleting the comment thread that led to the ban because it's an inordinate number of comments cluttering up a welcome thread.

Users are reminded that responding to extremely low-quality users creates more extremely low-quality comments, and that extended attempts to elicit positive communication almost never work. Give up after a third comment, and probably by your second.

From Flinter's comment:

The mod insulted me, and Nash.

While I respect your decision as a moderator to ban Flinter, insulting Nash is a horrible thing to do and you should be ashamed of yourself!

/ just kidding

Also, someone needs to quickly make a screenshot of the deleted comment threads, and post them as new LW controversy on RationalWiki, so that people all around the world are properly warned that LW is pseudoscientific and disrespects Nash!

/ still kidding, but if someone really does it, I want to have a public record that I had this idea first

2drethelin7y
this is why we need downvotes
5Vaniver7y
As the Churchill quote goes: Less Wrong is not, and will not be, a home for fanatics.
2TiffanyAching7y
Fair enough. Kindest thing to do really. I think people have a hard time walking away even when the argument is almost certainly going to be fruitless.

For general information -- since Flinter is playing games to get people to follow the steps he suggests, it might be useful to read some of his other writings on the 'net to cut to the chase. He is known as Juice/rextar4444 on Twitter and Medium and as JokerPravis on Steemit.

0[anonymous]7y
Since we no longer have downvotes, might it be a good idea for the mods to start banning cult spammers like ingive and flinter?

At what age do you all think people have the greatest moral status? I'm tempted to say that young children (maybe aged 2-10 or so) are more important than adolescents, adults, or infants, but don't have any particularly strong arguments for why that might be the case.

4knb7y
I don't think children actually have greater moral status, but harming children or allowing children to be harmed carries more evidence of depraved/dangerous mental state because it goes against the ethic of care we are supposed to naturally feel toward children.
2btrettel7y
If you think in terms of QALYs, that could be one reason to prefer interventions targeted at children. Your average child has more life left to live than your average adult, so if you permanently improve their quality of life from 0.8 QALYs per year to 0.95 QALYs per year, that results in a larger QALY change than the same intervention on an adult (see the sketch below). This argument has numerous flaws. One which comes to mind immediately is that many interventions are not so long-lasting, so both adults and children would presumably gain the same. It is also tied to particular forms of utilitarianism one might not subscribe to.
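(To make the arithmetic concrete, a toy Python calculation. The remaining-life figures of 70 and 40 years are illustrative assumptions, and it assumes the quality improvement is permanent.)

    # QALY gain = years remaining * improvement in quality of life per year.
    def qaly_gain(years_remaining, quality_before, quality_after):
        return years_remaining * (quality_after - quality_before)

    print(qaly_gain(70, 0.80, 0.95))  # child: 70 * 0.15 ~ 10.5 QALYs gained
    print(qaly_gain(40, 0.80, 0.95))  # adult: 40 * 0.15 ~  6.0 QALYs gained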
0Elo7y
This may be an odd counter-position to the norm. I think that adults are more morally valuable because they have proven their ability not to be murderous, etc. Or possibly also not to be the next Gandhi. Children could go either way.
2TiffanyAching7y
Could you explain this a little more? I don't quite see your reasoning. Leaving aside the fact that "morally valuable" seems too vague to me to be meaningfully measured anyway, adults aren't immutably fixed at a "moral level" at any given age. Andrei "Rostov Ripper" Chikatilo didn't take up murdering people until he was in his forties. At twenty, he hadn't proven anything.

Bob at twenty years old hasn't murdered anybody, though Bob at forty might. Now you can say that we have more data about Bob at twenty than we do about Bob at ten, and therefore are able to make more accurate predictions based on his track record, but by that logic Bob is at his most morally valuable when he's gasping his last on a hospital bed at 83, because we can be almost certain at that point that he's not going to do anything apart from shuffle off the mortal coil.

And if "more or less likely to commit harmful acts in future" is our metric of moral value, then children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes. That's not intended to put any words in your mouth by the way, I'm just saying that when I try to follow your reasoning it leads me to weird places. I'd be interested to see you explain your position in more detail.
0Viliam7y
That reminds me of a scene in Psycho-Pass where... ...va gur svefg rcvfbqr, n ivpgvz bs n ivbyrag pevzr vf nyzbfg rkrphgrq ol gur cbyvpr sbepr bs n qlfgbcvna fbpvrgl, onfrq ba fgngvfgvpny ernfbavat gung genhzngvmrq crbcyr ner zber yvxryl gb orpbzr cflpubybtvpnyyl hafgnoyr, naq cflpubybtvpnyyl hafgnoyr crbcyr ner zber yvxryl gb orpbzr pevzvanyf va gur shgher. (rot 13)
0TiffanyAching7y
Yes, that's the sort of idea I was getting at - though not anything so extreme. Of course I don't really think Elo was saying that at all anyway, I'm not trying to strawman. I'd just like to see the idea clarified a bit. (We use substitution ciphers as spoiler tags? Fancy!)
0Elo7y
I am not keen on a dystopian thought police. We have at the moment a lot more care given to children than to adults; for example, children's hospitals vs. adults' hospitals. The idea is not drawn out to further conclusions as you have done, but I had to ask why we care about children's hospitals more than adults' hospitals, and generally decided that I don't like the way it is.

I believe the common behaviour of liking children more comes out of some measure of "they are cute", and is similar to why we like baby animals more than fully grown ones: simply because they have a babyness to them. If that is the case then it's a relatively unfounded belief and a bias that I would rather not carry.

Adults are (probably) productive members of society; we can place moral worth on that life as it stands in the relatively concrete present, not the potential that you might be measuring when a child grows up. Anyone could wake up tomorrow and try to change the world, or wake up tomorrow and try to lie around on the beach. What causes people to change suddenly? Not part of this puzzle. I am confident that the snapshot gives a reasonably informative view of someone's worth. They are working hard in EA? That's the moral worth they present when they reveal with their actions what they care about.

What about old people? I don't know... Have not thought that far ahead. Was dealing with the cute-baby bias first. I suppose they are losing worth to society as they get less productive. And at the same time they have proven themselves worthy of being held/protected/cared for (or maybe they didn't).
0TiffanyAching7y
The urge to protect and prioritize children is partly biological/evolutionary - they have to be "cute" otherwise who'd put up with all the screaming and poop long enough to raise them to adulthood? The urge to protect and nurture them is a survival-of-the-species thing. Baby animals are cute because they resemble human babies - disproportionately big heads, big eyes, mewling noises, helplessness.

But from a moral perspective I'd argue that there is a greater moral duty to protect and care for children because they can neither fend nor advocate for themselves effectively. They're largely at the mercy of their carers and society in general. An adult may bear some degree of responsibility for his poverty, for example, if he has made bad choices or squandered resources. His infant bears none of the responsibility for the poverty but suffers from it nonetheless and can do nothing to alleviate it. This is unjust.

There's also the self-interest motive. The children we raise and nurture now will be the adults running the world when we are more or less helpless and dependent ourselves in old age. And there's the future-of-humanity as it extends past your own lifetime too, if you value that.

But of course these are all points about moral duty rather than moral value. I'm fuzzier on what moral value means in this context. For example the difference in moral value between the young person who is doing good right now and the old person who has done lots of good over their life, but isn't doing any right now because that life is nearly over and they can't. Does ability vs. desire to do good factor into this? The child can't do much and the end-of-life old person can't do much, though they may both have a strong desire to do good. Only the adult in between can match the ability to the will.
0Elo7y
Yes, I agree with most of what you have said. I would advocate a "do no harm" attitude rather than a "provide added benefit" attitude just because they are children. I wouldn't advocate neglecting children, but I wouldn't put them ahead of adults. As for what we should do: I don't have answers to these questions. I suspect it comes down to how each person weighs the factors in their own head, and consequently how they want the world to be balanced. Just like some people care about animal suffering and others do not. (I like kids, definitely, but moral value is currently subjectively determined.)
0ChristianKl7y
It depends very much on the context. In many instances where we want to save lives, QALYs are a good metric. In other cases, like deciding who should be able to sit down in a bus, the metric is worthless.

Is there a simple coding trick that would allow this blockchain micropayment scheme to work on Reddit-based sites?

https://steemit.com/facebook2steemit/@titusfrost/in-simple-english-for-my-facebook-friends-how-and-why-to-join-steemit

This seems like an interesting way to get folks to write deeper and more thoughtful articles, by motivating them with some solid reward. And if something does go viral, it can allow some monetization without resorting to ad-based sites...

BTW, there was a link to a simple markdown guide on GitHub in there:

https://guides.github.com/features/mastering-m...

1Luke_A_Somers7y
OK, I had dropped this for a while, but here are my thoughts. I haven't scrubbed everything that could be seen through rot13 because it became excessively unreadable.

For Part 1: gur enqvhf bs gur pragre fcurer vf gur qvfgnapr orgjrra bar bs gur qvnzrgre-1/2 fcurerf naq gur pragre. Gur qvfgnapr sebz gur pragre bs gur fvqr-fcurer gb gur pragre bs gur birenyy phor vf fdeg(A)/4. Fhogenpg bss n dhnegre sbe gur enqvhf bs gur fcurer, naq jr unir gur enqvhf bs gur pragre fcurer: (fdeg(A)-1)/4. Guvf jvyy xvff gur bhgfvqr bs gur fvqr-1 ulcrephor jura gung'f rdhny gb n unys, juvpu unccraf ng avar qvzrafvbaf. Zber guna gung naq vg jvyy rkgraq bhgfvqr.

Part 2: I admit that I didn't have the volume of high-dimensional spheres memorized, but it's up on Wikipedia, and from there it's just a matter of graphing and seeing where the curve crosses 1, taking into account the radius formula derived above. I haven't done it, but will eventually.

Part 3 looks harder and I'll look at it later.
0Thomas7y
Part 1 is good.
0gjm7y
dhrfgvba bar

Qvfgnapr sebz prager bs phor gb prager bs "pbeare" fcurer rdhnyf fdeg(a) gvzrf qvfgnapr ba bar nkvf = fdeg(a) bire sbhe. Enqvhf bs "pbeare" fcurer rdhnyf bar bire sbhe. Gurersber enqvhf bs prageny fcurer = (fdeg(a) zvahf bar) bire sbhe. Bs pbhefr guvf trgf nf ynetr nf lbh cyrnfr sbe ynetr a. Vg rdhnyf bar unys, sbe n qvnzrgre bs bar, jura (fdeg(a) zvahf bar) bire sbhe rdhnyf bar unys <=> fdeg(a) zvahf bar rdhnyf gjb <=> fdeg(a) rdhnyf guerr <=> a rdhnyf avar.

dhrfgvba gjb

Guvf arire unccraf. Hfvat Fgveyvat'f sbezhyn jr svaq gung gur nflzcgbgvpf ner abg snibhenoyr, naq vg'f rnfl gb pbzchgr gur svefg ubjrire-znal inyhrf ahzrevpnyyl. V unira'g gebhoyrq gb znxr na npghny cebbs ol hfvat rkcyvpvg obhaqf rireljurer, ohg vg jbhyq or cnvashy engure guna qvssvphyg.

dhrfgvba guerr

Abar. Lbh pnaabg rira svg n ulcrefcurer bs qvnzrgre gjb orgjrra gjb ulcrecynarf ng qvfgnapr bar, naq gur ulcrephor vf gur vagrefrpgvba bs bar uhaqerq fcnprf bs guvf fbeg.
0Thomas7y
One: Correct Two: Incorrect Three: Correct
1gjm7y
Oooh, I dropped a factor of 2 in the second one and didn't notice because it takes longer than you'd expect before the numbers start increasing. Revised answer:

dhrfgvba gjb

Vs lbh qb gur nflzcgbgvpf pbeerpgyl engure guna jebatyl, gur ibyhzr tbrf hc yvxr (cv gvzrf r bire rvtug) gb gur cbjre a/2 qvivqrq ol gur fdhner ebbg bs a. Gur "zvahf bar" va gur sbezhyn sbe gur enqvhf zrnaf gung gur nflzcgbgvp tebjgu gnxrf ybatre gb znavsrfg guna lbh zvtug rkcrpg. Gur nafjre gb gur dhrfgvba gheaf bhg gb or bar gubhfnaq gjb uhaqerq naq fvk, naq V qb abg oryvrir gurer vf nal srnfvoyr jnl gb trg vg bgure guna npghny pnyphyngvba.
0Thomas7y
Correct. I gave some Haskell code as a comment over on my blog, under the posted problem. Dimension 1206 is the smallest such number. One can experiment with other values.
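(For anyone who wants to reproduce the number without digging up the Haskell, here is a Python sketch of the same search; my own, not Thomas's code. It uses the radius formula derived above, r = (sqrt(n) - 1)/4, together with the standard n-ball volume formula, and finds the first dimension where the central sphere's volume exceeds the unit cube's.)

    import math

    def log_ball_volume(n, r):
        # log of the n-ball volume V = pi^(n/2) / Gamma(n/2 + 1) * r^n
        return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1) + n * math.log(r)

    n = 2  # start at 2: at n = 1 the central sphere has radius 0
    while True:
        r = (math.sqrt(n) - 1) / 4  # radius of the central sphere in the unit cube
        if log_ball_volume(n, r) > 0:  # unit cube has volume 1, i.e. log-volume 0
            print(n)  # expected: 1206
            break
        n += 1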
0Luke_A_Somers7y
On the face of it, the premise seems wrong. For any finite number of dimensions, there will be a finite number of objects in the cube, which means you aren't getting any infinity shenanigans - it's just high-dimensional geometry. And in no non-shenanigans case will the hypervolume of a thing be greater than a thing it is entirely inside of.
2Thomas7y
Are you sure, it's entirely inside?
0Luke_A_Somers7y
OK, that's an angle (pun intended) I didn't catch upon first consideration.
2gjm7y
High-dimensional cubes are really thin and spiky.
0Thomas7y
They are counterintuitive. A lot is counterintuitive in higher dimensions, especially something I may write about in the future. This 1206 business is even Googleable, which I learned only after I had calculated the number 1206 myself. https://sbseminar.wordpress.com/2007/07/21/spheres-in-higher-dimensions/

I wanted to make a discussion post about this, but apparently I need 2 karma points, and this forum is too ignorant to give them out. I'll post here and I guess probably be done with this place, since it's not even possible for me to attempt to engage in meaningful discussion. I'd also like to make the conjecture that this place cannot be based on rationality with the rule sets that are in place for joining, and I don't understand why that isn't obvious.

Anyways, here is what would have been my article for discussion:

"I am not perfectly sure how this site... (read more)

4MrMind7y
People have come here and said: "Hey, I've something interesting to say regarding X, and I need a small amount of karma to post it. Can I have some?" and have been given plenty. A little reflection and a moderate amount of politeness can go a long way.
0Flinter7y
Yup, but that ruins my first post, cause I wanted it to be something specific. So what you are effectively saying is that I have to make a sh!t post first, and I think that is irrational. I came here to bring value, not to be filtered from doing so. Cheers!
2MrMind7y
It makes sense from the inside of the community. The probability of someone posting something of value as their first post is much lower than the probability of someone posting spam on the front page. So a very low bar to post on the front page is the best compromise between "discourage spammers" and "don't discourage posters who have something valuable to say".
0Flinter7y
If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational. Think about what you are saying; it's ridiculous. Are you also unwilling to discuss the content, and simply stuck on my posting methods, writing, and character?
2MrMind7y
Well, since it's an automated process, it filters anything, be it spam, Nash's argument, or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out. No, mine was just a suggestion for a way that would allow you to lubricate the social friction I think you're experiencing here. On the other side, I am reading your posts carefully and will reply when I'm done thinking about them.
0Flinter7y
You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it. Sigh, I guess we never will address Ideal Money, will we. I've already spent all day with like 10 posters that refuse to do anything but attack my character. Not surprising, since the subject was insta-mod'd anyway. Well, as a last hail mary, I just want to say I think you are dumb for purposefully trolling me like this and refusing to address Nash's proposal. It's John Nash, and he spent his life on this proposal; y'all won't even read it. There is no intelligence here, just pompous robots avoiding real truth. Do you know who Nash is? It took 40 years the first time to acknowledge what he did with his equilibrium work. It's been 20 in regard to Ideal Money...
2MrMind7y
I wonder what my failure in communicating my idea is in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold will not be denied the chance to post their contributions. In due time, I will. That is unfortunate, but you must be prepared to have these discussions over the long run. There are people who come here only once a week, or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks or years. I am reading it right now, and exactly because it's Nash I'm reading as carefully as I can. But what won't fly here is insulting. Frustration at not being able to communicate your idea is something that we have all felt; after all, communicating clearly is hard. But if you fall below a certain standard of respect, you will be moderated and possibly even banned. That would allow you to communicate your idea even less.
0Flinter7y
Let me communicate to you what I am saying. I bring the most important writing ever known to mankind. Who is the mod that moderated Nash? Where is the intelligence in that? Let's not call that intelligence and try to defend it. Let's call it an error. Cheers! :)

Do you think I am not prepared? I have been at this for about 4 years, I think. I have written hundreds, maybe thousands, of related articles, and been on many, many forums and sites discussing it and "arguing" with many, many people.

Ah, sincerity!!!!!!! I have been insulted by nearly every poster that has responded. The mod insulted me, and Nash. I have never been insulted so much so quickly on any other site. Yup, ban the messenger and ignore the message. Why would these people remain ignorant of Nash? How did Nash go 20 years without anyone giving his lectures serious thought?
0Flinter7y
I don't think I should have done what I did to get my first two karma points. I suspect it degrades the quality of the site at a rate at which rationality can't inflate it. But I'll save my reasoning and the discussion of it for the moment. I am now able to post my discussion on its own, it seems, so I did. 2x cheers.
3niceguyanon7y
Your first paragraph venting your frustration at the 2-karma rule was unnecessary, but cool, you realized that. I think this post is fine as an Open Thread comment or as an introduction post; I don't see why it needs its own discussion. Plus, it seems like you are making an article stating that you will make an article. I don't think you need to do that. Just come right out and say what you have to say.
0Flinter7y
No, you don't understand. I have something valuable to bring, but I needed to make my INTRO post an independent one, and I was stripped of that possibility by the process.
3gjm7y
You weren't "stripped of that possibility". LW has small barriers to entry here and there; you are expected to participate in other ways and demonstrate your bona fides and non-stupidity before posting articles. Do you think that is unreasonable? Would it be better if all the world's spammers could come along and post LW articles about their sex-enhancing drugs and their exam-cheating services and so on?
0Flinter7y
Yes, I think it's not reasonable, because it acted counter-productively to the intended use that you are suggesting it was implemented for.
0gjm7y
How?
0Flinter7y
Because I could not do what was required to make a proper post, which was to not have to make "shit posts" before making my initial post (which needed to be independent). So the filter, which is trying to foster rational thinking, ends up filtering out the seeds of it.
6gjm7y
No one's requiring you to make "shit posts". You have not explained why your post had to be "independent". Perhaps there are reasons -- maybe good ones -- why you wanted your first appearance here to be its posting, but I don't see any reason why it's better for LW for that to be so. In any case, "X has a cost" is not a good argument against X; there can be benefits that outweigh the costs. I hope you will not be offended, but I personally am quite happy for you to be slightly inconvenienced if the alternative is having LW deluged with posts from spammers.
0[anonymous]7y

Was reminded to say hello here!

I'm Jacob Liechty, with a new account after using a less active pseudonym for a while. I've been somewhat active around the rationality community and know a bunch of people therein and throughout. Rationalism and its writings had a pretty deep impact on my life about 5 years ago, and I haven't been able to shake it since.

I currently make video games for a living, but will be keeping my finger to the pulse to determine when to move into more general tech startups, some sort of full time philanthropy, maybe start an EA nonprofi...

[This comment is no longer endorsed by its author]

Good news: People are becoming more aware that AI is a thing, even mainstream media mention it sometimes.

Bad news: People think that a spellchecker is an example of AI.

¯\_(ツ)_/¯

0ingive7y
I think you should then ask what you can do about it (or what the most effective action is).
2chaosmage7y
You could give this answer to literally anything.

a

[This comment is no longer endorsed by its author]

I heard in passing that Britain just passed a Robotic Rights Act, but I can't find anything on it in search, except the original paper by the U.K. Office of Science and Innovation's Horizon Scanning Centre.

"However, it warned that robots could sue for their rights if these were denied to them.

Should they prove successful, the paper said, "states will be obligated to provide full social benefits to them including income support, housing and possibly robo health care to fix the machines over time.""

not to mention slavery, international...

Some of us sometimes make predictions with probabilities attached; does anybody here actually try to keep up a legit belief web and do Bayesian updating as the results of predictions come to pass?

If so, how do you do it?

1ChristianKl7y
No, and having a self-consistent belief net might decrease the quality of the beliefs a lot. Having multiple distinct perspectives on an issue was suggested by Tetlock to be very useful.
1moridinamael7y
A Bayesian network is explicitly intended to accommodate conflicting perspectives and update the weights of two or more hypotheses based on the result of an observation. There's absolutely no contradiction between "holding multiple distinct perspectives" and "mapping belief dependencies and using Bayesian updating".
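(For concreteness, a minimal Python sketch of that kind of update over two competing hypotheses; all the numbers here are made up.)

    # Prior weights on two competing hypotheses, and the likelihood of one
    # observation under each (all values hypothetical).
    priors = {"H1": 0.6, "H2": 0.4}
    likelihoods = {"H1": 0.9, "H2": 0.3}

    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posteriors = {h: p / total for h, p in unnormalized.items()}

    # Both hypotheses survive; the observation only redistributes weight.
    print(posteriors)  # {'H1': 0.818..., 'H2': 0.181...}

Neither hypothesis is discarded; the evidence just shifts the weights, which is the sense in which a Bayesian network holds multiple distinct perspectives at once.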

How would we go about changing human behavior to be more aligned with reality? I was thinking it is undoubtedly the most effective thing to do: ensure world domination of rationalist, effective altruist and utilitarian ideas. There are two parts to this; I mention R, EA and U simply because they resonate very well with the types of users here, and alignment with reality I explain next. How I expect alignment with reality to be is accepting facts fully. For example, thinking and emotionally; this includes uncertainty of facts (because of facts like an int...

3Thomas7y
A.K.A. the Soviet Union and its dependent states.
0Viliam7y
What makes you believe that the ruling class of Soviet Union was rational? It was a country where Lysenkoism was the official science, and where Kolmogorov was allowed to do math despite being gay only because he made a contribution to military technology.
2Thomas7y
It was NOT rational. It was declared rational. As in "we are not going to pursue profit, but we will instead drop prices, as socialism is going to be a much more rational system". And many, many more such slogans. Several might even be true. The social democrats of today still want some of those "rationalizations" implemented. The problem is, the world doesn't operate on such rationales. And this Effective Altruism looks similar to me. If one wants "to do good" for others, he should invest his money wisely. He should employ people, he should establish new businesses with those less fortunate people. Giving something for nothing is not a very good idea! But using your powers for others to give something for nothing ... is a bad idea. In the name of self-perceived rationality, it's even worse.
0ingive7y
I wrote to align with reality, thus accept facts fully, which includes the uncertainty of facts. There is no alignment with reality in any of what you've said in comparison to mine, so strawman at best. You're implying that "doing good" effectively couldn't consist of investing, employing or establishing businesses. It's independent of method, as long as it is effective in the context of effective altruistic actions. It makes no difference as long as it's the most effective with positive expected value.
0ingive7y
Why do you think that?
3Thomas7y
It was the same rationale. "We know what's best for everybody else, so we will take power!" Besides the fact that those revolutionaries were wrong from the beginning, they purged each other throughout the process, so that the most cunning one was selected. Which was even more wrong than those early revolutionaries were. Or maybe Stalin was more right than Trotsky, who knows, but it didn't matter very much. Even Lenin was wrong. But even if Lenin had been right, Andropov would still be corrupted.
0ingive7y
I didn't really mean that. It was just setting an emotional stage for the rest of the comment. What do you think of the rest?
2ZankerH7y
Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low. Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th-century technology. Why don't you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all the others so far fell into: megalomania and genocide?

And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation? If we had had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point? Because that's essentially what you're proposing, from the point of view of all potential futures where you fail.
0ingive7y
You're excluding being aligned with objective reality (accepting facts, etc) with said effectiveness. Otherwise, it's useless. I'm unsure why you're presuming rearranging people's brains isn't done constantly independent of our volition. This simply starts questioning how we can do it, with our current knowledge. Why would it lead to megalomania and genocide, when it's not aligned with reality? An understanding of neuroscience and evolutionary biology, presuming you were aligned with reality to figure it out and accept facts, would be enough and still understanding that we can be wrong until we know more. As I said "this includes uncertainty of facts (because of facts like an interpretation of QM)." which makes us embrace uncertainty, that reality is probabilistic with this interpretation. It's not absolute. I'm not.
0MrMind7y
I think that the problem you state is unsolvable. The human brain evolved to solve social problems related to survival, not to be a perfect Bayesian reasoner (Bayesian models have a tendency to explode in computational complexity as the number of parameters increases). Unless you want to design a brain anew, I see no way to modify ourselves to become perfect epistemic rationalists, besides a lot of effort. That might be a shortcoming of my imagination, though. There's also the case that we shouldn't be perfect rationalists: possibly the cost of adding a further decimal to a probability is much higher than the utility gained because of it, but of course we couldn't know in advance. Also, sometimes our brain prefers to fool itself so that it is better motivated / happier, although Eliezer argued at length against this attitude. So yeah, the landscape of the problem is thorny. You really meant U(read sequences) < U(figure this out).
0ingive7y
I see that the problem in your reasoning is that you've already presumed what it entails, what you have missed out on is understanding ourselves. Science and reasoning already tell us that we share neural activity, are a social species thus each of us could be considered to be a cell in a brain. It's not as much if every cell decides to push the limits of its rationality, rather the whole collective as long as the expected value is positive. But to do that the first cells have to be U(figure this out). It's not either perfect or non-perfect, that's absolute thinking. Rather by inductive reasoning or QM probabilistic thinking, "when should I stop refining this, instead share this?" after enough modification and understanding of neuroscience and evolutionary biology for the important facts in what we are. Based on not thinking in absolute perfection, it's not a question of if, but rather what do we do? Because your reasoning cannot be already flawed before thinking about this problem. We already know that we can change behavior and conditioning, look around the world how people join religious groups, but how do we capitalize on this brain mechanism to increase productivity, rationality, and so on? Before I said, "stop refining it then share it", that's all it takes and the entire world will have changed. Regarding that, our brain can fool itself, yeah, I don't see why there can't be objective measurement outside of subjective opinion and that it'll surely be thought of in the investigation process.
0moridinamael7y
Could you unpack "aligning with reality" a bit? Is it meaningfully different from just having a scientific mindset?
0ingive7y
A scientific mindset has a lower probability of being positive expected value because there is more than one value when it comes to making decisions, sometimes in conflict with each other. This can lead to cognitive dissonance in daily life. It's because science is a tool, the best one we got. Aligning with reality has a higher probability as it's an emotional heuristic, with only one value necessary. Aligning with reality means submitting yourself emotionally, similar to how a religious person submits to God, but in this case, our true creator: To logic, where it is defined here as "the consistent patterns which bring about reality". Then you accept facts fully. You understand how everything is probabilities, as per one interpretation of quantum mechanics and that experience is a tool rather than a goal. Using inductive reasoning and deciding actions as per positive expected value allows you to accept facts and be aligned with reality. It's hard if you keep thinking binary, whether it be absolutes or not, 1's or 0's. Because to be able to accept facts it to be able to accept one might be wrong, everything is probabilities, infinite possibilities. Practically, if you know exercising every day is positive expected value, for example, then as you align yourself with reality in every moment, you realize even if you injure yourself accidentally today, you won't give up reality. Because you made the most efficient action as per your knowledge and you already accounted for the probability of accidentally injuring yourself. So as you keep feeling you also upgrade it with the probabilities to keep your emotions aligned with reality and easier able to handle situations as I mentioned above, however, maybe something more specific if someone breaks your trust. You already took it in consideration so you won't completely lose trust and emotions for reality. When you accept and align yourself with reality, then the facts which underlie it, with our current understandings and
0moridinamael7y
I think there is an irreconcilable tension between your statement that one should completely emotionally submit to and align with facts, and that one should use a Bayesian epistemology to manage beliefs. There are many things in life and in science that I'm very certain about, but by the laws of probability I can never be 100% certain. There are many more things that I am less than certain about, and hold a cloud of possible explanations, the most likely of which may only be 20% probable in my estimation. I should only "submit" to any particular belief in accordance with my assessment of its likelihood, and can never justify submitting to some belief 100%. Indeed, doing so would be a form of irrational fundamentalism. I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree. For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality. I suspect that you haven't read through all of Eliezer's blog posts. His writings cover all the things you're talking about, but do it in a way that is grounded in much sturdier foundations than you appear to be using. It also seems that you are very much in love with this idea of Logic as being the One Final Solution to Everything, and that is always a huge danger sign in human thinking. Just thinking probablistically, the odds that the true Final Solution to Everything has been discovered and that you are in possession of it are very low. Hence the
0ingive7y
Not necessarily, because the submitting is a means rather than the goal, and you will always never be certain. It's important to recognize empirically how your emotions work in contrary to a Bayesian epistemology, how using its mechanisms paradoxically lead to something which is more aligned with reality. It's not done with Bayesian epistemology, it is done with emotions, that do not speak in our language and it's possibly hard-wired to be that way. So we become aware of it and mix in the inductive reasoning. "true in some narrow technical sense" yet "false in probably more relevant senses" this is called cognitive dissonance, empirically it can even be this way by some basic reasoning, both emotionally and factually, which is what I am talking about, and which needs to be investigated. You're proving my point :) That's simply semantics, the problem is attaching emotionally to a sense of "I", which is not aligned with reality, independent of action, you may speak of this practical body, hands, I, for communication, it all arises in your neural activity without a center and it's ever changing. Empirically, that arises in the subjective reference frame, which is taken as a premise for this conversation. Yes. Unsure if his writings cover what I am talking about since evident by what you've said so far. Not that I blame you, I just want us to meta observe ourselves so we can be more aligned. I'm unsure what considers as danger sign in human thinking if you change perspective, the likelihood that something is worse than what we have is low. You only need a limited emotional connection to science and rationality to realize this and how bad thinking spreads epidemically now, but from someone like us, it's more likely to be good thinking? The likelihood to investigate this is very high to be positive expected value because inherently you, I and more possess the qualities which are not aligned with reality. I want to reassure you of something, however. Alignment with re
0moridinamael7y
Science does not actually know how emotions work to the degree of accuracy you are implying. Your statement that using emotional commitment rather than Bayesian epistemology leads to better alignment with reality is a hypothesis that you believe, not a fact that has been proven. If you become a very successful person by following the prescription you advocate, that would be evidence in favor of your hypothesis, but even that would not be very strong evidence by itself. I am not sure what you're saying here. "Cognitive dissonance" is not the same thing as observing that a phenomenon can be framed in two different mutually contradictory ways. I do not have an experience of dissonance when I say, "From one point of view we're inseparable from the universe, from a different point of view we can be considered independent agents." These are merely different interpretative paradigms and neither are right or wrong. I am trying to say nicely that Eliezer's writings comprehensively invalidate what you're saying. The reason you're getting pushback from Less Wrong is that we collectively see the mistakes that you're making because we have a shared bag of epistemic tools that are superior to yours, not because you have access to powerful knowledge and insights that we don't have. You would really benefit in a lot of ways from reading the essays I linked before you continue proselytizing on Less Wrong. We would love to have you as a member of the community, but in order to really join the community you will need to be willing to criticize yourself and your own ideas with detachment and rigor. I'm not arguing that changing perspective from default modes of human cognition is bad. I'm arguing that your particular brand of improved thinking is not particularly compelling, and is very far from being proven superior to what I'm already doing as a committed rationalist. I would actually suggest that you stop using the phrase "aligning with reality" because it does not seem to conve
0ingive7y
I don't know, that's why I wanted to raise an investigation into it, but empirically you can validate or invalidate the hypothesis by emotional awareness, which is what I said at the start of my message you quoted and somehow make me seem to imply science when I say empirically. First sentence: "It's important to recognize empirically" You might've had, but no longer. That's how cognitive dissonance works. Independent agents is an empirical observation which I have already taken as a premise as a matter of communication. Emotionally you don't have to be an independent agent of the universe if you emotionally choose to. It's a question whether one alignment is more aligned with reality based on factual evidence or what you feel (been conditioned). Right or wrong is a question of absolutes. More aligned overtime is not. I'm unsure what it is I have not written which has not tried to communicate this message, in case you don't understand, that's exactly what I am trying to tell you. I am offering to raise a discussion to figure out how to do it. Aligning with reality implies detachment from things which are not aligned. If you wonder if attachment to it is possible, yeah as a means, but you'll soon get over it by empirical and scientific evidence. I'm not sure, that's why I want to raise a discussion or a study group to investigate this idea. Simply being aligned with reality gives you equilibrium as that's what you were designed to do. Using Occam's razor here simplifies your programming. The bottom line is being able to accept facts emotionally (such as neural activity before) rather than relying on empirical observations of social conditioning. I'm unsure that you've in any way disproved my point I just made. That's the point I want to bring, we should want to investigate that further and how we can align ourselves with the facts emotionally (empirically). But how do we do it? Simply by saying it like this "true in some narrow technical sense" then "false i
0moridinamael7y
There is a narrow technical sense in which my actions are dependent on the gravitational pull of some particular atom in a random star in a distant galaxy. That atom is having a physical effect on me. This is true and indisputable. In a more relevant sense, that atom is not having any effect on me that I should bother with considering. If a magical genie intervened and screened off the gravitational field of that atom, it would change none of my choices in any way that could be observed. What am I supposedly believing that is false, that is contradicted by science? What specific scientific findings are you implying that I have got wrong? ... Let me back way up. You are saying a lot of really uncontroversial things that nobody here particularly cares to argue about, like "Occam's razor is good" and "we are not causally separate from the universe at large" and "living life as a human requires a constant balancing and negotiation between the emotional/sensing/feeling and rational/deliberative/calculating parts of the human mind". These ideas are all old hat around here. They go all the way back to Eliezer's original essays, and he got those ideas from much older sources. Then you're jumping forward and making quasi-religious statements about "aligning with reality" and "emotionally submitting" and talking about how your "sense of self disappears". All that stuff is your own unsupported extrapolations. This is the reason you're having trouble communicating here.
0ingive7y
This is what you said: "For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses." You're believing that you and your environment are separate based on "relevant" senses. Scientific evidence is irrelevant to your some of your senses, it is technical. If all of your senses were in resonance, including emotional, then there wouldn't be such a thing where scientific evidence is irrelevant in this context. So your environment and you are not separate. This is a scientific fact. Because it's all a part of your neural activity. Now I am not denying consciousness, qualia or empirical evidence. I'm already taking it as a premise. But you are emotionally attached to the idea that you and environment are separate, that's why you're unable to accept the scientific evidence. However, if you had a scientific mindset, facts would make you accept it. It's not in the way you think right now "It's true in a technical sense, but not for the relevant senses", whereas one part of you accept it but the other, your emotions, do not. Exactly this is what I am explaining by aligning with reality, you're aligning and letting the evidence in rather than rejecting from preconditioned beliefs. I think you're starting to understand and that you will be stronger because of it. Even if it might seem a little scary at start. Of course we have to investigate it. You don't bother considering because it's an analogy in which the hypothetical scenario leads to that conclusion. Do the same with the statements in context, repeat it, is it having any effect on you that you feel that you're not separate from your environment ("Helping others is helping you?") and so on? But of course you have to write down in the same manner, but now not for an analogy. Aligning with reality is an emotional heuristic which follows Occam's razor. Emotionally submitting, you already do
0moridinamael7y
No, I'm not. This is just not a very interesting or useful line of thinking. I (and most people on this forum) already try to live as rationalists, and where your proposal implies any deviation in from that framework, your deviations are inferior to simply doing what we are already doing. Furthermore, you consistently rely on buzzwords of your own invention ("aligning with reality", "emotionally submitting") which greatly inhibit your attempts at clarifying what you're trying to say. Perhaps if you read the essays as I suggest, you could provide substantive criticisms/improvements that did not rely on your own idiosyncratic terminology.
0ingive7y
You say you're not, yet you're contradicting your previous statement, where scientific facts are irrelevant to your other senses [emotions], which you completely omitted in responding to. Please explain. Is it a blind spot? I'm unsure why accepting facts to the extent where falsehoods held by other senses are overwritten is uninteresting or not useful. It's obviously not inferior or superior, as I've already explained a flaw in your reasoning, which you're either already in too much of an affective death spiral to notice, or completely omitting because you have some vague sense that you are right. You could've welcomed me rather than proved what I've been saying all along. :)

It's very explanatory. If you go against what you are and your purpose, then you are not aligned with reality. If you go alongside what you are and your purpose, then you are aligned with reality. Accepting facts in all senses, including emotionally. By everything I've written so far, your pattern-recognition machine should be able to connect the dots on what these 'buzzwords' mean. If I say X means this, this and that, multiple times, then you should have a vague sense of what I mean by it? I wasn't using 'my terminology' when I explained your contradiction, and that this contradiction is the problem. That's the improvement we have to make.
0moridinamael7y
Where did I say scientific facts are irrelevant to my emotions? Please remind me or re-highlight where this flaw/contradiction happened. I did not notice you pointing it out before and cannot ascertain what you're referring to. I have an idea of what you're trying to say, but I suspect that you don't. Your thinking is not clear. By using different words, you will force yourself to interrogate your own understanding of what you're putting forth. Is this what you're talking about where you say I'm making an error in reasoning? If so it seems like you just misunderstood me. The gravitational pull of a distant atom is causally present but practically irrelevant to any conceivable choice that I make. This is not a statement that I feel is particularly controversial. It is obviously true.
0ingive7y
"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense" In a technical sense. "but it is also very much false in probably more relevant senses." The relevant sense here is your emotions. Technically you understand that self and environment is one and the same, but you don't emotionally resonate with that idea [you don't emotionally resonate with facts]. Otherwise, what do you mean with: "For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense" It's true...? "but it is also very much false in probably more relevant senses." But it's false... for a relevant sense? What is the 'relevant sense'? (not emotions?) Is it more or less probable that 'you and your environment' is separated and based on what evidence? Emotionally accepting or submitting to something is an empirical fact. There are no different words, but if there is, you're free to put them forward. You keep using analogies rather than the example you gave earlier. Why? I already understand what you mean, but the actual example is not irrelevant to your decisions. So what you actually meant was: "You and your environment are not separated. This is obviously true"? Can you confirm? Please spot the dissonance and be honest.
0moridinamael7y
Thanks, this is clarifying. You're reading way too much into word choice things and projecting onto me a mentality that I don't hold. Indeed, that was what I said. It is still true. This is also true. Whether or not that particular atom is there or is magically whisked away, it's not going to change where I decide to eat lunch today. The activity of that atom is not relevant to my decision making process. That's it. What part of this is supposed to be in error?
0ingive7y
Indeed, this is true in the sense that it's most likely that this is the case based on the available evidence. I'm glad that you're aligned with reality on this certain point, there's not many that are, but I wonder, why do you claim that helping others is not helping yourself, excluding practicality of semantics? It seemed as if you were very new to the concept of non-emotional attachment to identity/I, because you argued my semantics. But you claimed earlier that none of this is actually factual; would you like to elaborate on that? That these are my interpretations of vague and difficult-to-pin-down philosophical ideas.

The reason why I push this is because you contradict yourself, and you very much seemed to have an opinion on this specific matter. So... "none of this is actually factual", it's philosophical ideas, but later on you agree that "you and your environment are not separated. This is obviously true" by saying "Indeed, that was what I said. It is still true." Which you did, but it was "...in some narrow technical sense..." and "...but it is also very much false ... relevant ...", and now it's "It's true", "factual"? Is it also a "philosophical idea" and a part of the ideas where "none of this is actually factual"?

Your statements in order:

* not actually factual
* really vague philosophical ideas
* may be true in some narrow technical sense
* but it is also very much false in probably more relevant senses
* indeed, that was what I said
* it is still true

It's fine to be wrong and correct yourself :)

Yeah, it isn't, but the example you gave of you and environment is relevant to your decision-making process, as evident by your claim (outside of practicality) and of semantics that "helping others is not helping yourself", for example. So using an analogy which is not relevant to your decision-making process, in contrast to your example where it is, is incorrect. That's why I say use the example which you used before. Instead of making an analogy that I d
0moridinamael7y
Not really, I've been practicing various forms of Buddhist meditation for several years and have pretty low attachment to my identity. This is substantially different from saying with any kind of certainty that helping other people is identical to helping myself. Other people want things contrary to what I want. I am not helping myself if I help them. Having low attachment to my identity is not the same thing as being okay with people hurting or killing me.

The rest of your post, which I'm not going to quote, is just mixing up lots of different things. I'm not sure if you're not aware of it or if you are aware of it and you're trying to obfuscate this discussion, but I will give you the benefit of the doubt. I will untangle the mess. You said: Then I said, Since I have now grasped the source of your confusion with my word choice, I will reengage. You specifically say:

This is a pure non sequitur. The fact that human brains run on physics in no way implies that helping another is helping yourself. Again, if a person wants to kill me, I'm not helping myself if I hand him a gun. If you model human agents the way Dustin Hoffman's character does in I Heart Huckabees, you're going to end up repeatedly confused and stymied by reality.

This is also just not factual. You're making an outlandish and totally unsupported claim when you say that "emotionally accepting reality" causes the annihilation of the self. The only known things that can make the identity and self vanish are:

* high dose psychotropic compounds
* extremely long and intense meditation of particular forms that do not look much like what you're talking about

and even these are only true for certain circumscribed senses of the word "self".

So let's review: I don't object to the naturalistic philosophy that you seem to enjoy. That's all cool and good. We're all about naturalistic science around here. The problem is statements like and These are pseudo-religious woo, not supported by science anywhere.
0ingive7y
No, it's not. What does that have to do with helping yourself, and thus other people? Yeah, but 'me' is used practically. I said your neural activity includes you and your environment and that there is no differentiation, so there is no differentiation between helping another and helping yourself. The practical 'myself' is for talking about this body, its requirements, and so on. You are helping yourself by not giving him a gun, because you are not differentiated from your environment. You are presuming that you would be helping yourself by giving him the gun, because you think that there is another; no, there is only yourself. You help yourself by not giving him the gun because your practical 'myself' is included in 'yourself'. I don't deny that this is not fully factual, as there is limited objective evidence. I disagree that 'helping another is helping you' is pseudo-religious woo, but that's because we're talking about semantics. We have to decide what 'me', my 'self', or 'I' is. I use neural activity as the definition of this; you seem to use some type of philosophical reasoning while presuming I use the same definition. So we should investigate whether your self and identity can die from that, and whether other facts which we don't embrace emotionally lead to a similar process in their own area. That's the entire point of my original post.
0moridinamael7y
It doesn't look like there's anywhere to go from here. It looks like you are acknowledging that where your positions are strong, they are not novel, and where they are novel, they are not strong. If you enjoy drawing the boundaries of your self in unusual places or emotionally associating your identity with certain ideas, go for it. Just don't expect anybody else to find those ideas compelling without evidence.
0ingive7y
I agree. These are the steps I took to achieve identity death: link to steps. I also meditated on the 48-min hypnosis track (youtube), if you are interested in where I got my ideas from and want to try it yourself. It's of course up to you, but you have a strong identity and ego issues, and I think it will help "you" (and me).
0moridinamael7y
You've had people complete these steps and report that the "What will happen after you make the click" section actually happens?
0ingive7y
Yeah, it's also called 'Enlightenment' in theological traditions. You can read the testimonies here. MrMind has, for example, read them, but he's waiting a bit longer before contacting these people on Reddit to see if it sticks around. I think the audio can work really well with a good pair of headphones, played as FLAC.
0ingive7y
How disappointing. No one on LW appears to want to discuss this, except for a few who undoubtedly misunderstood this post and started raving about irrelevant topics. At least let me know why you don't want to.

1) How would we go about changing human behavior to be more aligned with reality? (Aligned with reality = accepting facts fully, which probably leads to EA ideas, science, etc.)

2) When presented with scientific evidence, why do we not change our behavior? That's the question, and how do we change it?
1username27y
Replace all humans with machines. That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.
0ingive7y
Changing human behavior to align more with reality is probably more efficient than building machines to do so. It's a question of whether the means is a goal for you; if not, you would base your operations on the most effective action, probably changing behavior (because you could change the behavior of one person to equal the impact of your machine-building, and probably exceed it). I don't think replacing all humans with machines is a smart idea anyway. Merging biology with technology would be a smarter approach, from my view, as I deem life to be conscious and machines not to be. Of course, I might be wrong, but sometimes you might not have an answer and still give yourself the benefit of the doubt. For example, if you believed that every action is inherently selfish, you would still do actions which were not; by giving yourself the benefit of the doubt, if you figured out later on (which we did) that it is not the case, then that was a good choice. This includes consciousness: since we can't prove the external world, it would be wise to keep humans around or utilize the biological hardware. Machines that replaced all humans without at least keeping some around, uncontacted, in a jungle or the like, would not be very smart machines; that would undoubtedly mean unfriendly AI, like a paperclip maximizer. I just want to tell you that you have to recognize what you're saying and how it looks; even though you only wrote 5 words, you could just as well be supporting a paperclip maximizer.

What should I search for to find an answer to my question? Flaws of human behavior that can be overcome (can they?), like biases and fallacies, are relevant, but that's quite specific. However, I guess that's very worthwhile to go through to improve functionality. Anything else would be stupid.
0niceguyanon7y
Why I think people are not engaging you (but don't take this as a criticism of your ideas or questions):

* You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.
* I think username2 was making a non-serious cheeky comment which went over your head, and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges, and they have no confidence in you to keep exchanges short.
* Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.
0username27y
I was being cheeky, yes, but also serious. What do you call a perfect rationalist? A sociopath[1]. A fair amount of rationality training is basically reprogramming oneself to be mechanical in one's response to evidence and to follow scripts for better decision-making. And what kind of world would we live in if every single person were perfectly sociopathic in their behaviour? For this reason, in part, I think the idea of making the entire world perfectly rationalist is a potentially dangerous proposition, and one should at least consider how far along that trajectory we would want to take it.

But the response I gave to ingive was 5 words because, for all the other reasons you gave, I did not feel it would be a productive use of my time to engage further with him.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition, which I'm aware is actually different from what most people associate with the term.
0ingive7y
No, you don't. A perfect rationalist is not a sociopath, because a perfect rationalist understands what they are and, through scientific inquiry, can constantly update and align themselves with reality. If every single person were a perfect rationalist, the world would be a utopia, in the sense that extreme poverty would instantly be eliminated. You're assuming that a perfect rationalist cannot see through the illusion of self and identity and update their beliefs by understanding neuroscience and evolutionary biology. Quite the opposite: they would be seen as philanthropic, altruistic, and selfless. The reason you think otherwise is the Straw Vulcan, your own attachment to your self and identity, and your own projections onto the world. I have talked about your behavior previously in one of my posts; do you agree? I also gave you suggestions on how to improve, by meditating, for example. http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/

In another sense, since you and many in society seem to have a fetish for sociopaths: yes, you'll be a sociopath, but not for yourself; for the world. By recognizing that your neural activity includes your environment and that they are not separate, that all of us evolved from stardust, and by practicing, for example, meditation or utilizing psychotropic substances, your "identity"/"I"/"self" becomes more aligned, and thus so does what your actions are directed toward. That's called Effective Altruism. (Emotions aside, selflessness speaks louder in actions!)

Edit: You changed your post after I replied to you. My reply still applies; it doesn't matter.
0niceguyanon7y
If I remember correctly, username2 is a shared account, so the person you are talking to now might not be whom you previously conversed with. Just thought you should know, because I don't want you to mistake the account for a static person.
0ingive7y
It's unlikely that it's not the same person; either that, or people who use shared accounts tend, on average, to share their suffering (by which I mean a specific attitude) in a negative way. It would be interesting to compare shared accounts with other accounts in a large-scale analysis, for example with IBM Watson Personality Insights. I would just ban them from the site; I'd rather see a troll spend time creating new accounts and people noticing the sign-up dates. Relevant: Internet Trolls Are Narcissists, Psychopaths, and Sadists.

By the way, I was not consciously aware of which user it was when I wrote my text and the analysis of the user's agenda; only afterwards did I remember, "oh, it's that user again".
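A minimal sketch of what such a comparison could look like, assuming the (since-retired) IBM Watson Personality Insights v3 REST API; the endpoint URL, version date, and API key below are assumptions, and the comment datasets are entirely hypothetical:

```python
# Sketch only: compares mean Big Five percentiles for two groups of accounts
# using the IBM Watson Personality Insights v3 REST API (retired by IBM).
# Endpoint, version date, and credentials are assumed placeholders.
import requests

WATSON_URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"
API_KEY = "YOUR_API_KEY"  # hypothetical credential

def personality_profile(text: str) -> dict:
    """Send one account's concatenated comment history and return Watson's profile JSON."""
    resp = requests.post(
        WATSON_URL,
        params={"version": "2017-10-13"},  # assumed API version date
        auth=("apikey", API_KEY),
        headers={"Content-Type": "text/plain"},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.json()

def mean_trait_scores(comment_sets: list[str]) -> dict:
    """Average each Big Five trait percentile across a group of accounts."""
    totals: dict = {}
    for text in comment_sets:
        profile = personality_profile(text)
        for trait in profile.get("personality", []):
            totals.setdefault(trait["name"], []).append(trait["percentile"])
    return {name: sum(vals) / len(vals) for name, vals in totals.items()}

# shared_accounts / regular_accounts would be lists of concatenated comment
# histories scraped from the forum (hypothetical data, not provided here).
# print(mean_trait_scores(shared_accounts))
# print(mean_trait_scores(regular_accounts))
```

Averaging trait percentiles per group would at least show whether shared accounts score differently on, say, agreeableness, though on a site this size the sample would be far too small to call it a "large-scale analysis".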
0username27y
The username2 account exists for a reason. Anonymous speech does have a role in any free debate, and it is virtuous to protect the ability to speak anonymously.
0ingive7y
I agree. Now I'd like the password for username2. -niceguyanon
0username27y
The password is a Schelling point, the most likely candidate for an account named 'username'. Consider it a rite of passage to guess... (and don't post it when you discover it).
0ingive7y
You forgot to say that you think that; yet for username2's point, you did reiterate that you think it. It's unfortunate, if it is the case, that ideas from outside their echo chamber create such fear; then what I say might be of use in the first place, if we all come together and figure things out :)

It was, but it speaks to his underlying ideas and character to even be in the position to do that. I don't mind it; I enjoy typing walls of text. What would you want me to respond, if at all?

Yeah, I think so too, but I do think there is a technological barrier in how this forum was set up for the type of problem-solving I am advocating. If we truly want to be Less Wrong, it's fine how it is now, but there can definitely be improvements in an effort for the entire species rather than a small subset of it, 2k people.
0niceguyanon7y
What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas? Why would it? It wasn't meant for you to take seriously.

Probably not at all.
0ingive7y
Because a few words tell a large story when someone has also decided it was worth their time to write them. In my post I explained, for example, what type of viewpoints it implies and why it's stupid (in the sense of inefficient and not aligned with reality). I will update my probabilities, then, as I gain more feedback.
0plethora7y
It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.
0ingive7y
I'm having a hard time understanding what you mean. Accepting facts fully is EA/utilitarian ideas; there is no 'ought' to it. 'Leads' was the incorrect word choice.
2plethora7y
No. Accepting facts fully does not lead to utilitarian ideas. This has been a solved problem since Hume, FFS.
0ingive7y
You're welcome to explain why this isn't the case. I'm thinking mostly about neuroscience and evolutionary biology; they tell us everything.
1moridinamael7y
Is-ought divide. If you have solved this problem, mainstream philosophy wants to know.
0ingive7y
If someone wins the Nobel prize, you heard it here first. The is-ought problem implies that the universe is deterministic, which is incorrect: it's an infinite range of possibilities or probabilities, which are consistent but can never be certain. Hume's beliefs about is-ought came from his own understanding of his emotions and the emotions of those around him. He correctly presumed that emotion is what drives us and that logic and rationality could not (thus no 'ought' can follow in any way from what 'is'), and he thought the universe is deterministic (without knowledge of the brain and QM). The insight he was not aware of is that even though his emotions are the driving factor, he can be emotionally aligned with rationality, logic, and facts, so there is no gap between 'ought' and 'is'. 'What is' implies facts, rationality, logic, and so on: EA/utilitarian ideas. The question of free will is an emotional one; if you are aware that your subjective reference frame, awareness, was a part of it, then you can let go of that.
1moridinamael7y
1. The universe is deterministic.

2. You seem to be misunderstanding is-ought. The point is that you cannot conclude what ought to be, or what you ought to do, from what is. You can conclude what you ought to do in order to achieve some specific goal, but you cannot infer "evolutionary biology, therefore effective altruism". You are inserting your own predisposition into that chain and pretending it is a logical consequence.
0ingive7y
1. With that interpretation, yes; not with Copenhagen. I'm unsure, because can we really be certain of absolutes, given our lack of understanding of the human brain? I think how memory storage and the brain work shows us that we can't be certain of our own knowledge.

2. If you are right that the universe is deterministic, then what ought to be is what is. But if you ought to do the opposite of what 'is' tells us, what are you doing then? You are not allowed to have a goal which is not aligned with what is, because that goes against what you are. I do agree with you now, however; I think this is semantics. I think it was a heuristic. But then I'll say: "What is, is what you ought to be".
1moridinamael7y
If reasonable people can disagree regarding Copenhagen vs. Many Worlds, then reasonable people can disagree on whether the universe is deterministic. In which case, since your whole philosophy seems to depend on the universe not being deterministic, you should scream "oops!" and look for where you went wrong, not try to come up with some way to quickly patch over the problem without thinking about it too hard. Also: How could 'is' ever tell you what to do? An innocent is murdered. That 'is'. So it's okay? You learn that an innocent is going to be murdered. That 'is', so what force compels you to intervene? The universe is full of suffering. That 'is'. So you ought to spread and cause suffering? If not, what is your basis for saying so?
0ingive7y
I'm glad that's clarified; indeed, it relies on the universe not being deterministic. However, I do think a belief in a deterministic universe makes it easier for its agents to go against their utility, so my philosophy might boil down more to one's emotions, probably what drove Hume to philosophize about this in the first place. He apparently talked a lot about the emotions/rationality duality, and probably contradicted himself on 'is-ought' in his own statements. 'Is' tells me what I should write in response to your hypothetical scenario to align you more with reality, rather than continuing the intellectual masturbation that philosophers are notorious for: all talk, no action. We are naturally aligned toward the decrease of suffering, I don't know exactly how, so 'what is' is in every moment, whereas the low-hanging fruit has to be picked in, for example, poverty reduction. Long-term, probably awareness in humans like you and I; next on the list might be existential risk reduction, which seems to have high expected value.
1moridinamael7y
Not sure what this means. If "Just align with reality!" is your guiding ethical principle, and it doesn't return answers to ethical questions, it is useless.

Naw, we're naturally aligned to decrease our own suffering. Our natural impulses and ethical intuitions are frequently mutually contradictory, and a philosophy of just going with whatever feels right in the moment is (a) not going to be self-consistent and (b) pretty much what people already do, and it definitely doesn't require "clicking". Sufficiently wealthy and secure 21st-century Westerners sometimes conclude that they should try to alleviate the suffering of others, for a complex variety of reasons. This also doesn't require "clicking".

By the way, you seem to have surrendered on several key points along the way without acknowledging or perhaps realizing it. I think it might be time for you to consider whether your position is worth arguing for at all.
0ingive7y
It does return answers to ethical questions; in fact, I think it will for all of them. What if your suffering is gone and there is only others' suffering, based on intellectual assumptions? What if that was the goal, and being a wealthy and secure 21st-century Westerner was the means, as with everything? I didn't surrender; I tried to wake you up. I can easily refute all of your arguments by advising you to gain knowledge of certain things and accept it fully.
2moridinamael7y
ingive, I made it an experiment these last few days to interact with you much more than I would normally be inclined to. I had previously noticed my own tendency to disengage with people online when I suspected that my interactions with them would not lead anywhere useful. I thought there was a possibility that my default tendency was to disengage prematurely, and that I might be missing out on opportunities to learn, or to test myself in various other ways.

What I have learned is that my initial instinct not to engage with you was correct, and that my initial impression of you as essentially a member of a cult was accurate. I had thought there was a chance that I was missing something, or, failing that, a chance that I could actually break through to you by simply pointing out the errors in your thought processes. I thought maybe I could spare you some confusion and pain in your life. I think that neither of those outcomes has come to pass. All I've learned is that I should trust my instincts and remain reserved and cautious in my online persona.
0ingive7y
That's interesting. You haven't simply pointed out the errors in my thought processes; I have yet to see you simply point them out rather than argue from assumptions that I can refute with basic reasoning. It's cute that you, for example, assume I don't have an answer to your hypothetical scenarios because I simply point out that they're a waste of time. Hypotheticals are intellectual entertainment. But it might have been a better choice to answer your questions from the mindset I was speculating about.

I just watched The Master, which was an aesthetically pleasing movie. It gives some taste of cult/new-age thinking, and I can see myself doing the same type of thinking about other things. I've talked with people with different perspectives and watched such content as well, and I've come to the conclusion that this is human nature. Thinking back across my life, unfortunately: if you think you're incapable of such thinking, or that you're not actually part of such a thing right now, you probably are. That is very confrontational, and I wouldn't be surprised if you, or someone else, denied it without hesitation. I can only tell you this in some hope that you don't reinforce the belief that you probably are not.

I'm going to open my mind now; you're free to reprogram my brain. Tell me, Master, and break through to me. Seriously, I am open-minded.
0plethora7y
What?
0ingive7y
Because Hume thought about what the universe 'is' without taking into consideration that it ought to be different, because of the probabilistic nature (on one interpretation) of it all.
0Luke_A_Somers7y
What?