Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Jan. 16 - Jan. 22, 2017

2 Post author: MrMind 16 January 2017 07:52AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (131)

Comment author: Brillyant 16 January 2017 04:44:13PM 8 points [-]

My "RECENT ON RATIONALITY BLOGS" section on the right sidebar is blank.

If this isn't just me, and remains this way for long, I predict LW traffic will drop markedly as I primarily use LW habitually as a way to access SSC, and I'd bet my experience is not unique in this way.

Comment author: The_Jaded_One 16 January 2017 04:49:03PM *  6 points [-]

Maybe you're just not rational enough to be shown that content? I see like 10 posts there.

MIRI has invented a proprietary algorithm that uses the third derivative of your mouse cursor position and click speed to predict your calibration curve, IQ and whether you would one-box on Newcomb's problem with a correlation of 95%. LW mods have recently combined those into an overall rationality quotient which the site uses to decide what level of secret rationality knowledge you are permitted to see.

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

EDIT: Some people seem to be missing that this is intended as humor...

Comment author: Manfred 16 January 2017 06:09:25PM 1 point [-]

it's a shame downvoting is temporarily disabled.

Comment author: The_Jaded_One 20 January 2017 01:36:19AM 1 point [-]

Why does everyone want to downvote everything, ever!? Seriously, lighten up!!!

Comment author: Elo 20 January 2017 03:31:36AM 0 points [-]

no, some things would benefit from being voted down out of existence.

Comment author: The_Jaded_One 20 January 2017 07:35:29AM *  3 points [-]

Yes, I totally agree. In the last few weeks, I have seen some totally legit targets for being on -10 and not visible unless you click on them, such as the 'click' posts, repetitive spam about that other website, probably the weird guy who just got banned from the open thread too.

However, I have also seen people advocate using mass downvoting on an OK-but-not-great article on cults that they just disagree with, and now someone wants to downvote to oblivion a joke in the open thread. Why? Is humor banned?

There is a legitimate middle ground between toxicity and brilliance.

Comment author: Elo 20 January 2017 08:09:08AM *  0 points [-]

There is a legitimate middle ground between toxicity and brilliance.

Agreed.

I think humour is a mixed bag. Sometimes good and sometimes bad. In my ideal situation there would be a place for humour to happen where people can choose to go, or choose not to go. Humour should exist but mixing it in with everything else is not always great.

Comment author: Brillyant 16 January 2017 05:15:57PM 1 point [-]

I see like 10 posts there.

Perhaps you are looking at the "RECENT POSTS" section rather than the section I mentioned?

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

I'll work on this.

Maybe you could work on reading?

Comment author: The_Jaded_One 16 January 2017 05:23:49PM *  0 points [-]

No it's definitely "RECENT ON RATIONALITY BLOGS" section ;)

Comment author: Vaniver 16 January 2017 08:23:34PM 1 point [-]

My "RECENT ON RATIONALITY BLOGS" section on the right sidebar is blank.

If this isn't just me, and remains this way for long, I predict LW traffic will drop markedly as I primarily use LW habitually as a way to access SSC, and I'd bet my experience is not unique in this way.

It looks that way to me as well, and I don't think that should be the case. I'll investigate what's up.

Comment author: Vaniver 16 January 2017 08:37:24PM 0 points [-]

On an initial pass, the code hasn't been updated in a month, so I doubt that's the cause. If you look at the list of feedbox URLs here, two of them seem like they're not working (the GiveWell one and the CFAR one).

It's not clear to me yet how Google's feed object made here works; it looks like we feed it a URL, then try to load it in a way that handles errors. But if it checks the URL ahead of the load, that might error out in a way that breaks the feedbox.

(The page also has an Uncaught Error: Module: 'feeds' not found! which I'm not sure how to interpret yet, but makes me more suspicious of that region.)

Comment author: morganism 16 January 2017 08:55:34PM 1 point [-]

Both the NoScript and Disconnect blockers block those. I still have to whitelist VigLink every time I come here, and I can't see lots of features and editing handles if I haven't gone to Reddit and whitelisted it before visiting here.

Comment author: Vaniver 16 January 2017 08:53:59PM *  1 point [-]

So, we use google.feeds.Feed(url) to manage this. If you go to the docs page for that, you find:

This API is officially deprecated and will stop working after December 15th, 2016. See our deprecation policy in our Terms of Service for details.
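Nothing later in the thread records what the eventual fix looked like. The sketch below is purely illustrative (the real feedbox is client-side JavaScript built on the now-deprecated google.feeds.Feed; every name and the sample feed here are hypothetical). The general shape of a replacement is to fetch and parse the RSS yourself, defensively, so that one dead or malformed feed yields an empty section instead of breaking the whole sidebar:

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Slate Star Codex</title>
  <item><title>Post one</title><link>http://example.com/1</link></item>
  <item><title>Post two</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text, max_items=5):
    """Pull (title, link) pairs out of an RSS 2.0 document.

    A dead or malformed feed returns [] rather than raising, so one
    broken source can't blank the whole feedbox.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return []
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items[:max_items]
```

With error handling of this shape, outages like the GiveWell and CFAR ones described above would degrade to empty sections instead of taking the sidebar down with them.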

Comment author: Vaniver 17 January 2017 06:47:38PM *  3 points [-]

Flinter has been banned after a private warning. I'm deleting the comment thread that led to the ban because it's an inordinate number of comments cluttering up a welcome thread.

Users are reminded that responding to extremely low-quality users creates more extremely low-quality comments, and extended attempts to elicit positive communication almost never work. Give up after a third comment, and probably by your second.

Comment author: Viliam 18 January 2017 09:52:25AM *  6 points [-]

From Flinter's comment:

The mod insulted me, and Nash.

While I respect your decision as a moderator to ban Flinter, insulting Nash is a horrible thing to do and you should be ashamed of yourself!

/ just kidding

Also, someone needs to quickly take a screenshot of the deleted comment threads, and post them as a new LW controversy on RationalWiki, so that people all around the world are properly warned that LW is pseudoscientific and disrespects Nash!

/ still kidding, but if someone really does it, I want to have a public record that I had this idea first

Comment author: drethelin 20 January 2017 01:06:45AM 1 point [-]

this is why we need downvotes

Comment author: Vaniver 17 January 2017 07:00:49PM 3 points [-]

As the Churchill quote goes:

A fanatic is one who can't change his mind and won't change the subject.

Less Wrong is not, and will not be, a home for fanatics.

Comment author: TiffanyAching 17 January 2017 07:06:53PM 1 point [-]

Fair enough. Kindest thing to do really. I think people have a hard time walking away even when the argument is almost certainly going to be fruitless.

Comment author: Lumifer 17 January 2017 03:29:57AM *  3 points [-]

For general information -- since Flinter is playing games to get people to follow the steps he suggests, it might be useful to read some of his other writings on the 'net to cut to the chase. He is known as Juice/rextar4444 on Twitter and Medium and as JokerPravis on Steemit.

Comment author: elephantiskon 16 January 2017 09:17:17PM 2 points [-]

At what age do you all think people have the greatest moral status? I'm tempted to say that young children (maybe aged 2-10 or so) are more important than adolescents, adults, or infants, but don't have any particularly strong arguments for why that might be the case.

Comment author: knb 17 January 2017 01:46:11AM *  2 points [-]

I don't think children actually have greater moral status, but harming children or allowing children to be harmed carries more evidence of depraved/dangerous mental state because it goes against the ethic of care we are supposed to naturally feel toward children.

Comment author: btrettel 17 January 2017 01:55:13AM *  1 point [-]

If you think in terms of QALYs, that could be one reason to prefer interventions targeted at children. Your average child has more life to live than your average adult, so if you permanently improve their quality of life from 0.8 QALYs per year to 0.95 QALYs per year, that would result in a larger QALY change than the same intervention on the adult.

This argument has numerous flaws. One that comes to mind immediately is that many interventions are not so long-lasting, in which case adults and children would presumably gain about the same. It is also tied to particular forms of utilitarianism one might not subscribe to.
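To make the arithmetic above concrete, here is a toy sketch (hypothetical ages and life expectancy, with the same 0.8-to-0.95 quality weights from the comment; an illustration of the reasoning, not a real cost-effectiveness model):

```python
def qaly_gain(current_age, life_expectancy, quality_before, quality_after):
    # QALYs gained from a *permanent* quality-of-life improvement:
    # (improvement per year) times (expected years of life remaining).
    remaining_years = max(0, life_expectancy - current_age)
    return (quality_after - quality_before) * remaining_years

# The same intervention applied to a child and an adult, life expectancy 78:
child_gain = qaly_gain(8, 78, 0.80, 0.95)   # 0.15/year over 70 remaining years
adult_gain = qaly_gain(38, 78, 0.80, 0.95)  # 0.15/year over 40 remaining years
```

The child comes out ahead (10.5 vs. 6.0 QALYs) only because the improvement is assumed permanent; for a short-lived intervention the two gains collapse to roughly the same number, which is the first flaw noted above.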

Comment author: Elo 18 January 2017 12:54:47AM 0 points [-]

this may be an odd counter position to the normal.

I think that adults are more morally valuable because they have proven their ability to not be murderous, etc. Or possibly also to not be the next Gandhi. Children could go either way.

Comment author: TiffanyAching 18 January 2017 01:30:56AM 1 point [-]

Could you explain this a little more? I don't quite see your reasoning. Leaving aside the fact that "morally valuable" seems too vague to me to be meaningfully measured anyway, adults aren't immutably fixed at a "moral level" at any given age. Andrei "Rostov Ripper" Chikatilo didn't take up murdering people until he was in his forties. At twenty, he hadn't proven anything.

Bob at twenty years old hasn't murdered anybody, though Bob at forty might. Now you can say that we have more data about Bob at twenty than we do about Bob at ten, and therefore are able to make more accurate predictions based on his track record, but by that logic Bob is at his most morally valuable when he's gasping his last on a hospital bed at 83, because we can be almost certain at that point that he's not going to do anything apart from shuffle off the mortal coil.

And if "more or less likely to commit harmful acts in future" is our metric of moral value, then children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes. That's not intended to put any words in your mouth by the way, I'm just saying that when I try to follow your reasoning it leads me to weird places. I'd be interested to see you explain your position in more detail.

Comment author: Viliam 18 January 2017 09:47:13AM *  0 points [-]

children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes

That reminds me of a scene in Psycho-Pass where...

...va gur svefg rcvfbqr, n ivpgvz bs n ivbyrag pevzr vf nyzbfg rkrphgrq ol gur cbyvpr sbepr bs n qlfgbcvna fbpvrgl, onfrq ba fgngvfgvpny ernfbavat gung genhzngvmrq crbcyr ner zber yvxryl gb orpbzr cflpubybtvpnyyl hafgnoyr, naq cflpubybtvpnyyl hafgnoyr crbcyr ner zber yvxryl gb orpbzr pevzvanyf va gur shgher.

(rot 13)

Comment author: TiffanyAching 18 January 2017 07:05:55PM 0 points [-]

Yes, that's the sort of idea I was getting at - though not anything so extreme.

Of course I don't really think Elo was saying that at all anyway, I'm not trying to strawman. I'd just like to see the idea clarified a bit.

(We use substitution ciphers as spoiler tags? Fancy!)

Comment author: Elo 19 January 2017 08:52:13PM 0 points [-]

I am not keen on a dystopian thought police. We have at the moment a lot more care given to children than to adults. For example, children's hospitals vs. adults' hospitals.

The idea is not drawn out to further conclusions as you have done, but I had to ask why we do the thing where we care about children's hospitals more than adults' hospitals, and generally decided that I don't like the way it is.

I believe the common tendency to like children more comes out of some measure of "they are cute", and is similar to why we like baby animals more than fully grown ones, simply because they have a babyness to them. If that is the case, then it's a relatively unfounded belief and a bias that I would rather not carry.

Adults are (probably) productive members of society, we can place moralistic worth on that life as it stands in the relative concrete present, not the potential that you might be measuring when a child grows up. Anyone could wake up tomorrow and try to change the world, or wake up tomorrow and try to lie around on the beach. What causes people to change suddenly? Not part of this puzzle. I am confident that the snapshot gives a reasonably informative view of someone's worth. They are working hard in EA? That's their moral worth they present when they reveal with their actions what they care about.

What about old people? I don't know... Have not thought that far ahead. Was dealing with the cute-baby bias first. I suppose they are losing worth to society as they get less productive. And at the same time they have proven themselves worthy of being held/protected/cared for (or maybe they didn't).

Comment author: TiffanyAching 19 January 2017 09:26:49PM 0 points [-]

The urge to protect and prioritize children is partly biological/evolutionary - they have to be "cute" otherwise who'd put up with all the screaming and poop long enough to raise them to adulthood? The urge to protect and nurture them is a survival-of-the-species thing. Baby animals are cute because they resemble human babies - disproportionately big heads, big eyes, mewling noises, helplessness.

But from a moral perspective I'd argue that there is a greater moral duty to protect and care for children because they can neither fend nor advocate for themselves effectively. They're largely at the mercy of their carers and society in general. An adult may bear some degree of responsibility for his poverty, for example, if he has made bad choices or squandered resources. His infant bears none of the responsibility for the poverty but suffers from it nonetheless and can do nothing to alleviate it. This is unjust.

There's also the self-interest motive. The children we raise and nurture now will be the adults running the world when we are more or less helpless and dependent ourselves in old age.

And there's the future-of-humanity as it extends past your own lifetime too, if you value that.

But of course these are all points about moral duty rather than moral value. I'm fuzzier on what moral value means in this context. For example the difference in moral value between the young person who is doing good right now and the old person who has done lots of good over their life, but isn't doing any right now because that life is nearly over and they can't. Does ability vs. desire to do good factor into this? The child can't do much and the end-of-life old person can't do much, though they may both have a strong desire to do good. Only the adult in between can match the ability to the will.

Comment author: Elo 20 January 2017 02:35:19AM 0 points [-]

Yes. I agree with most of what you have said.

I'd argue that there is a greater moral duty to protect and care for children because they can neither fend nor advocate for themselves effectively.

I would advocate a "do no harm" attitude rather than a "provide added benefit" one just because they are children. I wouldn't advocate neglecting children, but I wouldn't put them ahead of adults.

As for what we should do: I don't have answers to these questions. I suspect it comes down to how each person weighs the factors in their own heads, and consequently how they want the world to be balanced.

Just like some people care about animal suffering and others do not. (I like kids, definitely, but moral value is currently subjectively determined)

Comment author: ChristianKl 17 January 2017 06:58:13AM 0 points [-]

It depends very much on the context. In many instances where we want to save lives, QALYs are a good metric. In other cases, like deciding who should be able to sit down in a bus, the metric is worthless.

Comment author: morganism 16 January 2017 09:02:24PM 1 point [-]

Is there a simple coding trick to allow this blockchain micropayment scheme into Reddit-based sites?

https://steemit.com/facebook2steemit/@titusfrost/in-simple-english-for-my-facebook-friends-how-and-why-to-join-steemit

This seems like an interesting way to get folks to write deeper and more thoughtful articles, by motivating them with some solid reward. And if something does go viral, it can allow some monetization without resorting to ad-based sites.

BTW, there was a link to simple markdown on Github in there

https://guides.github.com/features/mastering-markdown/

Comment author: Flinter 16 January 2017 05:44:34PM *  1 point [-]

I wanted to make a discussion post about this but apparently I need 2 karma points, and this forum is too ignorant to give them out. I'll post here and I guess probably be done with this place, since it's not even possible for me to attempt to engage in meaningful discussion. I'd also like to make the conjecture that this place cannot be based on rationality with the rule sets that are in place for joining, and I don't understand why that isn't obvious.

Anyways, here is what would have been my article for discussion:

"I am not perfectly sure how this site has worked (although I skimmed the "tutorials") and I am notorious for not understanding systems as easily and quickly as the general public might. At the same time I suspect a place like this is for me, for what I can offer but also for what I can receive (ie I intend on (fully) traversing the various canons).

I also value compression and time in this sense, and so I think I can propose a subject that might serve as an "ideal introduction" (I have an accurate meaning for this phrase I won't introduce atm).

I've read a lot of posts/blogs/papers that are arguments which are founded on a certain difficulties, where the observation and admission of this difficulty leads the author and the reader (and perhaps the originator of the problem/solution outlines) to defer to some form of a (relative to what will follow) long winded solution.

I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.

I think maybe at first this will seem like an empty proposal. I think then, and also, some will see it as devilry (which I doubt anyone here thinks exists). And I think I will be accused of many of the fallacies and pitfalls that have already been previously warned about in the canons.

That latter point I think might suggest that I might learn well and fast from this post as interested and helpful people can point me to specific articles and I WILL read them with sincere intent to understand them (so far they are very well written in the sense that I feel I understand them because they are simple enough) and I will ask questions.

But I also think ultimately it will be shown that my proposal and my understanding of it doesn't really fall to any of these traps, and as I learn the canonical arguments I will be able to show how my proposal properly addresses them."

Comment author: MrMind 17 January 2017 08:25:49AM 2 points [-]

I wanted to make a discussion post about this but apparently I need 2 karma points and this forum is too ignorant to give them out

People have come here and said: "Hey, I've something interesting to say regarding X, and I need a small amount of karma to post it. Can I have some?" and have been given plenty.
A little reflection and a moderate amount of politeness can go a long way.

Comment author: Flinter 17 January 2017 08:28:09AM 0 points [-]

Yup but that ruins my first post cause I wanted it to be something specific. So what you are effectively saying is I have to make a sh!t post first, and I think that is irrational. I came here to bring value not be filtered from doing so.

Cheers!

Comment author: MrMind 17 January 2017 08:47:32AM 1 point [-]

It makes sense from the inside of the community.
The probability of someone posting something of value as their first post is much lower than that of someone posting spam on the front page. So a low bar to posting on the front page is the best compromise between "discourage spammers" and "don't discourage posters who have something valuable to say".

Comment author: Flinter 17 January 2017 08:53:31AM 0 points [-]

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational for it.

Think about what you are saying, its ridiculous.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

Comment author: MrMind 17 January 2017 09:45:46AM *  1 point [-]

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational for it.

Well, since it's an automated process, it filters anything, be it spam, Nash's argument or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

No, mine was just a suggestion for a way that would ease the social friction I think you're experiencing here. On the other side, I am reading your posts carefully and will reply when done thinking about them.

Comment author: Flinter 17 January 2017 10:07:49AM 0 points [-]

Well, since it's an automated process, it filters anything, be it spam, Nash's argument or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

No, mine was just a suggestion for a way that would ease the social friction I think you're experiencing here. On the other side, I am reading your posts carefully and will reply when done thinking about them.

Sigh, I guess we never will address Ideal Money, will we? I've already spent all day with like 10 posters that refuse to do anything but attack my character. Not surprising, since the subject was insta-mod'd anyways.

Well, as a last hail mary, I just want to say I think you are dumb for purposefully trolling me like this and refusing to address Nash's proposal. It's John Nash; he spent his life on this proposal, and y'all won't even read it.

There is no intelligence here, just pompous robots avoiding real truth.

Do you know who Nash is? It took 40 years the first time for people to acknowledge what he did with his equilibrium work. It's been 20 in regard to Ideal Money...

Comment author: MrMind 17 January 2017 10:36:54AM 1 point [-]

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

I wonder what my failure in communicating my idea is in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold are not prevented from posting their contributions.

Sigh, I guess we never will address Ideal Money will we

In due time, I will.

I've already spent all day with like 10 posters, that refuse to do anything but attack my character.

That is unfortunate, but you must be prepared to have these discussions over the long run. There are people that come here only once a week or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks or years.

Its John Nash, and he spent his life on this proposal, ya'll won't even read it.

I am reading it right now, and exactly because it's Nash I'm reading as carefully as I can.

But what won't fly here is insulting. Frustration at not being able to communicate your idea is something that we have all felt; after all, communicating clearly is hard. But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would leave you able to communicate your idea even less.

Comment author: Flinter 17 January 2017 04:40:57PM 0 points [-]

I wonder what my failure in communicating my idea is in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold are not prevented from posting their contributions.

Let me communicate to you what I am saying. I bring the most important writing ever known to mankind. Who is the mod that moderated Nash? Where is the intelligence in that? Let's not call that intelligence and try to defend it. Let's call it an error.

In due time, I will.

Cheers! :)

That is unfortunate, but you must be prepared to have these discussions over the long run. There are people that come here only once a week or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks or years.

Do you think I am not prepared? I have been at this for about 4 years, I think. I have written hundreds, maybe thousands, of related articles and been on many, many forums and sites discussing it and "arguing" with many, many people.

I am reading it right now, and exactly because it's Nash I'm reading as carefully as I can.

Ah, sincerity!!!!!!!

But what won't fly here is insulting. Frustration at not being able to communicate your idea is something that we have all felt; after all, communicating clearly is hard. But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would leave you able to communicate your idea even less.

I have been insulted by nearly every poster that has responded. The mod insulted me, and Nash. I have never been insulted so quickly or so much on any other site.

But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would leave you able to communicate your idea even less.

Yup, ban the messenger and ignore the message. Why would these people remain ignorant of Nash? How did Nash go 20 years without anyone giving his lectures serious thought?

Comment author: Flinter 16 January 2017 06:15:50PM 0 points [-]

I don't think I should have done what I did to get my first two karma points. I suspect it degrades the quality of the site at a rate in which rationality can't inflate it. But I'll save my reasoning and the discussion of it ftm. I am now able to post my discussion on its own it seems, so I did it.

2x cheers.

Comment author: niceguyanon 16 January 2017 06:39:52PM 2 points [-]

I suspect it degrades the quality of the site...

Your first paragraph venting your frustration at the 2 karma rule was unnecessary, but cool you realized that.

I think this post is fine as an Open Thread comment or as an introduction post. I don't see why it needs its own discussion. Plus, it seems like you are making an article stating that you will make an article. I don't think you need to do that. Just come right out and say what you have to say.

Comment author: Flinter 16 January 2017 07:20:56PM *  0 points [-]

No you don't understand. I have something valuable to bring but I needed to make my INTRO post an independent one and I was stripped of that possibility by the process.

Comment author: gjm 16 January 2017 08:12:51PM 2 points [-]

You weren't "stripped of that possibility". LW has small barriers to entry here and there; you are expected to participate in other ways and demonstrate your bona fides and non-stupidity before posting articles. Do you think that is unreasonable? Would it be better if all the world's spammers could come along and post LW articles about their sex-enhancing drugs and their exam-cheating services and so on?

Comment author: Flinter 16 January 2017 08:22:12PM 0 points [-]

Yes, I think it's not reasonable, because it acted counter-productively to the intended use that you are suggesting it was implemented for.

Comment author: gjm 16 January 2017 09:24:26PM 0 points [-]

How?

Comment author: Flinter 16 January 2017 09:29:06PM 0 points [-]

Because I cannot do what was required to make a proper post, which was to not have to make "shit posts" before I make my initial post (which needed to be independent). So the filter, which is trying to foster rational thinking, is filtering out the seeds of it.

Comment author: gjm 17 January 2017 12:25:04AM 3 points [-]

No one's requiring you to make "shit posts".

You have not explained why your post had to be "independent". Perhaps there are reasons -- maybe good ones -- why you wanted your first appearance here to be its posting, but I don't see any reason why it's better for LW for that to be so.

In any case, "X has a cost" is not a good argument against X; there can be benefits that outweigh the costs. I hope you will not be offended, but I personally am quite happy for you to be slightly inconvenienced if the alternative is having LW deluged with posts from spammers.

Comment author: Thomas 16 January 2017 08:03:46AM 1 point [-]
Comment author: Luke_A_Somers 06 February 2017 02:43:53PM 1 point [-]

OK, I had dropped this for a while, but here are my thoughts. I haven't scrubbed everything that could be seen through rot13, because it became excessively unreadable.

For Part 1: gur enqvhf bs gur pragre fcurer vf gur qvfgnapr orgjrra bar bs gur qvnzrgre-1/2 fcurerf naq gur pragre.

Gur qvfgnapr sebz gur pragre bs gur fvqr-fcurer gb gur pragre bs gur birenyy phor vf fdeg(A)/4. Fhogenpg bss n dhnegre sbe gur enqvhf bs gur fcurer, naq jr unir gur enqvhf bs gur pragre fcurer: (fdeg(A)-1)/4. Guvf jvyy xvff gur bhgfvqr bs gur fvqr-1 ulcrephor jura gung'f rdhny gb n unys, juvpu unccraf ng avar qvzrafvbaf. Zber guna gung naq vg jvyy rkgraq bhgfvqr.

Part 2: I admit that I didn't have the volume of high-dimensional spheres memorized, but it's up on Wikipedia, and from there it's just a matter of graphing and seeing where the curve crosses 1, taking into account the radius formula derived above. I haven't done it, but will eventually.

Part 3 looks harder and I'll look at it later.

Comment author: Thomas 06 February 2017 03:16:11PM 0 points [-]

Part 1 is good.

Comment author: gjm 16 January 2017 01:48:48PM *  0 points [-]

dhrfgvba bar

Qvfgnapr sebz prager bs phor gb prager bs "pbeare" fcurer rdhnyf fdeg(a) gvzrf qvfgnapr ba bar nkvf = fdeg(a) bire sbhe. Enqvhf bs "pbeare" fcurer rdhnyf bar bire sbhe. Gurersber enqvhf bs prageny fcurer = (fdeg(a) zvahf bar) bire sbhe. Bs pbhefr guvf trgf nf ynetr nf lbh cyrnfr sbe ynetr a. Vg rdhnyf bar unys, sbe n qvnzrgre bs bar, jura (fdeg(a) zvahf bar) bire sbhe rdhnyf bar unys <=> fdeg(a) zvahf bar rdhnyf gjb <=> fdeg(a) rdhnyf guerr <=> a rdhnyf avar.

dhrfgvba gjb

Guvf arire unccraf. Hfvat Fgveyvat'f sbezhyn jr svaq gung gur nflzcgbgvpf ner abg snibhenoyr, naq vg'f rnfl gb pbzchgr gur svefg ubjrire-znal inyhrf ahzrevpnyyl. V unira'g gebhoyrq gb znxr na npghny cebbs ol hfvat rkcyvpvg obhaqf rireljurer, ohg vg jbhyq or cnvashy engure guna qvssvphyg.

dhrfgvba guerr

Abar. Lbh pnaabg rira svg n ulcrefcurer bs qvnzrgre gjb orgjrra gjb ulcrecynarf ng qvfgnapr bar, naq gur ulcrephor vf gur vagrefrpgvba bs bar uhaqerq fcnprf bs guvf fbeg.

Comment author: Thomas 16 January 2017 01:59:41PM *  0 points [-]

One: Correct

Two: Incorrect

Three: Correct

Comment author: gjm 16 January 2017 03:05:23PM 1 point [-]

Oooh, I dropped a factor of 2 in the second one and didn't notice because it takes longer than you'd expect before the numbers start increasing. Revised answer:

dhrfgvba gjb

Vs lbh qb gur nflzcgbgvpf pbeerpgyl engure guna jebatyl, gur ibyhzr tbrf hc yvxr (cv gvzrf r bire rvtug) gb gur cbjre a/2 qvivqrq ol gur fdhner ebbg bs a. Gur "zvahf bar" va gur sbezhyn sbe gur enqvhf zrnaf gung gur nflzcgbgvp tebjgu gnxrf ybatre gb znavsrfg guna lbh zvtug rkcrpg. Gur nafjre gb gur dhrfgvba gheaf bhg gb or bar gubhfnaq gjb uhaqerq naq fvk, naq V qb abg oryvrir gurer vf nal srnfvoyr jnl gb trg vg bgure guna npghny pnyphyngvba.

Comment author: Thomas 16 January 2017 03:19:43PM 0 points [-]

Correct.

I gave some Haskell code as a comment over there on my blog, under the posted problem.

1206 dimensions is the smallest number. One can experiment with other values.
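The Haskell code itself isn't reproduced in the thread, but the calculation is short in any language. A Python sketch of it (spoiler warning: this hard-codes the central-sphere radius formula (sqrt(n) − 1)/4 from the rot13'd solutions above; working in log-space avoids overflow at n ≈ 1200):

```python
import math

def log_ball_volume(n, r):
    # log of V_n(r) = pi^(n/2) / Gamma(n/2 + 1) * r^n,
    # the volume of an n-dimensional ball of radius r.
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1) + n * math.log(r)

def smallest_dimension():
    # Smallest n where the central sphere, of radius (sqrt(n) - 1) / 4,
    # has volume exceeding the unit cube's volume of 1 (log-volume > 0).
    n = 2  # start at 2; at n = 1 the radius is zero
    while True:
        r = (math.sqrt(n) - 1) / 4
        if log_ball_volume(n, r) > 0:
            return n
        n += 1

print(smallest_dimension())  # 1206, matching the answer quoted in the thread
```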

Comment author: Luke_A_Somers 16 January 2017 01:22:40PM 0 points [-]

On the face of it, the premise seems wrong. For any finite number of dimensions, there will be a finite number of objects in the cube, which means you aren't getting any infinity shenanigans - it's just high-dimensional geometry. And in no non-shenanigans case will the hypervolume of a thing be greater than that of a thing it is entirely inside of.

Comment author: Thomas 16 January 2017 02:33:25PM 1 point [-]

And in no non-shenanigans case will the hypervolume of a thing be greater than that of a thing it is entirely inside of.

Are you sure, it's entirely inside?

Comment author: Luke_A_Somers 16 January 2017 03:45:27PM 0 points [-]

OK, that's an angle (pun intended) I didn't catch upon first consideration.

Comment author: gjm 16 January 2017 05:37:55PM 1 point [-]

High-dimensional cubes are really thin and spiky.
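The "thin and spiky" point can be made concrete: in a unit n-cube, the distance from the center to the nearest face stays 1/2 no matter what, while the distance from the center to a corner is sqrt(n)/2, so the corners protrude arbitrarily far compared to the faces. A quick illustrative sketch (not from the thread):

```python
import math

# Center of a unit n-cube: distance to the nearest face is always 1/2,
# but distance to a corner is sqrt(n)/2 and grows without bound with n.
for n in (2, 9, 100, 1206):
    face_dist = 0.5
    corner_dist = math.sqrt(n) / 2
    print(f"n={n}: to face = {face_dist}, to corner = {corner_dist:.2f}")
```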

Comment author: Thomas 17 January 2017 09:45:16AM *  0 points [-]

They are counterintuitive. A lot is counterintuitive in higher dimensions, especially something I may write about in the future.

This 1206 business is even Googleable, which I learned only after I had calculated the actual number 1206.

https://sbseminar.wordpress.com/2007/07/21/spheres-in-higher-dimensions/

Comment author: Viliam 19 January 2017 10:12:24AM *  0 points [-]

Good news: People are becoming more aware that AI is a thing, even mainstream media mention it sometimes.

Bad news: People think that spellchecker is an example of AI.

¯\_(ツ)_/¯

Comment author: ingive 19 January 2017 03:07:54PM *  0 points [-]

I think then you should ask what can you do about it (or do the most effective action).

Comment author: chaosmage 21 January 2017 11:26:10PM 1 point [-]

You could give this answer to literally anything.

Comment author: ingive 18 January 2017 07:54:45PM *  0 points [-]

a

Comment author: morganism 17 January 2017 07:26:20PM 0 points [-]

I heard Britain just passed a Robotic Rights Act, but only in passing, and can't find anything on it in search, except the original paper by the U.K. Office of Science and Innovation's Horizon Scanning Centre.

"However, it warned that robots could sue for their rights if these were denied to them.

Should they prove successful, the paper said, "states will be obligated to provide full social benefits to them including income support, housing and possibly robo health care to fix the machines over time.""

not to mention slavery, international transportation of sex workers, overtime, right to quit, etc.

http://robots.law.miami.edu/wp-content/uploads/2012/04/Darling_Extending-Legal-Rights-to-Social-Robots-v2.pdf

anyone writing this up ?

Comment author: moridinamael 16 January 2017 09:38:45PM 0 points [-]

Some of us sometimes make predictions with probabilities attached; does anybody here actually try to keep up a legit belief web and do Bayesian updating as the results of predictions come to pass?

If so, how do you do it?

Comment author: ChristianKl 17 January 2017 06:55:36AM 1 point [-]

Some of us sometimes make predictions with probabilities attached; does anybody here actually try to keep up a legit belief web and do Bayesian updating as the results of predictions come to pass?

No, and having a self-consistent belief net might decrease the quality of the beliefs a lot. Having multiple distinct perspectives on an issue was suggested by Tetlock to be very useful.

Comment author: moridinamael 17 January 2017 02:54:52PM 1 point [-]

A Bayesian network is explicitly intended to accommodate conflicting perspectives and update the weights of two or more hypotheses based on the result of an observation. There's absolutely no contradiction between "holding multiple distinct perspectives" and "mapping belief dependencies and using Bayesian updating".
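As a minimal sketch of that claim (the numbers here are hypothetical, not anyone's actual beliefs): two rival hypotheses both stay in the hypothesis space, and an observation merely shifts their weights by Bayes' rule.

```python
# Two competing hypotheses with prior weights, updated by Bayes' rule
# on a single observation; both perspectives survive, weights just shift.
priors = {"H1": 0.6, "H2": 0.4}
likelihoods = {"H1": 0.2, "H2": 0.7}  # assumed P(observation | hypothesis)

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)  # H2 is now favored, but H1 still retains 30% weight
```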

Comment author: ingive 16 January 2017 12:04:42PM 0 points [-]

How would we go about changing human behavior to be more aligned with reality? I was thinking it is undoubtedly the most effective thing to do. Ensure world domination of rationalist, effective altruist and utilitarian ideas. There are two parts to this; I simply mention R, EA and U because they resonate very well with the types of users here, and alignment with reality I explain next. What I expect alignment with reality to be is accepting facts fully, both in thinking and emotionally; this includes uncertainty of facts (because of facts like an interpretation of QM).

One example is that consciousness, qualia, experience is a tool, not a goal. These are facts: consciousness arose or dissociated (Monistic Idealism) as an evolutionary process. If you deny this, you're denying evolution and are in a death spiral of experience. If you start accepting facts emotionally, rather than fighting emotionally with reality, you merge and paradoxically get what you wanted emotionally. An example of aligning with reality. But if you are aware of the paradox you might seek the goal of experience, so be aware.

This is truly the essence of epistemic rationality and it's hard work. Most of us want to deny that experience is not our goal, but that's why we don't care about anything except endless intellectual entertainment. How do we change human behavior to be more aligned with reality? I'm unsure. I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

I think it's important to figure out what drives human behavior to not be aligned with reality and what make us more aligned. When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

When we know how to become the most hardcore altruist, then obviously, everyone should as well.

As far as I can tell, P (read sequences) < P (figure this out)

Comment author: Thomas 16 January 2017 12:09:46PM 2 points [-]

Ensure world domination of rationalist

A.K.A. Soviet Union and dependent states.

Comment author: Viliam 16 January 2017 05:21:39PM 0 points [-]

Ensure world domination of rationalist

A.K.A. Soviet Union and dependent states.

What makes you believe that the ruling class of Soviet Union was rational? It was a country where Lysenkoism was the official science, and where Kolmogorov was allowed to do math despite being gay only because he made a contribution to military technology.

Comment author: Thomas 16 January 2017 07:00:24PM *  1 point [-]

It was NOT rational. It was declared rational. As "we are not going to pursue profit, but we will instead drop the prices, as the socialism is going to be much more rational system".

And many, many more such slogans. Several might even be true.

The social democrats of today still want some of those "rationalizations" to implement. The problem is, the world doesn't operate on such rationales.

And this Effective Altruism looks similar to me. If one wants "to do good" for others, he should invest his money wisely. He should employ people, he should establish new businesses with those less fortunate people.

Giving something for nothing is not a very good idea! But using your powers for others to give something for nothing ... is a bad idea. In the name of self-perceived rationality - it's even worse.

Comment author: ingive 16 January 2017 08:02:31PM 0 points [-]

I wrote to align with reality, thus accept facts fully, which includes the uncertainty of facts. There is no alignment with reality in any of what you've said in comparison to mine, so strawman at best.

And this Effective Altruism looks similar to me.. If one wants "to do good" for others, he should invest his money wisely. He should employ people, he should establish new businesses with those less fortunate people.

You're implying that "doing good" effectively couldn't be of investing, employing or establishing businesses. It's independent of a method as long as it is effective in the context of effective altruistic actions. It makes no difference as long as it's the most effective with positive expected value.

Comment author: ingive 16 January 2017 01:29:02PM 0 points [-]

Why do you think that?

Comment author: Thomas 16 January 2017 02:12:16PM 2 points [-]

It was the same rationale. "We know what's the best for everybody else, so we will take the power!"

Besides the fact that those revolutionaries were wrong at the beginning, they purged each other throughout the process, so that the most cunning one was selected. Which was even more wrong, than those early revolutionaries were. Or maybe Stalin was more right than Trotsky, who knows, but it didn't matter very much. Even Lenin was wrong.

But even if Lenin was right, Andropov would still be corrupted.

Comment author: ingive 16 January 2017 02:57:54PM *  0 points [-]

I didn't really mean that. It was just setting an emotional stage for the rest of the comment. What do you think of the rest?

Comment author: ZankerH 16 January 2017 05:11:26PM *  1 point [-]

Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

Why don't you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far fell into, megalomania and genocide? And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation? If we had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point? Because that's essentially what you're proposing from the point of view of all potential futures where you fail.

Comment author: ingive 16 January 2017 07:31:32PM *  0 points [-]

Attempts to change society invariably result in selection pressure for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

You're excluding being aligned with objective reality (accepting facts, etc) with said effectiveness. Otherwise, it's useless.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

I'm unsure why you're presuming rearranging people's brains isn't done constantly independent of our volition. This simply starts questioning how we can do it, with our current knowledge.

Why don't you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far fell into, megalomania and genocide?

Why would it lead to megalomania and genocide, when it's not aligned with reality? An understanding of neuroscience and evolutionary biology, presuming you were aligned with reality to figure it out and accept facts, would be enough and still understanding that we can be wrong until we know more.

And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation?

As I said "this includes uncertainty of facts (because of facts like an interpretation of QM)." which makes us embrace uncertainty, that reality is probabilistic with this interpretation. It's not absolute.

If we had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point

Because that's essentially what you're proposing from the point of view of all potential futures where you fail.

I'm not.

Comment author: MrMind 17 January 2017 09:11:27AM *  0 points [-]

I think that the problem you state is unsolvable. The human brain evolved to solve social problems related to survival, not to be a perfect Bayesian reasoner (Bayesian models have a tendency to explode in computational complexity as the number of parameters increases). Unless you want to design a brain anew, I see no way to modify ourselves to become perfect epistemic rationalists, besides a lot of effort. That might be a shortcoming of my imagination, though.
There's also the case that we shouldn't be perfect rationalists: possibly the cost of adding a further decimal to a probability is much higher than the utility gained from it, but of course we couldn't know in advance. Also, sometimes our brain prefers to fool itself so that it is better motivated / happier, although Eliezer argued at length against this attitude.
So yeah, the landscape of the problem is thorny.

As far as I can tell, P (read sequences) < P (figure this out)

You really meant U(read sequences) < U(figure this out)

Comment author: ingive 17 January 2017 12:14:38PM *  0 points [-]

I see that the problem in your reasoning is that you've already presumed what it entails; what you have missed out on is understanding ourselves. Science and reasoning already tell us that we share neural activity and are a social species, thus each of us could be considered to be a cell in a brain. It's not so much that every cell decides to push the limits of its rationality, but rather the whole collective, as long as the expected value is positive. But to do that the first cells have to be U(figure this out).

It's not either perfect or non-perfect, that's absolute thinking. Rather by inductive reasoning or QM probabilistic thinking, "when should I stop refining this, instead share this?" after enough modification and understanding of neuroscience and evolutionary biology for the important facts in what we are.

Based on not thinking in absolute perfection, it's not a question of if, but rather what do we do? Because your reasoning cannot be already flawed before thinking about this problem. We already know that we can change behavior and conditioning, look around the world how people join religious groups, but how do we capitalize on this brain mechanism to increase productivity, rationality, and so on?

Before I said, "stop refining it then share it", that's all it takes and the entire world will have changed. Regarding that, our brain can fool itself, yeah, I don't see why there can't be objective measurement outside of subjective opinion and that it'll surely be thought of in the investigation process.

Comment author: moridinamael 17 January 2017 12:08:13AM 0 points [-]

Could you unpack "aligning with reality" a bit? Is it meaningfully different from just having a scientific mindset?

Comment author: ingive 17 January 2017 01:23:09AM *  0 points [-]

A scientific mindset has a lower probability of being positive expected value because there is more than one value when it comes to making decisions, sometimes in conflict with each other. This can lead to cognitive dissonance in daily life. It's because science is a tool, the best one we got. Aligning with reality has a higher probability as it's an emotional heuristic, with only one value necessary.

Aligning with reality means submitting yourself emotionally, similar to how a religious person submits to God, but in this case, our true creator: To logic, where it is defined here as "the consistent patterns which bring about reality". Then you accept facts fully. You understand how everything is probabilities, as per one interpretation of quantum mechanics and that experience is a tool rather than a goal. Using inductive reasoning and deciding actions as per positive expected value allows you to accept facts and be aligned with reality.

It's hard if you keep thinking binary, whether it be absolutes or not, 1's or 0's. Because to be able to accept facts is to be able to accept one might be wrong; everything is probabilities, infinite possibilities. Practically, if you know exercising every day is positive expected value, for example, then as you align yourself with reality in every moment, you realize even if you injure yourself accidentally today, you won't give up on reality. Because you made the most efficient action as per your knowledge and you already accounted for the probability of accidentally injuring yourself.

So as you keep feeling you also upgrade it with the probabilities to keep your emotions aligned with reality and easier able to handle situations as I mentioned above, however, maybe something more specific if someone breaks your trust. You already took it in consideration so you won't completely lose trust and emotions for reality.

When you accept and align yourself with reality, then the facts which underlie it, with our current understandings and as long as the likelihood is high, you keep aligning yourself. Experience truly is a feedback loop which results in whatever you feed it.

Regarding what aligning with reality entails: When you're constantly aligning yourself to reality, as long as you deem the probability high you'll be able to emotionally resonate with insights gained. For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you. If that doesn't resonate enough, for example, evolutionary biology that we're all descendants from stardust might. Or that there is a probability that you don't exist (as per QM) although very small. So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally. Then you keep the momentum by doing logical actions as per positive expected value after you learn everything what truly is you, and so on.

It's about what Einstein and Carl Sagan believed in: Spinoza's God. However Einstein couldn't accept QM because he was thinking in absolutes already, and was unaware of how the brain works. Which we now do; for example, we know we're all inherently in denial, how memory storage works, etc. If he knew that he might have had a different view.

I can't really fix up this text right now but I hope it can somehow help for you to understand what it means to align with reality. It's really important to accept that experience is a tool, not a goal, from insights from evolutionary biology for example. Then there is reality. Who is aligning, if there is only reality?

Comment author: moridinamael 17 January 2017 03:07:45PM *  0 points [-]

I think there is an irreconcilable tension between your statement that one should completely emotionally submit to and align with facts, and that one should use a Bayesian epistemology to manage beliefs.

There are many things in life and in science that I'm very certain about, but by the laws of probability I can never be 100% certain. There are many more things that I am less than certain about, and hold a cloud of possible explanations, the most likely of which may only be 20% probable in my estimation. I should only "submit" to any particular belief in accordance with my assessment of its likelihood, and can never justify submitting to some belief 100%. Indeed, doing so would be a form of irrational fundamentalism.

For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you. If that doesn't resonate enough, for example, evolutionary biology that we're all descendants from stardust might. Or that there is a probability that you don't exist (as per QM) although very small. So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally.

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree.

For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

I suspect that you haven't read through all of Eliezer's blog posts. His writings cover all the things you're talking about, but do so in a way that is grounded in much sturdier foundations than you appear to be using. It also seems that you are very much in love with this idea of Logic as being the One Final Solution to Everything, and that is always a huge danger sign in human thinking. Just thinking probabilistically, the odds that the true Final Solution to Everything has been discovered and that you are in possession of it are very low. Hence the need to keep a distribution of likelihoods over beliefs rather than putting all your weight down 100% on some perspective that appeals to you aesthetically.

Comment author: ingive 17 January 2017 03:57:50PM 0 points [-]

I should only "submit" to any particular belief in accordance with my assessment of its likelihood, and can never justify submitting to some belief 100%. Indeed, doing so would be a form of irrational fundamentalism.

Not necessarily, because the submitting is a means rather than the goal, and you will always never be certain. It's important to recognize empirically how your emotions work in contrary to a Bayesian epistemology, how using its mechanisms paradoxically lead to something which is more aligned with reality. It's not done with Bayesian epistemology, it is done with emotions, that do not speak in our language and it's possibly hard-wired to be that way. So we become aware of it and mix in the inductive reasoning.

For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses.

"true in some narrow technical sense" yet "false in probably more relevant senses" this is called cognitive dissonance, empirically it can even be this way by some basic reasoning, both emotionally and factually, which is what I am talking about, and which needs to be investigated. You're proving my point :)

That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

That's simply semantics, the problem is attaching emotionally to a sense of "I", which is not aligned with reality, independent of action, you may speak of this practical body, hands, I, for communication, it all arises in your neural activity without a center and it's ever changing. Empirically, that arises in the subjective reference frame, which is taken as a premise for this conversation.

I suspect that you haven't read through all of Eliezer's blog posts. His writings cover all the things you're talking about, but do it in a way that is grounded in much sturdier foundations than you appear to be using.

Yes. Unsure if his writings cover what I am talking about since evident by what you've said so far. Not that I blame you, I just want us to meta observe ourselves so we can be more aligned.

It also seems that you are very much in love with this idea of Logic as being the One Final Solution to Everything, and that is always a huge danger sign in human thinking. Just thinking probabilistically, the odds that the true Final Solution to Everything has been discovered and that you are in possession of it are very low. Hence the need to keep a distribution of likelihoods over beliefs rather than putting all your weight down 100% on some perspective that appeals to you aesthetically.

I'm unsure what counts as a danger sign in human thinking if you change perspective; the likelihood that something is worse than what we have is low. You only need a limited emotional connection to science and rationality to realize this and how bad thinking spreads epidemically now, but from someone like us, it's more likely to be good thinking? The likelihood to investigate this is very high to be positive expected value because inherently you, I and more possess the qualities which are not aligned with reality. I want to reassure you of something, however.

Alignment with reality is the most probable to give equilibrium as it's aligned with the utility function. When in a death spiral and not aligned (yet think is aligned) then aligning with reality might seem as not aligning ("very much false in probably more relevant senses") but the opposite and that it would be against utility function and lead to experience opposite to before. That's the case, but if you are honest with your emotions, the experience which is baseline has a hard time to see beyond itself. That's why understanding that experience is a tool, not a goal, although it gives to what would be considered a "satisfaction of that goal", it is only by accepting facts that it happens, and it can't happen in the death spiral.

I'm unsure if this is possible to communicate with words, this is quite a limitation of language and it seems as regardless what I say to you, you cannot see beyond it. That's why I want to start a discussion of how we should be more aligned with reality and where to start from. Whether it be neuroscience studies or whatever.

Comment author: moridinamael 17 January 2017 04:19:19PM *  0 points [-]

It's important to recognize empirically how your emotions work in contrary to a Bayesian epistemology, how using its mechanisms paradoxically lead to something which is more aligned with reality. It's not done with Bayesian epistemology, it is done with emotions, that do not speak in our language and it's possibly hard-wired to be that way. So we become aware of it and mix in the inductive reasoning.

Science does not actually know how emotions work to the degree of accuracy you are implying. Your statement that using emotional commitment rather than Bayesian epistemology leads to better alignment with reality is a hypothesis that you believe, not a fact that has been proven. If you become a very successful person by following the prescription you advocate, that would be evidence in favor of your hypothesis, but even that would not be very strong evidence by itself.

"true in some narrow technical sense" yet "false in probably more relevant senses" this is called cognitive dissonance, empirically it can even be this way by some basic reasoning, both emotionally and factually, which is what I am talking about, and which needs to be investigated. You're proving my point :)

I am not sure what you're saying here. "Cognitive dissonance" is not the same thing as observing that a phenomenon can be framed in two different mutually contradictory ways. I do not have an experience of dissonance when I say, "From one point of view we're inseparable from the universe, from a different point of view we can be considered independent agents." These are merely different interpretative paradigms and neither are right or wrong.

Yes. Unsure if his writings cover what I am talking about since evident by what you've said so far. Not that I blame you, I just want us to meta observe ourselves so we can be more aligned.

I am trying to say nicely that Eliezer's writings comprehensively invalidate what you're saying. The reason you're getting pushback from Less Wrong is that we collectively see the mistakes that you're making because we have a shared bag of epistemic tools that are superior to yours, not because you have access to powerful knowledge and insights that we don't have. You would really benefit in a lot of ways from reading the essays I linked before you continue proselytizing on Less Wrong. We would love to have you as a member of the community, but in order to really join the community you will need to be willing to criticize yourself and your own ideas with detachment and rigor.

I'm unsure what considers as danger sign in human thinking if you change perspective, the likelihood that something is worse than what we have is low. You only need a limited emotional connection to science and rationality to realize this and how bad thinking spreads epidemically now, but from someone like us, it's more likely to be good thinking? The likelihood to investigate this is very high to be positive expected value because inherently you, I and more possess the qualities which are not aligned with reality. I want to reassure you of something, however.

I'm not arguing that changing perspective from default modes of human cognition is bad. I'm arguing that your particular brand of improved thinking is not particularly compelling, and is very far from being proven superior to what I'm already doing as a committed rationalist.

Alignment with reality is the most probable to give equilibrium as it's aligned with the utility function. When in a death spiral and not aligned (yet think is aligned) then aligning with reality might seem as not aligning ("very much false in probably more relevant senses") but the opposite and that it would be against utility function and lead to experience opposite to before. That's the case, but if you are honest with your emotions, the experience which is baseline has a hard time to see beyond itself. That's why understanding that experience is a tool, not a goal, although it gives to what would be considered a "satisfaction of that goal", it is only by accepting facts that it happens, and it can't happen in the death spiral.

I would actually suggest that you stop using the phrase "aligning with reality" because it does not seem to convey the meaning you want it to convey. I think you should replace every instance of that phrase with the concrete substance of what you actually mean. You may find that it means essentially nothing and it just a verbal/cognitive placeholder that you're using to prop up unclear thinking. For example, in the above paragraph, "Alignment with reality is the most probable to give equilibrium as it's aligned with the utility function" could be rewritten as "Performing the actions most likely to yield highest utility is most probable to be aligned with the utility function", which is a tautology, not an insight.

Comment author: ingive 17 January 2017 05:01:32PM *  0 points [-]

Science does not actually know how emotions work to the degree of accuracy you are implying. Your statement that using emotional commitment rather than Bayesian epistemology leads to better alignment with reality is a hypothesis that you believe, not a fact that has been proven. If you become a very successful person by following the prescription you advocate, that would be evidence in favor of your hypothesis, but even that would not be very strong evidence by itself.

I don't know; that's why I wanted to raise an investigation into it. But empirically you can validate or invalidate the hypothesis through emotional awareness, which is what I said at the start of the message you quoted, yet you somehow make me seem to imply science when I say empirically.

First sentence: "It's important to recognize empirically"

I do not have an experience of dissonance when I say,

You might've had, but no longer. That's how cognitive dissonance works.

"From one point of view we're inseparable from the universe, from a different point of view we can be considered independent agents." These are merely different interpretative paradigms and neither are right or wrong.

Independent agents are an empirical observation which I have already taken as a premise for the sake of communication. Emotionally, you don't have to be an independent agent of the universe if you emotionally choose not to. It's a question of whether one alignment is more aligned with reality based on factual evidence or on what you feel (have been conditioned to feel). Right or wrong is a question of absolutes. More aligned over time is not.

you will need to be willing to criticize yourself and your own ideas with detachment and rigor.

I'm unsure what I have written that has not tried to communicate this message; in case you don't understand, that's exactly what I am trying to tell you. I am offering to raise a discussion to figure out how to do it. Aligning with reality implies detachment from things which are not aligned. If you wonder whether attachment to it is possible: yes, as a means, but you'll soon get over it through empirical and scientific evidence.

I'm not arguing that changing perspective from default modes of human cognition is bad. I'm arguing that your particular brand of improved thinking is not particularly compelling, and is very far from being proven superior to what I'm already doing as a committed rationalist.

I'm not sure, that's why I want to raise a discussion or a study group to investigate this idea.

"Performing the actions most likely to yield highest utility is most probable to be aligned with the utility function",

Simply being aligned with reality gives you equilibrium as that's what you were designed to do. Using Occam's razor here simplifies your programming.

The bottom line is being able to accept facts emotionally (such as the neural activity mentioned before) rather than relying on empirical observations shaped by social conditioning. I'm unsure that you have in any way disproved the point I just made.

That's the point I want to bring, we should want to investigate that further and how we can align ourselves with the facts emotionally (empirically). But how do we do it?

Simply by phrasing it as "true in some narrow technical sense" and then "false in probably more relevant senses", you treat your empirical observation as probably "true" rather than the scientific evidence, or facts (which you call narrow and technical). No, it's not probably true, and there is a disconnect between your emotional attachment to what's less probable and to what's more probable. You don't even see it as a problem because it's your lens, yet you do your best to admit it in a way that doesn't seem too obvious, by using words like "narrow". That's exactly what I invite you to discuss further: why do you believe things to be false when the scientific evidence says otherwise? ("true in some narrow technical sense") I presume you're also using true and false in a linguistic way; there's no such thing.

That's exactly why I deem it important, because if you did, you'd say "yeah the scientific evidence says so" instead of "no my senses tells me it's false" or both (which makes no sense, worth to investigate!), what if by learning of the scientific evidence, you adopt the "truth" so that your senses tell you what is "true"? That's what you would do.

Comment author: moridinamael 17 January 2017 05:31:22PM *  0 points [-]

Simply by phrasing it as "true in some narrow technical sense" and then "false in probably more relevant senses", you treat your empirical observation as probably "true" rather than the scientific evidence, or facts (which you call narrow and technical). No, it's not probably true, and there is a disconnect between your emotional attachment to what's less probable and to what's more probable. You don't even see it as a problem because it's your lens, yet you do your best to admit it in a way that doesn't seem too obvious, by using words like "narrow". That's exactly what I invite you to discuss further: why do you believe things to be false when the scientific evidence says otherwise? ("true in some narrow technical sense") I presume you're also using true and false in a linguistic way; there's no such thing.

There is a narrow technical sense in which my actions are dependent on the gravitational pull of some particular atom in a random star in a distant galaxy. That atom is having a physical effect on me. This is true and indisputable.

In a more relevant sense, that atom is not having any effect on me that I should bother with considering. If a magical genie intervened and screened off the gravitational field of that atom, it would change none of my choices in any way that could be observed.

What am I supposedly believing that is false, that is contradicted by science? What specific scientific findings are you implying that I have got wrong?

...

Let me back way up.

You are saying a lot of really uncontroversial things that nobody here particularly cares to argue about, like "Occam's razor is good" and "we are not causally separate from the universe at large" and "living life as a human requires a constant balancing and negotiation between the emotional/sensing/feeling and rational/deliberative/calculating parts of the human mind". These ideas are all old hat around here. They go all the way back to Eliezer's original essays, and he got those ideas from much older sources.

Then you're jumping forward and making quasi-religious statements about "aligning with reality" and "emotionally submitting" and talking about how your "sense of self disappears". All that stuff is your own unsupported extrapolations. This is the reason you're having trouble communicating here.

Comment author: ingive 17 January 2017 06:16:40PM *  0 points [-]

What am I supposedly believing that is false, that is contradicted by science? What specific scientific findings are you implying that I have got wrong?

This is what you said:

"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses."

You believe that you and your environment are separate based on "relevant" senses. Scientific evidence is irrelevant to some of your senses; it is technical. If all of your senses were in resonance, including the emotional ones, then there wouldn't be a context in which scientific evidence is irrelevant.

So your environment and you are not separate. This is a scientific fact, because it's all part of your neural activity. Now, I am not denying consciousness, qualia, or empirical evidence; I'm already taking them as a premise. But you are emotionally attached to the idea that you and your environment are separate; that's why you're unable to accept the scientific evidence. However, if you had a scientific mindset, the facts would make you accept it. It's not the way you frame it now ("It's true in a technical sense, but not for the relevant senses"), where one part of you accepts it but the other, your emotions, does not.

This is exactly what I mean by aligning with reality: you're aligning and letting the evidence in rather than rejecting it from preconditioned beliefs. I think you're starting to understand, and that you will be stronger because of it, even if it might seem a little scary at the start. Of course, we have to investigate it.

There is a narrow technical sense in which my actions are dependent on the gravitational pull of some particular atom in a random star in a distant galaxy. That atom is having a physical effect on me. This is true and indisputable. In a more relevant sense, that atom is not having any effect on me that I should bother with considering. If a magical genie intervened and screened off the gravitational field of that atom, it would change none of my choices in any way that could be observed.

You don't bother considering it because it's an analogy in which the hypothetical scenario leads to that conclusion. Do the same with the statements in context; repeat it: does it have any effect on you that you feel you're not separate from your environment ("Helping others is helping you?") and so on? But of course you have to write it down in the same manner, only now not for an analogy.

Then you're jumping forward and making quasi-religious statements about "aligning with reality" and "emotionally submitting" and talking about how your "sense of self disappears". All that stuff is your own unsupported extrapolations. This is the reason you're having trouble communicating here.

Aligning with reality is an emotional heuristic which follows Occam's razor. Emotionally submitting is something you already do. This is an example of what happens if you emotionally submit to a heuristic which constantly aligns you with reality and acts as a guide to your decisions. Then, if there is evidence, like I wrote at the start of the post, you submit yourself to the extent that it is no longer true only in "a technical sense".

Comment author: moridinamael 17 January 2017 06:28:14PM 0 points [-]

But you are emotionally attached to the idea that you and environment are separate, that's why you're unable to accept the scientific evidence.

No, I'm not.

This is just not a very interesting or useful line of thinking. I (and most people on this forum) already try to live as rationalists, and where your proposal implies any deviation from that framework, your deviations are inferior to simply doing what we are already doing. Furthermore, you consistently rely on buzzwords of your own invention ("aligning with reality", "emotionally submitting") which greatly inhibit your attempts at clarifying what you're trying to say. Perhaps if you read the essays as I suggest, you could provide substantive criticisms/improvements that did not rely on your own idiosyncratic terminology.

Comment author: ingive 16 January 2017 08:23:45PM *  0 points [-]

How disappointing. No one on LW appears to want to discuss this, except for a few who undoubtedly misunderstood this post and started raving about irrelevant topics. At least let me know why you don't want to.

1) How would we go about changing human behavior to be more aligned with reality?

Aligned with reality = Accepting facts fully (probably leads to EA ideas, science, etc)

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

Comment author: username2 18 January 2017 10:33:54AM *  1 point [-]

1) How would we go about changing human behavior to be more aligned with reality?

Replace all humans with machines.

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

Comment author: ingive 18 January 2017 11:28:53AM *  0 points [-]

Replace all humans with machines.

Changing human behavior is probably a more efficient way to align more with reality than building machines. It's a question of whether a means is a goal for you. If not, you would base your operations on the most effective action, which is probably changing behavior (because changing the behavior of one person could equal the impact of your machine-building, and probably exceed it). I don't think replacing all humans with machines is a smart idea anyway; merging biology with technology would be a smarter approach from my view, as I deem life to be conscious and machines not to be. Of course, I might be wrong, but sometimes you might not have an answer and still give yourself the benefit of the doubt. For example, if you believed that every action is inherently selfish, you would still do actions which were not; by giving yourself the benefit of the doubt, if you figured out later on (which we did) that it is not the case, then that was a good choice. This includes consciousness: since we can't prove the external world, it would be wise to keep humans around or utilize the biological hardware. If machines replaced all humans, they would not be very smart machines if they didn't at least keep some humans around, in a jungle or so, uncontacted. That would undoubtedly mean unfriendly AI, like a paperclip maximizer.

I just want you to recognize what you're saying and how it looks: even though you only wrote five words, you could just as well be supporting a paperclip maximizer.

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

What should I search for to find an answer to my question? Flaws of human behavior that can be overcome (can they?), like biases and fallacies, are relevant but quite specific. However, I guess that's very worthwhile to go through to improve functionality; anything else would be stupid.

Comment author: niceguyanon 18 January 2017 05:20:13PM 0 points [-]

Here is why I think people are not engaging with you. Don't take this as a criticism of your ideas or questions.

  • You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

  • I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

  • Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Comment author: username2 18 January 2017 07:31:12PM *  0 points [-]

I was being cheeky, yes, but also serious. What do you call a perfect rationalist? A sociopath.[1] A fair amount of rationality training is basically reprogramming oneself to be mechanical in one's responses to evidence and to follow scripts for better decision making. And what kind of world would we live in if every single person was perfectly sociopathic in their behaviour? For this reason, in part, I think the idea of making the entire world perfectly rationalist is a potentially dangerous proposition, and one should at least consider how far along that trajectory we would want to take it.

But the response I gave to ingive was 5 words because for all the other reasons you gave I did not feel it would be a productive use of my time to engage further with him.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

Comment author: ingive 18 January 2017 08:10:24PM *  0 points [-]

No, you don't. A perfect rationalist is not a sociopath, because a perfect rationalist understands what they are, and through scientific inquiry can constantly update and align themselves with reality. If every single person were a perfect rationalist, the world would be a utopia, in the sense that extreme poverty would instantly be eliminated. You're assuming that a perfect rationalist cannot see through the illusion of self and identity and update their beliefs by understanding neuroscience and evolutionary biology. On the contrary: they would be seen as philanthropic, altruistic, and selfless.

The reason you think so is the straw Vulcan, your own attachment to your self and identity, and your own projections onto the world. I have talked about your behavior previously in one of my posts. Do you agree? I also gave you suggestions on how to improve, by meditating, for example: http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/

In another example, since you and many in society seem to have a fetish for sociopaths: yes, you'll be a sociopath, but not for yourself; for the world. By recognizing that your neural activity includes your environment and that they are not separate, that all of us evolved from stardust, and by practicing, for example, meditation or utilizing psychotropic substances, your "identity", "I", and "self" become more aligned, and thus so does what your actions are directed toward. That's called Effective Altruism. (Emotions aside, selflessness speaks louder in actions!)

Edit: You changed your post after I replied to you.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

It still applies. It doesn't matter.

Comment author: niceguyanon 18 January 2017 08:33:41PM 0 points [-]

If I remember correctly, username2 is a shared account, so the person you are talking to now might not be the one you previously conversed with. Just thought you should know, because I don't want you to mistake the account for a static person.

Comment author: ingive 18 January 2017 08:46:12PM *  0 points [-]

It's unlikely that it's not the same person; alternatively, people on average utilize shared accounts to try to share their suffering (by that I mean a specific attitude) in a negative way. It would be interesting to compare shared accounts with other accounts using, for example, IBM Watson Personality Insights, in a large-scale analysis.

I would just ban them from the site. I'd rather see a troll spend time creating new accounts, with people noticing the sign-up dates. Relevant: Internet Trolls Are Narcissists, Psychopaths, and Sadists.

By the way, I was not consciously aware of who the user was when I wrote my text or the analysis of the user's agenda. Only afterwards did I remember, "oh, it's that user again".

Comment author: username2 18 January 2017 08:58:08PM 0 points [-]

The username2 account exists for a reason. Anonymous speech does have a role in any free debate, and it is virtuous to protect the ability to speak anonymously.

Comment author: ingive 18 January 2017 05:40:25PM 0 points [-]

You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics,

You forgot to say that you think that; for username2's point, by contrast, you did remember to say "I think".

because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

That's unfortunate if it is the case. If ideas outside their echo chamber create such fear, then what I say might be of use in the first place, if we all come together and figure things out :)

I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

It was, but it speaks to his underlying ideas and character that he was even in the position to do that. I don't mind it; I enjoy typing walls of text. What would you want me to respond, if anything?

Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Yeah, I think so too, but I do think there is a technological barrier in how this forum was set up for the type of problem-solving I am calling for. If we truly want to be Less Wrong, it's fine as it is now, but there can definitely be improvements in an effort for the entire species rather than a small subset of it, 2k people.

Comment author: niceguyanon 18 January 2017 07:22:16PM 0 points [-]

It was but it speaks of his underlying ideas and character to even be in the position to do that.

What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas? Why would it? It wasn't meant for you to take seriously.

What would you want me to respond, if at all?

Probably not at all.

Comment author: ingive 18 January 2017 08:35:23PM 0 points [-]

What do you mean by this? Assuming its a joke, why does it speaks to his character and underlying ideas; why would it, it wasn't meant for you to take seriously.

Because a few words tell a large story when someone also decided it was worth their time to write them. In my post I explained, for example, what type of viewpoints it implies and why it's stupid (in the sense of being inefficient and not aligned with reality).

Probably not at all.

I will update my probabilities then as I gain more feedback.

Comment author: plethora 18 January 2017 02:09:28PM 0 points [-]

Accepting facts fully (probably leads to EA ideas,

It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.

Comment author: ingive 18 January 2017 02:37:02PM *  0 points [-]

that's at least on the right side of the is-ought gap.

I'm having a hard time understanding what you mean.

Accepting facts fully amounts to EA/utilitarian ideas. There is no 'ought' to be; 'leads' was the incorrect word choice.

Comment author: plethora 19 January 2017 07:47:52AM 1 point [-]

No. Accepting facts fully does not lead to utilitarian ideas. This has been a solved problem since Hume, FFS.

Comment author: ingive 19 January 2017 02:55:57PM 0 points [-]

You're welcome to explain why this isn't the case. I'm thinking mostly about neuroscience and evolutionary biology. It tells us everything.

Comment author: moridinamael 19 January 2017 03:08:20PM 1 point [-]

Is-ought divide. If you have solved this problem, mainstream philosophy wants to know.

Comment author: ingive 19 January 2017 03:51:36PM *  0 points [-]

If someone wins the Nobel prize you heard it here first.

The is-ought problem implies that the universe is deterministic, which is incorrect; it's an infinite range of possibilities or probabilities which are consistent but can never be certain. Hume's beliefs about is-ought came from his own understanding of his emotions and the emotions of those around him. He correctly presumed that emotion is what drives us and that logic and rationality could not (thus no ought can follow from what is), and he thought the universe was deterministic (without knowledge of the brain and QM). The insight he was not aware of is that even though his emotions are the driving factor, he can emotionally be with rationality and logic, with facts, so there is no ought separate from what is. 'What is' implies facts, rationality, logic, and so on: EA/utilitarian ideas. The question of free will is an emotional one; if you are aware that your subjective reference frame, your awareness, was a part of it, then you can let go of that.

Comment author: moridinamael 19 January 2017 04:21:35PM 1 point [-]
  1. The universe is deterministic.

  2. You seem to be misunderstanding is-ought. The point is that you cannot conclude what ought to be, or what you ought to do, from what is. You can conclude what you ought to do in order to achieve some specific goal, but you cannot infer "evolutionary biology, therefore effective altruism". You are inserting your own predisposition into that chain and pretending it is a logical consequence.

Comment author: ingive 19 January 2017 05:11:39PM 0 points [-]
  1. With that interpretation, yes, but not Copenhagen. I'm unsure, because can we really be certain of absolutes, given our lack of understanding of the human brain? I think how memory storage and the brain work shows us that we can't be certain of our own knowledge.

  2. If you are right that the universe is deterministic, then what ought to be is what is. But if you ought to do the opposite of what 'is' tells us, what are you doing then? You are not allowed to have a goal which is not aligned with what is, because that goes against what you are. I do agree with you now, however; I think this is semantics. I think it was a heuristic. But then I'll say, "What is, is what you ought to be".

Comment author: plethora 23 January 2017 06:31:52PM 0 points [-]

The is-ought problem implies that the universe is deterministic

What?

Comment author: ingive 25 January 2017 03:41:41AM 0 points [-]

What?

Because Hume reasoned from what the universe is, without taking into consideration that it ought to be different because of the probabilistic nature (on one interpretation) of it all.

Comment author: Luke_A_Somers 16 January 2017 03:44:26PM 0 points [-]

P (read sequences) < P (figure this out)

What?