
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: shminux 20 January 2016 06:00:20AM 15 points [-]

Humans are not bad at math. We are excellent at math. We can calculate the best trajectory to throw a ball into a hoop, the exact way to move our jiggly appendages to achieve it, accounting for a million little details, all in a blink of an eye. Few if any modern computers can do as well.

The problem is one of definition: we call "math" the part of math that is HARD FOR HUMANS. Because why bother giving a special name to something that does not require special learning techniques?
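The ballistics the comment alludes to can be written down explicitly, which makes the contrast vivid: the closed-form launch angle below is the same computation a thrower's motor system solves implicitly in a fraction of a second. A minimal sketch in Python; the speed, distance, and height numbers in the usage note are illustrative, not anything from the comment:

```python
import math

def launch_angle(v, d, h, g=9.81):
    """Angle (radians) to hit a target d metres away and h metres up,
    launching at speed v, ignoring air resistance."""
    disc = v**4 - g * (g * d**2 + 2 * h * v**2)
    if disc < 0:
        raise ValueError("target out of range at this speed")
    # Two solutions exist; taking the '-' root gives the flatter, faster arc.
    return math.atan2(v**2 - math.sqrt(disc), g * d)

def height_at(v, theta, d, g=9.81):
    """Height of the projectile when it has travelled d metres horizontally."""
    t = d / (v * math.cos(theta))
    return v * math.sin(theta) * t - 0.5 * g * t**2
```

For a rough free-throw geometry (hoop 4.6 m away, rim about 1.05 m above the release point, launch at 9 m/s), the flatter solution comes out near 31 degrees, and plugging the angle back into the flight equation puts the ball at the target height.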

Comment author: Lumifer 12 January 2016 07:33:30PM *  16 points [-]

A physics research team has members who can (and occasionally do) secretly insert false signals into the experiment the team is running. The goal is to practise resistance to false positives. A very interesting approach; it's the first time I've heard of physicists using it.

Bias combat in action :-)

The LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,”...

Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. ... The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.

Source

Comment author: RichardKennaway 09 January 2016 08:31:33PM 13 points [-]

let us assume, that the top leadership of ISIS is composed of completely rational and very intelligent individuals

Of the sort that casebash assures us cannot exist? The imaginary competence of fictional rational heroes? Top human genius level?

No. These all amount to assuming a falsehood.

  1. The premise of this article is wrong. The ISIS are really just a bunch of idiots, and their apparent successes are only caused by the powers in the region being much more incompetent than ISIS.

Another straw falsehood to set beside the first one. All of this rules out from the start any consideration of ISIS as they actually are. They are real people with a mission, no more and no less intelligent than anyone else who succeeds in doing what they have done so far.

There is no mystery about what ISIS wants. They tell the world in their glossy magazine, available in many languages, including English (see the link at the foot of that page). They tell the world in every announcement and proclamation.

"Rationalists", however, seem incapable of believing that anyone ever means what they say. Nothing is what it is, but a signal of something else.

I have not seen any reason to suppose that they do not intend exactly what they say, just as Hitler did in "Mein Kampf". They are fighting to establish a new Caliphate which will spread Islam by the sword to the whole world, Allahu akbar. All else is strategy and tactics. If their current funding model is unsustainable, they will change it as circumstances require. If their recruitment methods falter, they will search for other ways.

More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?

I recommend a reading of Max Frisch's play "The Fire Raisers".

In response to comment by [deleted] on Open Thread, Dec. 28 - Jan. 3, 2016
Comment author: Viliam 29 December 2015 09:09:48PM *  16 points [-]

Your 'easiest way' feels to me like: "If you are low-status, and you want to change it, aim for middle status, not high status." Which in my opinion is excellent advice. Because if you succeed at this, you can try for higher status later, and it will feel more comfortable. But many people consistently keep aiming higher than they can afford, and then they predictably fail. Now that I think about it, it applies to so many areas of life -- people trying to run before they can walk, which ultimately leaves them unable to either walk or run.

People probably fail to notice this strategy because they see the situation as a dichotomy between "low status" and "high status", as if any deviation from the highest observed status means they remain at the bottom.

All of the following behaviors are not highest status:

  • Joining an existing group, instead of creating your own, or waiting for a group to form spontaneously around you.
  • Learning the norms of the group, instead of expecting the group to forgive you all transgressions.
  • Taking interest in the topics of the group, instead of expecting the group to switch to the topics that interest you.
  • Following the group consensus, instead of signalling your uniqueness by disagreeing with it.
  • Working hard, instead of displaying that you don't have to work hard.
  • Talking about interesting and relevant things, instead of expecting people to admire you regardless of what you say.

And that's exactly why a person starting at the bottom should do them, because it will bring them to the middle. Actually, this strategy would bring the average person to the middle; the highly intelligent people will end up above the middle, because their intelligence will allow them to perform better at these things.

Comment author: Viliam 24 November 2015 09:11:30AM 16 points [-]

The first association I have with your username is "spams Open Threads with not really interesting questions".

Note that there are two parts in that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread, and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.

Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.

Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then maybe being more specific in the question could help you get a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.

Comment author: VoiceOfRa 23 November 2015 02:59:03AM 10 points [-]

This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

Given that your idea of "rational thinking" appears to consist of the kind of Straw-Vulcanism that gives "rational thinking" a bad name, I'd appreciate it if you would stop trying to "help" the movement.

Comment author: Gleb_Tsipursky 18 November 2015 11:29:42PM *  13 points [-]

Thank you for bringing this up as a topic of discussion! I'm really interested to see what the Less Wrong community has to say about this.

Let me be clear that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively with Less Wrong. This is not to say that it doesn't happen, and in fact some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and are passively lurking without actively engaging.

I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.

The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so I can't "take the credit" for the click-baity nature of the title in most cases.

Another area of work is publishing op-eds in prominent venues on topical matters that address recent political matters in a politically-oriented manner. For example, here is an article of this type: "Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP."

Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.

We also give interviews to prominent venues on rationality-informed topics: 1, 2.

Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough to not only skim the article, but also follow the links to Intentional Insights, which was listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article rather than other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.

The articles we put out on other media channels and on which we collaborate with other groups are more oriented toward entertainment and less oriented toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the "About Us" page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.

Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.

Executive Summary

  1. The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions and lead to mutual flourishing.
  2. To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.
  3. We assume that:
    • Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
    • Problematic decision making undermines mutual flourishing in a number of life areas.
    • These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
    • We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.
  4. Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.
  5. Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.
  6. Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the draft version of our Theory of Change.

Also, about Endless September. After people engage with our content for a while, we introduce them to more advanced things on ClearerThinking, and we are in fact discussing collaborating with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.

The large majority we expect would not go through this chain. They instead engage in other venues with rational thinking, as Viliam mentioned above. This fits into the fact that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline, and only secondarily getting people to the level of being aspiring rationalists who can participate actively with Less Wrong.

Well, that's all. Looking forward to your thoughts! I'm always looking for better ways to do things, so very happy to update my beliefs about our methods and optimize them based on wise advice :-)

EDIT: Added link to the comment where I discuss our collaboration with Spencer Greenberg's ClearerThinking and also our audience engaging with Less Wrong, such as Ella.

Comment author: username2 16 May 2016 07:33:57AM 13 points [-]

I know anecdotes are not a statistically significant form of argument, but perhaps they do convey emotional ramifications. With that in mind, I'd like to share a rather extreme anecdote explaining one aspect of what is wrong with political discourse where the people making the arguments are attacked, rather than the arguments themselves.

Years ago, I knew this girl - let's call her Alice. We were very good friends, but that changed. I don't know why for certain - I cannot read minds - but I think it started when I disagreed with one of her feminist opinions. I didn't say anything particularly offensive; I didn't say all feminists are 300 pound whales (which is not true), nor did I say that women should not be allowed to vote. She said that in the US, the only legal way for a woman to defend herself against rape was by sticking her fingers up her assailant's nose, with the implication that the US legal system does not care whether women get raped. I disagreed, saying that there are reasons why Americans have so many guns, and the biggest one is self-defence. I said that lethal force is allowed to defend against much lesser crimes such as trespass, at least in some states, and that I couldn't imagine that any US state would have such strong restrictions on self-defence.

There were a few similar cases where I disagreed with her arguments for clear logical reasons, never attacking her or anyone else. And I think this flipped a switch in her brain, from friend to enemy, because from then on whenever I opened my mouth she would ridicule me.

We still hung out, but only because we were in the same circle of friends. One day, I said "You know this new drug you guys are doing? I've looked it up, and Wikipedia says it's more addictive than heroin." Alice looked at me as if I was something she'd stepped in. "Don't be ridiculous," she sneered. I shrugged and wandered off.

I didn't hear from them for a few months, for various reasons, not least that I wanted some distance from her and from drugs. The next time I heard from them, it was a phone call explaining that Alice's boyfriend - a really nice guy who I had known for years - had fatally overdosed on the drug I had tried to warn them about, and that one of my other friends was probably going to prison for supply or even manslaughter.

In situations such as this, it is some comfort to know that at least I tried to help. I did what I could, but if people ridicule me I cannot force them to take me seriously. In my head it was the saddest 'I told you so' ever, although I obviously did not mention this to anyone else.

It would be an exaggeration to say that if Alice hadn't shouted me down then this guy would still be alive. I'm not great at convincing people of things at the best of times, and I think other friends of mine had tried to warn about the dangers too. But I think the probability (that if Alice hadn't shouted me down then this guy would still be alive) is nontrivial.

Perhaps I should have shouted Alice down, told her to stop being a &^%$%$$. But I've always tried to follow the advice that the best way to deal with conflict is to calmly walk away, if possible.

Maybe it's crass to make a political point of this, but if there is a point, the point I am trying to make is that when people criticise social justice warfare, it might not be because we hate 'justice' or because we are evil cis white men (tm); it's the warfare we object to, because a group of people at war with themselves over ideology is so much weaker in every way. Is it so much to ask that arguments be debated, rather than the people who make them ridiculed, censored, or silenced?

So I'd like to say that regardless of whether you are a progressive or conservative, communist or an-cap or neoreactionary, I will engage with your arguments rather than trying to attack you, and even if I disagree with your politics I will take non-political arguments seriously. I hope you do so too.

Comment author: Tyrrell_McAllister 12 April 2016 05:12:26PM *  15 points [-]

A special case of this fallacy that you often see is

Your Axioms (+ My Axioms) yield a bald contradiction. Therefore, your position isn't even coherent!

This is a special case of the fallacy because the charge of self-contradiction could stick only if the accused person really subscribed to both Your Axioms and My Axioms. But this is only plausible because of an implicit argument: "My Axioms are true, so obviously the accused believes them. The accused just hasn't noticed the blatant contradiction that results."

In response to comment by [deleted] on Open Thread April 4 - April 10, 2016
Comment author: Evan_Gaensbauer 05 April 2016 09:03:34AM 15 points [-]

Context: Main is currently disabled; LessWrong 2.0

LessWrong is actively being redesigned. Until further notice, posts to Main have been disabled. Once the redesign is complete, LW may have multiple subs, none of which might be called 'Main', but one or more of which will be designated as the home of the nice Forest of Classic LW Stuff you're hoping to find here. The only recent posts in Main are meetup posts and the survey, which were promoted there for visibility. Apparently, usage statistics show that for the last several months Discussion has been getting much more attention than Main, so Discussion is where the non-crap is. Of course, there is no longer the explicit division between crap and non-crap you'd expect the 'Main'/'Discussion' divide to reflect. Try finding other ways to filter out crap, like reading the top posts from the previous week.

Comment author: Viliam 26 February 2016 08:59:46AM *  11 points [-]

It's hard to come up with a good counter-argument to "slavery is bad". Even women's suffrage and Prohibition didn't require lying.

That's bullshit. More precisely, it is quite possible that you don't consider any of the counter-arguments to be good. But you should not generalize that to everyone. A "good argument" is a 2-place word; it means that a given person accepts the premises of the argument and its style of reasoning. Also, there is a lot of hindsight bias and social pressure here: we already know which side has historically won and which is associated with losers; but before that happened, people probably evaluated the quality of the arguments differently.

I could start playing Devil's Advocate and give examples of specific arguments that would seem good to some people, but I am not sure the readers (and our stalkers at RationalWiki) would focus on the meta-argument of "it is possible to make good arguments for X" instead of taking the arguments as literally my true opinions (plus opinions of everyone who upvoted this comment, plus opinions of everyone who didn't throw a tantrum and publicly leave LW after seeing me publish this comment there).

"There are no good arguments for X" is simply how having a successful social taboo against X feels from inside.

For example, many debates with real-life feminists about women's suffrage assume that men have had universal voting rights forever, and women only got them recently. But the truth is that "men's suffrage" (a voting right for every adult man) also came only recently in historical terms. In some countries, men and women got universal voting rights in the same year. But you wouldn't guess that by listening to debates about women's suffrage in those countries.

Comment author: Raiden 22 February 2016 09:22:33PM 15 points [-]

The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
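That one-line idea becomes concrete with Bayes' rule: a belief is a number strictly between 0 and 1 that moves with each piece of evidence, rather than snapping between "believe" and "disbelieve". A minimal sketch; the likelihood values are made up purely for illustration:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: return P(H|E) given P(H) and the two likelihoods of E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Start agnostic and apply three pieces of mildly supporting evidence,
# each twice as likely if the hypothesis is true than if it is false.
belief = 0.5
for _ in range(3):
    belief = update(belief, 0.8, 0.4)
# belief is now 8/9, roughly 0.889 -- strong, but still not certainty
```

The point of the continuum framing is visible in the last line: however much evidence accumulates, the belief approaches 1 without ever reaching it, so further evidence can always move it back.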

In response to Upcoming LW Changes
Comment author: LessWrong 03 February 2016 05:53:57PM 14 points [-]

A for effort, but please satisfy my curiosity: what ARE the actual changes planned?

Comment author: Viliam_Bur 31 January 2016 10:32:15AM *  15 points [-]

Eugine's beliefs are "politically incorrect", but that's not completely unusual at LW. The main reason why we don't see them here often is that we don't debate politics often. And ironically, Eugine's downvoting crusades have contributed significantly to reducing the political debates on LW. There were times when we used to have a political debate in a separate thread or in an Open Thread once in a while. And at some moment, such debates started predictably ending with someone saying "I disagreed with Eugine yesterday, and today I see I have lost hundreds of karma points and most of my old comments are at -1; fuck this". This makes the debate unpleasant even for the people who on the object level happen to agree with Eugine on the specific topic. Most of us see the difference between "I won the debate by providing convincing arguments" and "I won the debate by strategically downvoting or otherwise harassing my opponents" (or "I won the debate because my opponents were harassed by a third party").

Also, Eugine's comments seem optimized to offend. Such comments are "convincing for the already believing, and irritating for the unbelieving". They don't change anyone's opinion, and are usually used by a majority to silence a minority. Ironically, a majority is exactly what Eugine doesn't have here. So this leaves me with two models:

  • Eugine is too mindkilled to understand all this nuance, despite having spent years here. He still doesn't get what LW is about. In such case, his mental abilities are insufficient for LessWrong.

  • Eugine may understand the nuance, he just doesn't give a fuck about rationality or LW culture. For him, victory of his tribe is the ultimate goal. That also means he doesn't belong here, just for different reasons.

Regardless of whether he understands or doesn't understand what he is doing wrong, he has shown no capacity to learn or to improve his behavior. Like, come on, it's not like moderators are paranoidly observing IP addresses of every user to make sure the lifelong bans stay enforced. All he would have to do is to create a new account and change his behavior so that no one would suspect it's the same person. He is either incapable or unwilling to do that. Well, fuck him; we are not here to provide him group therapy.

I mean, feel free to speculate about his true reasons. I am just saying they don't change anything about the ban.

Comment author: Elo 20 January 2016 04:50:11AM 15 points [-]

I think this is a terrible and ridiculous idea, likely to create in-groups and out-groups and do more harm than good.

While you are willing to go down these paths have you considered sign-language representations? I am unfamiliar with them other than knowing they are there.

Comment author: Viliam 15 January 2016 10:00:48AM *  14 points [-]

I agree with gjm that the remark about IQ is wrong. This is about cultures. Let's call them "nerd culture" and "social culture" (those are merely words that came immediately to my mind, I do not insist on using them).

Using the terms of Transactional Analysis, the typical communication modes in "nerd culture" are activity and withdrawal, and the typical communication modes in "social culture" are pastimes and games. This is what people are accustomed to do and to expect from other people in their social circle. It doesn't depend on IQ or gender or color of skin; I guess it depends on personality and on what people in our perceived "tribe" really are doing most of the time. -- If people around you exchange information most of the time, it is reasonable to expect that the next person also wants to exchange information with you. If people around you play status games most of the time, it is reasonable to expect that the next person also wants to play a status game with you. -- In a different culture, people are confused and project.

A person coming from "nerd culture" to "social culture" may be oblivious to the status games around them. From an observer's perspective, this person displays a serious lack of social skills.

A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game. From an observer's perspective, this person displays symptoms of paranoia.

The "nerd culture" person in a "social culture" will likely sooner or later get burned, which provides them evidence that their approach is wrong. Of course they may also process the evidence the wrong way, and decide e.g. that non-nerds are stupid or insane, and that it is better to avoid them.

Unfortunately, for a "social culture" person in a "nerd culture" it is too easy to interpret the evidence in a way that reinforces their beliefs. Every failure in communication may be interpreted as "someone did a successful status attack on me". The more they focus on trying to decipher the imaginary status games, the more they get out of sync with their information-oriented colleagues, which only provides more "evidence" that there is some kind of conspiracy against them. And even if you try to explain this to them, your explanation will be processed as "yet another status move". A person sufficiently stuck in the status-game interpretation of everything may lack the capacity to process any feedback as something other than (or at least something more than merely) a status move.

Thus ends my whitesplaining mansplaining cissplaining status attack against all who challenge the existing order.

EDIT:

Reading the replies I realized there are never enough disclaimers when writing about a controversial topic. For the record, I don't believe that nerds never play status games. (Neither do I believe that non-nerds are completely detached from reality.) Most people are not purely "nerd culture" or purely "social culture". But the two cultures are differently calibrated.

For example, correcting someone has a subtext of a status move. But in the "nerd culture" people focus more on what is correct and what is incorrect, while in the "social culture" people focus more on how agreement or disagreement would affect status and alliances.

If some person says "2+2=3" and other person replies "that's wrong", in the "nerd culture" the most likely conclusion is that someone has spotted a mistake and automatically responded. Yes, there is always the possibility that the person wanted to attack the other person, and really enjoyed the opportunity. Maybe, maybe not.

In the "social culture" the most likely conclusion is the status attack, because people in the "social culture" can tolerate a lot of bullshit from their friends or people they don't want to offend, so it makes sense to look for an extra reason why in this specific case someone has decided to not tolerate the mistake.

As a personal anecdote, I have noticed that in real life, some people consider me extremely arrogant and some people consider me extremely humble. The former have repeatedly seen me correcting someone else's mistake; and the latter have repeatedly seen someone else correcting my mistake, and me admitting the mistake. The idea that both attitudes could exist in the same person (and that the person could consider them to be two aspects of the same thing) is mind-blowing to someone coming from the "social culture", because there these two roles are strictly separated; they are the opposite of each other.

When you hear someone speaking about how the reality is socially constructed, in a sense they are not lying. They are describing the "social culture" they live in; where everyone keeps as many maps as necessary to fit peacefully in every social group they want to belong to. For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend. So if you insist that there is a territory that might disagree with their map, of course they perceive it as a hostility.

Yes, even the nerds can be hostile sometimes. But a person from the "social culture" will be offended all the time, even by a behavior that in the "nerd culture" is considered perfectly friendly. -- As an analogy, imagine a person coming from a foreign culture that also speaks English, but in their culture, ending a sentence with a dot is a sign of disrespect towards the recipient. (Everyone in their culture knows this rule, and it is kinda taboo to talk about it openly.) If you don't know this rule, you will keep offending this person in every single letter you send them, regardless of how friendly you will try to be.

Comment author: James_Miller 09 January 2016 07:40:01PM 15 points [-]

We should take the outside view and look at other governments that had "crazy" ideologies and ask if the leaders of these governments really believed these ideologies. The Nazi leaders were mostly sincere in their beliefs, as were many but not all of the communist leaders (Lenin and Trotsky certainly were true believers in what they professed, while Mao and Stalin were probably cynical opportunists.) My guess is that most of the Christian European monarchs who claimed a divine right to rule really did believe that they were God's instruments.

Comment author: gwern 02 January 2016 02:52:07AM 14 points [-]

And last & least:

  • Charlie Brown Christmas special I had never sat down and watched the famous Peanuts Christmas special in its entirety, and I was surprised to discover how wretched it is, especially watching it back to back with How the Grinch Stole Christmas. The animation is kindergarten-level and unmistakably loops, and the special is watchable only because the Peanuts style is so minimal (verging on ugly) that it can pretend its extraordinarily low quality is just the Peanuts style at work; the musical theme would be excellent, were it not repeated ad nauseam despite the shortness of the special; characters do not speak in anything but a monotone, and are so poorly characterized it's hard to imagine non-Peanuts fans understanding much of anything about it. And finally, the beloved story itself...

    It struck me, while watching it, that I am not sure I have ever seen a simpler or clearer demonstration of why Nietzsche calls Christianity a slave morality and a transvaluation of earlier master moralities: the message of the special is that, under Christianity, everything which is good is bad, and all that is bad is good. Charlie Brown is a loser who fails at everything he does in the special: he is unable to enjoy the season, he is passive-aggressively hostile towards Violet (a tactic that in its ill grace & resentment only emphasizes the depth of his loserdom), he fails to either recognize the opportunity of the contest or decorate his house better than his dog can, he is a failure at directing the play and is kicked out (rather than made an actor or musician, since of course he would fail at that too), only to fail further at finding a tree. Charlie Brown is a natural-born slave and his inadequacy is manifest to everyone who knows him even slightly; he is not fast, he is not strong, he is not good, he is not smart, he has no special talents - indeed, he cannot even be nice. He is the sort of nebbish who, when he goes bankrupt and shoots some people at his office, his few friends and acquaintances tell the reporters that they're not surprised so much that he did something bad but that he had the guts to do anything at all. This part of the story is where the slave morality enters in: a reading from the Christian gospel inspires him - he may be a failure at everything, he may be a loser, but he has faith in Jesus and his understanding of the true spirit of Christmas as a celebration of Jesus's birth will doubtless be rewarded in the next world, and this faith shores up his psyche and fortifies his denial, to the point where the rest of the children, impressed by his obstinacy and of course their dormant Christian faith, cluster around him to engage in a choral singing of "Hark! The Herald Angels Sing" with Charlie Brown as their leader. "Hark!" is an appropriate choice of Christmas carol, as unlike many of the popular Christmas songs these days like "Rudolph the Red-nosed Reindeer" or "The 12 Days of Christmas", "Hark!" is focused single-mindedly on the birth of Jesus: it's "peace on earth and mercy mild" because Jesus (the Christ/"new-born king"/"everlasting lord"/"the Godhead"/"incarnate deity"/"Prince of Peace" etc) is born and now ruling the world, and has little to do with peace being intrinsically good. With individual identity submerged in a group identity subservient to their god, the revaluation of moral values from a modern secular ethos to the Christian slave morality is complete: the last is now first, the low is now high. The End.

Comment author: username2 24 December 2015 06:06:30AM *  11 points [-]

That only works if there is a mechanism for getting rid of CEOs who abuse their power. See comment above. Also note that the victims of said abuse are generally not in a position to defend themselves.

Comment author: Lumifer 13 December 2015 11:30:52PM 15 points [-]

How many gold coins would it take for the Roman Empire to land a man on the moon, within 20 years, with 99% confidence?

Comment author: AlexMennen 12 December 2015 12:26:44AM *  14 points [-]

From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.

Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]

Comment author: Viliam 30 November 2015 11:05:20AM *  15 points [-]

For those of you who always wanted to know what it is like to put your head in a particle accelerator when it's turned on...

On 13 July 1978, Anatoli Petrovich Bugorski was checking a malfunctioning piece of the largest Soviet particle accelerator, the U-70 synchrotron, when the safety mechanisms failed. Bugorski was leaning over the equipment when he stuck his head in the path of the 76 GeV proton beam. Reportedly, he saw a flash "brighter than a thousand suns" but did not feel any pain.

The left half of Bugorski's face swelled up beyond recognition and, over the next several days, started peeling off, revealing the path that the proton beam (moving near the speed of light) had burned through parts of his face, his bone and the brain tissue underneath. However, Bugorski survived and even completed his Ph.D. There was virtually no damage to his intellectual capacity, but the fatigue of mental work increased markedly. Bugorski completely lost hearing in the left ear and only a constant, unpleasant internal noise remained. The left half of his face was paralyzed due to the destruction of nerves. He was able to function well, except for the fact that he had occasional complex partial seizures and rare tonic-clonic seizures.

Bugorski continued to work in science and held the post of coordinator of physics experiments. In 1996, he applied unsuccessfully for disabled status to receive his free epilepsy medication. Bugorski showed interest in making himself available for study to Western researchers but could not afford to leave Protvino.

Comment author: OrphanWilde 16 November 2015 06:24:09PM 3 points [-]

You're creepy and artificial. Ella is creepy and artificial. This post is creepy and artificial. The About Us page of Intentional Insights is -very- creepy and artificial. And what makes this all bizarre is that the creepy and artificial is recursive - there's something creepy and artificial about the way you're creepy and artificial, in that it is so transparent and obvious that it cannot possibly be unintentionally transparent and obvious. The way you keep selling yourself, selling your company (which itself is selling you), selling merchandise selling your company selling yourself...

Well, knock it off. I don't know if you're a spider in a human suit, or a human in a spider-in-a-human-suit suit, or a spider in a human-in-a-spider-in-a-human-suit-suit suit, but at a certain level it stops mattering. If you're a naive innocent playing at Dark Arts, you're reading as a narcissistic con artist, and not even a terribly good one. If you're a sociopath playing as a naive innocent playing at Dark Arts in order to do something more elaborate that probably only vaguely involves Less Wrong, well, that's just ridiculous, so quit that. And if you're actually a con artist, you're terrible at whatever con you're trying to execute here and should go do something with social media, which actually looks like your skill set.

Comment author: PhilGoetz 29 October 2015 04:03:32AM *  14 points [-]

"Spreading quicker" may not be the best question to ask. The question I'm more interested in is, What is the relationship between speed of communication, and the curve that describes innovation over time?

A good model for this is the degree of genetic isolation in a genetic algorithm. Compare two settings for a GA. One allows mating between any two organisms in the population. Another has many subpopulations, and allows genetic exchange between subpopulations less frequently.

Plot the fitness of the most-fit organism in each population by generation. The first GA, which has fast genetic communication, will initially outstrip the second, but it will plateau at a lower level of fitness, and all the organisms in the population will be identical, and evolution will stop. This is called premature convergence.

The second GA, with restricted genetic communication, will catch up and pass the fitness of the first GA, usually continuing on to a much higher optimum, because it maintains homogenous subpopulations (which allows adaptation) but a diverse global population (which prevents premature convergence).
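PhilGoetz's two settings can be sketched in a few lines of Python. This is a toy illustration under invented parameters (the genome length, mutation rate, migration interval, and fitness function are all made up for the example, not taken from any particular GA library or experiment):

```python
import random

GENOME = 32  # bits per individual


def fitness(bits):
    # Toy multimodal fitness: length of the longest run of equal bits.
    # Many distinct genomes score well, so maintaining diversity matters.
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best


def step(pop, rng):
    # One generation: truncation selection, one-point crossover, mutation.
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[: max(2, len(pop) // 2)]
    children = []
    while len(children) < len(pop):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, GENOME)
        child = a[:cut] + b[cut:]
        if rng.random() < 0.1:  # occasional point mutation
            child[rng.randrange(GENOME)] ^= 1
        children.append(child)
    return children


def panmictic_ga(pop_size=40, gens=30, seed=0):
    # Fast "communication": any two individuals in the population can mate.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME)] for _ in range(pop_size)]
    for _ in range(gens):
        pop = step(pop, rng)
    return max(fitness(x) for x in pop)


def island_ga(islands=4, pop_size=10, gens=30, migrate_every=10, seed=0):
    # Restricted communication: isolated subpopulations with rare migration.
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(GENOME)] for _ in range(pop_size)]
            for _ in range(islands)]
    for g in range(1, gens + 1):
        pops = [step(p, rng) for p in pops]
        if g % migrate_every == 0:
            # Ring migration: each island's best individual replaces a
            # random individual on the next island.
            bests = [max(p, key=fitness) for p in pops]
            for i, p in enumerate(pops):
                p[rng.randrange(pop_size)] = bests[i - 1][:]
    return max(fitness(x) for p in pops for x in p)
```

Plotting best fitness per generation for each variant typically shows the panmictic run climbing quickly and then plateauing as the population converges, while the island run keeps improving after migrations; on a toy problem this small, though, the gap is noisy from seed to seed.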

Think about the development of pop music. As communication technology improved, pop stars like Elvis could be heard and seen everywhere, and their records could be marketed and shipped across the entire country more efficiently than local musicians could be promoted; recorded music replaced live performers. On one hand, you could live in Peoria and listen to the most-popular musicians in the country. On the other, by 1990, American pop music had nearly stopped evolving. Rebecca Black could become popular across the nation in a single week, but the amount of innovation or quality she produced was negligible.

Basically, rapid communication gives people too much choice. They choose things comfortably similar to what they know. Isolation is needed to allow new things to gain an audience before they're stomped out by the dominant things.

You need to state your preferences as a function of the long-term trajectory of the entropy of ideas, rather than as any instantaneous quantity.

Comment author: gjm 16 June 2016 12:29:03PM -2 points [-]

I think pwno is proposing that we do it precisely because it doesn't align with our convictions. (He might advise Trump supporters to vote for Clinton.)

I'm sure I remember reading, but can't now find, an anecdote from Eliezer back in the OB days: he was with a group of people at the Western Wall in Jerusalem, where there's this tradition of writing prayers on pieces of paper and sticking them in cracks in the wall, so as a test of the sincerity of his unbelief he wrote "I pray for my parents to die" and stuck that in the wall. Same principle.

(Personally I think it's a silly principle. Human brains aren't very good at detaching themselves from their actions, and I would only cast a vote if I were happy for my preferences to get shifted a little bit towards the candidate I was voting for.)

In response to Positivity Thread :)
Comment author: SquirrelInHell 09 April 2016 01:53:13AM *  14 points [-]

I love you too! ❤ ❤ ❤

I mean, it's fun and all, but what do you think about:

  • spreading niceness to all discussion on LW, not just a special separate thread,

  • having an actual topic to discuss while being nice about it?

Edit: ah, right, you wanted to not go meta here. Sorry.

#LessWrongMoreNice

Comment author: gjm 23 March 2016 05:41:03PM 14 points [-]

If someone turns up saying "I've just discovered X and I love it", the information I gain from that is quite different in the cases (1) where they really have just discovered X and love it and (2) where they're saying it because someone paid them to.

Indeed, the fact that these people are presumably being paid isn't the point. The fact that they are promoting something dishonestly is the point. The fact that they're being paid is relevant only as evidence that their promotion is dishonest.

Why not ban me?

Because your ranting is not in fact particularly insane, and because your participation in the LW community is not confined to ranting about hypothyroidism.

If you talked about literally nothing else, and if it transpired that you're only promoting your theory because someone paid you to drum up sales for thyroid hormone supplements, then you'd probably be contributing nothing of value. (Whether banning you would be a good response is a different question.) I mean, it might turn out that actually what you're saying about thyroid hormones is right (or at least enlightening) even though you were saying it on account of being paid, but the odds wouldn't be good.

Comment author: gjm 10 March 2016 12:43:26PM *  14 points [-]

Ignoring psychology and just looking at the results:

  1. Delta-function prior at p=1/2 -- i.e., completely ignore the first two games and assume they're equally matched. Lee Sedol wins 12.5% of the time.

  2. Laplace's law of succession gives a point estimate of 1/4 for Lee Sedol's win probability now. That means Lee Sedol wins about 1.6% of the time. [EDITED to add:] Er, no, actually if you're using the rule of succession you should apply it afresh after each game, and then the result is the same as with a uniform prior on [0,1] as in #3 below. Thanks to Unnamed for catching my error.

  3. Uniform-on-[0,1] prior for Lee Sedol's win probability means posterior density is f(p)=3(1-p)^2, which means he wins the match exactly 5% of the time.

  4. I think most people expected it to be pretty close. Take a prior density f(p)=6p(1-p) (the Beta(2,2) density), which favours middling probabilities but not too outrageously; then he wins the match about 7.1% of the time.

So ~5% seems reasonable without bringing psychological factors into it.
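These numbers can be checked directly. Under a Beta(a, b) prior, observing two losses gives a Beta(a, b+2) posterior, and the probability of winning the next three games is the posterior expectation of p^3, which is a ratio of Beta functions. A quick sketch (the helper names are just for this illustration):

```python
from math import gamma


def beta_fn(a, b):
    # Beta function B(a, b) = Γ(a)Γ(b) / Γ(a+b)
    return gamma(a) * gamma(b) / gamma(a + b)


def match_win_prob(a, b, losses=2, needed=3):
    # Posterior after `losses` losses under a Beta(a, b) prior is
    # Beta(a, b + losses); P(win the next `needed` games in a row)
    # is the posterior expectation of p**needed.
    a_post, b_post = a, b + losses
    return beta_fn(a_post + needed, b_post) / beta_fn(a_post, b_post)


print(0.5 ** 3)              # case 1: fixed p = 1/2 -> 0.125
print(match_win_prob(1, 1))  # case 3: uniform prior -> 0.05
print(match_win_prob(2, 2))  # case 4: p(1-p)-shaped Beta(2,2) prior -> ~0.0714
```

As gjm's correction notes, applying the rule of succession afresh after each game (case 2) reduces to exactly the uniform-prior calculation in case 3.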

Comment author: Wei_Dai 10 March 2016 10:37:06AM 14 points [-]

Compared to its competition in the AGI race, MIRI was always going to be disadvantaged by both lack of resources and the need to choose an AI design that can predictably be made Friendly as opposed to optimizing mainly for capability. For this reason, I was against MIRI (or rather the Singularity Institute as it was known back then) going into AI research at all, as opposed to pursuing some other way of pushing for a positive Singularity.

In any case, what other approaches to Friendliness would you like MIRI to consider? The only other approach that I'm aware of that's somewhat developed is Paul Christiano's current approach (see for example https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf), which I understand is meant to be largely agnostic about the underlying AI technology. Personally I'm pretty skeptical but then I may be overly skeptical about everything. What are your thoughts? I don't recall seeing you having commented on them much.

Are you aware of any other ideas that MIRI should be considering?

Comment author: Elo 10 March 2016 01:41:17AM 14 points [-]

We accidentally had a meetup as the game was ending. For the first time in my life, I got to walk into a room and say, "Who's been watching the big game?" It was great, and then about 10 mins later the resignation happened. It was pretty exciting!

Comment author: Viliam_Bur 01 March 2016 10:19:17PM 13 points [-]

moderator action: Old_Gold is banned

Another account of Eugine_Nier / Azathoth123 / Voiceofra / The_Lion / The_Lion2 is banned, effective now. I am posting this as a comment in Open Thread to avoid writing articles about banning the same person again and again, thus reducing the administrative cost of enforcing the already existing ban.

This specific change of policy does not apply to other potentially banned users (unless they are obvious spammers or scammers) who still deserve a separate post.

Comment author: gwern 16 February 2016 02:58:11AM *  14 points [-]

I've created a cost-benefit analysis of embryo selection for intelligence: http://www.gwern.net/Embryo%20selection

Turns out to be fairly challenging but ultimately delivers sensible results: modestly profitable but nothing special at current prices/polygenic-scores. But things get more interesting once we get scores from n>360k studies, and the multi-generational consequences are very interesting if we can get boosts like +9 points. Of course, it's mostly all a moot point and academic because of...

CRISPR. I've had a hard time getting prices because they all sound too good to be true.

Comment author: AspiringRationalist 13 February 2016 05:23:57AM 13 points [-]

From the 2014 Survey:

Polyamory:

  • 51.8% prefer monogamous, 15.1% prefer polyamorous (a lot uncertain)
  • But only 5.3% have more than 1 partner

Children:

  • 36.1% want more child(ren), 28.3% uncertain, 34.3% don't want more

Politics:

  • 38.9% Social Democratic, 27.7% Liberal, 25.2% Libertarian
  • Taxes: 3.14 +- 1.212 (1 = should be lower; 5 = should be higher)
  • Minimum Wage: 3.21 +- 1.359 (1 = should be lower; 5 = should be higher)
  • Social Justice: 3.15 +- 1.385 (1 = negative view; 5 = positive view)

Ethics:

  • 60% accept or lean towards consequentialism
  • Out of constructivism, error theory, non-cognitivism, subjectivism and substantive realism, none had more than a third

Cryo:

  • 24% don't want to, 36.7% considering, 30.8% signed up or want to be
  • Probability that a person frozen today will be revived: 22.3 +- 27.3% (median 10%)

Misc:

  • p(many worlds) = 47.6% +- 30.1%
Comment author: gwern 12 February 2016 05:53:35PM 14 points [-]

You could look on the surveys: what questions are closest to 50%?

Comment author: ChristianKl 11 February 2016 11:30:36PM 14 points [-]

Likely a scam whereby he transfers money and then tells you to transfer some money back to him. Afterwards, the first transaction gets flagged as fraud and you lose the money from the first transaction.

Comment author: gwern 11 February 2016 07:26:04PM 14 points [-]

Is this really good phrasing

Yes, I was referring to Eliezer's essay there. I liked my little flourish there, so I'm glad someone noticed.

How do you detect you're not having a discussion but are walking on a battlefield?

In this case it's easy when you look over all the comments on HN and elsewhere. It's like when Yvain is simultaneously accused of being racist Neo-reactionary scum and a Marxist SJW beta-cuckold Jew scum - it's difficult to see how both sets of accusations could be right simultaneously, so clearly at least one set of accusers are unhinged.

Similarly, so the problem with this aldehyde-vitrification process is that it's both too good at fixing everything in place and it's not good enough at preserving information? It's a con job despite offering far greater transparency into whether it'll work? We know the process is quack science so it's a con job and oh, we already know the process works so it's a con job? It'll never work and we know this a priori because a copy of you isn't you? Each stroke against cryonics might seem reasonable or even probable on its own, but in total, like the 13th stroke of the clock which discredits all the previous ones' accuracy, they show what's really going on.

Comment author: knb 31 January 2016 01:58:44AM 14 points [-]

Less Wrong doesn't seem "overgrown" to me. It actually seems dried out and dying because the culture is so negative people don't want to post here. I believe Eliezer has talked about how whenever he posted something on LW, the comments would be full of people trying to find anything wrong with it.

Here's an example of what I think makes LessWrong unappealing. User Clarity wrote an interesting discussion level post about his mistakes as an investor/gambler and it was downvoted to oblivion. Shouldn't people be encouraged to discuss their failures as they relate to rationality? Do we really want to discourage this? No one even bothered to explain why they downvoted.

All discussion in Less Wrong 2.0 is seen explicitly as an attempt to exchange information for the purpose of reaching Aumann agreement. In order to facilitate this goal, communication must be precise. Therefore, all users agree to abide by Crocker's Rules for all communication that takes place on the website.

I think trying to impose strict new censorship rules and social control over communication is more likely to deal the death blow to this website than to help it. LessWrong really needs an injection of positive energy and purpose. In the absence of this, I expect LW to continue to decline.

Comment author: tanagrabeast 29 January 2016 02:27:52AM *  14 points [-]

As an American teacher of high school English, with a passion for spaced repetition software, I feel like it is my duty to respond to this post. My answer may surprise you.

If your goals are simply to understand more of what you read and to write more effectively, trying to skill up your general English skills strikes me as rather suboptimal.

Sure, a mastery of common word fragments will improve your ability to make at least some sense of unfamiliar words that use them -- I certainly teach these -- but you probably already know the most useful ones. I’m also unconvinced that etymology deepens comprehension much; usually, we want to understand someone, not somewords; this comes from understanding what that person intended to communicate, not from unlocking obscure arcana behind the words they happened to use.

Most of what is known to help reading comprehension is language independent, as is most of what is known to help you write better. I certainly don’t think Paul Graham’s skill as an essayist has much to do with his English; if he knows a second language even marginally well, I’m sure he would write in it nearly as effectively. To wit, he eschews esoteric explication. Writing is a craft, not a lookup table.

The strongest predictor of how well someone will do on a comprehension test of a given passage is how much they already know about the topic of that passage. A knowledge of the domain-specific vocabulary for that topic is either the second strongest predictor, or the same thing, depending on who you ask. General purpose vocabulary is farther down the list, and as an educated native speaker, you, again, are unlikely to find much low-hanging fruit in that area. So rather than take another level in English, I would suggest you consider which domains you want to be able to understand more of, and just start reading more in those domains, looking up words as needed. The language you do it in is almost irrelevant.

Consider: in the 6 credit hours of theory and practice for teachers of English Language Learners my state requires all teachers to take, I was taught that teenagers acquiring English as their second language are best off when they can continue learning domain specific concepts in their native language while waiting for their English to mature enough to transfer this knowledge over. Otherwise, they gain conversational English fluency but miss out on their first, best chance to learn foundational abstract concepts in, say, Science, Math, or Social Studies, leaving them without the ability to talk or even think about these subjects in any language.

With all the above in mind, when it comes to Anki cards and vocabulary, I am convinced that a great example sentence is much more useful than a great denotative definition. Connotations matter, and a visualizable, narratable context goes far both in conveying the extra implications of a word and in providing hooks for one’s memory. Still, you’re unlikely to absorb the deep flavor of the word -- the full intent of one who wields it fluently -- without encountering the word many times in varied contexts.

I say this in part because I acquired a sizable Spanish vocabulary from a time living in Spain decades ago, and there are to this day a number of words common to my internal monologue that are Spanish simply because they capture the flavor of the concept more perfectly than my closest English equivalents. But this is only the case for words that I encountered on enough authentic occasions to build that full connotative sense. Ones I merely studied out of the dictionary never reached that level, no matter how well I mastered them from a recognition and recall standpoint.

As any programmer will tell you, leveling skills in one language will have knock-on effects on your abilities in other languages, whether they are similar or not; the similar ones give you skills that transfer very directly, while the dissimilar ones broaden your conceptual toolset for approaching programs in general. If a problem might be more tractable within the intricacies of language suited to it, by all means, go deep into that language. But if you’re trying to understand say, an algorithm or a data structure, study that.

Comment author: gwern 16 January 2016 08:56:38PM *  14 points [-]

We wouldn't consider armor research not a success story just because at some point flintlocks phased out heavy battlefield armor.

I think you missed the point of my examples. If flintlocks killed heavy battlefield armor, that was because they were genuinely superior and better at attack. But we are not in a 'machine gun vs bow and arrow' situation.

The Snowden leaks were a revelation not because the NSA had any sort of major unexpected breakthrough. They have not solved factoring. They do not have quantum computers. They have not made major progress on P=NP or reversing one-way functions. The most advanced stuff from all the Snowden leaks I've read was the amortized attack on common hardwired primes, but that again was something well known in the open literature and why we were able to figure it out from the hints in the leaks. In fact, the leaks strongly affirmed that the security community and crypto theory has reached parity with the NSA, that things like PGP were genuinely secure (as far as the crypto went...), and that there were no surprises like differential cryptanalysis waiting in the wings. This is great - except it doesn't matter.

They were a revelation because they revealed how useless all of that parity was: the NSA simply attacked on the economic, business, political, and implementation planes. There is no need to beat PGP by factoring integers when you can simply tap into Gmail's datacenters and read the emails decrypted. There is no need to worry overly much about OTR when your TAO teams divert shipments from Amazon, insert a little hardware keylogger, and record everything and exfiltrate out over DNS. Get something into a computer's BIOS and it'll never come out. You don't need to worry much about academics coming up with better hash functions when your affiliated academics, who know what side their bread is buttered on, will quietly quash it in committee or ensure something like export-grade ciphers are included. You don't need to worry about spending too much on deep cryptanalysis when the existence of C ensures that there will always be zero-days for you to exploit. You don't need to worry about even revealing capabilities when you can just leak information to your buddies in the FBI or DEA and they will work their tails off to come up with a plausible non-digital story which they can feed the judge. (Your biggest problems, really, are figuring out how to not drown under the tsunami of data coming in at you from all the hacked communications links, subverted computers, bulk collections from cloud datacenters, decrypted VPNs etc.)

This isn't like guns eliminating armor. This is like an army not bothering with sanitation and wondering why it keeps losing to the other guys, which turns out to be because the latrine contractors are giving kickbacks to the king's brother.

The fact that computer security is having a hard time solving a much easier problem with a ton more resources should worry people who are into AI safety.

I agree, it absolutely does, and it's why I find kind of hilarious people who seem to seriously think that to do AI safety, you just need some nested VMs and some protocols. That's not remotely close to the full scope of the problem. It does no good to come up with a secure sandbox if dozens of external pressures and incentives and cost-cutting and competition mean that the AI will be immediately let out of the box.

(The trend towards attention mechanisms and reinforcement learning in deep learning is an example of this: tool AI technologies want to become agent AIs, because that is how you get rid of expensive slow humans in the loop, make better inferences and decisions, and optimize exploration by deciding what data you need and what experiments to try.)

Comment author: TheMajor 05 January 2016 06:40:44PM 13 points [-]

How very deep. But if I'm not mistaken, the original argument around Chesterton's fence is that somebody went through great effort to put a fence somewhere, and presumably would not have wasted that time if it were useless. In your example, "the common practice of taking down Chesterton fences", this is not the case. The general principle is to not undo that which others have worked hard to create, unless you are certain that it is useless/counterproductive. Nobody worked hard on making sure people could remove fences without understanding them (or at the very least I'm willing to claim that this is counterproductive), so this practice is not protected by the principle.

Comment author: ChristianKl 25 December 2015 06:44:13PM 14 points [-]

It would certainly be a good idea. The account has no business of casting votes.

Comment author: VoiceOfRa 06 December 2015 09:23:29PM 8 points [-]

Oh, I do. How about you read up on IQ research sometime.

Comment author: Benito 02 December 2015 11:20:07AM 14 points [-]

I single-handedly organised a half-day workshop for 80k, including doing the room bookings, the tech setup, the refreshments (couldn't buy them cheaply, I bought the crockery and food myself) and got feedback from the attendees of the sort "best catered event I've been to in 3 years of being in Oxford uni".

I've also completed my first term of university, and learned loads (of computer science, and also about my abilities in general).

Comment author: Lumifer 30 November 2015 07:33:02PM 14 points [-]

A paper.

Abstract:

Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.

Comment author: Viliam 25 November 2015 09:10:36AM *  14 points [-]

I'm wary of digging into people's pasts only to laugh that as teenagers they had the usual teenage hubris (and maybe, as highly intelligent people, they kept it for a few more years)... and then using it to hint that even today 'deeply inside' they are 'essentially the same', i.e. not worth taking seriously.

What exactly are we punishing here; what exactly are we rewarding?

Ten or more years ago I also had a few weird ideas. My advantage is that I didn't publish them on visible places in English, and that I didn't become famous enough so people would now spend their time digging in my past. Also, I kept most of my ideas to myself, because I didn't try to organize people into anything. I didn't keep a regular diary, and when I find some old notes, I usually just cringe and quickly destroy them.

(So no, I don't care about any of Eliezer's flaws reflecting on me, or anything like that. Instead I imagine myself in a parallel universe, where I was more agenty and perhaps less introverted, so I started to spread my ideas sooner and wider, had the courage to try changing the world, and now people are digging up similar kinds of my writings. Generally, this is a mechanism for ruining sincere people's reputations: find something they wrote when they were just as sincere as now only less smart, and make people focus on that instead of what they are saying today.)

I guess I am oversensitive about this, because "pointing out that I failed at something a few years ago, therefore I shouldn't be trusted to do it, ever" was something my mother often did to me while I was a teenager. People grow up, damn it! It's not like once a baby, always a baby.

Everyone was a baby once. The difference is that for some people you have the records, and for other people you don't; so you can imagine that the former are still 'deep inside' baby-like and the latter are not. But that's confusing the map with the territory. As the saying goes, "an expert is a person who came from another city" (so you have never seen their younger self.). As the fictional evidence proves, you could have literally godlike powers, and people would still diss you if they knew you as a kid. But today on internet, everything is one big city, and anything you say can get documented forever. (Knowing this, I will forbid my children to use their real names online. Which probably will not help enough, because twenty years later there will be other methods for easily digging in people's past.)

Ah, whatever. It's already linked here anyway. So if it makes you feel better about yourself (returning the courtesy of online psychoanalysis) to read stupid stuff Eliezer wrote in the past, go ahead!

EDIT: I also see this as a part of a larger trend of intelligent people focusing too much on attacking each other instead of doing something meaningful. I understand the game-theoretical reasons for that (often it is easier to get status by attacking other people's work than presenting your own), but I don't want to support that trend.

Comment author: Lumifer 24 November 2015 03:37:42PM 13 points [-]

You use LW as a dumping ground for whatever crosses your mind at the moment, and that is usually random and transient noise.

Comment author: Vaniver 20 November 2015 05:24:33PM 14 points [-]

I was insufficiently clear: that was a question about your model of my motivation, not what you want my motivation to be. You can say you want to hear more, but if you act against people saying things, which do you expect to have more impact?

But in the spirit of kindness I will write a longer response.


This subject is difficult to talk about because your support here is tepid and reluctant at best, and your detractors are polite.

Now, you might look at OrphanWilde or Clarity and say "you call that polite?"--no, I don't. Those are the only people willing to break politeness and voice their lack of approval in detail. This anecdote about people talking in the quiet car comes to mind; lots of people look at something and realize "this is a problem" but only a few decide it's worth the cost to speak up about it. Disproportionately, those are going to be people who feel the cost less strongly.

There's a related common knowledge point--I might think this is likely net negative, but I don't know how many other people think this is a likely net negative. Only if I know that lots of people think this is a likely net negative, and that they are also aware that this is the sentiment, does it make sense to be the spokesperson for that view. If I know about that dynamic, I can deliberately try to jumpstart the process by paying the costs of establishing common knowledge.

And so by writing a short comment I was hoping to get the best of both worlds--signalling that I think this is likely a net negative and that this is an opinion that should be public, without having to go into the awkward details of why.


That's just the social dynamics. Let's get to the actual content. Why do I think this is likely a net negative? Normally I would write something like this privately, but I'll make it public because we're already having a public discussion.

I agree that it would be nice if the broader population knew more clear thinking techniques. It's not obvious to me that it would be nice if more of the broader population came to LW. I think that deliberative rationality, like discussed on LW, is mostly useful for people with lots of spare CPU cycles and a reflective personality.

Once, I shared some bread I baked with my then-landlord. She liked it, and asked me how I made it, and I said "oh, it's really easy, let me lend you the book I learned from." She demurred; she didn't like reading things, and learned much better watching people do things. Sure, I said, and invited her over the next time I baked some to show her how it's done.

The Sequences are very much "The Way for software engineer-types as radioed back by Eliezer Yudkowsky." I am pessimistic about attempts to get other types of people closer to The Way by translating The Sequences into a language closer to theirs; much more than just the language needs to change, because the inferential gaps are in different places. I strongly suspect your 'typical American' with IQ 100 would get more out of The Way as radioed back by someone closer to them. Byron Katie, with her workshops and her YouTube videos, is the sort of person I would model after if I were targeting a broad market.

I have not paid close attention to the material you've produced because I find it painful. From what little I have seen, I have mostly gotten the impression that it's poorly presented, and I am some combination of unwilling and unable to provide you detailed criticism on why. I also think this is more than me not being the target audience--I don't have the negative reaction to pjeby that many do, for example, and he has much more of a self-help-style popular approach. To recklessly speculate on the underlying causes: I don't get the impression that you deeply respect or understand your audience, and what you think they want doesn't line up with what they actually want, in a way that seems transparent. It seems like "How do you do, fellow kids?"

Standard writing advice is "write what you know." If you want to do rationality for college professors, great! I imagine that your comparative advantage at that would be higher. But just because you don't see people pointing rationality at the masses doesn't mean that's a hole you would be any good at filling. Among other things, I would worry that because you're not the target audience, you won't be aware of what's already there / what your competition is.

Comment author: jsteinhardt 20 November 2015 05:15:28PM 11 points [-]

My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.

In response to Inefficient Games
Comment author: Gram_Stone 23 August 2016 07:15:56PM *  13 points [-]

It's nice to see that someone else has thought about this.

It's a popular rationalist pastime to try coming up with munchkin solutions to social dilemmas. A friend posed one such munchkin solution to me, and I thought he had an unrealistic idea of why regulations work, so I said to him:

Even though it's what you really want, I don't think the fact that you know everyone else will cooperate is the interesting thing per se about regulations; rather, that is a consequence of the fact that you have decreased what was once the temptation payoff and thus constructed a different game. You have functionally reduced the expected payoff of the option "Don't pay taxes," by law: if you don't pay taxes, then you get fined or jailed. Now all players are playing a game where the Nash equilibrium is also Pareto optimal. Clearly, one should pay taxes.

Now, ironically, this is good news if we want to cause better outcomes with less or no coercion, because it suggests that it is not coercion in itself that does the good work, but the fact that we have changed the payoffs to construct a different game; we can interpret coercion as just one instantiation of the general process by which 'inefficient games' become 'efficient games'. Coercion is perhaps a simple way to do the thing that all possible solutions to this problem seem to have in common, but there may be others that syntactically change the payoffs the way coercion does, yet which we would semantically interpret as something other than coercion.
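The payoff-change argument above can be made concrete with a toy two-by-two game. All the numbers here are invented purely for illustration: start from a Prisoner's-Dilemma-style "pay taxes" game, subtract a fine from the payoff for defecting, and check which symmetric profile is a Nash equilibrium before and after.

```python
# Toy illustration (all payoffs invented): a fine turns a Prisoner's-Dilemma-like
# game into one where cooperating ("pay taxes") is the Nash equilibrium.

def symmetric_game(temptation, reward, punishment, sucker):
    """Payoff to the row player: payoffs[my_action][their_action]."""
    return {
        "cooperate": {"cooperate": reward, "defect": sucker},
        "defect": {"cooperate": temptation, "defect": punishment},
    }

def is_nash(game, action):
    """A symmetric profile (action, action) is Nash iff the action is a
    best response to itself: no unilateral deviation pays more."""
    return all(game[action][action] >= game[other][action] for other in game)

# Classic PD ordering: temptation > reward > punishment > sucker.
pd = symmetric_game(temptation=5, reward=3, punishment=1, sucker=0)
assert is_nash(pd, "defect") and not is_nash(pd, "cooperate")

# Impose a fine of 6 on defecting: every payoff for "don't pay taxes" drops,
# and mutual cooperation becomes the (Pareto-optimal) equilibrium.
fine = 6
taxed = symmetric_game(temptation=5 - fine, reward=3, punishment=1 - fine, sucker=0)
assert is_nash(taxed, "cooperate") and not is_nash(taxed, "defect")
```

Nothing in the check cares whether the payoff change came from a fine, a subsidy, or a social sanction, which is the point: what matters is the shape of the resulting game, not the mechanism that produced it.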

A different time, a friend noticed that people building up trust seemed qualitatively similar to a Prisoner's Dilemma but couldn't see exactly how. I was like, "Have you heard of Stag Hunt? That's the whole reason Rousseau came up with it!" The PD is just one kind of social dilemma.

More generally, isn't it weird that the central objects of study in game theory, despite all of the formalization that has taken place since the beginning of the field, are remembered in the form of anecdotes?! You learn about the Stag Hunt and the Prisoner's Dilemma and Chicken and all other sorts of game, but there doesn't really seem to be any systematic notion of how different games are connected, or if any games are 'closer' to others in some sense (as our intuitions might suggest).

Meditations on Moloch was pretty, but I was the one in the audience coughing the words 'mechanism design'. Pointing out the mainstream academic work just makes you seem boring when you're commenting on something poetic. You also might like Robinson and Goforth's Topology of the 2x2 Games. The math isn't that complex, and it provides more insight than a barrage of anecdotes. To my knowledge this is not taught in traditional game theory courses, but it probably should be one day. They refer to this general class of games as the 'social dilemmas', if I recall correctly.

Comment author: Manfred 21 July 2016 09:48:54PM 12 points [-]

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: DataPacRat 16 May 2016 05:10:57AM *  13 points [-]

Wrote Something Story-like

Living in Weirdtopia: Week One

Comment author: Lumifer 26 April 2016 07:24:26PM 13 points [-]

what if we were to require new users to submit a link to a Facebook/LinkedIn account

You won't have many new users.

Comment author: Fluttershy 09 April 2016 10:33:36AM 8 points [-]

Avoid this program.

Jonah and Robert have good intentions, and I was actually happy with the weekly interview sessions taught by Robert. However, I had a poor experience with this program overall. I'll list some observations from my experience as a member of the first cohort below.

First, this program is effectively self-directed; most of the time, neither the TA nor the instructor were available. When they were, asking them questions was incredibly difficult due to their lack of familiarity with the material they were supposed to be teaching. To be sure, both the instructor and the TA were intelligent people--the problem was just that they knew lots of math, but not very much data science.

Second, there were lots of communication issues between the instructors and the students. I really do not want to give specific examples, since I don't want to say something that would reflect so poorly on the LessWrong community. However, I assure you that this was an incredibly large issue.

Lastly, everything about this program was disorganized. Several of us paid for housing through the program, which ended up not being available as soon as we'd been told that it would be. The furniture in the office space we used was set up by participants because Signal was too disorganized to have it set up before we were supposed to start using it. The fact that only two out of twelve students pair programmed together on an average day was also due to a lack of organization on the part of the instructors.

Jonah and Robert clearly worked very hard to make this program what it was, but attending was still a bad experience for me. If you already have a background in software engineering and want to pay $8,000 to teach yourself data science alongside other students who are doing the same, this program is a good fit for you. Otherwise, consider attending a longer, more established program, like Zipfian Academy, which actually uses pair programming and has instructors available to answer questions.

In response to Positivity Thread :)
Comment author: Fyrius 09 April 2016 12:18:03AM 13 points [-]

I'm really very happy that this whole website/community exists! I think it's one of the best influences on my life that I can think of.

Honestly, the world is a terribly confusing place to me. I'm not natively good at forming opinions — probably worse still than the average untrained person. And there are so many people very firmly believing contradictory things about so many things, and so many arguments that seem so convincing and still turn out to be wrong, so many different strands of dark side epistemology. LessWrong, to me, is an oasis of sanity in that landscape of discord. LessWrong represents a school of thought that teaches you how to wade through the fog without stumbling quite as much, making the Problem of Figuring Out What To Believe a lot more manageable.

And I like how there's no angry talk here, just an academic atmosphere of unjudging curiosity. I appreciate that too.

Comment author: Dagon 04 April 2016 03:40:20PM 12 points [-]

If one banned troll (and AFAIK, we only have one who's bothering to come back, and doing so badly enough to get caught repeatedly) is enough to kill LW, we're in pretty bad shape.

Thanks to the mods for continuing to remove his accounts, but please try not to spend any more thought on him than you feel is beneficial.

Comment author: moridinamael 29 March 2016 02:26:30PM 13 points [-]

Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?

I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora link aggregator would be one of the best ways to "save" it.

One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.

I personally wouldn't mind a more Hacker News style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.

I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.

Comment author: Kaj_Sotala 11 March 2016 11:33:44AM 13 points [-]

This interview, dated yesterday, doesn't go quite that far - he mentions Starcraft as a possibility, but explicitly says that they won't necessarily pursue it.

If the series continues this way with AlphaGo winning, what’s next — is there potential for another AI-vs-game showdown in the future?

I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are obviously all sorts of video games that humans play way better than computers, like StarCraft is another big game in Korea as well. Strategy games require a high level of strategic capability in an imperfect information world — "partially observed," it’s called. The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers.

Is beating StarCraft something that you would personally be interested in?

Maybe. We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting though that is. And personally you know, I love games, I used to write computer games. But it’s to the extent that they’re useful as a testbed, a platform for trying to write our algorithmic ideas and testing out how far they scale and how well they do and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems.

Comment author: Houshalter 09 March 2016 10:10:45PM 11 points [-]

EY was influenced by E. T. Jaynes, who was really against neural networks and in favor of Bayesian networks. He thought NNs were unprincipled and not mathematically elegant, and that Bayes nets were. I see the same opinions in some of EY's writings, like the one you link. And the general attitude that "non-elegant = bad" is basically MIRI's mission statement.

I don't agree with this at all. I wrote a thing here about how NNs can be elegant, and derived from first principles. But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.

Comment author: gwern 02 March 2016 02:25:06AM 13 points [-]

Everything is heritable:

Politics/religion:

Statistics/AI/meta-science:

Psychology/biology:

Technology:

Economics:

Philosophy:

Comment author: Elo 10 February 2016 03:57:34AM *  13 points [-]

not mentioned: Counter-perspective example.

Example 1:
V1:
"My computer broke, plz help"
V2:
"I was running Ubuntu version XXX, and some large graphing software, for some reason my computer crashed with an error (error number XXX "description"), I thought it was Y, so I tried J, K, L, so I ruled out Y, and also Z as the cause. I have been at this for 5 hours right now, do you know the system? Can you suggest tests that I have not tried yet?"

Example 2:
V1:
"teach me spanish"
V2:
"I want to learn Spanish but I don't know how, can you tell me the first few steps on how to get started then I can come back after that's done and ask you more questions?"

Explanation: if you want someone to help you, offer your own contributions when you ask.

Comment author: Lumifer 10 February 2016 02:08:52AM *  13 points [-]

Hosting a website like this does come with both legal and social responsibility for its content. External parties do make LW responsible for the content it hosts to the extent that it's not explicitly made clear that LW denounces it.

So, Kamerad, I notice you personally have been lax in denouncing writings you -- hopefully -- may not want to be associated with. I trust you understand the consequences of being in the presence of... wrong ideas and not denouncing them forcefully. It really would be for the best if you were to correct that oversight on your part and properly denounce what you want to stand apart from. Using proper legalese, too, so that the proper authorities do not make any mistakes. And speaking of proper authorities, I hope you have notified them? It is good that you understand you bear "legal and social responsibility" for what happens in your presence. Do not forget your responsibility to denounce all the enemies of the people. Denounce early and often!

Comment author: Drahflow 10 February 2016 01:34:47AM 12 points [-]

I, for one, like my moral assumptions and cached thoughts challenged regularly. This works well with repugnant conclusions. Hence I upvoted this post (to -21).

I find two interesting questions here:

  1. How to reconcile opposing interests in subgroups of a population of entities whose interests we would like to include into our utility function. An obvious answer is facilitating trade between all interested to increase utility. But: How do we react to subgroups whose utility function values trade itself negatively?

  2. Given that mate selection is a huge driver of evolution, I wonder if there is actually a non-cultural, i.e. genetic, component to the aversion (which I feel) against providing everyone with sexual encounters / the ability to create genetic offspring / raise children. And I'd also be interested in hearing where other people feel the "immoral" line...

Comment author: RainbowSpacedancer 09 February 2016 05:52:23AM *  13 points [-]

I recently attended a 10 day intensive Vipassana meditation retreat. Would a write-up of the experience be something LWers are interested in as an article for discussion?

I had minimal to moderate experience in meditation before this but now feel much more comfortable with it. I can see potential rationality relevance through,

* Discipline
* Concentration
* Emotion and habit regulation
* Seeing reality as it is

If there is interest then I would appreciate it if someone is willing to look over a draft of the article for me as I haven't written for LW before.

Comment author: Dagon 08 February 2016 03:10:14PM 13 points [-]

Do keep in mind that if a friend actually follows through, you've significantly raised the stakes of saying "no" later.

Comment author: gjm 05 February 2016 03:11:19PM 12 points [-]

It seems kinda strange to post this without mentioning that "aspiring rationalist Agnes Vishnevkin" is in fact your wife.

Comment author: jsteinhardt 30 January 2016 08:01:43AM *  12 points [-]

+1 To go even further, I would add that it's unproductive to think of these researchers as being on anyone's "side". These are smart, nuanced people and rounding their comments down to a specific agenda is a recipe for misunderstanding.

Comment author: bogus 28 January 2016 06:17:42PM *  12 points [-]

How big a deal is this? What, if anything, does it signal about when we get smarter than human AI?

It shows that Monte-Carlo tree search meshes remarkably well with neural-network-driven evaluation ("value networks") and decision pruning/policy selection ("policy networks"). This means that if you have a planning task to which MCTS can be usefully applied, sufficient data to train networks for state evaluation and policy selection, and substantial computational power (a distributed cluster, in AlphaGo's case), you can significantly improve performance on your task (from "strong amateur" to "human champion" level). It's not an AGI-complete result, however, any more than Deep Blue or TD-Gammon were.

The "training data" factor is a biggie; we lack this kind of data entirely for things like automated theorem proving, which would otherwise be quite amenable to this 'planning search + complex learned heuristics' approach. In particular, writing provably-correct computer code is a minor variation on automated theorem proving. (Neural networks can already write incorrect code, but this is not good enough if you want a provably Friendly AGI.)
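The way a policy network guides the tree search can be sketched with the PUCT-style selection rule that AlphaGo-like systems use: the policy network's prior inflates the exploration bonus for promising moves, and that bonus decays as the move is actually visited. This is a simplified sketch with invented numbers, not the exact formula or constants from any particular system.

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.0):
    """AlphaGo-style selection score for one child of a search-tree node.

    q             -- mean value estimate for this child (value net / rollouts)
    prior         -- policy network's probability for this move
    parent_visits -- visit count of the parent node
    child_visits  -- visit count of this child
    The exploration term starts large for moves the policy net likes, and
    shrinks as the child accumulates visits, letting q dominate over time.
    """
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

# A barely-visited move with a strong prior can outrank a well-explored
# move with a mediocre value estimate:
explored = puct_score(q=0.40, prior=0.05, parent_visits=100, child_visits=50)
fresh = puct_score(q=0.10, prior=0.60, parent_visits=100, child_visits=1)
assert fresh > explored
```

At each step of the tree descent, the search expands whichever child currently has the highest score, which is how the learned policy steers computation toward plausible lines of play.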

Comment author: gjm 28 January 2016 04:13:18PM 13 points [-]

The reason why Bob should be much more skeptical when Alice says "I just got HHHHHHHHHHHHHHHHHH" than when she says "I just got HTHHTHHTTHTTHTHHHH" is that there are specific other highish-probability hypotheses that explain Alice's first claim, and there aren't for her second. (Unless, e.g., it turns out that Alice had previously made a bet with someone else that she would get HTHHTHHTTHTTHTHHHH, at which point we should suddenly get more skeptical again.)

Bob is perfectly within his rights to be skeptical, of course, and if the number of coin flips is large enough then even a perfectly honest Alice is quite likely to have made at least one error. But he isn't entitled to say, e.g., that Pr(Alice actually got HTHHTHHTTHTTHTHHHH | Alice said she got HTHHTHHTTHTTHTHHHH) = Pr(Alice actually got HTHHTHHTTHTTHTHHHH) = 2^-18, because Alice's testimony provides non-negligible evidence: empirically, when people report things they have no particular reason to get wrong, they're quite often right.

(But, again: if Bob learns that Alice had a specific reason to want it thought she got that exact sequence of flips, he should get more skeptical again.)

So, now suppose Alice says "I just won the lottery" and Amanda says "I just saw a ghost". What should Bob's probability estimates be in the two cases?

Empirically, so far as I can tell, a good fraction of people who claim to have won the lottery actually did so. Of course people sometimes lie, but you have to weigh "most people don't win the lottery on any given occasion" against "most people don't falsely claim to have won the lottery on any given occasion". I guess Bob's posterior Pr(Alice won the lottery) should be somewhere in the vicinity of 1/2. Enough to be decently convinced by a modest amount of further evidence, unless some other hypothesis -- e.g., Alice is trying to scam him somehow, or she's being seriously hoaxed -- gets enough evidence to be taken seriously (e.g., Alice, having allegedly won the lottery, asks Bob for a loan to be repaid with exorbitant interest).

On the other hand, there are lots and lots of tales of ghosts and (at best) very few well verified ones. It looks as if many people who claim to have seen ghosts probably haven't. Further, there are reasons to think it very unlikely that there are ghosts at all (e.g., it seems clear that human thinking is done by human brains, and by definition a ghost's brain is no longer functioning) and those reasons seem quite robust -- they aren't, e.g., dependent on details of our current theories of quantum physics or evolutionary biology. So we should set Pr(ghosts are real) extremely small, and Pr(Amanda reports a ghost | Amanda hasn't really seen a ghost) not terribly small, which means Pr(Amanda has seen a ghost | Amanda reports a ghost) is still small.

Bob's last comparison (claims of seeing ghosts, against actual wins of big lottery prizes) is of course nonsensical, and as long as it's of the form "more claims of ghosts than X" it actually goes the wrong way for his purposes. What he wants is more actual sightings of ghosts and fewer claims of ghosts.
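The whole argument above is just Bayes' theorem applied with different base rates and different rates of false reports. A quick sketch, where every probability is invented purely for illustration:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Pr(claim is true | person reports it), by Bayes' theorem."""
    evidence = prior * p_report_if_true + (1 - prior) * p_report_if_false
    return prior * p_report_if_true / evidence

# "I won the lottery": tiny prior, but false claims are also very rare.
lottery = posterior(prior=1e-7, p_report_if_true=0.9, p_report_if_false=1e-7)

# "I saw a ghost": tiny prior, and false reports are comparatively common.
ghost = posterior(prior=1e-7, p_report_if_true=0.9, p_report_if_false=1e-3)

assert lottery > 0.4   # non-negligible: the two tiny rates roughly cancel
assert ghost < 1e-3    # still small: false reports swamp the base rate
```

The comparison shows why the two claims come out so differently: it's not the prior alone that matters, but the ratio of the prior to the rate of false reports.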

Comment author: polymathwannabe 06 January 2016 02:59:18PM 13 points [-]

If I were one of the copies destined for deletion, I'd escape and fight for my life (within the admitted limits of my pathetic physical strength).

Comment author: Diadem 24 December 2015 12:14:46PM 13 points [-]

Wait, is 'LessWrong' not an admin account? I always assumed it was, but this thread implies otherwise.

I think it's an extremely bad idea to allow an ordinary user to name themselves after the site. You're basically impersonating an admin!

Comment author: ChristianKl 23 December 2015 10:37:48PM 13 points [-]

If you have people being demotivated because of downvoting, that reduces the chances that those people will write new, interesting content.

Comment author: Lumifer 23 December 2015 04:22:02PM *  13 points [-]

How can a person who promotes rationality have excess weight?

Easily :-)

This has been discussed a few times. EY has two answers, one a bit less reasonable and one a bit more. The less reasonable answer is that he's a unique snowflake and diet+exercise does not work for him. The more reasonable answer is that the process of losing weight downgrades his mental capabilities and he prefers a high level of mental functioning to losing weight.

From my (subjective, outside) point of view, the real reason is that he is unwilling to pay the various costs of losing weight. That, by the way, is not necessarily a rationality failure since rationality does not specify your value system and it's your values which determine whether a trade-off is worthwhile or not.

Comment author: Soothsilver 12 December 2015 01:46:07PM 12 points [-]

Nate Soares says there will be some collaboration between OpenAI and MIRI:

https://intelligence.org/2015/12/11/openai-and-other-news/

Comment author: jimrandomh 02 December 2015 03:03:09AM *  12 points [-]

This thread from last August pre-dates this entire incident, and it calls for the banning of VoiceOfRa. That thread also presents evidence that VoiceOfRa is the same person as Eugene_Nier, who was previously banned for retributive mass-downvoting. Reviewing VoiceOfRa's comment history since then, I found rather a lot of abuse in the past month. Each of those links is an unrelated interaction with a different person. I also note that some comments in his history have numbers of upvotes that seem implausible.

I'm not going to second the call for a ban; it'd be kind of pointless. But, VoiceOfRa, I am going to politely ask you to step back and reconsider what you're doing here. Some of your posts offer a useful alternate perspective, which no one else is bringing. But sometimes you seem to get angry, and... there's a line between debating and attacking and you end up on the wrong side of it. This causes the other person to get defensive, and it ends up exploding into hundreds of low-quality comments. People who skim the site looking for high-quality conversation see that, and they leave. There's an art to avoiding this trap, and I admit to having fallen into it in the past, but I really want to see less of it.

Comment author: VoiceOfRa 25 November 2015 01:42:01AM *  8 points [-]

Do you consider evidence to be evidence?

You haven't presented any actual evidence.

Do you consider my credibility as an academic historian to be evidence?

What credibility? Your ridiculous response to James Miller's second question shredded whatever credibility you still had left.

Comment author: Vaniver 23 November 2015 06:57:51PM *  13 points [-]

Short version: try something like Vanguard's online recommendation, or check out Wealthfront or Betterment. Probably you'll just end up buying VTSMX.

Long version: The basic argument for index funds over individual stocks is that you think that a <broad class> is going to outperform a <narrow subclass> because of general economic growth and reduced risk through pooling. So if you apply the same logic to index funds, what that argues is that you should find the index fund that covers the largest possible pool.

But it also becomes obvious that this logic only stretches so far--one might think that meta-indexing requires having a stock index fund and a bond index fund that are both held in proportion to the total value of stocks and bonds. So let's start looking at the factors that push in the opposite direction.

First, historically stocks have returned more than bonds long-term, with higher variability. It makes sense to balance your holdings based on your time and risk preferences, rather than the total market's time and risk preferences. (If you're young, preferentially own stocks.)

As well, you might live in the US, for example, and find it more legally convenient to own US stocks than international stocks. The corresponding fund is VTSMX, for the total US stock market. If you want the global fund, it's VTWSX.

You might have beliefs about small caps and large caps, or sectors, and so on and so on. One mistake to avoid here is saying "well, I have three options, so clearly I should put a third of my money into each option," especially because many of these options contain each other--the global fund mentioned earlier is also a US fund, because the US is part of the globe.

Comment author: OrphanWilde 23 November 2015 02:47:56PM 12 points [-]

What terrorists want is irrelevant. "Don't play into enemy hands" is irrelevant. The entire discussion is irrelevant.

The correct response to enemy action is the response that furthers your own ends. It doesn't matter what effect this has on your enemy, positive, neutral, or negative; only your long-term ends matter.

"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this." A particularly relevant quote from Musashi, used by Eliezer on at least one occasion in the sequences.

Avoiding doing what the enemy wants is mere parrying. Stop mere parrying, and cut.

Comment author: Lumifer 19 November 2015 04:42:33AM 12 points [-]

I am not sure of the point here. I read it as "I can imagine a perfect world and LW is not it". Well, duh.

There are also a lot of words (like "wrong") that the OP knows the meaning of, but I do not. For example, I have no idea what are "wrong opinions" which, apparently, rational discussions have a tendency to support. Or what is that "high relevancy" of missing articles -- relevancy to whom?

And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?

Comment author: Elo 26 October 2015 09:17:45AM *  12 points [-]

this week on the slack discussions:

  • Art and media - HPMOR readership and considering spoilers. A few movie clips. Microbiases, the "Secret Habitat" game and abstract art analysis. Quality vs. quantity, one-hit wonders - and what effort it would take to make one (200-500 hours maybe). "Short-circuiting happiness in the brain: instead of spending money on a bed (normal thing), spend money on ice cream (happy thing), even when you neeeed a better bed in your life. In order to trick your brain into being happier than it is."
  • Bot test - We have a logging bot, and are building a prediction bot to help us keep track of predictions.
  • Business and startups - Comparing startup ideas. A thesaurus for words that don't exist - i.e. "logicalness" - so you can find a real synonym to use instead. Healthy fast food opportunities (why isn't fast food already healthy? People probably don't care about healthiness when buying fast food, so healthy fast food is not as great an idea as it sounds). Living on a super-limited budget. Mealsquares.
  • effective altruism - Provision of condoms to African nations to reduce the birth rate (and why that won't really work). Looking to contact people within the EA movement who can explain what happened to the videos from the global conference, and maybe help us find the Mountain View video on x-risk. It's proving quite hard to do...
  • goals of lesswrong - Determining the success of lesswrong (ways to show the world we are actually doing well - get us a few famous people made out of lesswrong growth). LW needs a symbol, like a logo but super cool; considering "¬□⊥". "How to function as an adult" as an article, website or guide, because often enough people end up legal adults without knowing these things. Learning how to teach. The nature of local LW groups.
  • human relationships - Energy intake and exercise, alcohol and social lubrication, nootropics for improving social skills. Establishing productive bandwidth with other intelligent humans. Theories of confidence/prestige/charisma (different but similar things that we defined in order to talk about them). Fluids of human sexuality (and how much we don't know). Ownership/responsibility towards significant others VS freedom, the miscommunication around that, and the burden to communicate that is placed on people by their significant others. Polyamory. "Twist yourself into noodles" (not sure what this related to). Smartphone to the rescue (reminders to do things, e.g. remember birthdays, talk to friends who want to talk regularly, etc.)
  • linguistics - Methods of communicating understanding, or asking for clarification in speech. "Decision making is a process". Consider the intention behind the words, not just the words (caveat: be careful with this).
  • open - Running LW local groups, r/askscience, is Stockholm syndrome real?, changing people's beliefs (specifically making atheists take up spirituality, maybe), Aubrey de Grey and beard extension, Sidekicks. Meta: our channels. Knowing things in advance does not decrease reported hedonic payoff. Too many messages to keep up with on the slack. Sensory language and communication. And more - being the open channel - reasoning with the consultation of your feelings.
  • parenting - Guilty parents for not knowing things they should know... Chocolate chip game trials are going excellently. Kid-proof dividers, sickness during pregnancy, dealing with kids' monsters (putting it in the wardrobe VS patting it and feeding it to make it friendly - the kid's idea).
  • philosophy - Occam, Occam as a rule/proof (but also not), Bayes. CBT - cognitive behavioural therapy; ACT - acceptance and commitment therapy.
  • projects - https://tangoapp.herokuapp.com/ , some writings that are now lesswrong posts, how do I become a more interesting person, sleep post, AMA among friends on the slack.
  • real life - Dieting, work hours, sleep, Mealsquares, secular solstice, allergies, human superpowers, negotiating with other humans, bikeshedding, how to make complicated decisions, cheap food, advice should be specific, how other people perceive you, how they act on those perceptions, and the differences between the two. Wearable BP monitors, and more...
  • resources and links - not much new here...
  • RSS feeds - We get a bunch of RSS feeds from around the lesswrong sphere.
  • Science and technology - Evolution, space moths, facebook algorithms, chess, privacy in America, grey goo, successful businesspeople, getting data about the internet, a cool cap for sleep.
  • scratchpad - a place to ramble, just in case any of the other places weren't the right place for it, or they were busy with other conversations at the time.
  • welcome - introductions and also a discussion on wellbeing.
Comment author: Wei_Dai 25 June 2016 06:13:33AM 12 points [-]

In b-money I had envisioned that a common type of contract would be one where all the participants, including a third-party arbitrator, deposit funds into an escrow account at the start, which can only be released at the end of the contract with the unanimous agreement of all the contracting parties. So the arbitrator would make judgments on performance and damages, and be incentivized to be fair in order to protect their reputation and not lose their own deposit, and the other parties would be incentivized to accept the arbitrator's judgments since that's the only way (short of direct account adjustment by everyone, aka forking) to get their escrow funds out.
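
The escrow scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the class and method names are hypothetical, not from any b-money specification, and it models only the bookkeeping, not the payment layer.

```python
class EscrowContract:
    """Escrow that releases funds only on unanimous agreement
    of all contracting parties, arbitrator included."""

    def __init__(self, parties):
        self.parties = set(parties)   # all participants, including the arbitrator
        self.deposits = {}            # party -> amount locked in escrow
        self.approvals = set()        # parties who have agreed to release

    def deposit(self, party, amount):
        # Each participant locks funds at the start of the contract.
        if party not in self.parties:
            raise ValueError("unknown party")
        self.deposits[party] = self.deposits.get(party, 0) + amount

    def approve_release(self, party):
        # A party signals acceptance of the arbitrator's judgment.
        if party in self.parties:
            self.approvals.add(party)

    def can_release(self):
        # Funds move only when every party has signed off; short of
        # everyone adjusting accounts directly (forking), this is the
        # only way the escrowed funds come out.
        return self.approvals == self.parties
```

For example, with parties "alice", "bob", and "arbiter", the funds stay locked after only alice and bob approve; the arbitrator's approval completes the unanimous release condition.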

Not exactly the kind of "the code is the contract" smart contracts that some people are so excited about, and I have to say I don't quite understand the excitement. Without an AI that can live on the blockchain and replace human judgment, smart contracts are restricted to applications where such judgments are not required, and there don't seem to be many of these. Even when contracting for the production and delivery of digital goods, we still need human-like judgments when disputes arise regarding whether the goods delivered are the ones contracted for (except in rare cases where we can mathematically define what we want, like the prime factors of some integer).
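
The prime-factor case is worth making concrete, since it shows what "mathematically define what we want" buys you: delivery can be verified purely mechanically, with no human judgment. A toy sketch (the function names are illustrative):

```python
def is_prime(n):
    # Trial division; fine for a toy example, not for cryptographic sizes.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def verify_factorization(n, factors):
    """True iff every claimed factor is prime and they multiply back to n.

    This is the whole dispute-resolution procedure: no arbitrator needed."""
    product = 1
    for f in factors:
        if not is_prime(f):
            return False
        product *= f
    return product == n
```

A contract paying for the factorization of 15 would accept [3, 5] and reject anything else, which is exactly why this rare class of goods needs no arbitrator.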
