If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
I've noticed I don't read 'Main' posts anymore.
When I come to LW, I click to the Discussion almost instinctively. I'd estimate it has been four weeks since I've looked at Main. I sometimes read new Slate Star Codex posts (super good stuff, if you are unfamiliar) from LW's sidebar. I sometimes notice interesting-sounding 'Recent Comments' and click on them.
My initial thought is that I don't feel compelled to read Main posts because they are the LW-approved ideas, and I'm not super interested in listening to a bunch of people agreeing with one another. Maybe that is a caricature, not sure.
Anyone else Discussion-centric in their LW use?
Also, the Meetup stuff is annoying noise. I'm very sympathetic if placing it among posts helps to drive attendance. By all means, continue if it helps your causes. But it feels spammy to me.
Alternative hypothesis: you have been conditioned to click on discussion because it has a better reward schedule.
If one is able to improve how people are matched, it would bring about a huge amount of utility for the entire world.
People would be happier, they would be more productive, and there would be less divorce-related waste. Being in a happy couple also means you are less distracted by conflict in the house, which leaves people better able to develop themselves and achieve their personal goals. You can keep adding to the direct benefits of being in a good pairing versus a bad one.
But it doesn't stop there. If we accept that better matched parents raise their children better, then you are looking at a huge improvement in the psychological health of the next generation of humans. And well-raised humans are more likely to match better with each other...
In this light, it strikes me as vastly suboptimal that people today will get married to the best option available in their immediate environment when they reach the right age.
The cutting-edge online dating sites base their suggestions on a very limited list of questions. But each of us outputs huge amounts of data, many of them available through APIs on the web. Favourite books, movies, sleep patterns, browsing history, work hi...
There seem to be perverse incentives in the dating industry. Most obviously: if you successfully create a forever-happy couple, you have lost your customers; but if you make people date many promising-looking-yet-disappointing partners, they will keep returning to your site.
Actually, maybe your customers are completely hypocritical about their goals: maybe "finding true love" is their official goal, but what they really want is plausible deniability for fucking dozens of attractive strangers while pretending to search for the perfect soulmate. You could create a website which displays the best one or two matches instead of hundreds of recommendations, and despite having a higher success rate for people who try it, most people will probably be unimpressed and give you some bullshit excuses if you ask them.
Also, if people are delusional about their "sexual market value", you probably won't make money by trying to fix their delusions. They will be offended by the types of "ordinary" people you offer them as their best matches, when the competing website offers them Prince Charming (whose real goal is to maximize his number of one night stands) or Princ...
what they really want is plausible deniability for fucking dozens of attractive strangers while pretending to search for the perfect soulmate.
That sounds a lot like really wanting a soulmate and an open relationship.
I wonder to what extent the problems you describe (divorces, conflict, etc) are caused mainly by poor matching of the people having the problems, and to what extent they are caused by the people having poor relationship (or other) skills, relatively regardless of how well matched they are with their partner? For example, it could be that someone is only a little bit less likely to have dramatic arguments with their "ideal match" than with a random partner -- they just happen to be an argumentative person or haven't figured out better ways of resolving disagreements.
What makes you think these marriages are successful? Low divorce rates are not good evidence in places where divorce is often impractical.
Three main points in favor of arranged marriages that I'm aware of:
The chicken/egg issue is real with any dating site, yet dating sites do manage to start. Usually you work around this by focusing on a certain group/location, dominating that, and spreading out.
Off the cuff, the bay strikes me as a potentially great area to start for something like this.
Here is one improvement to OKcupid, which we might even be able to implement as a third party:
OKcupid has bad match algorithms, but it can still be useful as searchable classified ads. However, when you find a legitimate match, you need to have a way to signal to the other person that you believe the match could work.
Most messages on OKcupid are from men to women, so women already have a way to do this: send a message, however men do not.
Men spam messages, by glancing over profiles, and sending cookie cutter messages that mention something in the profile. Women are used to this spam, and may reject legitimate interest, because they do not have a good enough spam filter.
Our service would be to provide an "I am not spamming" commitment: a flag that can be put in a message which signals "This is the only flagged message I have sent this week."
It would be a link you put in your message, which sends you to a site that basically says: "Yes, Bob (profile link) has only sent this flag to Alice (profile link) in the week of 2/20/14-2/26/14," with an explanation of how this works.
Do you think that would be a useful service to implement? Do you think people would actually use it, and receive it well?
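A minimal sketch of how such a registry might work. Everything here (class name, methods, the week-keyed dictionary) is hypothetical, just to make the commitment mechanism concrete:

```python
import datetime

class FlagRegistry:
    """Hypothetical sketch of the one-flag-per-week commitment service."""

    def __init__(self):
        # (sender, iso_year, iso_week) -> recipient
        self.flags = {}

    def issue_flag(self, sender, recipient, when=None):
        """Record a flag; fails if the sender already used one this week."""
        when = when or datetime.date.today()
        year, week, _ = when.isocalendar()
        key = (sender, year, week)
        if key in self.flags:
            raise ValueError(f"{sender} already used their flag this week")
        self.flags[key] = recipient

    def verify(self, sender, recipient, when=None):
        """What the linked page would check: was this week's one flag
        really sent to this recipient?"""
        when = when or datetime.date.today()
        year, week, _ = when.isocalendar()
        return self.flags.get((sender, year, week)) == recipient
```

The point of the design is that the service, not the sender, enforces the scarcity, so the flag is a credible costly signal.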
How do you pick a career if your goal is to maximize your income (technically, maximize the expected value of some function of your income)? The sort of standard answer is "comparative advantage", but it's unclear to me how to apply that concept in practice. For example how much demand there is for each kind of job is obviously very important, but how do you take that into consideration, exactly? I've been thinking about this and came up with the following. I'd be interested in any improvements or alternative ideas.
If you have a high IQ and are good at math go into finance. If you have a high IQ, strong social skills but are bad at math go into law. If you have a high IQ, a good memory but weak social and math skills become a medical doctor. If you have a low IQ but are attractive marry someone rich. If you have a very low IQ get on government benefits for some disability and work at an under-the-table job.
In "The Fall and Rise of Formal Methods", Peter Amey gives a pretty good description of how I expect things to play out w.r.t. Friendly AI research:
...Good ideas sometimes come before their time. They may be too novel for their merit to be recognised. They may be seen to threaten some party’s self interest. They may be seen as simply too hard to adopt. These premature good ideas are often swept into corners and, the world, breathing a sigh of relief, gets on with whatever it was up to before they came along. Fortunately not all good ideas wither. Some are kept alive by enthusiasts, who seize every opportunity to show that they really are good ideas. In some cases the world eventually catches up and the original premature good idea, honed by its period of isolation, bursts forth as the new normality (sometimes with its original critics claiming it was all their idea in the first place!).
Formal methods (and I’ll outline in more detail what I mean by ‘formal methods’ shortly) are a classic example of early oppression followed by later resurgence. They arrived on the scene at a time when developers were preoccupied with trying to squeeze complex functionality into hardware wit
Introduction I suspected that the type of stuff that gets posted in Rationality Quotes reinforces the mistaken way of throwing about the word rational. To test this, I set out to look at the first twenty rationality quotes in the most recent RQ thread. In the end I only looked at the first ten because it was taking more time and energy than would permit me to continue past that. (I'd only seen one of them before, namely the one that prompted me to make this comment.)
A look at the quotes
In our large, anonymous society, it's easy to forget moral and reputational pressures and concentrate on legal pressure and security systems. This is a mistake; even though our informal social pressures fade into the background, they're still responsible for most of the cooperation in society.
There might be an intended, implicit lesson here that would systematically improve thinking, but without more concrete examples and elaboration (I'm not sure what the exact mistake being pointed to is), we're left guessing what it might be. In cases like this where it's not clear, it's best to point out explicitly what the general habit of thought (cognitive algorithm) is that should be corrected, and how...
So I have the typical of introvert/nerd problem of being shy about meeting people one-on-one, because I'm afraid of not being able to come up with anything to say and lots of awkwardness resulting. (Might have something to do with why I've typically tended to date talkative people...)
Now I'm pretty sure that there must exist some excellent book or guide or blog post series or whatever that's aimed at teaching people how to actually be a good conversationalist. I just haven't found it. Recommendations?
Responding to the interesting conversation context.
First, always bring pen and paper to any meeting/presentation that is in any way formal or professional. Questions always come up at times when it is inappropriate to interrupt; save them for lulls.
Second, an anecdote. I noticed I had a habit during meetings of focusing entirely on absorbing and recording information, and then processing and extrapolating from it after the fact (I blame spending years in the structured undergrad large technical lecture environment). This habit of only listening and not providing feedback was detrimental in the working world; it took a lot of practice to start analyzing the information and extrapolating forward in real time. Once you start extrapolating forward from what you are being told, meaningful feedback will come naturally.
Here is another logic puzzle. I did not write this one, but I really like it.
Imagine you have a circular cake that is frosted on the top. You cut a d-degree slice out of it, and then put it back, but rotated so that it is upside down. Now, d degrees of the cake have frosting on the bottom, while 360 minus d degrees have frosting on the top. Rotate the cake d degrees, take the next slice, and put it upside down. Now, assuming d is less than 180, 2d degrees of the cake will have frosting on the bottom.
If d is 60 degrees, then after you repeat this procedure, flipping a single slice and rotating 6 times, all the frosting will be on the bottom. If you repeat the procedure 12 times, all of the frosting will be back on the top of the cake.
For what values of d does the cake eventually get back to having all the frosting on the top?
Solution can be found in the comments here.
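If you'd like to experiment before peeking at the solution, here is a quick simulation sketch. It only handles whole-number d (the full puzzle also allows irrational angles, which this discretization can't represent):

```python
from math import gcd

def steps_until_top(d, limit=100000):
    """Count flip-and-rotate steps until all frosting is back on top.
    Works for integer d: segments of size gcd(d, 360) track orientation."""
    g = gcd(d, 360)
    n = 360 // g          # number of segments in the cake
    k = d // g            # segments covered by one slice
    cake = [True] * n     # True = frosting on top
    for step in range(1, limit + 1):
        # flip the slice: reverse its segments and invert each one
        cake[:k] = [not s for s in reversed(cake[:k])]
        # rotate by d degrees so the next slice starts at the front
        cake = cake[k:] + cake[:k]
        if all(cake):
            return step
    return None
```

For example, `steps_until_top(60)` returns 12, matching the 60-degree walkthrough above.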
Someone was asking a while back for meetup descriptions, what you did/ how it went, etc. Figured I'd post some Columbus Rationality videos here. All but the last are from the mega-meetup.
Jesse Galef on Defense Against the Dark Arts: The Ethics and Psychology of Persuasion
Eric on Applications of Models in Everyday Life (it's good, but skip about 10-15 minutes when there's herding-cats-nitpicky audience :P)
Rita on Cognitive Behavioral Therapy
A question I'm not sure how to phrase to Google, and which has so far made Facebook friends think too hard and go back to doing work at work: what is the maximum output bandwidth of a human, in bits/sec? That is, from your mind to the outside world. Sound, movement, blushing, EKG. As long as it's deliberate. What's the most an arbitrarily fast mind running in a human body could achieve?
(gwern pointed me at the Whole Brain Emulation Roadmap; the question of extracting data from an intact brain is covered in Appendix E, but without numbers and mostly with hypothetical technology.)
I noticed recently that one of the mental processes that gets in the way of my proper thinking is an urge to instantly answer a question then spend the rest of my time trying to justify that knee-jerk answer.
For example, I saw a post recently asking whether chess or poker was more popular worldwide. For some reason I wanted to say "obviously x is more popular," but I realized that I don't actually know. And if I avoid that urge to answer the question instantly, it's much easier for me to keep my ego out of issues and to investigate things properly...including making it easier for me to recognize things that I don't know and acknowledge that I don't know them.
Is there a formal name for this type of bias or behavior pattern? It would let me search up some Sequence posts or articles to read.
How do you know when you've had a good idea?
I've found this to actually be difficult to figure out. Sometimes you can google up what you thought. Sometimes checking to see where the idea has been previously stated requires going through papers that may be very very long, or hidden by pay-walls or other barriers on scientific journal sites.
Sometimes it's very hard to google things up. To me, I suppose the standard for "that's a good idea," is if it more clearly explains something I previously observed, or makes it easier or faster for me to do something. But I have no idea whether or not that means it will be interesting for other people.
How do you like to check your ideas?
To illustrate dead-weight loss in my intro micro class I first take out a dollar bill and give it to a student and then explain that the sum of the wealth of the people in the classroom hasn't changed. Next, I take a second dollar bill and rip it up and throw it in the garbage. My students always laugh nervously as if I've done something scandalous like pulling down my pants. Why?
Because it signals "I am so wealthy that I can afford to tear up money" and blatantly signaling wealth is crass. And it also signals "I am so callous that I would rather tear up money than give it to the poor", which is also crass. And the argument that a one dollar bill really isn't very much money isn't enough to disrupt the signal.
A little bit of How An Algorithm Feels From Inside:
Why is the Monty Hall problem so horribly unintuitive? Why does it feel like there's an equal probability to pick the correct door (1/2+1/2) when actually there's not (1/3+2/3)?
Here are the relevant bits from the Wikipedia article:
...Out of 228 subjects in one study, only 13% chose to switch (Granberg and Brown, 1995:713). In her book The Power of Logical Thinking, vos Savant (1996, p. 15) quotes cognitive psychologist Massimo Piattelli-Palmarini as saying "... no other statistical puzzle comes so clo
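The counterintuitive 1/3 vs. 2/3 split is easy to check empirically. A small simulation sketch (not from the original comment):

```python
import random

def monty_trial(switch):
    """One round of the Monty Hall game; returns True if the player wins."""
    doors = [0, 1, 2]
    car = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a goat door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
stay = sum(monty_trial(False) for _ in range(n)) / n
swap = sum(monty_trial(True) for _ in range(n)) / n
```

`stay` comes out near 1/3 and `swap` near 2/3, however strongly intuition insists on 1/2.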
Another datapoint is the counterintuitiveness of searching a desk: with each drawer you open looking for something, the probability of finding it in the next drawer increases, but your probability of ever finding it decreases. The difference seems to whipsaw people; see http://www.gwern.net/docs/statistics/1994-falk
Does anyone have any advice about understanding implicit communication? I regularly interact with guessers and have difficulty understanding their communication. A fair bit of this has to do with my poor hearing, but I've had issues even on text based communication mediums where I understand every word.
My strategy right now is to request explicit confirmation of my suspicions, e.g., here's a recent online chat I had with a friend (I'm A and they're B):
A: Hey, how have you been?
B: I've been ok
B: working in the lab now
A: Okay. Just to be clear, do you mean t...
Posts that have appeared since you last read a page have a pinkish border on them. It's really helpful when dealing with things like open threads and quote threads that you read multiple times. Unfortunately, looking at one of the comments makes it think you read all of them. Clicking the "latest open thread" link just shows one of the comments. This means that, if you see something that looks interesting there, you either have to find the latest open thread yourself, or click the link and have it erase everything about what you have and haven't read.
Can someone make it so looking at one of the comments doesn't reset all of them, or at least put a link to the open thread, instead of just the comments?
Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.
Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among the sophisticated approaches?
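One classic lightweight alternative to batch averaging is simultaneous perturbation stochastic approximation (SPSA), which uses just two noisy evaluations per step instead of a whole averaged batch. A one-dimensional sketch; the objective, noise level, and gain constants here are all made up for illustration:

```python
import random

def noisy_f(x):
    # Hypothetical noisy objective: true maximum at x = 2.
    return -(x - 2.0) ** 2 + random.gauss(0, 0.1)

def spsa_maximize(f, x0, steps=2000, a=0.1, c=0.1):
    """SPSA sketch: estimate the gradient from two noisy evaluations
    per step and ascend it with decaying gains."""
    x = x0
    for k in range(1, steps + 1):
        ak = a / k ** 0.602        # commonly used gain schedules
        ck = c / k ** 0.101
        delta = random.choice([-1.0, 1.0])
        # two-point gradient estimate; noise averages out over steps
        grad = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x += ak * grad
    return x

random.seed(0)
x_hat = spsa_maximize(noisy_f, x0=0.0)
```

The decaying step sizes do the averaging for you over time, which is what makes this cheaper than re-sampling the function many times at every candidate point.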
I've been reading critiques of MIRI, and I was wondering if anyone has responded to this particular critique that basically asks for a detailed analysis of all probabilities someone took into account when deciding that the singularity is going to happen.
(I'd also be interested in responses aimed at Alexander Kruel in general, as he seems to have a lot to say about Lesswrong/Miri.)
Is there anything specific that he's said that's caused you to lose your faith? I tire of debating him directly, because he seems to twist everything into weird strawmen that I quickly lose interest in trying to address. But I could try briefly commenting on whatever you've found persuasive.
Possibly of interest: Help Teach 1000 Kids That Death is Wrong. http://www.indiegogo.com/projects/help-teach-1000-kids-that-death-is-wrong
(have not actually looked in detail, have no opinion yet)
I'd like to know where I can go to meet awesome people/make awesome friends. Occasionally, Yvain will brag about how awesome his social group in the Bay Area is. See here (do read it - it's a very cool piece). I'd like to also have an awesome social circle. As far as I can tell this is a two-part problem. The first part is having the requisite social skills to turn strangers into acquaintances and then turn acquaintances into friends. The second part is knowing where to go to find people.
I think that the first part is a solved problem, if you want to l...
How To Be A Proper Fucking Scientist – A Short Quiz. From Armondikov of RationalWiki, in his "annoyed scientist" persona. A list of real-life Bayesian questions for you to pick holes in the assumptions of^W^W^W^W^W^Wtest yourselves on.
Richard Loosemore (score one for nominative determinism) has a new, well, let's say "paper" which he has, well, let's say "published" here.
His refutation of the usual uFAI scenarios relies solely/mostly on a supposed logical contradiction, namely (to save you a few precious minutes) that a 'CLAI' (a Canonical Logical AI) wouldn't be able to both know about its own fallibility/limitations (inevitable in a resource-constrained environment such as reality), and accept the discrepancy between its specified goal system and the creators' actu...
He actually cites someone else who agrees with him in his paper, so this can't be true.
I said as far as I know. I had not read the paper because I don't have a very high opinion of Loosemore's ideas in the first place, and nothing you've said in your G+ post has made me more inclined to read the paper, if all it's doing is expounding the old fallacious argument 'it'll be smart enough to rewrite itself as we'd like it to'.
I personally chatted with people much smarter than me (experts who can show off widely recognized real-world achievements) who basically agree with him.
Name three.
Low priority site enhancement suggestion:
Would it be possible/easy to display the upvotes-to-downvotes ratios as exact fractions rather than rounded percentages? This would make it possible to determine exactly how many votes a comment received without digging through the source, which would be nice for quickly telling the difference between a mildly controversial comment and an extremely controversial one.
SMBC on genies and clever wishers. Of course, the most destructive wish is hiding under the red button.
My eye doctor diagnosed closed-angle glaucoma, and recommends an iridectomy. I think he might be a bit too trigger-happy, so I followed up with another doctor, and she didn't find the glaucoma. She was careful to note that the first diagnosis could still be correct, since the first examination was more complete.
Any insights about the pros and cons of iridectomy?
Proof by contradiction in intuitionist logic: deriving a contradiction from ¬P establishes only ¬¬P, i.e., that there is no proof that proofs of P are impossible.
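A sketch of the asymmetry in Lean 4 (the second theorem assumes core Lean's `Classical` axioms, which is the point):

```lean
-- Intuitionistically valid: from P we can refute ¬P.
theorem dn_intro (P : Prop) : P → ¬¬P :=
  fun hp hnp => hnp hp

-- The converse is NOT provable intuitionistically;
-- in Lean it requires the classical axiom.
theorem dn_elim (P : Prop) : ¬¬P → P :=
  fun h => Classical.byContradiction h
```

So a proof by contradiction gets you `¬¬P` for free, but crossing back to `P` costs you classical logic.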
What is the best textbook on datamining? I solemnly swear that upon learning, I intend to use my powers for good.
So, MtGox has declared bankruptcy. Does that make this a good time, or a bad time to invest in Bitcoins? And if a good time, where is the best place to buy them?
I’m basically exactly the kind of person Yvain described here (minus the passive-aggressive/Machiavellian phase). I notice that that post was sort of a plea for society to behave a different way, but it did not really offer any advice for rectifying the atypical attachment style in the meantime. And I could really use some, because I’ve gotten al-Fulani’d. I’m madly in love with a woman who does not reciprocate. I’ve actually tried going back on OkCupid to move on, and I literally cannot bring myself to message anyone new, as no one else approaches...
Note that I’m not looking for PUA-type advice ... What I want is advice on a) how not to fall so hard/so fast for (a very small minority of) women, and b) how to break the spell the current one has over me without giving up her friendship.
Seems to me like you want to overcome your "one-itis" and stop being a "beta orbiter", but you are not looking for advice which would actually use words like "one-itis" and "beta orbiter". I know it's an exaggeration, but this is almost how it seems to me. Well, I'll try to comply:
1) You don't have to maximize the number of sexual partners. You could still try to increase the number of interesting women you have interesting conversations with. I believe that is perfectly morally okay, and it could still reduce the feeling of scarcity.
Actually, any interesting activity would be helpful. Anything you can think about, instead of spending your time thinking about that one person.
2) Regularly interacting with the person you are obsessed with is exactly how you maximize the length of the obsession. It's like saying that you want to overcome your alcohol addiction, but you don't want to stop drinking regularly. Well, if one is not...
One common rationality technique is to put off proposing solutions until you have thought (or discussed) a problem for a while. The goal is to keep yourself from becoming attached to the solutions you propose.
I wonder if the converse approach of "start by proposing lots and lots of solutions, even if they are bad" could be a good idea. In theory, perhaps I could train myself to not be too attached to any given solution I propose, by setting the bar for "proposed solution" to be very low.
In one couples counseling course that I went thr...
What do you do when you're low on mental energy? I have had trouble thinking of anything productive to do when my brain seems to need a break from hard thinking.
This is one of those times I wish LW allowed explicit politics. SB 1062 in AZ has me craving interesting, rational discussion on the implications of this veto.
Just a thought:
A paperclip maximizer is an often used example of AGI gone badly wrong. However, I think a paperclip minimizer is worse by far.
In order to make the most of the universe's paperclip capacity, a maximizer would have to work hard to develop science, mathematics and technology. Its terminal goal is rather stupid in human terms, but at least it would be interesting because of its instrumental goals.
For a minimizer, the best strategy might be to wipe out humanity and commit suicide. Assuming there are no other intelligent civilizations within our cos...
Somebody outside of LW asked how to quantify prior knowledge about a thing. When googling I came across a mathematical definition of surprise, as "the distance between the posterior and prior distributions of beliefs over models". So, high prior knowledge would lead to low expected surprise upon seeing new data. I didn't see this formalization used on LW or the wiki, perhaps it is of interest.
Speaking of the LW wiki, how fundamental is it to LW compared to the sequences, discussion threads, Main articles, hpmor, etc?
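That formalization (sometimes called "Bayesian surprise") is easy to illustrate with a coin-flip example. The sketch below uses the standard closed-form KL divergence between Beta distributions; the priors and data are made up:

```python
from math import lgamma

def beta_kl(a1, b1, a2, b2):
    """KL divergence KL(Beta(a1, b1) || Beta(a2, b2)) in nats,
    via the standard closed form for Beta distributions."""
    def digamma(x):
        # crude digamma via a central difference of log-gamma
        h = 1e-6
        return (lgamma(x + h) - lgamma(x - h)) / (2 * h)
    def lbeta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)
    return (lbeta(a2, b2) - lbeta(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Same evidence (7 heads in 10 flips) under a weak and a strong prior:
surprise_weak = beta_kl(1 + 7, 1 + 3, 1, 1)        # prior Beta(1, 1)
surprise_strong = beta_kl(50 + 7, 50 + 3, 50, 50)  # prior Beta(50, 50)
```

The strong prior yields much less surprise from the same data, matching the intuition that high prior knowledge means low expected surprise.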
I'm curious about usage of commitment tools such as Beeminder: What's the income distribution among users? How much do users usually wind up paying? Is there a correlation between these?
(Selfish reasons: I'm on SSI and am not allowed to have more than $2000 at any given time. Losing $5 is all but meaningless for someone with $10k in the bank who makes $5k each month, whereas losing $5 for me actually has an impact. You might think this would be a stronger incentive to meet a commitment, but really, it's an even stronger incentive to stay the hell away from...
My psychologist said today that there is some information that should not be known. I replied that rationalists believe in reality. There might be information they don't find interesting (e.g. not all of you would find children interesting), but refusing to accept some information would mean refusing to accept some part of reality, and that would go against the belief in reality.
Since I have been recently asking myself the question "why do I believe what I believe" and "what would happen if I believed otherwise than what I believe" (I'...
Spritzing got me quite excited! The concept isn't new, but the variable speed (pauses after punctuation marks) and quality visual cues really work for me, in the demo at least. Don't let your inner voice slow you down!
Disclaimer: No relevant disclosures about spritzing (the reading method, at least).
An experiment with living rationally, by A J Jacobs, who wrote The Year of Living Biblically. I don't know how long he plans to try living rationally.
Maybe CfAR should invite him to a workshop.
(I suspect that if CfAR is to invite him to a workshop, they should do it in some official capacity themselves; I don't think random Less Wrongers ought to contact Mr. Jacobs.)
ETA: Ah, rats, the article is from 2008. He's probably lost interest.