Open Thread August 31 - September 6

5 Post author: Elo 30 August 2015 09:26PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (326)

Comment author: Panorama 31 August 2015 05:25:03PM 12 points [-]

Hilary Putnam, one of the most famous philosophers of the twentieth century, has a blog

Comment author: Lumifer 31 August 2015 04:14:33PM 7 points [-]

An interesting paper by the name of Fuck nuance.

Abstract:

Seriously, fuck it.

No, I'm not kidding, this is the actual abstract at the beginning of the paper.

Technically, it's about sociological theories, but I feel the general principle applies much more widely.

(Normally I would quote a teaser chunk of the paper here, but this PDF file seems unusually resistant to copy-and-paste-as-text and I don't feel like manually inserting back all the spaces between the words...)

Comment author: PhilGoetz 31 August 2015 05:36:25PM *  6 points [-]

Nancy Leibowitz was quoting this. Having spent the weekend reading 20th century French philosophers, this was refreshing. From the paper:

To make a loose statistical analogy, [asking for more nuance] is a little like continuing to add variables to a regression on the grounds that the explained variance keeps going up.

It's not a loose analogy. It's a literal description of an example of the sort of thing that should happen in the reality underlying the theory.
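PhilGoetz's claim is directly checkable: with ordinary least squares, adding a regressor can never decrease the explained variance, even when the new variable is pure noise. A minimal demonstration (all data simulated; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.normal(size=n)  # pure noise: nothing real to explain

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with an intercept column)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Add random, meaningless predictors one at a time: R^2 only goes up.
X = rng.normal(size=(n, 20))
r2 = [r_squared(X[:, :k], y) for k in range(1, 21)]
assert all(b >= a - 1e-9 for a, b in zip(r2, r2[1:]))
print(f"R^2 with 1 junk variable: {r2[0]:.3f}; with 20: {r2[-1]:.3f}")
```

The "explained variance" climbs even though every predictor is junk, which is exactly why more nuance is not automatically more truth.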

There is another aspect to nuance that I don't yet see mentioned in the paper. In French philosophy, the nuance is nuance of interpretation, not an attempt to handle more cases. Many theories are presented without having any cases at all that they handle! Jacques Lacan, for instance, only described one case history during his entire career; he presented detailed theories of personality development with no citations or data.

This happens with many who descend academically from Hegel: Marx, Lacan, Derrida. The model is not "nuanced" in the sense of handling many cases; it is never demonstrated to handle any data at all, or at best one over-simplified case (a general claim, or a particular sentence which the philosopher made up to illustrate the model). The nuance is all in the interpretation. It complexifies the theory without enabling it to handle any more cases--the worst of both worlds.

Connoisseurship gets its aesthetic bite, and a little kick of symbolic violence, from the easy insinuation that the person trying to simplify things is, sadly, a bit less sophisticated a thinker than the person pointing out that things are more complicated.

Comment author: NancyLebovitz 31 August 2015 05:50:44PM 4 points [-]

Thanks for mentioning that I'd already brought up the paper. I've got three quotes here.

My last name is Lebovitz.

I think of the way people tend to get it wrong as a rationality warning. I know about those errors because I have an interest in my name, but the commonness of the errors suggests that people get a tremendous amount wrong. How much of it matters? How could we even start to find out?

Comment author: PhilGoetz 01 September 2015 04:18:17AM *  1 point [-]

Sorry for misspelling your name. I don't think memory errors are rationality errors.

Comment author: NancyLebovitz 02 September 2015 07:03:23PM 0 points [-]

Memory errors have a bearing on rationality because you need accurate data to think about, and one of the primary causes of not remembering something is not having noticed it.

I can say my name twice, spell it, and show people a business card, and still have them get it wrong.

If you want more about how little people perceive, I recommend Sleights of Mind, a book about neurology and stage magic.

Comment author: Panorama 04 September 2015 01:35:22PM 6 points [-]

Julian Savulescu: The Philosopher Who Says We Should Play God

Australian bioethicist Julian Savulescu has a knack for provocation. Take human cloning. He says most of us would readily accept it if it benefited us. As for eugenics—creating smarter, stronger, more beautiful babies—he believes we have an ethical obligation to use advanced technology to select the best possible children.

A protégé of the philosopher Peter Singer, Savulescu is a prominent moral philosopher at the University of Oxford, where he directs the Uehiro Centre for Practical Ethics. He also edits the Journal of Medical Ethics. Savulescu isn’t shy about stepping onto ethical minefields. He sees nothing wrong with doping to help cyclists climb those steep mountains in the Tour de France. Some elite athletes will always cheat to boost their performance, so instead of trying to enforce rules that will be broken, he claims we’d be better off with a system that allows low-dose doping.

So does Savulescu just get off being outrageous? “I actually think of myself as the voice of common sense,” he says, though he admits to receiving his share of hate mail. He’s frustrated by how hard it is to have reasoned arguments about loaded issues without getting flamed on the Internet. Savulescu thinks we need to become far more adept at sorting out difficult moral issues. Otherwise, he says, the human species will face dire consequences in the coming decades.

Comment author: username2 02 September 2015 02:14:51AM *  6 points [-]

Someone changed the password on the Username public throwaway account. It's a shame a troll finally got to it after several years.

Comment author: gjm 02 September 2015 11:08:13AM 2 points [-]

It's worth contacting a moderator and seeing whether they can do anything about it.

Comment author: ChristianKl 02 September 2015 11:54:43AM 0 points [-]

Even if they reset the password, it's in the nature of a public account that the password can always be changed again.

Comment author: username2 02 September 2015 12:32:53PM 2 points [-]

How about making the password reset automatically every X minutes?
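username2's suggestion could in principle work like a time-based one-time password: the site periodically resets the stored password to a value derived from a moderator-held secret and the current time window, so any troll-set password is overwritten at the next rotation. A hypothetical sketch (the secret, the interval, and the whole scheme are made up for illustration; this is not anything LessWrong implements):

```python
import hashlib
import hmac
import time

SECRET = b"moderator-held secret"  # hypothetical shared secret
INTERVAL = 15 * 60                 # rotate every 15 minutes

def current_password(secret: bytes, interval: int, now: float) -> str:
    """Derive the password for the time window containing `now`."""
    window = int(now // interval)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
    return digest[:10]  # short, typeable password

# Anyone holding the secret computes the same password within a window...
t = time.time()
assert current_password(SECRET, INTERVAL, t) == current_password(SECRET, INTERVAL, t)
# ...and it changes automatically at the next window boundary,
# undoing any manual password change by a troll.
assert current_password(SECRET, INTERVAL, 0) != current_password(SECRET, INTERVAL, INTERVAL)
```

A server-side job would simply write `current_password(...)` into the account record every window, which sidesteps the "anyone can change a public password" problem.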

Comment author: Dahlen 02 September 2015 02:15:35PM 1 point [-]

I actually meant to ask at some point whether the Username account would have protection against people changing passwords willy-nilly, but I didn't because, you know... information hazards and all that. Didn't want to give people the idea. But now that it's happened, I suppose I could ask retrospectively: how come nobody ensured some protection against that?

Comment author: ChristianKl 02 September 2015 02:29:42PM 2 points [-]

But now that it's happened, I suppose I could ask retrospectively: how come nobody ensured some protection against that?

Because in general, a forum that's designed to allow anonymous comments would allow anonymous comments directly, and not make people go through the hack of using a separate account for it. The account wasn't created by any moderator but simply by a user who thought that such an account would be good to have.

While being in infohazard territory: It's not only possible to change passwords. It's also possible to delete accounts.

Comment author: MrMind 07 September 2015 10:18:01AM 0 points [-]

Then nuke the account and recreate it with the old password.

Comment author: moridinamael 02 September 2015 05:31:36PM 0 points [-]

I always assumed that was just one person. I feel like someone died. (Not really. But, how was I supposed to know it was an open account?)

Comment author: username2 02 September 2015 06:45:16PM 4 points [-]

The beauty of the account lay in the fact that it was not publicized, so only people who were long-time lurkers would know about it.

Comment author: Clarity 02 September 2015 07:18:25AM *  0 points [-]

Far out, that was an excellent account and several people had clearly used it to make important contributions.

It would be nice if there was a way to memorialize the posts or something externally. Or, perhaps the moderators could implement an 'official' throw-away to protect against this.

I have been a beneficiary of comments from the Username account and believe it does...or did a true service to the community. Thank you for taking it upon yourself to report this and making a new account.

Comment author: entirelyuseless 02 September 2015 12:56:50PM 3 points [-]

While I had no objection to the existence of the account and in fact used it several times myself, it was a bit annoying to me that someone was using it as his personal account rather than bothering to create his own.

Comment author: Panorama 31 August 2015 05:21:38PM *  6 points [-]

Solving a Non-Existent Unsolved Problem: The Critical Brachistochrone

During my research I came across an obscure mathematical physics problem whose established answer was wrong. I attempted to solve this unsolved problem, and eventually found out that I was the one who was wrong.

As part of my paper on falling through the centre of the Earth, I studied something called the brachistochrone curve....

Comment author: Elo 31 August 2015 09:44:34PM 5 points [-]

I think you were the person using the username account to post in this style. Thank you for making an account and welcome :)

Comment author: RichardKennaway 02 September 2015 03:41:06PM 5 points [-]

Could a moderator please nuke the swidon account and all of its posts?

Comment author: NancyLebovitz 02 September 2015 07:56:47PM 5 points [-]

The account is nuked. I need to find out how to remove posts.

Comment author: Elo 02 September 2015 06:41:13PM 0 points [-]

agreed.

Comment author: Gunnar_Zarncke 01 September 2015 07:43:58PM 5 points [-]

I'm looking for a high-quality parenting blog, one with relatively frequent, well-written content that might accept guest contributions - or one with a discussion forum that's not just gossiping. It can be English or German. I'd like to try my hand at some posts before opening my own blog. Any ideas?

Comment author: Clarity 01 September 2015 11:13:59AM 5 points [-]

This was a productive use of my time - a panel with Peter Thiel, Aubrey de Grey (who I don't know) and Eliezer Yudkowsky.

Comment author: Panorama 31 August 2015 05:27:59PM 5 points [-]

The macro/micro validity tradeoff

Many economists insist that the realism of their assumptions is not important - the only important thing is that at the end of the day, the model fits the data of whatever phenomenon it's supposed to be modeling. This is called an "as if" model. For example, maybe individuals don't have rational expectations, but if the economy behaves as if they do, then it's OK to use a rational expectations model.

So I realized that there's a fundamental tradeoff here. The more you insist on fitting the micro data (plausibility), the less you will be able to fit the macro data ("as if" validity). I tried to write about this earlier, but I think this is a cleaner way of putting it: There is a tradeoff between macro validity and micro validity.

How severe is the tradeoff? It depends. For example, in physical chemistry, there's barely any tradeoff at all. If you use more precise quantum mechanics to model a molecule (micro validity), it will only improve your modeling of chemical reactions involving that molecule (macro validity). That's because, as a positivist might say, quantum mechanics really is the thing that is making the chemical reactions happen.

In econ, the tradeoff is often far more severe. For example, Smets-Wouters type macro models fit some aggregate time-series really well, but they rely on a bunch of pretty dodgy assumptions to do it. Another example is the micro/macro conflict over the Frisch elasticity of labor supply.

Comment author: Clarity 03 September 2015 06:27:13AM 4 points [-]

What hypothesis are you testing, or is gnawing at the back of your mind, in relation to LessWrong, as you surf LessWrong right now? Or perhaps you're just surfing idly.

For me it's: has anyone experimented with replacing their socialising-with-friends time with LessWrong exclusively? I wonder if the benefits associated with socialising, such as increased well-being, can be substituted with interaction in online communities.

Though I suspect the nature of the community would be a strong determinant of the outcome. For instance, Facebook would probably be unhealthy, as would IRC exclusively, but the LessWrong community as a whole (excluding the IRL meetup community) may be great! I feel like I've basically outgrown all my friends with whom I don't have some sort of professional relationship anyway, or towards whom I have a codependent/insecure attachment.

Comment author: [deleted] 31 August 2015 08:46:21PM *  4 points [-]

Tweet Sized Insight Porn

Hope LW likes it. Open for tweet suggestions.

Comment author: Panorama 31 August 2015 05:46:46PM 4 points [-]

Inability and Obligation in Moral Judgment

It is often thought that judgments about what we ought to do are limited by judgments about what we can do, or that “ought implies can.” We conducted eight experiments to test the link between a range of moral requirements and abilities in ordinary moral evaluations. Moral obligations were repeatedly attributed in tandem with inability, regardless of the type (Experiments 1–3), temporal duration (Experiment 5), or scope (Experiment 6) of inability. This pattern was consistently observed using a variety of moral vocabulary to probe moral judgments and was insensitive to different levels of seriousness for the consequences of inaction (Experiment 4). Judgments about moral obligation were no different for individuals who can or cannot perform physical actions, and these judgments differed from evaluations of a non-moral obligation (Experiment 7). Together these results demonstrate that commonsense morality rejects the “ought implies can” principle for moral requirements, and that judgments about moral obligation are made independently of considerations about ability. By contrast, judgments of blame were highly sensitive to considerations about ability (Experiment 8), which suggests that commonsense morality might accept a “blame implies can” principle.

Comment author: Panorama 31 August 2015 05:32:37PM 4 points [-]

Famous neurologist and science popularizer Oliver Sacks has died. Which of his books are your favorites?

Comment author: CellBioGuy 31 August 2015 06:14:23PM *  5 points [-]

Awakenings is a perennial favorite: a cohort of people with severe Parkinsonism given levodopa all at once (and going through the months-long process of becoming nearly completely functional, with the quirks that come from excess dopamine, then their brains slowly losing homeostasis in the face of the exogenous, uncontrolled neurotransmitter).

Seeing Voices, a look into the perceptions of the deaf and the nuances of signed languages, was fascinating to me.

Comment author: RichardKennaway 04 September 2015 08:04:12PM 3 points [-]

"Do Artificial Reinforcement-Learning Agents Matter Morally?" Yes, says Brian Tomasik, even present-day ones (by a very small but nonzero amount). He foresees their ethical significance increasing in the near future, and he isn't talking about strong AI, but an increase in the ordinary applications of reinforcement learning to our technology.

The argument is, briefly: for various claims about what consciousness physically is, RL programs display these features to some extent as well. Therefore they have a nonzero degree of consciousness, and so a nonzero degree of moral standing. Enough that we should be thinking now about guidelines for the ethical creation of such software.

He suggests that, paralleling guidelines for the use of animals in research, RL algorithms should be replaced by others whenever possible, or if they must be used, reduced in number, and driven through rewards, not punishments.

He considers the idea of an organisation of People for the Ethical Treatment of Reinforcement Learners, and the embedding of RL algorithms in humanoid bodies and videogame characters as ways of persuading the public to the idea that they have moral significance.

Comment author: Manfred 05 September 2015 06:05:29AM 0 points [-]

driven through rewards, not punishments.

I would be much more morally concerned about reinforcement learning agents if this were a functional distinction.
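Manfred's objection has a concrete form: in standard reinforcement learning, adding a constant to every reward turns "punishments" (negative rewards) into "rewards" (positive ones) without changing the optimal policy, so the reward/punishment framing isn't a functional distinction to the agent. A sketch with a made-up three-state MDP (all the numbers are arbitrary):

```python
import numpy as np

# A tiny 3-state, 2-action MDP (transition probabilities and rewards invented).
P = np.array([  # P[a, s, s'] = transition probability under action a
    [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.3, 0.0, 0.7]],
    [[0.2, 0.8, 0.0], [0.5, 0.0, 0.5], [0.0, 0.1, 0.9]],
])
R = np.array([[-1.0, -2.0, 0.0], [0.0, -1.0, -3.0]])  # R[a, s]: all "punishments"
gamma = 0.9

def optimal_policy(R):
    """Value iteration; returns the greedy policy (best action per state)."""
    V = np.zeros(3)
    for _ in range(1000):
        Q = R + gamma * P @ V  # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)

# Shift every reward up by a constant so they are all positive "rewards":
# the optimal policy is identical, because the shift adds the same
# c / (1 - gamma) to every state-action value.
assert (optimal_policy(R) == optimal_policy(R + 10.0)).all()
```

The behavioural equivalence is exact in the discounted infinite-horizon setting, which is presumably why Tomasik addresses the point separately in the paper.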

Comment author: RichardKennaway 05 September 2015 08:26:16AM 0 points [-]

I would be much more morally concerned about reinforcement learning agents if this were a functional distinction.

He discusses that point in the paper.

Comment author: taygetea 02 September 2015 01:06:12PM 3 points [-]

Is anyone willing to share an Anki deck with me? I'm trying to start using it. I'm running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.

Comment author: Vaniver 02 September 2015 04:42:16PM 3 points [-]

There are many shared Anki decks. In my experience, the hardest thing to get correct in Anki is picking the correct thing to learn, and seeing someone else's deck doesn't work all that well for it because there's no guarantee that they're any good at picking what to learn, either.

Most of my experience with Anki has been with lists, like the NATO phonetic alphabet, where there's no real way to learn them besides familiarity, and the list is more useful the more of it you know.

What I'd recommend is either picking selections from the source that you think are valuable, or summarizing the source into pieces that you think are valuable, and then sticking them as cards (perhaps with the title of the source as the reverse). The point isn't necessarily to build the mapping between the selection and the title, but to reread the selected piece in intervals determined by the forgetting function.
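The "intervals determined by the forgetting function" that Vaniver mentions can be sketched very roughly: each successful review multiplies the gap before the next one. This is a simplified illustration, not Anki's actual (SM-2-derived) algorithm, and the numbers are defaults I've chosen for the sketch:

```python
def review_intervals(n_reviews: int, ease: float = 2.5, first: float = 1.0):
    """Each successful review multiplies the gap (in days) until the next one."""
    intervals, gap = [], first
    for _ in range(n_reviews):
        intervals.append(round(gap, 1))
        gap *= ease
    return intervals

# Gaps grow geometrically, so mature cards are shown only rarely:
print(review_intervals(6))  # six successive gaps, in days
```

The practical upshot is that rereading a selection on this schedule costs very little time once the card matures.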

Comment author: taygetea 02 September 2015 05:34:41PM *  2 points [-]

Alright, I'll be a little more clear. I'm looking for someone's mixed deck, on multiple topics, and I'm looking for the structure of cards, things like length of section, amount of context, title choice, amount of topic overlap, number of cards per large scale concept.

I am really not looking for a deck that was shared with easily transferable information like the NATO alphabet; I'm looking for how other people do the process of creating cards for new knowledge.

I am missing a big chunk of intuition on learning in general, and this is part of how I want to fix it. I also don't expect people to really be able to answer my questions on it, and I don't expect that I've gotten every specification. Which is why I wanted the example deck.

Edit: So I can't pull a deck off Ankiweb because I want the kind of decks nobody puts on Ankiweb.

Comment author: eeuuah 07 September 2015 03:07:32AM 0 points [-]

I could send you some of my anki cards, but I don't know that you'll get useful structural information out of them. They tend to be pretty random bits that I think I'll want to know or phrases I want to build associations between. For most things, I take actual notes (I find that writing things down helps me remember the shape of the idea better, even if I never look at them), and only make flashcards for the highest value ideas.

It took me several months of starting and quitting anki to start to get the hang of it, and I'm still learning how to better structure cards to be easier to remember and transmit useful information.

I found this blog post and the two it links to at the top to be useful descriptions of an approach to learning, which incorporates anki among other things.

Comment author: Barry_Cotter 05 September 2015 05:53:01AM 0 points [-]

Based on my own experience I strongly suspect the only way to do this is to fail repeatedly until you succeed. That said the following rules are very, very good.

If you really, really want an example I can send you my Developmental Psychology and Learning and Behaviour deck. It consists of the entirety of a Cliff's Notes-style Developmental Psychology book, a better dev psych book's summary section, and an L&B book's summary section. In retrospect the Cliff's Notes book was a mistake, but I've invested enough in it now that I may as well continue it; most of the cards are mature anyway. I would recommend finding a decent book on the topic you're learning, and writing your own summaries or heavily rewording their summaries and using lots and lots of cloze deletions.

I just found this guide to using Anki.

http://alexvermeer.com/anki-essentials/

It may be worth looking at.

If you really want my deck pm me your email address.

http://super-memory.com/articles/20rules.htm

Summary

Here again are the twenty rules of formulating knowledge. You will notice that the first 16 rules revolve around making memories simple! Some of the rules strongly overlap. For example: do not learn if you do not understand is a form of applying the minimum information principle which again is a way of making things simple:

1. Do not learn if you do not understand.

2. Learn before you memorize - build the picture of the whole before you dismember it into simple items in SuperMemo. If the whole shows holes, review it again!

3. Build upon the basics - never jump both feet into a complex manual because you may never see the end. Well-remembered basics will help the remaining knowledge easily fit in.

4. Stick to the minimum information principle - if you continue forgetting an item, try to make it as simple as possible. If it does not help, see the remaining rules (cloze deletion, graphics, mnemonic techniques, converting sets into enumerations, etc.).

5. Cloze deletion is easy and effective - completing a deleted word or phrase is not only an effective way of learning. Most of all, it greatly speeds up formulating knowledge and is highly recommended for beginners.

6. Use imagery - a picture is worth a thousand words.

7. Use mnemonic techniques - read about peg lists and mind maps. Study the books by Tony Buzan. Learn how to convert memories into funny pictures. You won't have problems with phone numbers and complex figures.

8. Graphic deletion is as good as cloze deletion - obstructing parts of a picture is great for learning anatomy, geography and more.

9. Avoid sets - larger sets are virtually un-memorizable unless you convert them into enumerations!

10. Avoid enumerations - enumerations are also hard to remember but can be dealt with using cloze deletion.

11. Combat interference - even the simplest items can be completely intractable if they are similar to other items. Use examples, context cues, vivid illustrations; refer to emotions, and to your personal life.

12. Optimize wording - like you reduce mathematical equations, you can reduce complex sentences into smart, compact and enjoyable maxims.

13. Refer to other memories - building memories on other memories generates a coherent and hermetic structure that forgetting is less likely to affect. Build upon the basics and use planned redundancy to fill in the gaps.

14. Personalize and provide examples - personalization might be the most effective way of building upon other memories. Your personal life is a gold mine of facts and events to refer to. As long as you build a collection for yourself, use personalization richly to build upon well-established memories.

15. Rely on emotional states - emotions are related to memories. If you learn a fact in a state of sadness, you are more likely to recall it when you are sad. Some memories can induce emotions and help you employ this property of the brain in remembering.

16. Context cues simplify wording - providing context is a way of simplifying memories, building upon earlier knowledge and avoiding interference.

17. Redundancy does not contradict the minimum information principle - some forms of redundancy are welcome. There is little harm in memorizing the same fact as viewed from different angles. The passive and active approach is particularly practicable in learning word-pairs. Memorizing derivation steps in problem solving is a way towards boosting your intellectual powers!

18. Provide sources - sources help you manage the learning process, update your knowledge, and judge its reliability or importance.

19. Provide date stamping - time stamping is useful for volatile knowledge that changes in time.

20. Prioritize - effective learning is all about prioritizing. In incremental reading you can start from badly formulated knowledge and improve its shape as you proceed with learning (in proportion to the cost of inappropriate formulation). If need be, you can review pieces of knowledge again, split them into parts, reformulate, reprioritize, or delete. See also: Incremental reading, Devouring knowledge, Flow of knowledge, Using tasklists.

Comment author: Elo 02 September 2015 06:40:34PM *  0 points [-]

I don't know if this question will help:

What is the least-bad way of doing the thing you want to do that you can think of?

(apologies I can be no help because I don't anki; but I wonder if answering this question will help you)

Comment author: Clarity 01 September 2015 12:57:18PM 3 points [-]

In digital markets with extremely quick liquidity like the stock exchange, is investing based on macroeconomic factors and megatrends foolhardy? Is it only sensible to invest when one has privileged information, including via analysis of public data at a level no one else has done?

Comment author: Dagon 01 September 2015 02:46:01PM 2 points [-]

Unpack the question. What do you mean by "foolhardy"? What is your next-best option for your money?

In almost all cases, you should opt not to make a wager on a topic where you are at an information disadvantage. However, investments are not purely a wager - they're also direction of capital and sharing of risk (and reward) with for-profit organizations. It's quite possible that you can lose the wager part of your investment and still do fairly well on the long-term rewards of corporate shared ownership.

Comment author: UtilonMaximizer 01 September 2015 01:36:32PM *  2 points [-]

In digital markets with extremely quick liquidity like the stock exchange, is investing based on macroeconomic factors and megatrends foolhardy?

One shouldn't expect to systematically beat the market without privileged information. But even "trying to beat the market" (depending on what exactly that strategy entails) or doing what you describe is often better than what most people do in terms of actually growing their savings. Financial securities (especially stocks) have high enough long-run expected returns such that a "strategy" of routinely accidentally slightly overpaying for them and holding them still results in a lot more money than not investing at all.

Is it only sensible to invest when one has privileged information, including via analysis of public data at a level no one else has done?

Not investing is far worse than shoving your money into random stocks and committing to reinvest all dividends for the next 50 years.
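The arithmetic behind this is mostly compounding: decades of growth dwarf a modest overpayment. A toy calculation (the 7% annual return and 3% overpayment are illustrative assumptions, not market data):

```python
years, annual_return = 50, 0.07
overpay = 0.03  # pay 3% above fair value when buying

fair = (1 + annual_return) ** years  # growth multiple buying at fair value
sloppy = fair / (1 + overpay)        # same growth after a 3% overpayment
print(f"fair: {fair:.1f}x, sloppy: {sloppy:.1f}x, uninvested cash: 1.0x")
```

Under these assumptions the sloppy buyer ends up within a few percent of the careful one, and both end up far ahead of the person who never invested.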

Comment author: Clarity 01 September 2015 01:45:32PM 1 point [-]

Is there absolute utility maximisation in portfolio diversification, or is it just a risk-control mechanism? Could I pick one random stock and put a whole lot of money in it? I suspect I may be misapplying the law of large numbers here (or committing the gambler's fallacy).

Comment author: SilentCal 01 September 2015 04:53:35PM 1 point [-]

If you're not familiar with it, you should check out www.bogleheads.com for investment/finance advice.

(Not trying to discourage you from discussing this here... just that if you don't know bogleheads, it's quite valuable)

Comment author: Dagon 01 September 2015 02:50:30PM 1 point [-]

Look at Kelly Betting for some information on why "risk control" is utility maximization.

Presuming you have declining marginal utility for money, picking one random stock gives you the same average/expected monetary outcome, but far lower utility.
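Dagon's point can be made concrete with a small simulation: an equal-weight portfolio has the same expected monetary outcome as a single random stock, but higher expected utility under a concave (here, log) utility function. The return distribution below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, n_sims = 50, 200_000

# Each stock's gross return: 50/50 chance of 0.6x or 1.6x (mean 1.1x).
returns = rng.choice([0.6, 1.6], size=(n_sims, n_stocks))

one_stock = returns[:, 0]           # everything in a single stock
diversified = returns.mean(axis=1)  # equal weight across all 50 stocks

# Same expected wealth, but log-utility strongly prefers diversification.
assert abs(one_stock.mean() - diversified.mean()) < 0.01
assert np.log(diversified).mean() > np.log(one_stock).mean()
```

This is also why Kelly betting never puts the whole bankroll on one wager: maximizing expected log wealth penalizes variance even when the mean is unchanged.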

Comment author: cousin_it 31 August 2015 03:02:19PM *  8 points [-]

Tumblr user su3su2u1 (probably most known to LWers for his critiques of HPMOR's scientific claims, and subsequent fallout with Eliezer) has an interesting post about MIRI's research strategy. I think it has some really good ideas. What do other folks think?

Comment author: Gavin 31 August 2015 04:18:36PM *  9 points [-]

It seems like a lot of this is about MIRI giving good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.

The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.

If you have outside-view criticisms of an organization and you're suddenly put in charge of them, the first thing you have to do is check the new inside-view information available and see what's really going on.

Comment author: cousin_it 31 August 2015 05:19:53PM *  15 points [-]

Ever since I started hanging out on LW and working on UDT-ish math, I've been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer's attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.

After reading su3su2u1's post, I feel that growing closer to academia is another obviously good step. It'll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?

Comment author: IlyaShpitser 02 September 2015 01:52:38PM 1 point [-]

+1

Comment author: Stingray 31 August 2015 07:05:46PM 11 points [-]

The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.

Just because MIRI researchers' incentives aren't distorted by "publish or perish" culture, it doesn't mean they aren't distorted by other things, especially those that are associated with lack of feedback and accountability.

Comment author: IlyaShpitser 02 September 2015 01:50:37PM *  10 points [-]

If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money? Donors can't evaluate their stuff themselves, and MIRI doesn't seem to submit a lot of stuff to peer review.

How do you know they aren't just living it up in a very expensive part of the country, doing the equivalent of freshman philosophizing in front of the whiteboard? The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.

Comment author: gwern 03 September 2015 04:38:37PM *  12 points [-]

If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money?

How did science get done for the centuries before peer review? Why do you place such weight on such a recently invented construct like peer review (you may remember Einstein being so enraged by the first and only time he tried out this new thing called 'peer review' that he vowed to never again submit anything to a 'peer reviewed' journal), a construct which routinely fails anytime it's evaluated and has been shown to be extremely unreliable, where the same paper can be accepted and rejected based on chance? If peer review is so good, why do so many terrible papers get published and great Nobel-prize-winning work rejected repeatedly?

If peer review is such an effective method of divining quality, why do many communities seem to get along fine with desultory use of peer review, where it's barely used or left as the final step long after the results have been disseminated and evaluated, and people don't even bother to read the final peer-reviewed version? (Particularly in economics, I get the impression that everyone reads the preprints & working papers and the final publication comes as a non-event; which has caused me serious trouble in the past in trying to figure out what to cite and whether one cite is the same as another; and of course, I'm not always clear on where various statistics or machine learning papers get published, or if they are published in any sense beyond posting to ArXiv.)

And why does all the real criticism and debate and refutation seem to take place on blogs & Twitter, if peer review is such an acid test of whether papers are gold or dross, leading to the growing need for altmetrics and other ways of dealing with the 'post-publication peer review' problem as journals increasingly fail to reflect where scientific debates actually are?

I've said it before and I'll say it again: 'peer review' is not a core element of science. It's barely even peripheral, and it's unclear whether it adds anything on net. For the most part, calls for 'peer review' are cargo culting. What makes science work is replication and putting your work out there for community evaluation. Those are the real review by peers.

If you are a donor who wants to evaluate MIRI, whether some arbitrary reviewers pass or fail its papers is not very important. There are better measures of impact: is anyone building on their work? have MIRI-specific claims begun filtering out? are non-affiliated academics starting to move into the AI risk field? Heck, even citation counts would probably be better here.

Comment author: Viliam 07 September 2015 07:30:59AM 3 points [-]

Peer review seems like a form of costly signalling. If you pass peer review, it only demonstrates that you have the ability to pass peer review. On the other hand, if you don't pass peer review, it signals that you don't have even this ability. (If so much crap passes peer review, why doesn't your research? Is it even worse than the usual crap?)

This is why I recommend to treat "peer review" simply as a hoop you have to jump through, otherwise people will bother you about it endlessly. To remove the suspicion that your research is even worse than the stuff that already gets published.

Comment author: Lumifer 03 September 2015 06:39:37PM 3 points [-]

How did science get done for the centuries before peer review?

Mostly by well-off people satisfying their personal curiosity. Other than that, by finding a rich and/or powerful patron and keeping him amused :-D

I agree that the cult of peer review is overblown. But does MIRI produce any relevant and falsifiable output at all?

Comment author: IlyaShpitser 07 September 2015 05:08:18PM *  3 points [-]

How did science get done for the centuries before peer review? Why do you place such weight on such a recently invented construct like peer review

Is this an "arguments as soldiers" thing? Compare an isomorphic argument: "how did medicine get done for the centuries before antibiotics."

(you may remember Einstein being so enraged by the first and only time he tried out this new thing called 'peer review' that he vowed to never again submit anything to a 'peer reviewed' journal),

Leaving aside that this is an argument from authority, there is also selection bias here: peer review may well not be crucial -- if you happen to be of Einstein's caliber. But: "they also laughed at Bozo the Clown." I am sure plenty of Bozos are enraged at peer review too, for "unjustly" rejecting their crap.

a construct which routinely fails anytime it's evaluated and has been shown to be extremely unreliable where the same paper can be accepted and rejected based on chance?

There is a stochastic element to peer review, but in my experience it works remarkably well, given what it is. Good papers are very likely to get a fair shake and get published. I routinely get very penetrating comments that greatly improve the quality of the final paper. I almost always get help with scholarship from reviewers (e.g. this is probably a good paper to cite.) A bigger issue I saw was not chance, but ideology from reviewers. I very occasionally get bad reviews (<5% chance) and associate editors (people who handle the paper and assign reviewers) are almost always helpful in such cases.

I asked you this before, gwern, how much experience with actual peer review (let's say in applied stats journals, as that is closest to what you do) do you have?

If peer-review is so good, why do so many terrible papers get published and great Nobel-prize-winning work rejected repeatedly?

Absolute numbers are kind of useless here. Do you have some work in mind on false positive and false negative rates for peer review?

why do many communities seem to get along fine with desultory use of peer review where it's barely used or left as the final step long after the results have been disseminated and evaluated and people don't even bother to read the final peer-reviewed version

I don't think we disagree here, I think this is a form of peer review. I routinely do this with my papers, and am asked to look over preprints by others. I think this is fine for certain types of papers (generally very specialized or very large/weighty ones).

The worry is that MIRI's conception of what a "peer" is basically ignores the wider academic community (which has a lot of intellectual firepower), so they end up in a bubble. The other worry is that people trying to get tenure are incentivized to be productive (albeit imperfectly), whereas MIRI is not incentivized to be productive except in some vague "saving the world" sense. And indeed, MIRI appears to be remarkably unproductive by academic standards. The guy who really calls the shots at MIRI, EY, has not internalized academic norms and appears to be fairly hostile to them.

I've said it before and I'll say it again: 'peer review' is not a core element of science.

Honestly, you sound a bit angry about peer review.

Comment author: gwern 08 September 2015 12:54:24AM *  9 points [-]

Is this an "arguments as soldiers" thing? Compare an isomorphic argument: "how did medicine get done for the centuries before antibiotics."

That's not isomorphic. To put it bluntly, medicine didn't. It only started becoming net beneficial extremely recently (and even now tons of medicine is harmful or a pure waste), based on copying a tremendous amount of basic science like biology and bacteriology, benefitting from others' discoveries, and importing methodology like randomized trials (which it still chafes at), not by importing peer review. Up until the very late 1800s or so, you would often have been better off ignoring doctors: think of an expecting mother wondering whether to give birth in a hospital pre-Semmelweis. You can't expect too much help from a field which published its first RCT in 1948 (on, incidentally, an antibiotic).

Leaving aside that this an argument from authority,

I include it as a piquant anecdote since you seem to have no interest in looking up any of the statistical evidence on the unreliability and biases (in the statistical senses) of peer review, or the absence of any especial evidence that it works.

But: "they also laughed at Bozo the Clown."

That is not what I am saying. I am saying, 'if you think MIRI is Bozo the Clown, get a photograph of its leader and see if he has a red nose! See if his face is suspiciously white and the entire MIRI staff saves a remarkable amount on gas purchases because they can all fit into one small car to run their errands! Don't deliberately look away and simply listen for the sound of laughter! That's a terrible way of deciding!'

Good papers are very likely to get a fair shake and get published.

No, they're not, or at the very least, you need to modify this to, 'after being forced to repeatedly try solely thanks to the peer review process, a good paper may still finally be published'. For example, in the NIPS experiment, most accepted papers would not have been accepted given a different committee. Unsurprisingly! given low inter-rater reliabilities for tons of things in psychology far less complicated, and enormous variability when n=1 or 3.
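To make the point concrete, here is a toy simulation of two independent committees ranking the same papers. (This is my own illustration, not the NIPS analysis itself; the quality distribution, noise level, and counts are invented for the sake of the example.)

```python
import random

random.seed(0)

def committee_scores(true_quality, noise=1.0):
    # Each committee's score = the paper's latent quality plus
    # independent reviewer noise, drawn fresh per committee.
    return [q + random.gauss(0, noise) for q in true_quality]

def accept_top(scores, k):
    # Accept the k highest-scoring papers (returned as a set of indices).
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

n_papers, k = 1000, 225  # roughly a 22.5% acceptance rate
true_quality = [random.gauss(0, 1) for _ in range(n_papers)]

a = accept_top(committee_scores(true_quality), k)
b = accept_top(committee_scores(true_quality), k)

overlap = len(a & b) / k
print(f"fraction of committee A's accepts also accepted by B: {overlap:.2f}")
```

With reviewer noise comparable in magnitude to the spread in true paper quality, the two committees typically agree on only around half of their accepted papers, even though both are ranking honestly, which is the flavor of result the NIPS experiment reported.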

Absolute numbers are kind of useless here. Do you have some work in mind on false positive and false negative rates for peer review?

Yes, any of it. They all say that peer review is not a little but highly stochastic. This isn't a new field by any means.

I asked you this before, gwern, how much experience with actual peer review (let's say in applied stats journals, as that is closest to what you do) do you have?

I have little first-hand experience; my vitriol comes mostly from having read over the literature showing peer-review to be highly unreliable, and biased, from the unthinking respect and overestimation of it that most people give it, being shocked at how awful many published studies are despite being 'peer reviewed', and from talking to researchers and learning how pervasive bias is in the process and how reviewers enforce particular cliques & theories (some politically-motivated) and try to snuff opposition in the cradle.

The first represents a huge waste of time; the second hinders scientific progress directly and contributes to one of the banes of my existence as a meta-analyst, publication bias (why do we have a 'grey literature' in the first place?); the third is seriously annoying in trying to get most people to wake up and think a little about the research they read about ('but it's peer-reviewed!'); and the fourth is simply enraging as the issue moves from an abstract, general science-wide problem to something I can directly perceive specifically harming me and my attempts to get accurate beliefs.

(Well, actually I think my analysis of Silk Road 2 listings is supposed to be peer-reviewed, but the lead author is handling the bureaucracy so I can't say anything directly about how good or bad the reviewers for that journal are, aside from noting that this was a case of problem #4: the paper we were responding to is so egregiously, obviously wrong that the journal's reviewers must have either been morons or totally ignorant of the paper topic they were supposed to be reviewing. I'm still shocked & baffled about this: how does an apparently respectable journal wind up publishing a paper claiming, essentially, that Silk Road 2 did not sell drugs? This would have been caught in a heartbeat by any kind of remotely public process - even one person who had actually used Silk Road 1 or 2 peeking in on the paper could have laughed it out of the room - but because the journal is 'peer reviewed'... Pace the Gell-Mann Amnesia effect, it makes me wonder about all the papers published about topics I am not so knowledgeable about as I am on Silk Road 2, and wonder if I am still not cynical enough.)

I don't think we disagree here, I think this is a form of peer review. I routinely do this with my papers, and am asked to look over preprints by others. I think this is fine for certain types of papers (generally very specialized or very large/weighty ones).

Yes, I have no objection to 'peer review' if by what you mean is all the things I singled out as opposed to, and prior to, and afterwards, the institution of peer review: having colleagues critique your work, having many other people with different perspectives & knowledge check it over and replicate it and build on it and post essays rebutting it - all this is great stuff, we both agree. I would say replication is the most important of those elements, but all have their place.

What I am attacking is the very specific formal institutional practice of journals outsourcing editorial judgment to a few selected researchers and effectively giving them veto power. That process hardly seems calculated to yield very good results, and it was not institutionalized because it had been rigorously demonstrated to work far better than the pre-existing alternatives (which of course it wasn't, any more than medical proposals of that era were routinely put through RCTs first, even though we know how many good-sounding proposals in psychology & sociology & economics & medicine go down in flames when they are rigorously tested). To go off on a more speculative tangent: its chief purpose was simply to make the bureaucracy of science scale to the post-WWII expansion of science as part of the Cold War/Vannevar Bush academic-military-government complex.

The worry is MIRI's conception of what a "peer" is basically ignores the wider academic community (which has a lot of intellectual firepower), so they end up in a bubble.

If this is the problem with MIRI, I think there are far more informative ways to criticize them. For example, I don't think you need to rely on any proxies or filters: you should be able to evaluate their work directly and form your own critique of whether it's any good or if it seems like a good research avenue for their stated goals.

Honestly, you sound a bit angry about peer review.

Science is srs bsns. (I find it hard to see why other people can't get worked up over things like publication bias or aging or p-hacking. They're a lot more important than the latest outrage du jour. This stuff matters!)

Comment author: IlyaShpitser 25 September 2015 04:27:02PM *  1 point [-]

That's not isomorphic. To put it bluntly, medicine didn't.

Medicine was often harmful in the past, with some occasional parts that helped, e.g. amputating gangrenous limbs was dangerous and people died, but probably was still a benefit on net. Admiral Nelson had multiple surgeries and was in serious danger of infection and death afterwards, but he would have been a goner for sure without surgery.

Science was pretty similar, it was mostly nonsense with occasional islands of sense. It didn't really get underway until, what, Francis Bacon wrote about biases and empiricism? That is not very long ago. The early "gentlemen scholars" all did informal peer review by sending their stuff to each other (they also hid discoveries from each other due to competition and egos, but this stuff happens today too).


you seem to have no interest...

Gwern, peer review is my life. My tenure case will be decided by peer review, ultimately. I do peer review myself as a service, constantly. I know all about peer review.

get a photograph of its leader and see if he has a red nose!

The burden of proof is on MIRI, not on me. MIRI is the one that wants funding and people to save the world. It's up to MIRI to use all available financial and intellectual resources out there, which includes engaging with academia.

I have little first-hand experience; my vitriol comes mostly from having read over the literature showing peer-review to be highly unreliable, and biased, from the unthinking respect and overestimation of it that most people give it, being shocked at how awful many published studies are despite being 'peer reviewed', and from talking to researchers and learning how pervasive bias is in the process and how reviewers enforce particular cliques & theories (some politically-motivated) and try to snuff opposition in the cradle.

I really think you should moderate your criticism of peer review. Peer review for data analysis papers is very different from peer review for mathematics or theoretical physics. Fields are different and have vastly different cultural norms. Even in the same field, different conferences/journals may have different norms.

I find it hard to see why other people can't get worked up over things like publication bias or aging or p-hacking.

I do a lot of theory. When I do data analysis, my collabs and I try to lead by example. What is the point of being angry? Angry outsiders just make people circle the wagons.

Comment author: Vaniver 25 September 2015 07:00:05PM *  1 point [-]

Admiral Nelson had multiple surgeries and was in serious danger of infection and death afterwards, but he would have been a goner for sure without surgery.

This argument seems exactly identical to the argument for trepanning, even including the survivorship bias. (One of the suspected uses of trepanning was to revive people otherwise thought dead.)

While we're looking at anecdotes, this bit of Nelson's experience with surgery seems relevant:

Although surgeons had been unable to remove the central ligature in his amputated arm, which had caused considerable inflammation and poisoning, in early December it came out of its own accord and Nelson rapidly began to recover.

I'm not sure I'd count that as a win for surgery, or evidence that he couldn't have survived without it!

Gwern, peer review is my life. My tenure case will be decided by peer review, ultimately. I do peer review myself as a service, constantly. I know all about peer review.

But this means that, unless you're particularly good at distancing yourself from your work, you should expect to be worse at judging it than a disinterested observer. The classic anecdote about "which half?" comes to mind, or the reaction of other obstetricians to Semmelweis's concerns.

Regardless, we would expect that, if studies are better than anecdotes, studies on peer review will outperform anecdotes on peer review, right?

Comment author: IlyaShpitser 28 September 2015 05:26:17PM *  0 points [-]

This argument seems exactly identical to the argument for trepanning, even including the survivorship bias.

It's not identical, because we know, with the benefit of hindsight, that amputating potentially gangrenous limbs is a good idea. The folks in the past had a solid empirical basis for amputations, even if they did not fully understand gangrene. Medicine was mostly, but not always, nonsense in the past. A lot of it was not based on the scientific method, because they had no scientific method. But there were isolated communities that came up with sensible things for sensible reasons. This is one case where standard practice was sensible (there are other isolated examples, e.g. honey to disinfect wounds).

But this means that, unless you're particularly good at distancing yourself from your work, you should expect to be worse at judging it than a disinterested observer.

studies on peer review will outperform anecdotes on peer review, right?

Ok, but isn't this "incentive tennis?" Gwern's incentives are clearer than mine here -- he's not a mainstream academic, so he loses out on status. So a "low motive" interpretation of the argument is: "your status castle is built on sand, tear it down!" Gwern is also pretty angry. Are we going to stockpile argument ammunition of the form "you are more biased when evaluating peer review because of [X]"?

For me, peer review is a double edged sword -- I get papers rejected sometimes, and at other times I get silly reviewer comments, or editors that make me spend years revising. I have a lot of data both ways. The point with peer review is I sleep better at night due to extra sanity checking. Who sanity-checks MIRI's whiteboard stuff?

A "low motive" argument for me would be "keep peer review, but have it softball all my papers, they are obviously so amazing why can't you people see that!"

A "low motive" argument for MIRI would be "look buddy, we are trying to save the world here, we don't have time for your flawed human institutions. Don't you worry about our whiteboard content, you probably don't know enough math to understand it anyways." MIRI is doing pretty theoretical decision theory. Is that a good idea? Are they producing enough substantive work? In standard academia peer review would help with the former question, and answering to the grant agency and tenure pressure would help with the second. These are not perfect incentives, but they are there. Right now there are absolutely no guard rails in place preventing MIRI from going off the deep end.

Your argument basically says not to trust domain experts, that's the opposite of what should be done.


Gwern also completely ignores effect modification (e.g. the practice of evaluating conditional effects after conditioning on things like paper topic). Peer review cultures for empirical social science papers and for theoretical physics papers basically have nothing to do with each other.

Comment author: Vaniver 28 September 2015 08:50:16PM *  1 point [-]

The folks in the past had solid empirical basis for amputations, even if they did not fully understand gangrene.

I would put the start of solid empirical basis for gangrene treatment at Middleton Goldsmith during the American Civil War (dropping mortality from 45% to 3%), about sixty years after Nelson.

This is one case when standard practices were sensible (there are other isolated examples, e.g. honey to disinfect wounds).

I think this is putting too much weight on superficial resemblance. Yes, gangrene treatment from Goldsmith to today involves amputation. But that does not mean amputation pre-Goldsmith actually decreased mortality over no treatment! My priors are pretty strong that it would increase it, but going into details on my priors is perhaps a digression. (The short version is that I take a very Hansonian view of medicine and its efficacy.) I'm not aware of (but would greatly appreciate) any evidence on that question.

(To see where I'm coming from, consider that there is a reference class that contains both "trepanning" and "brain surgery" that seems about as natural as the reference class that includes amputation before and after Goldsmith.)

The point with peer review is I sleep better at night due to extra sanity checking.

But this only makes sense if peer review actually improves the quality of studies. Do you believe that's the case, and if so, why?

Your argument basically says not to trust domain experts, that's the opposite of what should be done.

I think my argument is domain expert tennis. That is, I think that in order to evaluate whether or not peer review is effective, we shouldn't ask scientists who use peer review, we should ask scientists who study peer review. Similarly, in order to determine whether a treatment is effective, we shouldn't ask the users of the treatment, but statisticians. If you go down to the church/synagogue/mosque, they'll say that prayer is effective, and they're obviously the domain experts on prayer. I'm just applying the same principles and same level of skepticism.

Gwern also completely ignores effect modification (e.g. the practice of evaluating conditional effects after conditioning on things like paper topic). Peer review cultures for empirical social science papers and for theoretical physics papers basically have nothing to do with each other.

I am not sure what the relevance of either of these are. If anything, the latter suggests that we need to make the case for peer review field by field, and so proponents have an even harder time than they do without that claim!

Comment author: Vaniver 03 September 2015 02:31:25PM 0 points [-]

The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.

I think this isn't really cutting to the heart of things--which seems to be 'reputation among intellectuals,' which is related to 'reputation among academia,' which is related to 'journal articles survive the peer review process.' It seems to me that the peer review process as it exists now is a pretty terrible way of capturing reputation among intellectuals, and that we could do something considerably better with the technology we have now.

Comment author: Lumifer 03 September 2015 03:43:54PM 0 points [-]

Anyone suggested a system based on blockchain yet? X-)

Comment author: Viliam 07 September 2015 07:35:52AM *  1 point [-]

I imagine a system where new Sciencecoins could be mined by doing valid scientific research, but then they could be used as a usual cryptocurrency. That would also solve the problem of funding research. :D

Comment author: [deleted] 31 August 2015 05:21:36PM 4 points [-]

It seems like there's a lot of focus on MIRI giving good signals to outsiders.

I think there's definitely not enough thought given to this, especially when they say one of the main constraints is getting interested researchers.

Comment author: Viliam 02 September 2015 07:29:38AM *  2 points [-]

the first thing you have to do is check the new inside-view information available and see what's really going on.

Isn't it "cultish" to assume that an organization could do anything better than the high-status Academia? :P

Because many people seem to worry about publishing, I would probably treat it as another form of PR. PR is something that is not your main reason to exist, but you do it anyway, to survive socially. Maximizing academic article production seems to fit here: it is not MIRI's goal, but it would help get MIRI accepted (or maybe not) and it would be good for advertising.

Therefore, AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person. The job of the person would be to maximize MIRI-related academic articles, without making it too costly for the organization.

One possible method that didn't require even five minutes of thinking: Find smart university students who are interested in MIRI's work but want to stay in academia. Invite them to MIRI's workshops and make them familiar with the work MIRI is doing but doesn't care to publish. Then offer to make them co-authors: they take the ideas, polish them, and get them published in academic journals. MIRI gets publications, the students get a new partially explored topic to write about; win/win. Also known as "division of labor".

Comment author: IlyaShpitser 02 September 2015 01:51:54PM 3 points [-]

Because many people seem to worry about publishing, I would probably treat it as another form of PR.

Really? You can't think of another reason to publish than PR?

Comment author: Viliam 03 September 2015 07:22:22AM 0 points [-]

I can.

But PR also plays a role here, and this is how to fix it relatively cheaply. And it would also provide feedback about what people outside of MIRI think about MIRI's research.

Comment author: IlyaShpitser 03 September 2015 01:07:04PM 1 point [-]

I think the primary purpose of peer review isn't PR, but sanity checking. Peer reviewed publications shouldn't be a concession to outsiders, but the primary means of getting work done.

Comment author: ChristianKl 02 September 2015 02:10:58PM 1 point [-]

AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person.

It seems that writing publishable papers isn't easy.

Comment author: IlyaShpitser 03 September 2015 01:40:13PM 0 points [-]

Yes, GP's attitude is extremely myopic and dangerous.

Comment author: passive_fist 02 September 2015 10:04:44AM *  2 points [-]

A very reasonable suggestion, and I'm not just saying that because I have a PhD. I'm saying it because it's so easy to reinvent the wheel and think you're doing original research when you're really just re-discovering other people's work in a different context. It's very hard to root out these sorts of errors: when I was doing a PhD, I thought the work I was doing in developmental biology was new and unique, until about a year later I found that the 'new' mathematical problems I had solved had actually been studied in polymer science for years. I just hadn't been able to find the research because none of the search terms matched.

A link to the wider academic community would do a lot to help with MIRI's goals, and a very good way to establish one would be undertaking PhDs. It should be a snap for the MIRI folks...

Comment author: NancyLebovitz 02 September 2015 07:00:01PM 0 points [-]

Do you have any ideas about how it could be made easier to find out whether you're just rediscovering previous work?

Comment author: passive_fist 03 September 2015 12:22:13AM 0 points [-]

Eliminate context, reduce problems to their abstract fundamentals, collaborate with other people who might have a chance of having been exposed to similar problems in other domains.

Comment author: NancyLebovitz 04 September 2015 11:39:01AM 5 points [-]

Gwern rubbishes longevity research.

I think he's talking about the dream of achieving indefinite numbers of healthy years.

However, there are some people who live into their 90s in pretty good health, and they're far from the majority. What's the likelihood of making good health into one's 90s much more common? I'm not talking about lifestyle improvement -- I'm talking about some technological fix.

Comment author: Vaniver 04 September 2015 05:02:58PM 6 points [-]

So, he's specifically talking about the failures of previous longevity research. It seems to me that modern longevity research has portions that are considerably better (among other things, the reductionistic view appears to be the dominant view among the top researchers). Consider this section in particular:

I could wish Stambler made more of an effort to evaluate researchers on scientific grounds and give a better idea of where ideas have been vindicated or refuted by subsequent work.

That Stambler spent too little time on whether or not they actually got the science right / pushed in the right or wrong direction, and spent too much time focusing on their political persuasion, strikes me as highly relevant and interesting when it comes to scientific history (and the modern versions--namely, choosing who to fund or not, and what experiments to pursue or not).

Comment author: NancyLebovitz 04 September 2015 08:11:41PM 1 point [-]

Gwern also makes a more general claim that aging is too complex for any simple solution to be plausible.

Comment author: knb 05 September 2015 03:52:40AM 1 point [-]

I don't think SENS is one of the simple approaches Gwern was referring to in context. The simple approaches are things like turning off a genetically coded "mortality switch," lengthening telomeres, calorie-restriction mimetics, or just getting tons of antioxidants in your diet. Here's a recent Aubrey de Grey interview.

Comment author: hg00 06 September 2015 03:12:48AM *  2 points [-]

Here's one for the "life pro tips" category, since Less Wrong users are mostly male. It seems as though the best way to deal with balding is to catch it as early as possible, because that's when drug treatments (well, Finasteride at least) are most effective. Of the "big 3" baldness treatments, ketoconazole shampoo is available over the counter and has few side effects reported online. (It's also used as an anti-dandruff shampoo.) (EDIT: Looks like it is not recommended to take orally, although I don't see anyone saying that topical application carries risks. Here's a study saying it's about as effective as minoxidil?) I recently noticed that my hairline has receded ever so slightly... after doing some research, I bought some ketoconazole shampoo and am planning to start using it. This brand seems to have fewer bad experience reports and fewer shill reviews on Amazon than other brands. Thoughts? (BTW, although it's the safest, ketoconazole also seems to be the least effective of the balding treatments... you should probably hop on the Finasteride if you have a serious problem. More info.)

Comment author: Romashka 06 September 2015 09:17:41AM 1 point [-]

BTW, there's the 'Boring advice repository'; consider cross-posting or linking to this there, so that it doesn't get lost.

Comment author: knb 07 September 2015 09:21:25AM *  0 points [-]

Catching it early is important for sure. I've been using minoxidil for 3 years since my early twenties and my hairline has not receded at all since then, but it also hasn't recovered much. The generic minoxidil is quite cheap, I pay about 40 dollars a year.

Edit: I haven't tried Finasteride as I hear rumors of awful sexual symptoms.

Comment author: Elo 05 September 2015 09:21:54PM 2 points [-]

Update on the Slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/

A list of our topics:

  • AI
  • Film making
  • Goals of lesswrong (and purposes)
  • Human Relationships
  • media
  • parenting
  • philosophy
  • political talk
  • programming
  • real life
  • Resources and links
  • science
  • travelling
  • and some admin channels; the "welcome", "misc", and "RSS" feed from the lw site

These are expected to grow and change as we need them. I count 58 people who have joined so far today. Feel free to PM me as well.

It's worth noting that parenting just opened up.

Comment author: Panorama 05 September 2015 07:22:32PM *  2 points [-]

A Defense of the Rights of Artificial Intelligences by Eric Schwitzgebel and Mara [official surname still be decided]

There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

Full version available here.

As always, comments warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that deadline would be especially useful.

Comment author: Clarity 04 September 2015 09:35:43AM *  2 points [-]

How to perform surgery on yourself with Clarity

I do irrational things. The other day I bought an interstate flight, somewhat impulsively, to a conference I knew next to nothing about, for complicated reasons. Instant regret, but the cancellation fee is about half the price of the ticket. I also got some art professionally designed, for a few hundred dollars, that I didn't need or want. I've also lost thousands gambling and on the stock exchange. I'm stupid in many ways, but I'm also capable enough to share insights from the other side of sanity with the real world, or so I'd like to think. There are some things I do that aren't rational but for which the term irrational isn't very useful, in the same way that people can be 'not even wrong', perhaps. But enough self-indulgent self-pity and self-handicapping.

I'm finding it hard recently to concentrate on anything other than surgery - particularly self-surgery and how and why I ought to perform it. But I'm not a surgeon. And for this to be rational I ought to have a terminal goal. I don't have one. At best I can rationalise that if I get into a survival situation with no one to help, I can do it myself. But that's extremely unlikely. It's not even rationalisation, since I haven't made the decision; it's merely optimism. Being crazy is hard, so looking on the bright side keeps me from feeling like killing myself. At least this new-found interest is somewhat amusing and somewhat learnable. Sometimes I get interested in areas for which I have nowhere near the prerequisite knowledge, often some technical something in economics or computer science. In those cases I just end up learning things incorrectly. At least surgery is somewhat of a practical skill, and medical students are often taught things superficially (this leads to this, or this is connected to that) rather than, say, rigorously (this is proven by this theorem, or demonstrated by this experiment). To celebrate my 100 karma (and it was a difficult journey!) I thought I would document this experience and what I'm compelled to research, to give the more rational among you some insight into what it's like to be on the far other side of rationality, and aware of it.

  1. See examples of self-surgery for inspiration. Examples

  2. People who do it are heroic. Don't be half-assed.

  3. Desensitise yourself by snooping on actual surgeries. From experience in psychiatric wards, it shouldn't be very hard to sneak into surgical viewing theatres. Minimal social engineering required. Hospitals are shocking with security. (Note: Don't actually do this. Remember, this is just to explain my thinking process, which as I mentioned is off the beaten path of sensibility.)

  4. Read this guide, which is the only guide to self-surgery I can find. Though it suggests reading textbooks, the medical textbooks in the surgery section of my local university's library don't seem to be very useful at all in how to actually do surgery. Maybe one has to learn it by watching.

Ok. At this point. Looks like I've somehow managed to overcome this little excursion from sensibility. I don't really care for self-surgery anymore. My testicles feel kinda sore for no apparent reason, but it feels good knowing that at least they're there and not in a medical waste bin instead.

In the spirit of radical honesty, I'm going to be posting this highly embarrassing comment then try not to think about it. Certainly won't be my most embarrassing post so far.

Comment author: NancyLebovitz 05 September 2015 01:18:48PM 1 point [-]

Voted up for honesty.

Do you know anything about the difference between the times when your irrational impulses fade and the times when you act on them?

Comment author: Clarity 06 September 2015 05:25:48AM *  1 point [-]

Ahh, the miracle question. I had forgotten about those. Thank you for asking.

My answer is currently no.

Here's what I currently suspect, though I don't have the presence of mind to be confident in this assessment. I'm particularly vulnerable to gambling and to sexual and aesthetic impulses, like compulsively listening to music or staring at art. For instance, just recently I signed up for an international share trading account because I intended to bet about 1/4 of my assets (yes, I still am not convinced by either the Kelly criterion or modern portfolio theory, since no free lunches!) on one stock I had very little knowledge of. Luckily for me, it takes 5 days to process the international trading account application, and I found it hard to get my mind off the stock, so I started looking up more in-depth information and realised it's not the undervalued, cheap, super awesome stock I thought it would be.

When I'm with people, I also tend to be less goal-oriented and give in to impulses more readily. Another consideration for me is whether these impulses are in the same class as, say, the surgical impulse, since that sounds more delusional than impulsive. None of these categorisations are clear. You've inspired me to sit down properly in the near future and map out different behaviours, then try to summarise underlying commonalities and potential control measures (note to self).

The times when an irrational impulse fades, in contrast, are times when I can use strict decision-theoretic tools to explain to myself why it's irrational. That's why LessWrong is my scaffold out of insanity. If I can analyse a particular scenario and see that one particular choice dominates another, or I can model a particular impulse as my tendency to compensate for a sunk cost when I ought to be thinking at the margin, for instance, I can grit my way out of it.

Perhaps things are hardest when I'm dealing with extremely high subjective-value options (e.g. jerking off to porn when I'm really horny) or betting a whole lot of money; I get carried away. Temporally, I discount at several orders of magnitude above hyperbolic, perhaps. But honestly, I don't really know. I'm just chucking intuitions into this comment box. I'll probably add to this answer at some point for my own reference.

As an aside, I saw your comment this morning and was thinking about it in the shower. Recalling the 'miracle question' approach to problem solving made me feel empowered. Later, I listened to a song I hadn't heard in a while just before going into the shower and realised that it would motivate me to linger less in there, because I anticipated the joy of continuing to listen to it after I got out. Then I thought about how I could suggest that approach to others who had trouble limiting their shower time, and felt grateful that there are places where I could share that information. At that point, I realised that my mood and anxiety had lifted a bit, which I attributed to that sequence of events, cascading from you. I suspect increased self-trust in my ability to handle problems is at the heart of this (so I'll add that to my mental health checklist in the other thread sometime). So thank you! I'm going to be investigating how I can replicate this. I did mess it up a bit by feeling very self-congratulatory, then ruminating for a while and ultimately not getting out of the shower as promptly as possible, but hopefully that won't occur in the future.

Comment author: gjm 06 September 2015 08:10:49AM 1 point [-]

How do you get from "no free lunches" to disagreement with either Kelly or portfolio theory?

Comment author: Clarity 12 September 2015 06:22:06AM 0 points [-]

No free lunches & MPT

I could articulate it, but Wikipedia has an explanation. I honestly don't understand the Wikipedia explanation, but I would expect that it explains my intuitions in a more technical way than I do. If you have a specific point of disagreement, I'm happy to map out my logic and explore the evidence with you. I vaguely remember reading an article on the topic, too.

Optimal bet sizing and expected utility

I'd expect a theorem to maximise utility via diversification would entail some prediction that the utility of subsequent/other/more investments will be greater than the utility of the first/reference investment. If that isn't the case, it will lower the average expected utility of one's portfolio. I don't see how the rationale behind the Kelly criterion relates to any of my existing knowledge about maximising utility.

Comment author: gjm 12 September 2015 07:02:57AM *  0 points [-]

MPT: How can I have a specific point of disagreement with something as nonspecific as "I am not convinced by modern portfolio theory because no free lunches"? The particular bit of the Wikipedia article you linked to actually says (correctly, so far as I can see) that minimising unsystematic risk through diversification (as indicated by MPT) is "one of the few free lunches available", because unsystematic risk isn't associated with higher expected returns.

Kelly: Actually most of the paragraph ostensibly about this seems to be still about MPT. Anyway, I'm afraid your expectation is just wrong. Diversifying can be a win even if what you diversify with is (on its own) lower-utility. Suppose someone offers you a bet that will pay you $1M if some event E occurs and cost you $900k if not, and suppose you reckon E very close to 50% likely. You probably don't take that bet, because losing $900k would hurt you more than gaining $1M would help you. Now someone else offers you another bet, where you stand to gain $950k and lose $900k. Clearly you don't take that bet either, and clearly it's worse than the first. But now suppose the first bet pays you when E happens and the second pays you when not-E happens. The two bets together are a guaranteed >=$50k gain; provided you trust your counterparties you should absolutely take them both. So adding the second bet helped you even though on its own it was worse than the first.
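The arithmetic of the two-bet example can be checked directly (a quick sketch; the dollar figures are the ones above):

```python
# Payoffs, in dollars, of the two hypothetical bets from the example.
# Bet 1 pays out if event E happens; bet 2 pays out if it doesn't.
bet1 = {"E": 1_000_000, "not_E": -900_000}
bet2 = {"E": -900_000, "not_E": 950_000}

for outcome in ("E", "not_E"):
    combined = bet1[outcome] + bet2[outcome]
    print(outcome, combined)
# Combined payoff: +$100,000 if E, +$50,000 if not-E,
# i.e. a guaranteed gain of at least $50,000 either way.
```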

Kelly, really: again I'm not sure what I can say to something as unspecific as "I don't see the rationale". I suppose I can briefly explain the rationale, so here goes. 1: if the utility you get from your money is proportional to log(amount), which may or may not be roughly true for you (I think it is for me), then placing a Kelly-sized bet is higher expected-utility than placing a bet of any other size at the same odds. (Assuming your utility is unaffected by the event the bet is on, other than through its effect on your wealth.) 2: your long-term wealth is maximized (with high probability, not just in expectation) by making all your bets Kelly-sized, so if your utility is strongly affected by your wealth in the long term and indifferent to the short term, then (almost regardless of exactly how utility depends on long-term wealth) you should place Kelly-sized bets.

Most people are more risk-averse than utility proportional to log wealth would justify. If you are, then your bets should be smaller than Kelly. Most people care about the short term as well as the long. If you do, then again your bets should generally be smaller than Kelly.
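For concreteness, the Kelly fraction for a bet paying b-to-1 with win probability p is f* = p - (1-p)/b, the value that maximizes expected log-wealth. A quick numerical check (the 60%-win, even-odds numbers here are illustrative, not from this thread):

```python
import math

def expected_log_wealth(f, p, b):
    """Expected log-wealth after betting fraction f at b-to-1 odds, win prob p."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0          # a 60% chance of winning an even-odds bet
kelly = p - (1 - p) / b  # Kelly fraction f* = 0.2

# The Kelly fraction beats nearby bet sizes on expected log-wealth.
candidates = [0.05, 0.1, kelly, 0.3, 0.4]
best = max(candidates, key=lambda f: expected_log_wealth(f, p, b))
print(kelly, best)
```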

[EDITED some time after writing when I noticed a bunch of mobile-device autocorrect errors. Sorry.]

Comment author: Elo 05 September 2015 09:16:06AM 0 points [-]

There was a guide online about all the factors to consider when being a medical professional in a place with no medical infrastructure. It was basically a "how to do everything" guide. I can't recall the keyword or name to find it now, but it was online and free. Not sure if I should encourage you, but reading a lot more will probably satisfy your interest in the topic.

Comment author: Clarity 12 September 2015 06:31:22AM 0 points [-]

I'm interested in the guide but haven't found it despite several related Google searches. Are you sure the guide wasn't on a tangential topic?

Comment author: Lumifer 03 September 2015 08:51:01PM *  2 points [-]

An argument by Stephen Hsu that boosted-IQ humans will appear before Artificial Intelligence and will co-evolve with AI after that.

Comment author: Viliam 07 September 2015 08:31:58AM *  1 point [-]

Seems to me these two things are incomparable in speed. Imagine that research in genetic engineering will allow us to make each generation have an IQ 20 points higher than the previous one. Could even such IQ-boosted humans compete with a superhuman AI which can rewrite its own source code?

Of course I am making many assumptions here, but the idea is that biological humans will probably still have to go through the cycle of birth and maturation, and face various biological constraints, while AI will not have these obstacles.

Comment author: ciphergoth 03 September 2015 12:50:44PM *  2 points [-]

I've never heard of this book or author before, anyone read it? How does it compare to eg "Smarter Than Us" or "Our Final Invention"?

Calum Chace, "Surviving AI"

Comment author: OrphanWilde 01 September 2015 06:12:33PM *  2 points [-]

Something which may prove interesting to somebody here:

A tentative list of internal states (certainly incomplete), divided into emotions and mental states. I distinguish between emotions and mental states on the basis of something I can't quite put my finger on, but I'm reasonably certain there -is- a difference, something like the difference between color photographs and black-and-white photographs. (It's quite fuzzy in some places, though, so not everything neatly fits in one or the other. Suspicious/paranoid, for example, I quibble about the placement of.) I've done a few passes at combining emotions I suspect are identical except for context and intensity. You'll notice emotions like "Happy" and "Angry" aren't present - unless somebody can correct me, I think these aren't distinct emotions in and of themselves, but simplifications of a broad range of more complex emotions. (A couple permutations of "Angry" show up under "Rage"). Some words show up multiple times, where the word appears to refer to more than one emotional state, with clarifications.

Out of the emotions listed, I experience somewhere around a third of them, which makes it hard to evaluate how distinct they actually are, and in some places leads me to incorrectly consider them separate internal states. Of the mental states, I experience most of them (which is why I think the sorting criterion isn't -entirely- arbitrary). As for the uncertain ones - I have no idea whether those are actually distinct feelings, or just ways people describe other people's behavior, so it's safe to say that if they are experiencable, they're among the things I don't experience.

The list is largely drawn from the following: https://robbsdramaticlanguages.files.wordpress.com/2014/07/vocabulary-expand.jpg.

Some I've omitted as being, as far as I can tell, embellishments. I've added others, as well.

Emotions:

  • Abandoned/Alienated/Rejected/Discarded/Deserted (Distinct?)
  • Abused/Put-Upon/Exploited/Used (Distinct?)
  • Acceptance
  • Appreciated
  • Appreciative
  • Appalled/Disturbed/Horrified (Distinct?)
  • Amorous/Horny
  • Amusement
  • Anxious/Tense
  • Ascendant/Transcendent
  • Ashamed/Shameful (Distinct?)
  • Assured/Reassured (Distinct?)
  • Awkward
  • Bittersweet
  • Burdened
  • Cheated/Deceived/Betrayed
  • Cheery
  • Compassionate
  • Condemned/Doomed
  • Confident/Self-Certain
  • Controlled/Constrained/Trapped/Smothered/Stifled/Coerced/Dominated (Distinct?)
  • Craving/Attraction/Desire (Generalized)
  • Crushed/Defeated
  • Delight/Joy (Distinct?)
  • Degraded/Defiled
  • Demoralized
  • Depressed/Dejected/Dispirited (All-encompassing negativity)
  • Desperate
  • Despised/Hated (Distinct?)
  • Determined
  • Disappointed
  • Disenchanted
  • Disgusted/Repulsed (Distinct?)
  • Disgraced
  • Disheartened/Discouraged (Distinct?)
  • Divinity/Inspiration
  • Doubtful
  • Dread
  • Elation
  • Embarrassed
  • Empty
  • Enchanted
  • Encouraged
  • Ennui/Lacking direction (Distinct?)
  • Enthusiastic
  • Envy
  • Fear/Fearful/Averse (Distinct?)
  • Fortunate/Lucky (Distinct?)
  • Frustrated (Limited)
  • Frustrated (Exasperated)
  • Fulfilled
  • Grateful
  • Grief/Mourning
  • Harassed
  • Helpless
  • Hopeful
  • Hopeless
  • Humbled (Awed)
  • Humbled (Intimidated)
  • Humbled (Status drop-ish)
  • Humbled/Insecure (Unworthy)
  • Humiliated
  • Hurt/Wounded (Distinct?)
  • Ill-will
  • Inadequate
  • Indignant
  • Indulged/Gratified/Satisfied
  • Irritated/Annoyed/Provoked (Distinct?)
  • Isolated/Lonely
  • Jealous
  • Lost
  • Loved
  • Love (Towards others)
  • Love (Towards self)
  • Misunderstood
  • Neglected/Uncared for/Unappreciated (Distinct?)
  • Nervous/Tense/Panicked (Distinct?)
  • Offended
  • Optimistic
  • Peaceful (At peace)
  • Perplexed/Confused/Puzzled (Distinct?)
  • Pessimistic
  • Pitiful (Others)
  • Pitiful/Litost (Self)
  • Protective
  • Proud
  • Rage (Righteous/Outrage)
  • Rage (Seething)
  • Rage (Vengeful)
  • Reckless
  • Rebellious
  • Regretful
  • Relieved
  • Resentful
  • Resigned
  • Resolved
  • Respected/Admired
  • Respectful/Admiring
  • Restless
  • Revolutionary/Inspired
  • Schadenfreude
  • Scheming
  • Sorry/Apologetic
  • Spiteful
  • Suspicious/Paranoid
  • Thrilled
  • Torn
  • Uncertain/Unsure/Undecided (Distinct?)
  • Undesired/Unwanted (Distinct?)
  • Unloved
  • Uncomfortable/Unsettled
  • Vulnerable/Threatened/Timid (Distinct?)
  • Worthless

Mental States:

  • Alarmed
  • Apprehensive
  • Ambivalent
  • Amused
  • Bewildered/Confused
  • Defensive/Guarded
  • Depressed (Low-emotion)
  • Distant
  • Distracted
  • Drained
  • Energetic
  • Excited
  • Exhausted
  • Equanimous
  • Flustered
  • Frantic
  • Manic (High-emotion)
  • Overwhelmed/Petrified/Stunned (Distinct?)
  • Reluctant
  • Shocked/Startled/Shaken (Distinct?)
  • Skeptical
  • Surprised/Startled

Uncertain:

  • Apathetic
  • Deprived
  • Dismayed
  • Disrespected/Slighted
  • Distressed
  • Exuberance
  • Flattered
  • Hesitant
  • Jubilant
  • Patronized
  • Patronizing
  • Pleased
  • Shy
  • Tolerant
  • Wasted
Comment author: Elo 02 September 2015 12:46:51AM *  2 points [-]

I would like to point out a concept that has recently entered into my life.

Sometimes these emotions are generated internally, but often the word for the emotion is one where something "pulls" you to feel that way. An example is "Appreciated", where something else gives you the feeling of being appreciated. It's not an emotion you can give to yourself (only recognise it), whereas distress, or hesitation, can come from yourself.

Not sure how that adds to the list exactly.

I made a spreadsheet of how often I think I experience each one - https://docs.google.com/spreadsheets/d/1lkOftycrnhjSdbC6cExawoiyX-Jbn9wuxg2GlCjGeh4/edit?usp=sharing - on a scale of 1-10; nothing is 9 or 10 because that would imply I experience it all the time.

Comment author: OrphanWilde 02 September 2015 12:27:53PM 0 points [-]

Scheming! That emotion definitely belongs on the list. WRT Disappointment/Disheartened/Discouraged, which would you separate? (Or are all three distinct?)

There is a sense that some of these are... very self-inflicted. I suspect some people have a fine degree of control over that, and others have no control over the distress, or hesitation, they experience. (I don't feel "Appreciated", so I can't comment on that example, but there are similar external emotions I do feel, such as annoyance, which is one I'm incapable of feeling towards myself, in pretty much exactly the same way I can't tickle myself.)

Equanimity is... a bit broader than "cool and collected", at least in my personal experience. Cool and collected is a good description for the outer-state of it - what is directly experienced in most situations. There's an inner component to it, too - it's... a capacity for dealing with emotions. It's the capacity to remain cool and collected, whatever emotions are hurled at you. When my equanimity is low, I feel like I'm on the top of an immensely tall column that is swaying haphazardly, and will topple in the slightest emotional breeze. When my equanimity is high, there's an inner stability, like a hurricane of emotion couldn't budge it - I describe that state as "centered".

Comment author: Elo 02 September 2015 06:38:08PM 0 points [-]

Equanimity: I mainly didn't have the word in my vocab (the first Google search gave "cool and collected").

I would separate Disappointment from Discouraged, as distinct things that don't have to occur together. Disappointment also doesn't have to be disheartening. Disheartened/Discouraged are similar and could probably be left close by.

Looking good. Not sure how to use it; but if it stays up - I will think about it...

Comment author: OrphanWilde 02 September 2015 08:57:42PM 0 points [-]

Done!

No idea what any of those three are supposed to feel like. I imagine the inverse of relief?

Comment author: Elo 02 September 2015 11:34:56PM 0 points [-]

Disheartened ~= "soul-crushing".

Discouraged ~= I am running a race against my peers and I don't seem to be able to keep up. After a month of training, they seem to be getting faster and I seem to not be keeping up at the same grade. "All this effort for nothing."

Disappointed ~= I was expecting chocolate spread on my sandwich but it was only jam. (Slightly in the direction of "something I expected but did not quite estimate right".)

Comment author: moridinamael 02 September 2015 05:34:27PM 0 points [-]

This is useful. Do you have experience with Focusing? Part of the workflow is to sit with your emotional state and gently try to discern what label applies to it. This can be hard because sometimes the feeling is complex or unclear, but I expect part of the difficulty lies in a simple lack of vocabulary with which to label the feeling.

Comment author: OrphanWilde 02 September 2015 09:00:58PM 0 points [-]

The biggest issue from my perspective is that the labels don't immediately connect to any kind of easily-communicable qualia, so even if you know the correct label, you don't necessarily have a good way of connecting the label to the feeling. (That said, the only emotion I required outside assistance to identify was a generalized anxiety, which didn't feel at all like I expected it to. I expected anxiety to be definitively unpleasant, and it was merely ambiguously so.)

Comment author: WhyAsk 31 August 2015 05:34:03PM 2 points [-]

My conscience is as hypertrophied as the next person, but how is a balance struck between avoiding cognitive biases, logical fallacies, etc., and enjoying life?

Comment author: [deleted] 31 August 2015 05:38:58PM 4 points [-]

This is a broad question, and it will get broad answers.

Can you give some examples when avoiding biases made life less enjoyable?

Comment author: WhyAsk 01 September 2015 04:59:50PM 2 points [-]

For me, avoiding biases means a cognitive load, which means I have to be vigilant, which means I can't relax. Perhaps when and if avoiding all/most of the foibles becomes second nature, it will be less of a load. I hope! :)

Comment author: NancyLebovitz 03 September 2015 10:02:21AM 0 points [-]

Would it be bad if you gave yourself time off for specific durations and/or activities?

Comment author: Strangeattractor 03 September 2015 06:59:11AM 0 points [-]

One approach could be to set priorities. "How important is it if I do this not-optimally? What are the consequences of cognitive biases leading me to a poor choice here?" and to be vigilant on the most important stuff, and let it go for lower priority things.

However, practice can help, and sometimes it is easier to catch oneself on tasks or issues of a smaller scale than on the big important ones. So practicing on the lower-priority ones can be useful.

Vigilance takes energy. Awareness...not as much. Maybe a shift toward developing awareness rather than vigilance could help.

Comment author: Elo 31 August 2015 09:56:33PM 2 points [-]

I think I know what you are talking about.

There are almost two modes of functioning: "never thinking hard and going with the flow", and "thinking hard about what happened". I would suggest that these are like system 1/system 2 processes for living, where if you only operate in system 2 you have an exhausting life in which you feel like you never get far, because you didn't actually do the washing; you just thought really hard about it. You never really had fun; you just thought hard about it. Etc.

The important thing to note is that we need both system 1 and system 2 to go about getting things done. You are concerned about the balance; Absolutely!

In my post here; http://lesswrong.com/lw/mj7/3_classifications_of_thinking_and_a_problem/ Slider suggested a heuristic for producing results in the area of knowing how to balance.

In this case you are balancing "hard thinking about the problem" and "enjoying life". If you are finding you are not enjoying life, reduce the time you spend hard-thinking. If you are finding you are making mistakes, or needing more planning time to make things work the way you want them to, increase hard-thinking time. If you want to increase both at once - take a break; work on a problem of no consequence.

Comment author: James_Miller 31 August 2015 04:49:25AM *  2 points [-]

Dilbert creator Scott Adams, who has a fantastic rationalist-compatible blog, is giving Donald Trump a 98% chance of becoming president because Trump is using advanced persuasion techniques. We probably shouldn't get into whether Trump should be president, but do you think Adams is correct, especially about what he writes here? See also this, this, and this.

Comment author: Lumifer 31 August 2015 05:07:45AM 10 points [-]

I think Scott Adams has taken to trolling the readers of his blog.

Comment author: drethelin 31 August 2015 07:37:51AM 3 points [-]

Taken to? He's been doing it for like a decade at this point.

Comment author: JoshuaZ 02 September 2015 01:51:50AM 6 points [-]

Why do so many people see Adams as being rationality-compatible? I've seen very little that he has to say that sounds at all rational or helpful. Cynical != rational.

Comment author: James_Miller 02 September 2015 02:04:00AM 2 points [-]
Comment author: gjm 02 September 2015 11:17:51AM 2 points [-]

Having written a rationality-compatible book isn't the same thing as writing a rationality-compatible blog. (It surely indicates being able to write a rationality-compatible blog, but his actual goals may be different.)

Comment author: passive_fist 31 August 2015 09:52:50AM 6 points [-]

I wouldn't put it at 98%, but I definitely wouldn't put it at Nate Silver's 2%, which I think comes from an analysis that is just way too simplistic.

Comment author: IlyaShpitser 31 August 2015 01:33:34PM 6 points [-]

I would take Silver's analysis over Adams' any day. Look at their respective prediction track records.

Comment author: passive_fist 31 August 2015 09:46:25PM *  5 points [-]

It was because of Nate Silver's track record that I initially had high confidence in his estimate. Then as I read his justification my confidence in his estimate decreased. I think he's just being lazy in his justification, here, when he says things like:

So, how do I wind up with that 2 percent estimate of Trump’s nomination chances? It’s what you get if you assume he has a 50 percent chance of surviving each subsequent stage of the gantlet.

To be fair to Silver, when he wrote the article he might not have considered Trump's campaign plausible enough to give serious thought. I suspect that if Trump continues to perform well in the polls Silver will give a more thoughtful and realistic analysis later on.
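For what it's worth, the arithmetic behind that quote is simple (assuming six remaining stages, which is my reading of Silver's article rather than a quoted figure):

```python
stages = 6            # assumed number of remaining hurdles in the nomination "gantlet"
p = 0.5 ** stages     # a 50% chance of surviving each stage
print(f"{p:.1%}")     # about 1.6%, which rounds to Silver's ~2%
```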

Comment author: roystgnr 01 September 2015 06:29:57PM *  3 points [-]

Were any of Silver's previous predictions generated by making a list of possibilities, assuming each was a coin flip, computing 1/2^N, and rounding? I get the impression that he's not exactly employing his full statistical toolkit here.

Comment author: IlyaShpitser 01 September 2015 06:53:01PM *  1 point [-]

Isolated demands for rigor -- what do you think Adams is doing? (I think he's generating traffic.)


But sure, I agree, that's more of a reasonable prior than an argument. There's more info on the table now.

Comment author: tut 02 September 2015 01:07:14PM 2 points [-]

What Adams does is that he looks at Silver's estimate, says that it is way too low and then takes 1 minus Silver's estimate as his own estimate just to make a point. He does not attempt any statistical analysis and the 98% figure should not be taken seriously.

Comment author: Vaniver 02 September 2015 01:30:57PM 0 points [-]

what do you think Adams is doing?

What Adams has said he's doing is simulating the future along the mainline prediction--i.e. nothing too weird happens--and under his model, Trump is guaranteed to win. Then he says "well, maybe something weird will happen" and drops that confidence by 2%, instead of a more reasonable 30% (or 50%).

Comment author: Vaniver 31 August 2015 04:21:42PM 3 points [-]

Does Adams have a track record at predicting this sort of thing? I am not aware of any instances where he's said "here is a master persuader trying to do X, they will succeed" and they then failed, but I can remember no more than one instance of him saying that and it being correct (and I don't remember the specifics) - I don't follow Adams closely enough to have a good count.

I think that Adams is raising the sort of challenge that Silver is weakest against: Trump's tactics are a "black swan" in the technical sense that no candidate in Silver's dataset has run with a similar methodology. That Silver thinks Herman Cain's campaign is the right reference class for Trump's campaign seems to me like a very strong argument for Silver not getting what's going on.

Comment author: Lumifer 31 August 2015 04:36:31PM 4 points [-]

Does Adams have a track record at predicting this sort of thing?

He has an excellent track record of saying outrageous things -- that's what he is optimizing for, I think.

Comment author: D_Alex 01 September 2015 06:37:06AM 4 points [-]

Well... Scott Adams has a lot of money. I am willing to bet that Trump will NOT become president, at EVEN ODDS. Scott, if you read this, how about a wager? I propose a $10,000 stake.

Comment author: Vaniver 01 September 2015 01:52:21PM 6 points [-]

Scott, if you read this, how about a wager?

Despite his frequent comments that he's "betting" on Trump and that Silver is "betting" against Trump, when pressed to actually bet, Adams's position is that gambling is illegal. This means one of the big feedback mechanisms preventing outlandish probabilities is not there, so don't take his stated probabilities at face value.

(In general, remember how terrible people are at calibration: a 98% chance probably corresponds to about a 70% chance in actuality, if Adams is an expert in the relevant field.)

Comment author: D_Alex 02 September 2015 02:24:25AM 4 points [-]

Despite his frequent comments that he's "betting" on Trump and that Silver is "betting" against Trump, Adams's position is that gambling is illegal when pressed to actually bet.

How convenient for him.

Comment author: satt 02 September 2015 01:48:26AM 1 point [-]

And Adams himself says the "smart money" is on Silver's prediction! I think Adams's prediction is more performative than prognostic, even allowing for ordinary unconsciously bad calibration.

Comment author: satt 01 September 2015 12:00:59AM 4 points [-]

Forgetting what I know (or think I know) about Scott Adams, Donald Trump, Nate Silver, Jeb Bush, whoever, and going straight to the generic reference class forecast — I'm very sceptical someone could predict US presidential elections with 98% accuracy 14 months in advance.

Comment author: UtilonMaximizer 01 September 2015 02:47:26PM *  11 points [-]

Actuarial tables give him a roughly 2% chance of dying before the election.

Comment author: Good_Burning_Plastic 01 September 2015 06:33:46PM *  5 points [-]

Well, he's very likely substantially healthier than the average 69-year-old American man, so I'd be willing to bet at 1/50 odds that he will survive to the election.

Comment author: MrMind 07 September 2015 12:13:23PM 2 points [-]

I think Scott Adams wildly overestimates the power of conversational hypnosis.
First of all, yes, there have been prominent public figures who are well versed in the art. But that's no argument at all: how many people are trained in conversational hypnosis (or NLP, or what have you), and how many of those are hyper-successful? And how many hyper-successful people are not trained in Ericksonian hypnosis? You could even make the point that Steve Jobs and Bill Clinton were successful despite being trained in that art.

There's also something to be said about linear returns on persuasion. If you are 2X more persuasive than your opponent, would you gain twice the supporters? I'm not very confident in that hypothesis either.

Comment author: James_Miller 07 September 2015 03:43:39PM 1 point [-]

There might be a network externality effect with persuasion, where the more people I persuade the more persuasive I become because of social proof issues. In this situation, the returns to persuasion are exponential.

Comment author: NancyLebovitz 01 September 2015 02:46:12AM 2 points [-]

Did Adams praise Obama for skillful use of vagueness? "Hope" seems to be in the same category as "take your country back".

Comment author: knb 01 September 2015 12:17:54AM 2 points [-]

I think Adams is right that Trump has played the media exceedingly well and he has clearly surprised a lot of people. Some Republican pollsters have focus-grouped Trump supporters and found an extreme level of antipathy among them toward "establishment" Republicans. So it is unlikely his current supporters will abandon him in a sudden collapse, which is the failure mode a lot of Trump-skeptics have been describing. That means Trump will likely stay in the race for a long time--unless he gets bored and drops out. I doubt Trump will actually drop out though, he seems to enjoy the fray and clearly hates many establishment conservatives enough to stay in just to have a platform to keep attacking them.

Most likely Trump will split the anti-establishment vote with Ben Carson and eventually most of the establishment candidates will drop out and throw their support to an establishment survivor, who will manage to beat Trump with solid but not huge majorities and take the nomination. If Trump does manage to win the nomination, it is unlikely he will win the white house--odds are less than even, maybe 2:1 against him. Overall I would estimate a ~10% chance Trump wins the presidency.

Comment author: Elo 30 August 2015 09:27:15PM 2 points [-]

Meta: in posting the open thread at this time, I note that it is Monday where I am in Sydney, Australia, even though this is roughly 6-12 hours earlier than usual to start the open thread. (Hope you all have a good week ahead.)

Comment author: Gunnar_Zarncke 30 August 2015 09:37:03PM 2 points [-]

I like Comic Sans too, but is it intended?

Comment author: Elo 30 August 2015 10:06:16PM 2 points [-]

apologies again! (same as last OT)

Comment author: ike 06 September 2015 02:14:56PM *  2 points [-]

One of my professors claimed that postmodernism, and particularly its concept of "no objective truth", is responsible for much of the recent liberalism of society, through the idea of "live and let live". (Specific examples given were attitudes towards legalization of gay marriage and drugs.) I pointed out that libertarianism and liberalism predated postmodernism historically, and they said that that's true, but you can still trace the popularity back to postmodernism.

Is this historically accurate? If not, is there something I can point to that would convince them? It seems to me that the shift in society is much more a shift on the object level questions than on the meta level "should we ban things we disagree with", but I don't know very much recent history of philosophy (it isn't strictly their field either, so I'm justified in not taking them at face value).

Edit: re-asked on latest OT here

Comment author: btrettel 02 September 2015 05:35:24PM *  2 points [-]

Does anyone know of a good life expectancy calculator? Preferably one which has good justification behind the model, and also has been tested.

I tried this calculator, but I noticed a few issues. First, it tells me I should start doing conditioning exercise... even though I did check that off. I think that part of the calculator is broken. It also seems to think that taller people live longer, when from what I understand it's well accepted that the opposite is true. Some of its other features seem unjustified to me; for example, it seems to think you get a life expectancy boost from eating less than 10% of your calories from fat, but I can't find any evidence for that.

Good life expectancy calculators seem very valuable to those interested in longevity. Perhaps some people at LessWrong should create some sort of model. Though I have little experience with these sorts of statistical models, I think the Monte Carlo method might be useful here to get a distribution. If we put the code on GitHub then others can take a look at its guts and submit corrections/improvements/pull requests if they want to.
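To make the Monte Carlo idea concrete, here's a minimal sketch of the kind of model I mean. The hazard function below is entirely made up for illustration (a toy Gompertz-style curve with an assumed baseline and doubling time, not real actuarial data); a real model would fit these from life tables and risk factors.

```python
import random

def simulate_lifespan(current_age, annual_hazard):
    """Walk forward one year at a time; die when a uniform draw
    falls under that year's hazard. annual_hazard(age) returns the
    probability of dying during that year of age."""
    age = current_age
    while random.random() > annual_hazard(age):
        age += 1
    return age

def toy_hazard(age):
    """Toy Gompertz-style hazard: mortality roughly doubles every 8 years.
    The baseline (0.0005 at age 30) is a made-up illustrative number."""
    return min(1.0, 0.0005 * 2 ** ((age - 30) / 8))

random.seed(0)
samples = sorted(simulate_lifespan(30, toy_hazard) for _ in range(10000))
print("median age at death:", samples[len(samples) // 2])
print("central 90% interval:", samples[500], "-", samples[9500])
```

The payoff of doing it this way is exactly the distribution: instead of a single point estimate, you get a median plus an interval that honestly reflects the spread, and swapping in better hazard functions (or toggling off features you distrust) is a one-line change.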

Comment author: Lumifer 02 September 2015 05:54:12PM *  3 points [-]

A good life expectancy calculator implies a good model of which factors drive longevity. I don't believe such a model exists (for healthy people -- the effects of various illnesses on your life expectancy are known much better). There are a lot of correlation studies but correlations and causality are not quite the same thing.

Perhaps some people at LessWrong should create some sort of model.

"Some sort of a model" is a very low bar -- presumably you would like the model to be good. People who will be able to make a good comprehensive model of how various health/diet/lifestyle/etc. interventions affect longevity will probably be in the running for a Nobel.

It's like saying that you found online some investment advice which doesn't look too good, perhaps some LW people would like to construct a model of the markets that will give better advice. Well...

Comment author: btrettel 02 September 2015 06:24:27PM *  1 point [-]

Fair points. I don't think what we understand about longevity is as bad as what we understand about investments.

I suppose what I'm looking for is a model which 1) doesn't have any obvious bugs, 2) doesn't contradict anything we do know, and 3) has at least some evidence behind the model. If it produces a fairly wide distribution because that represents the (poor) state of our knowledge, I think that's fine.

The issue of correlation vs. causation also is important, and I'm not sure what we could do about it short of allowing someone to turn off certain features of the model if they believe them to be untrustworthy. For example, I've seen a fair bit about how marriage is correlated with an increase in longevity, and it seems obvious to me that any similar sort of social structure where one has frequent socialization and possibly receives feedback and care is probably where the real benefit is. So I think you can say you are married if you believe your situation is equivalent in some way. Obviously these details need to be shown more rigorously, but this is the basic argument.

Comment author: [deleted] 02 September 2015 09:03:48AM *  2 points [-]

Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism? A common reaction of national conservatives to this idea is that what happened during imperialism is time-barred and each country is responsible for their citizens.

Comment author: knb 03 September 2015 12:41:48AM 6 points [-]

How much does Mongolia owe Russia? How much do North African countries owe Europe for the millions of Europeans kidnapped and sold into the Arab slave trade in north Africa? The notion is itself ridiculous.

Comment author: Sarunas 02 September 2015 08:23:43PM *  6 points [-]

It is relatively easy to understand the situation when one person owes money to another person, having borrowed it before. It is also not much more difficult to understand the situation when one person owes another person a compensation for damages after being ordered by court to pay it. Somewhat more vague is a situation when there is no court involved, but the second person expects the first one to pay for damages (e.g. breaking a window), because it is customary to do so. All these situations involve one person owing a concrete thing, and the meaning of the word "owes" is (disregarding edge cases) relatively clear.

Problems arise when one tries to go from singular to plural while still using intuitions from the singular usage of a verb. Quite often, there are many ways to extend the meaning of a singular verb to a plural one that are each compatible with the meaning of the former. For example, one can extend the singular verb "decides" to many different group decision-making procedures (voting, lottery, one person deciding for everyone, etc.); saying "a group decides" simply obscures this fact.

Concerning the word "owe", even when we have a well defined group of people, we usually prefer to either deal with them separately (e.g. customers may owe money for services) or create a juridical person which helps to abstract a group of people as one person and this allows us to use the word "owe" in its singular verb meaning. There are more ways to extend the meaning of the word "owe" from singular to plural, but they are quite often contentious.

"Western civilizations" is a very abstract group of people. It is not a well defined group of people. It is not a juridical person. It is not a country. It is not a clan. The singular verb "owes" is clearly inapplicable here, and if one wants to use it here, one must extend its meaning from singular to plural. But there seems to be a lot of possible extensions. Therefore one has to resort to other kinds of arguments (e.g. consequentialist arguments, arguments about incentives, etc.) to decide which meaning one prefers. But if that is the case, one can bypass the word "owe" entirely and go to those arguments instead, because that is essentially what one is doing, because words whose meanings one knows only very vaguely probably do not do much in actually shaping the overall argument.

In addition to that "being disadvantaged as a result of imperialism" is very dissimilar from "having a window broken by a neighbour", it is not a concrete thing. The central example of "owing something" is "owing a concrete and well defined thing". Whenever we have a definition that works well for a central example and we want to use it for a noncentral one, we again must extend it and there are often more than one way to extend it (Schelling points sometimes help to choose between all possible extensions, but often there are more than one of them and choice of the extension becomes a subject of debate).

In general, I would guess that if someone argues that an entity as abstract as "western civilizations" owes something to someone, most likely they are either unknowingly rationalizing the conclusion they came to by other means or simply sloppily using an intuition from the usage of the singular verb "owes". I think that the meaning of the word can be extended in many ways, many of which would still be compatible with the meaning of the singular word and some of them would imply "new generations are not responsible for the sins of the past ones", while some of them wouldn't, therefore it is probably better to bypass them altogether and attempt to solve a better defined problem.

Other words where trying to go from singular to plural often causes problems are: "owns", "chooses", "decides", "prefers" (problem of aggregation of ordinal utilities), etc.

Comment author: RichardKennaway 03 September 2015 09:39:43AM 4 points [-]

Is anywhere on Earth inhabited by the descendants of the humans who first moved in?

Comment author: Good_Burning_Plastic 05 September 2015 03:03:08PM 4 points [-]

Off the top of my head Iceland for sure, Māori-inhabited areas, and possibly the Basque Country. But yes, that's pretty much the exception.

Comment author: NancyLebovitz 06 September 2015 06:30:21AM 0 points [-]

I'm not sure about "first moved in" but there are families in England who have been there for a very long time.

Comment author: ChristianKl 02 September 2015 12:30:50PM 4 points [-]

If you focus on utilitarianism the question doesn't come up. The important thing isn't who "owes" but how we can produce utility. If that means the best way is to give bednets to Africans, then that's the thing to do, regardless of the concept of "owing".

Comment author: VoiceOfRa 07 September 2015 07:50:26PM *  3 points [-]

Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism?

Do all other civilizations owe something to western civilization for the benefits they gained stemming from western science and technology?

Comment author: polymathwannabe 02 September 2015 02:33:20PM 2 points [-]

I would only count debts toward the specific peoples directly affected; e.g. the Spanish Empire lived off Bolivian silver, the Belgians worked the Congolese to death, and the United States is literally built on stolen Native land. Those examples and many others allow for a case in favor of reparations.

However, the passage of time sometimes blurs the effects of exploitation and aggression. Should the UK sue Denmark for the Norman Conquest? Should Italy sue Germany because Germanic tribes destroyed the Roman Empire? Should Hungary sue Mongolia for what the Golden Horde did to them? I admit I don't know how to answer to that in a way that is consistent with my first paragraph.

Comment author: polymathwannabe 02 September 2015 06:31:21PM 0 points [-]
Comment author: Lumifer 02 September 2015 03:12:33PM 2 points [-]

Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism?

No.

Comment author: [deleted] 02 September 2015 06:49:36PM -1 points [-]

Could you explain why you see it this way? Our wealth is partly based on exploitation. Wouldn't it be fair to fix the damage we've done to exploited people? This could perhaps be also justified in terms of utilitarianism, as fairness might bring people closer together which prevents wars.

Comment author: knb 03 September 2015 12:55:12AM 6 points [-]

Our wealth is partly based on exploitation.

Not to any significant extent. Most colonized places were net money-losers for the colonizer for most of their history. In addition, I doubt most western-colonized countries were made substantially worse off compared to non-colonized countries, since the Europeans introduced some level of infrastructure, medicine, etc.

Wouldn't it be fair to fix the damage we've done to exploited people?

First of all, who is this "we" you speak of? More importantly, there are a few "control-group" countries which were not colonized while their neighbors were, like Siam (modern Thailand) and Ethiopia, and they don't seem better off than their neighbors. Unlike most African countries, which abolished slavery when the Europeans took control, Ethiopia banned slavery only in 1942--under pressure from the British, who were a bit embarrassed to be allied with a slave state.

Comment author: Lumifer 02 September 2015 07:58:30PM 1 point [-]

I don't see any basis for this claim. More explicitly, I don't see any reasonable and consistent legal/moral theory which would justify such a claim. Note that I do not consider the popular "deep pockets" legal theory to be reasonable.

Comment author: Strangeattractor 03 September 2015 06:23:34AM 1 point [-]

I think that framing "Imperialism" as belonging to the past is inaccurate.

Many of the problematic behaviours grouped together into the term "Imperialism" have not actually stopped. There are Western developed countries that are doing horrible things to non-Western developing countries right now, and doing horrible things to their own people too.

I think a good first step would be to stop doing the horrible stuff now. If the problematic behaviour stopped, the topic of redress for past wrongs could be considered from a better vantage point. "I'm sorry I killed your ancestors and stole their stuff 100 years ago" tastes like ashes when coming from someone who is killing your family and stealing your things now, or who is doing something more subtle but equally awful.

"Disadvantaged" is a word that glosses over the damage done. Also, the whole question could benefit from being more specific and defining terms better.

Comment author: RichardKennaway 07 September 2015 11:29:29AM 0 points [-]

A common reaction of national conservatives

I get the feeling that "national conservatives" is the name of some specific political movement or affiliation in your own country. It is not a phrase I have heard before. What specifically does it refer to? The movement discussed in the Wiki article appears to be of significance mainly in the former-communist European countries, and even there consists mainly of minority parties. These countries are not the ones for which an argument is being made for post-imperial reparations.

Comment author: Clarity 01 September 2015 06:47:32AM 1 point [-]

I hypothesise that there are several topics for which you can reliably expect upvotes or downvotes depending on your position, regardless of your content.

  • cryonics
  • effective altruism
  • synthetic biology
  • libertarianism
  • meta
  • regular threads (e.g. open thread)
  • posts by top posters
  • posts referring to rationality blogs
  • conceding a mistake
Comment author: Clarity 01 September 2015 02:04:40PM 1 point [-]

Are there any advocacy groups for sex buyers or 'johns'? They're an affluent bunch, and they're not necessarily constrained by the scrupulosity that advocates for, say, sex workers' rights may have. It surprises me that they don't exist when advocacy groups for smokers and other vices do, and when only advocacy groups for the suppliers and workers in the sex trade seem to exist.

Comment author: Jiro 01 September 2015 03:32:02PM 5 points [-]

Being a sex buyer is low status. Being in an oppressed group such as sex workers is high status in many political contexts.

Comment author: Lumifer 01 September 2015 04:36:32PM 2 points [-]

Being a sex buyer is low status.

That depends. Being a john is low-status. Inviting girls over to your yacht for champagne and caviar is high-status.

Being in an oppressed group such as sex workers is high status in many political contexts.

That really depends. A whore is not a high-status profession.

Comment author: Jiro 01 September 2015 05:59:50PM *  4 points [-]

Inviting girls over to your yacht for champagne and caviar is high-status.

That's not being a "sex buyer" within the context of needing advocacy for sex buying.

That really depends. A whore is not a high-status professsion.

Thus, "in many political contexts".

Comment author: Viliam 02 September 2015 07:42:18AM 1 point [-]

I wonder what is the lesson here.

"If you want to buy sex for money, you better have a lot of money, or it will reflect poorly on you."

Or perhaps:

"Doing things in a way which demonstrates that you have a lot of money can make almost anything high-status."

Comment author: Lumifer 02 September 2015 02:39:13PM *  1 point [-]

If you want to buy sex for money, you better have a lot of money, or it will reflect poorly on you.

Or: be classy, not crass. Form and style matter.

It is, of course, easier to be classy when you have a yacht stocked with champagne and caviar on hand... X-/

Doing things in a way which demonstrates that you have a lot of money can make almost anything high-status.

Counter-example: Donald Trump. A dictionary counter-example: nouveau riche :-)

Comment author: NancyLebovitz 01 September 2015 03:53:58PM *  2 points [-]

From memory: Amnesty International has come out in favor of legalizing prostitution. They were grudging about admitting that, while they aren't going to call it human rights, they have to support something like human rights for prostitutes' customers and agents.

Comment author: ChristianKl 01 September 2015 10:06:56PM 3 points [-]

They were grudging about admitting that, while they aren't going to call it human rights, they have to support something like human rights for prostitutes' customers and agents.

I read the Amnesty paper and it didn't say anything about rights for customers or agents.

Comment author: James_Miller 01 September 2015 05:50:45PM 0 points [-]

Hence the term "status whore."

Comment author: WhyAsk 03 September 2015 07:32:57PM 1 point [-]

In view of this http://essay.utwente.nl/66307/1/Bolle%20Colin%20-s%201246933%20scriptie.pdf did the smartphone makers anticipate addiction, as did the tobacco companies in the U.S.?

Certainly both are profiting from it.

For me it seems like some version of The Tulip Mania.

Comment author: evand 01 September 2015 02:59:41PM 1 point [-]

I'm looking for a good demonstration of Aumann's Agreement Theorem that I could actually conduct between two people competent in Bayesian probability. Presumably this would have a structure where each player performs some randomizing action, then they exchange information in some formal way in rounds, and eventually reach agreement.

A trivial example: each player flips a coin in secret, then they repeatedly exchange their probability estimates for a statement like "both coin flips came up heads". Unfortunately, for that case they both agree from round 2 onwards. Hal Finney has a version that seems to kinda work, but his reasoning at each step looks flawed. (As soon as I try to construct a method for generating the hints, I find that at each step when I update my estimate for my opponent's hint quality, I no longer get a bounded uniform distribution.)
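Here's the trivial two-coin version worked through explicitly (my own sketch, using exact fractions): each player's first announcement pins down their coin exactly, so the estimates coincide from round 2 onward in every case.

```python
from fractions import Fraction

def first_estimate(my_coin_heads):
    # P(both heads) given only my coin: 1/2 if mine is heads, 0 otherwise.
    return Fraction(1, 2) if my_coin_heads else Fraction(0)

def second_estimate(my_coin_heads, their_first_estimate):
    # Their first estimate reveals their coin exactly,
    # so the answer is now certain.
    their_coin_heads = their_first_estimate == Fraction(1, 2)
    return Fraction(1) if (my_coin_heads and their_coin_heads) else Fraction(0)

for a in (True, False):
    for b in (True, False):
        e1a, e1b = first_estimate(a), first_estimate(b)
        e2a = second_estimate(a, e1b)
        e2b = second_estimate(b, e1a)
        assert e2a == e2b  # agreement from round 2 onwards, in all four cases
        print(a, b, "round 1:", e1a, e1b, "round 2:", e2a)
```

This is exactly why the example is too easy: the round-1 announcement leaks each player's entire private state, so there is nothing left to converge on.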

So, what I'd like: a version that (with at least moderate probability) continues for multiple rounds before agreement is reached; where the information communicated is some sort of simple summary of a current estimate, not the information used to get there; where the math at each step is simple enough that the game can be played by humans with pencil and paper at a reasonable speed.

Alternate mechanisms (like players alternate communication instead of communicating current states simultaneously) are also fine.

Comment author: Lumifer 02 September 2015 07:38:57PM *  7 points [-]

Bridge, the card game. Bidding is the process of two players exchanging information about the cards they hold via the very limited communications channel (bids). The play itself is also used to transfer more information about which cards remain in the hand.

I don't know if that will work as a demonstration of the Aumann's Theorem, though, bridge gets very complicated very fast :-/

Comment author: evand 02 September 2015 07:48:34PM 2 points [-]

That's an excellent practical example, though it doesn't really have the explicit probability math I was hoping for.

In particular, I like that you'll see stuff like which player thinks the partnership has the better contract flips back and forth, especially around auctions involving controls, stops, or other specific invitational questions. The concept of evaluating your hand within a window ("My hand is now very weak, given that I opened") is also explicitly reasoning about what your partner infers based on what you told them.

I think the most important thing here might be that bridge requires multiple rounds because bidding is limited bandwidth, whereas giving a full-precision probability estimate is not.

Comment author: Lumifer 02 September 2015 08:04:49PM *  3 points [-]

If you want explicit probability math, you might be able to construct some kind of cooperative poker (for example, allow two partners to exchange one card from their hands following some very restricted negotiations). The probabilities in poker are much more straightforward and amenable to calculation.

Comment author: gjm 02 September 2015 11:20:51AM 2 points [-]

The two-coins example might be useful as a first step, even if you then present a more difficult one.

Comment author: polymathwannabe 01 September 2015 04:26:00PM 2 points [-]

How about some variation on Bulls and Cows?

Comment author: Elo 02 September 2015 12:09:41AM 0 points [-]

Based on simple coin flip; other games:

  • Several coins;
  • scissors paper rock (and then iterated)

I am sure there are more small games that have a similar "known" problem space.

Comment author: evand 02 September 2015 06:39:12PM 0 points [-]

What change would you make that results in multiple rounds being required?

For example, if each player flips multiple coins, and then we share probability estimates for "all coins heads" or "majority of coins heads" or expectations for number of heads, in each case the first time I share my summary, I am sharing info that exactly tells the other player what information I have (and vice versa). So we will agree exactly from the second round onwards.

Comment author: Elo 02 September 2015 07:06:33PM *  0 points [-]

example I was thinking:

each player flips 3 (or 10) coins of their own (giving them various possibilities on what they think the whole coin-space looks like). They present their 90% and 99% confidence intervals on there being more than 4 (or 9) heads. Round 2: repeat. (Also make statements based on what they think the state of play is, and try to get to the answer before the other person. So maybe make statements that can be misleading?)

Not sure how easy it is to tease out that information for a human. Maybe a computer could solve it, but not so much a human...

"I flipped 10 coins; My 90% confidence that there are at least 7 of each heads and tails is 90%. 99% confidence is 60%."

confidence for "at least 10 heads and 6 tails" etc.

Comment author: evand 02 September 2015 07:39:43PM 0 points [-]

Here's how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for "there are 4+ heads total" is now 4/8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0/8) (1H, 1/8) (2H, 4/8) (3H, 7/8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
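For anyone who wants to check that table, here's a quick brute-force verification of the per-outcome estimates (enumerating the other player's 8 equally likely coin sequences):

```python
from fractions import Fraction
from itertools import product

def estimate(my_heads):
    """P(total heads >= 4 out of 6) given I saw my_heads among my 3 coins:
    the other player must contribute at least 4 - my_heads heads out of 3."""
    outcomes = list(product([0, 1], repeat=3))  # 8 equally likely sequences
    favorable = sum(1 for o in outcomes if my_heads + sum(o) >= 4)
    return Fraction(favorable, len(outcomes))

for k in range(4):
    print(k, estimate(k))
# prints: 0 0 / 1 1/8 / 2 1/2 / 3 7/8  (i.e. 0/8, 1/8, 4/8, 7/8)
```

Since the four possible announcements (0/8, 1/8, 4/8, 7/8) are all distinct, the first announcement is an injective function of the hidden hand, which is exactly why the game collapses after one round.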

(Also, you're not using "confidence interval" in the correct manner. A confidence interval is defined over an expectation, not a posterior probability.)

I still don't see any version of this that's simpler than Finney's that actually makes use of multiple rounds, and when I fix the math on Finney's version it's decidedly not simple.

Comment author: OrphanWilde 04 September 2015 06:50:11PM 1 point [-]

Regarding prediction markets and regulation, does anyone know whether a betting market wherein the payout for the betting contract goes to the winner's choice in charities (as opposed to going to the winner) would avoid most or all of the legal issues involved?

Comment author: gwern 06 September 2015 10:24:33PM 1 point [-]

So, Long Bets? Betting for charity has always been legal AFAIK.

Comment author: Lumifer 04 September 2015 07:13:55PM 0 points [-]

Ask a lawyer, it probably depends on the exact wording of anti-gambling laws. The answer also is likely to depend on whether the betting market collects any fees in process.

Comment author: fubarobfusco 04 September 2015 06:48:17PM 1 point [-]

A summary of rather counterintuitive results of the effect of priming on raising people's performance on various tests of cognitive abilities, and the ability to negate (or enhance) the effects of stereotype threat through priming:

"Picture yourself as a stereotypical male"

(It's not all about gender, either. Some of it is about race! How exciting!)

Comment author: username2 06 September 2015 09:53:28PM 2 points [-]
Comment author: Douglas_Knight 06 September 2015 08:36:20PM 1 point [-]

Yes, effects that raise performance are good because they rule out a number of problematic mechanisms. However, this experiment has no control group and thus it does not have this benefit.

Comment author: Lumifer 03 September 2015 07:13:37PM 1 point [-]

Killer robots about to be released into the world's oceans!!eleven!

So says Auntie Beeb.

Comment author: WalterL 03 September 2015 08:11:37PM 0 points [-]
Comment author: Clarity 03 September 2015 01:50:36PM 1 point [-]

I bought a $200 prepaid debit card to precommit to getting a beeminder account that won't fuck up my bank balance. I plan to use it to give up pornography and excessive masturbation (<or=1 a week is my goal). However, $200 doesn't have a lot of marginal value to me. I'm thinking of exploiting my irrationality and warm fuzzies by precommitting to donate it to a warm fuzzies charity, or maybe I'll put the money towards potential dates so I can get a girlfriend as a substitute if I'm successful in nofapping or quitting porn. Ideally there would be a system whereby I could donate to people who would be incentivised to help me stay on the yellow road, at the end of passing the Beeminder test. I hope beeminder will let me do that. Any tips or comments? I've never done beeminder before.

Comment author: Dorikka 03 September 2015 04:00:04PM 2 points [-]

I am curious about your terminal goal here.

Comment author: btrettel 03 September 2015 08:54:20PM *  0 points [-]

I'm confused. Do you want to use the $200 to pay people, charities, or a dating fund when you derail? Beeminder does not allow that directly, but you are free to do additional things when you derail if you want. However, Beeminder does have "supporters", who will get an email when you derail, and you could use this to do something similar (like get them to bug you to pay them).

Comment author: Clarity 12 September 2015 06:23:16AM 0 points [-]

Yes, either, and, or.

Comment author: Clarity 03 September 2015 11:57:22AM 1 point [-]

I have yet to see a treatise, for strategic managers or from academics of any domain, on the game theoretic implications of data science and data-driven firm behaviour in general.

I for one would expect data-driven organisations to act more rationally and therefore more predictably, meaning that game-theoretically optimal strategic behaviour, or rather an approximation of it (because many data-driven organisations will be stupid, like many poker players) forming a Nash equilibrium, would maximise expected utility. However, I don't see how machine learning provides an avenue for firms to inform their strategic multi-agent decisions. They instead need to consider artificial intelligence techniques more broadly and be able to frame machine learning in that context. This, I suspect, will lead to the goldrush for AGI development. As soon as the potential for this becomes common knowledge, linkedin losers will start hailing 'AI expert' as the sexiest job of the 21st century. MIRI, take heed of my warning: if you are not more transparent with your research agenda (which, for those who don't know, is still secret in part) you may find yourself developing FAI solutions far too slowly.

Release your agenda and let others work on your problems cooperatively. Maybe you'll even get a more heterogeneous audience at the Intelligent Agents Forum. Maybe mainstream researchers can craft work you can actually use on the mathematical foundations of AI or UAI. I suspect the reason that this community blog, albeit devoted to human rationality and not machine rationality, devolves into topics like 'polygamy' is that we don't have shared problems to solve.

Human rationality is a very, very awkward construct, and the problem space is unclear and tangential, albeit related to MIRI's work, which, let's admit, is the very reason this place exists. Let us run wild, and perhaps LessWrongers will start alternative agendas like developing criminal networks and intelligence networks so that potentially hostile AI could be detected in advance and stopped coercively. I'm just giving the first example I could think of.

My point is, you don't have any significant proprietary hard assets, so why shouldn't I or any other particular funder instead create a prize or award for a more transparent FAI research organisation to pivot off your incredible work? I'm not in a position to judge whether or not your ongoing contributions are essential, but this could also be a good opportunity for the community to discuss what will happen if or when you die or become incapable of contributing to the community. The same goes for other critical members of the community. Are there intellectual succession processes in place?

Comment author: Clarity 02 September 2015 11:18:36AM -2 points [-]

If you think you have been infected or potentially infected with HIV, IMMEDIATELY go to an emergency department and explain your situation. You can get a treatment that can stop you getting HIV! Here's more information relevant to Australians. Yes, science has come this far!

Also, if you are engaging in risky sexual behaviour like having sex without a condom, guys, get some of your foreskin chopped off. It reduces your HIV risk. Women, note that it doesn't reduce your risk of getting infected from an infected male.

Comment author: Clarity 06 September 2015 11:37:21AM *  0 points [-]

Cause prioritization looks at the importance, tractability, and neglectedness of causes.

  • “What is the problem?” = importance

  • “What are possible interventions?” = tractability

  • “Who else is working on it?” = neglectedness

Where are neglected causes found?

Comment author: Clarity 04 September 2015 02:28:42PM *  0 points [-]

Recently a friend told me that values are important to relationship success. I met a businessperson the same day who claimed knowing his values got him to where he is as a social entrepreneur. Long ago, my psychiatrist asked me about my values, and I didn't know. Psychologists have tried to help me know my values on several occasions, but I forget.

Just now I looked up how to find my values and found this article.

Their technique isn't any good for me since my memory is shoddy, so I just selected from the list of values. Hope I haven't unconsciously chosen socially desirable options.

Grouped by theme after choosing, I chose:

leadership:

  • boldness
  • legacy
  • uniqueness
  • vision
  • empathy (and the lack thereof, 'ruthlessness')

rationality:

  • accuracy
  • prudence
  • intelligence
  • efficiency
  • truth-seeking
  • temperance

Comment author: Clarity 03 September 2015 01:28:09PM 0 points [-]

Are there any good reasons to use Hotmail (outlook.com online) instead of Gmail, apart from switching costs if you already use Outlook? Outlook is associated with business and therefore carries higher status and formality, perhaps?

Comment author: Clarity 03 September 2015 09:44:47AM *  0 points [-]

Why aren't more LWers public intellectuals in the conventional sense, making appearances on radio or television news bulletins? The benefits seem obvious, if you're okay with fame. It's a position of influence, and it seems relatively easy to contact news organisations to say you have original research from a reputable organisation; many of us are academics, so that's probably true. Perhaps there is an even easier way to contact many news distributors at once to get your name out there and get offers coming to you, something easier than manually sending out press releases, for instance. Those are probably paid PR services, but there's probably a free service somewhere too.

The only existing ways I know are to get listed in expert databases like this one for Australia or for the world. I vaguely remember one run by an institute in Australia that requires experts to have completed meta-analyses or systematic reviews in their area, but it's for consulting work, not journalists, and the institute gets a cut (but they are prestigious, so it's good affiliation). Their name starts with K if I remember correctly. I don't know why I tend to remember the first letters of things, but I tend to be pretty accurate with it. There's probably a mnemonic explanation out there that some cogpsy LWer will inform me about.

Comment author: drethelin 04 September 2015 07:49:09PM *  1 point [-]

In general, having weird beliefs and politics makes it very dangerous to speak on live TV. Interviewers and editors are incentivized to make you seem crazy, scary, or ridiculous depending on where you appear. Eliezer is especially leery (justifiably so given his experiences with journalists) of this sort of thing, and he's the most prominent LW public intellectual.

There's also a question of need: given that Elon Musk knows about and has given millions of dollars to the cause of AI risk, does MIRI really need to do TV publicity?

Comment author: Clarity 02 September 2015 11:13:09AM *  0 points [-]

CBT is becoming less effective and (by the article author's insinuation) is creating disability.

For SSC fans, here's an article that's probably about the same thing, but I can't bring myself to read the inane story at the start.

According to the first article's author, the declining-efficacy effect is seen in psychiatry in particular, but also in medicine more generally. Interesting.

How does one deliver Interpersonal psychotherapy? It's just as effective as CBT without the psychobabble. I can't find information on what is actually done, however.

Comment author: ChristianKl 02 September 2015 12:33:17PM *  3 points [-]

How does one deliver Interpersonal psychotherapy? It's just as effective as CBT without the psychobabble. I can't find information on what is actually done, however.

If you can't find information on what's done, why do you think there's less psychobabble than in CBT?