
Open thread, July 29-August 4, 2013

3 Post author: David_Gerard 29 July 2013 10:26PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Of course, for "every Monday", the last one should have been dated July 22-28. *cough*

Comments (381)

Comment author: Epiphany 03 August 2013 04:26:19AM *  1 point [-]

I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives that cause people to do unethical things. I was wondering if anyone has explored this in depth, or happens to know a term for "perverse incentives that cause people to do unethical things" (regardless of whether it's part of economics or some other subject), as I can't seem to find one.

Comment author: NancyLebovitz 04 August 2013 11:59:37PM *  5 points [-]

Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management has a fair amount about the limits of incentive plans.

From memory: incentives can work for work that's well-defined and can be done by one person. Otherwise, the result is people gaming the system and not cooperating with each other.

I don't remember whether the book covered something I heard about in the '70s or '80s: a car company that gave incentives to teams assembling whole cars, rather than using an assembly line.

I was told about a shop owned by partners which had an incentive system for bringing in sales during the shifts the partners worked. The result was that the partners wouldn't tell customers to come back if it might be during someone else's shift.

Comment author: shminux 03 August 2013 05:25:33AM -2 points [-]

perverse incentives that cause people to do unethical things

For example...?

Comment author: wedrifid 03 August 2013 03:55:51PM *  5 points [-]

For example...?

For example allocating funds to fire departments based on how many fires they put out. That encourages them to stop putting work into fire prevention and, at the extreme, creates an incentive for outright arson.

The medical system. (Does that even need explaining?)

Comment author: Tenoke 31 July 2013 11:12:29AM *  13 points [-]

After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way, they should please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about them. Please don't let my potential harm discourage you.

Comment author: Rukifellth 31 July 2013 11:24:33PM *  0 points [-]

You magnificent, magnanimous son of a bitch.

Comment author: Benito 31 July 2013 11:29:39PM 3 points [-]

Well that escalated quickly.

Comment author: Rukifellth 31 July 2013 11:33:19PM 1 point [-]

I think a level of gaiety and excitement is appropriate given the subject.

Comment author: sixes_and_sevens 31 July 2013 02:00:14PM 0 points [-]

Can you tell us what you're trying to achieve with this?

Comment author: Tenoke 31 July 2013 02:16:39PM 5 points [-]

I'm interested in the responses, since I actually think I can learn some useful things if anyone shares something good. Also, I assign significantly less than 1% chance that anyone will actually tell me anything 'dangerous' - for example, I think Roko's is as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens, if that's your fear.

Comment author: sixes_and_sevens 31 July 2013 02:38:54PM 0 points [-]

It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Comment author: David_Gerard 31 July 2013 10:02:16PM -1 points [-]

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much none. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Comment author: gwern 31 July 2013 10:10:27PM 5 points [-]

(Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

And even more obviously, epilepsy. Yet, I don't understand why you would except them.

'You see, X does not exist, since I choose to ignore all the cases in which X does exist; I hope you'll agree that this argument is watertight once you grant my premises.'

Comment author: David_Gerard 02 August 2013 11:51:40AM *  -1 points [-]

Gwern, this thread is about the Basilisk. Conflating that with epilepsy is knowing equivocation. Don't be dense, thanks.

Comment author: gwern 02 August 2013 02:35:27PM *  0 points [-]

No denser than thou, David:

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much none. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Who was it who brought up the Motif of Harmful Sensation, which is not limited to Roko's basilisk? Who was it who brought it up in order to define away examples of depression or OCD? Thou, David, thou.

Comment author: David_Gerard 02 August 2013 08:15:55PM -1 points [-]

The fictional trope is of one you wouldn't expect to be harmful. That's the literary point of it, and of the Basilisk: the surprise factor.

Comment author: gwern 02 August 2013 08:53:53PM 2 points [-]

The fictional trope is of one you wouldn't expect to be harmful.

And surely the animators who made that Pokémon episode expected it to be harmful, and they made those kids seize because they're simply evil.

No denser than thou, David.

Comment author: asr 31 July 2013 10:29:49PM *  4 points [-]

I think David has a point here.

The cases of sensory hazards you two have mentioned all affect people with identifiable susceptibilities - susceptibilities those people usually know about in advance, and which affect relatively small minorities.

Somebody might have a high confidence that they are non-depressed, non-OCD, non-epileptic, etc. Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

Comment author: gwern 31 July 2013 11:18:26PM 4 points [-]

Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

But this is a different question. You have quietly redefined the question "are there harmful sensations to people?" - to which the answer is overwhelmingly, resoundingly, yes, there absolutely are - to 'are there harmful sensations to a newly redefined subset of people which we will immediately update if anyone produces further examples, so actually what I meant all along was "are there harmful sensations which we don't yet know about?"'

Or to put it more simply: 'Can you provide an example of a harmful sensation we don't yet know about?' Well... If I could produce a harmful sensation, you and David would simply say something like 'ah, well, I guess we now have a recognized medical problem, because look, we [commit suicide / collapse in convulsions / cease functioning / become obsessed with useless actions] if you expose us to X! That's a pretty serious psychiatric problem! But, are there examples of sensory hazards that apply to people who do not have a recognized medical problem?'

To which I can only shake my head no.

Comment author: asr 01 August 2013 04:56:19AM 2 points [-]

I hear you and I'm not trying to play the definition game or wriggle out of this. The way I conceptualized the question -- which I think the original poster had in mind and what I think is relevant to hazard risk assessment -- is more like one of these:

A) "What fraction of the public is seriously vulnerable to sensory hazards",

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

My hunch is that the answers are "less than 20%" and "close to zero." The example of epilepsy didn't shift my beliefs about either; epilepsy is rare and is rarely adult-onset for the non-elderly.

Comment author: gwern 01 August 2013 02:41:20PM 9 points [-]

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

So you're asking, what new medical sensory hazards may be developed in the future.

Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...

epilepsy is rare and is rarely adult-onset for the non-elderly.

There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).

Comment author: FourFire 01 August 2013 09:08:54AM *  2 points [-]

I think that most of the general examples have been mentioned: religion, among others, which has the rather mildly harmful "fear of hell" and its own propagation.

I think that any majorly harmful hazard which the general population was susceptible to would cause them all to shortly win Darwin Awards and remove themselves from the gene pool.

As such, we only have minority groups which are vulnerable to specific stimuli.

Comment author: Leonhart 05 August 2013 07:17:33PM *  1 point [-]

the rather mildly harmful "fear of hell"

The Typical Mind Fallacy is strong with this one.

remove themselves from the gene pool

It's a good thing that isn't a mortal sin! Oh no wait.

Comment author: Tenoke 31 July 2013 02:53:22PM 3 points [-]

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Other people have expressed a similar view, and since I don't mind, I can at least help satisfy people's curiosity in a way that causes minimal harm. However, I have found nothing worth talking about after some fairly extensive Google searches, so I am currently trying to think whether there is anyone knowledgeable I can e-mail (I already have a few people on the list) or whether there are any good search terms I haven't tried yet.

Comment author: sixes_and_sevens 31 July 2013 03:09:46PM 1 point [-]

It's probably worth clarifying what you consider a basilisk, as that might reduce any unpleasant-yet-irrelevant submissions.

Comment author: drethelin 31 July 2013 01:25:13PM 0 points [-]

Some basilisks are potentially contagious.

Comment author: Tenoke 31 July 2013 01:27:22PM 8 points [-]

Please give me examples.

Comment author: linkhyrule5 31 July 2013 09:54:20PM 1 point [-]

Ever seen one of those "If you don't forward this email to five friends, your (relation) will DIE!!1!!!one!" emails?

Comment author: drethelin 31 July 2013 08:22:34PM 7 points [-]

I think the most obvious semi-basilisk example is certain strains of religion. Insofar as a religion makes you believe you might go to hell, and that all your friends are going to hell, it will make you feel bad and also make you want to spread it to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual-basilisk consequences, but in essence there are meme complexes that contain elements demanding that you spread the whole complex. If someone is in possession of such a concept but has defeated it, or is in some way immune, it may still be correct for them not to tell you, for fear that you are not immune and will spread it to others once it has worked its will on you.

Comment author: Rukifellth 03 August 2013 02:32:39AM 0 points [-]

What do Christians do with the idea of "you're not spreading His Word fast enough"? It would be the same kind of scenario if there's nothing restraining Christian evangelical obligation.

Comment author: drethelin 03 August 2013 04:04:07PM 0 points [-]

Depends on the sect and person

Comment author: HungryHippo 31 July 2013 04:46:10PM *  1 point [-]

Please let us know if you receive anything interesting.

Comment author: Username 31 July 2013 08:24:41PM *  3 points [-]

Could you post how many you receive, and your realistic estimate of whether any are actually dangerous? Without specifics, of course. (If you take these things seriously, I suppose you should have a dead man's switch.)

Though for the record, I think the LW policy of not being able to discuss basilisks is ridiculous - a big banner at the top of a post saying, for example, 'Warning - Information Hazard to those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with the outright banning of discussion about specific basilisks/medusas, especially seeing as LW is one of the only places where one could have a meaningful conversation about them.

Comment author: wedrifid 01 August 2013 06:08:44AM 11 points [-]

After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way, they should please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about them.

We almost need a list for this. This makes half a dozen people I've seen making the same declaration.

Please don't let my potential harm discourage you.

Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you, and even less on your preferences. If they believe the basilisk is worthy of the name, they will expect giving it to you to result in you spreading it to others, thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.

Comment author: pinyaka 31 July 2013 03:38:38PM 5 points [-]

You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?

Comment author: Tenoke 31 July 2013 03:43:49PM *  6 points [-]

Memetic/Information Hazards - the term comes from here. Basically, anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count, for example, since I can simply never build a bomb, or use other instructions, etc.

Warning: Could be dangerous to look into it

Comment author: Lumifer 31 July 2013 04:44:58PM 6 points [-]

Memetic/Information Hazards

They really should be called Medusas -- since it's you looking at them, not them looking at you.

Comment author: Rukifellth 31 July 2013 11:39:24PM 2 points [-]

I think they both need to make eye contact.

Comment author: Tenoke 31 July 2013 05:06:17PM -1 points [-]

Yup, Medusa is what some blog posts use to describe them.

Comment author: HungryHippo 31 July 2013 04:47:28PM 1 point [-]

Do you of anyone claiming to be in possession of such a fact?

Comment author: Tenoke 31 July 2013 05:05:12PM -1 points [-]

I know some basilisks, yes, though there is nothing I regard as actually dangerous. However, sharing things like this publicly is considered bad etiquette on LessWrong.

Comment author: MixedNuts 01 August 2013 09:58:35AM -1 points [-]

Can you send me yours? Please PM me here or on IRC. I already know the most famous one here.

Comment author: Rukifellth 31 July 2013 11:57:41PM *  0 points [-]

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.

Comment author: wedrifid 01 August 2013 06:14:05AM *  1 point [-]

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.

Not just glib reassurance. There is also the outright mockery of those who advocate taking (the known pseudo-examples of) them seriously.

Comment author: Rukifellth 01 August 2013 10:40:13AM 0 points [-]

I can't imagine that anyone is advocating taking them seriously.

Comment author: pinyaka 02 August 2013 07:08:57PM 1 point [-]

If it's not dangerous, how does it constitute a hazard?

Comment author: Rukifellth 31 July 2013 11:28:48PM 0 points [-]

I know one.

Also I think you're missing the word "know"

Comment author: RichardKennaway 01 August 2013 11:48:48AM 3 points [-]

Eliezer is in possession of a fact that he considers to be highly dangerous to anyone who knows it, and who does not have sufficient understanding of exotic decision theory to avoid being vulnerable to it. This is the original basilisk that drew LessWrong's attention to the idea. Whether he is right is disputed (but the disputation cannot take place here).

In HPMOR, he has fictionally presented another basilisk: Harry cannot tell some other wizards, including Dumbledore, about the true Patronus spell, because that knowledge would render them incapable of casting the Patronus at all, leaving them vulnerable to having their minds eaten by Dementors.

Comment author: John_Maxwell_IV 06 August 2013 04:32:57PM 1 point [-]

Recent effective altruism job openings (all close within the next 10 days):

More info.

Comment author: Jayson_Virissimo 03 August 2013 02:56:39AM 1 point [-]

I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first-year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. Please let me know if you are interested, and, if so, what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC study hall, etc.).

About the course:

An introduction to fundamental data types, algorithms, and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Specific topics covered include: union-find algorithms; basic iterable data types (stacks, queues, and bags); sorting algorithms (quicksort, mergesort, heapsort) and applications; priority queues; binary search trees; red-black trees; hash tables; and symbol-table applications.

Recommended Background:

All you need is a basic familiarity with programming in Java. This course is primarily aimed at first- and second-year undergraduates interested in engineering or science, along with high school students and professionals with an interest (and some background) in programming.

Suggested Readings:

Although the lectures are designed to be self-contained, students wanting to expand their knowledge beyond what we can cover in a 6-week class can find a much more extensive coverage of this topic in our book Algorithms (4th Edition), published by Addison-Wesley.

Course Format:

There will be two lectures (75 minutes each) each week. The lectures are each broken into about 4-6 pieces, separated by interactive quiz questions to help you process and understand the material. In addition, there will be a problem set and a programming assignment each week, and there will be a final exam.
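For a concrete taste of the first topic on that list, here is a minimal sketch of weighted quick-union with path compression in Java (my own illustration, not course material; the class name and layout are arbitrary):

    // Weighted quick-union with path compression - the union-find
    // structure covered in the first week of the course.
    public class UnionFind {
        private final int[] parent;
        private final int[] size;

        public UnionFind(int n) {
            parent = new int[n];
            size = new int[n];
            for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
        }

        public int find(int x) {
            while (x != parent[x]) {
                parent[x] = parent[parent[x]]; // path compression (halving)
                x = parent[x];
            }
            return x;
        }

        public boolean connected(int a, int b) { return find(a) == find(b); }

        public void union(int a, int b) {
            int ra = find(a), rb = find(b);
            if (ra == rb) return;
            if (size[ra] < size[rb]) { int t = ra; ra = rb; rb = t; }
            parent[rb] = ra;       // attach the smaller tree under the larger
            size[ra] += size[rb];
        }
    }

With both weighting and path compression, a sequence of operations runs in nearly (but not quite) linear time, which is one of the analysis results the course works through.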

Comment author: gwern 03 August 2013 02:23:59AM 10 points [-]

Question: where can I upload jailbroken PDFs so that they are public & Google-visible?

For a job, I compiled ~100MB of lipreading research, some of it extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host the PDFs on gwern.net indefinitely, I feel it would be a massive waste to simply delete them.

I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.

(crosspost from Google+)

Comment author: Douglas_Knight 13 September 2013 05:45:51PM 2 points [-]

wordpress.com has a 3GB quota, and PDFs are visible to Google.

Comment author: hg00 07 August 2013 05:24:38AM 1 point [-]

Scribd is an abomination I despise.

Hm? As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents. They have to make money somehow. Would you rather they insert full-page ads in documents, the way YouTube now plays ads before video clips?

Anyway, one idea is to find people who run sites on topics related to the PDFs and suggest that they upload them to their sites. Should increase the google juice of both the documents and the sites of those who upload them, so win/win, right?

Comment author: gwern 07 August 2013 11:31:25PM 2 points [-]

As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents.

Money which they have zero right to collect and which breaks the implied contract they had with their previous users who uploaded those documents.

And their interface is butt-ugly with PDFs completely unreadable in their HTML version - but of course they don't let you download the PDFs because they're all behind the Scribd paywall.

Hosting documents. A pretty simple task, one would think, and yet Scribd manages to do it both scuzzily and poorly.

They have to make money somehow.

A fully-general excuse. But they are not owed a living.

Comment author: DanielLC 03 August 2013 03:57:02AM 1 point [-]

I'd guess Google Drive.

You could get a website that points to wherever the download actually is.

Comment author: gwern 03 August 2013 02:39:46PM 1 point [-]

That's one of the suggestions on G+ too. I didn't think that they would show up in Google proper and get indexed, but someone said they had for him, so maybe I will go with that. (Even if it doesn't work, I can always redownload and upload somewhere else, presumably.)

Comment author: Omid 02 August 2013 03:24:38PM *  2 points [-]

What's the most credible way to set up an information bounty?

Comment author: Qiaochu_Yuan 02 August 2013 10:14:35PM 2 points [-]

What's an information bounty? What kind of information are you looking for?

Comment author: Omid 03 August 2013 04:56:47AM 1 point [-]

Sorry, I guess the proper term is "truth bounty". The Truth Seal originally offered to arbitrate truth bounties, but it quickly went defunct.

Comment author: linkhyrule5 02 August 2013 08:04:07AM 5 points [-]

Waffled between putting this here and putting this in the Stupid Questions thread:

Why is the default assumption that a superintelligence of any type will populate its light cone?

I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).

But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity - but not at its maximum speed, nor did entire population centers move. The top few percent of adventurous or less-affluent people leave, and that is all.

On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.

There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.

So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.

Comment author: DanielLC 03 August 2013 03:26:45AM 3 points [-]

Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it. The longer you wait, the smaller that portion is. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.
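(A rough way to see this - my gloss, not DanielLC's: in a universe whose expansion accelerates, the comoving distance a probe launched at time t_0 can ever reach is bounded by

    d_max = c \int_{t_0}^{\infty} \frac{dt}{a(t)}

which converges because the scale factor a(t) grows roughly exponentially at late times. So only a finite volume is ever reachable, and that volume shrinks as the launch date t_0 moves later.)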

Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.

Comment author: Oscar_Cunningham 02 August 2013 11:22:48AM *  4 points [-]

It's because we want to secure as many resources as possible, before the aliens get to them.

I expect an FAI to expand rapidly, but merely to secure resources and save them for humans to use much later.

Comment author: Lumifer 02 August 2013 08:04:12PM 1 point [-]

I expect an FAI to expand rapidly, but merely to secure resources and save them for humans to use much later.

So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?

Comment author: Oscar_Cunningham 02 August 2013 08:36:52PM 1 point [-]

It's totally possible, but they'd have to have a good reason for staying hidden, given the reason nyan_sandwich gives.

Comment author: [deleted] 02 August 2013 08:16:15PM 1 point [-]

The most valuable of those resources is free energy. The sun is burning through it, converting it into low-grade light and heat at an incredible rate.

Comment author: Lumifer 02 August 2013 08:41:20PM 2 points [-]

So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?

Comment author: DanielLC 03 August 2013 03:23:38AM 1 point [-]

I suspect using them is more likely. They certainly aren't going to just let them keep wasting fuel. Not unless they have the opportunity to prevent even more waste. For example, they will send out probes to other systems before worrying too much about this system.

Comment author: Oscar_Cunningham 03 August 2013 12:16:39AM 1 point [-]

extinguishing stars

Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. For example, collapsing it into a black hole wouldn't be what we want, since the energy would be wasted.

Would star lifting be enough to slow the burning of a star to a standstill?

Comment author: [deleted] 02 August 2013 10:10:58PM 8 points [-]

Seems prudent to do.

Unless it values the existence of stars more than it values other things it could do with that energy.

Comment author: Nisan 04 August 2013 04:03:26PM 4 points [-]

Upvoted for being the first instance I've seen of someone describing extinguishing all the stars in the night sky as being prudent.

Comment author: wadavis 02 August 2013 02:59:09PM 1 point [-]

Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.

Comment author: linkhyrule5 02 August 2013 07:32:32PM 1 point [-]

Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more like dumping the Lost Roman Legion into prehistoric Asia and expecting them to divvy up the continent as fast as they could march.

Comment author: wadavis 02 August 2013 08:19:43PM *  3 points [-]

Davy Jones: One Soul is not equal to another

Jack Sparrow: Aha! So we've established my proposal is sound in principle, now we're just haggling over price.

-- Pirates of the Caribbean: Dead Man's Chest

Or in this case, scope instead of price.

Jokes aside, the point is that the sponsored settlement of the prairies had an influence on the negotiations over the Canada/U.S.A. border. If a human civilization believed it might face future competition with aliens for territory in space, it would make sense for it to secure as much as possible as a Schelling point in negotiations/conflicts.

Comment author: linkhyrule5 03 August 2013 12:32:34AM 1 point [-]

Point granted.

... and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on it, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.

Comment author: niceguyanon 01 August 2013 09:31:30PM *  1 point [-]

My priors tell me that the odds of statistical-arbitrage opportunities in online poker netting you $100k a year are less than 2% for someone who has an IQ of 100 - and likely to diminish quickly as the years go by.

A few reasons include: bots are confirmed to be winning players in full-ring and NL games; online poker is mature and has better players; rake; and the new-'fish'-to-grinder ratio is getting smaller.

Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?

Comment author: Tenoke 03 August 2013 08:57:07AM -1 points [-]

It's true and has been for years (since the early-'00s boom), except that bots (to my knowledge) are not really a big problem, while the separation of countries from the general pool of players (e.g. US players being able to play only with other US players) is. This is why I stopped playing in 2010.

Comment author: gothgirl420666 01 August 2013 06:16:38PM 1 point [-]

Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.

Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check and 2) no autosave. Mostly I just use Evernote and Google Docs, though.

Any suggestions?

Comment author: Lumifer 01 August 2013 07:52:42PM 3 points [-]

WordPad is the built-in Windows light word processor. Other alternatives that come to mind are SciTE and Notepad++

Comment author: PECOS-9 01 August 2013 05:31:23PM 6 points [-]

I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.

Is there a well-known term for the kind of error that pre-registration of scientific studies is meant to avoid? I mean the error where an experiment is designed to test something like "This drug cures the common cold," but then, when the results show no effect, the researchers repeatedly do the analysis on smaller slices of the data from the experiment, until eventually they have the result "This drug cures the common cold in males aged 40-60, p<.05" - when of course that result is just due to random chance (because if you do the statistical tests on 20 subsets of the data, chances are one of them will show an effect with p<.05).

It's similar to the file drawer effect, except it's within a single experiment, not many.
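For intuition, the arithmetic behind that parenthetical: if each of 20 independent subset analyses has a 5% false-positive rate under the null, the chance that at least one comes up "significant" is 1 - 0.95^20, or about 64%. A quick simulation in Java (my own sketch, relying on the fact that under the null hypothesis a p-value is uniform on (0,1); the class name is arbitrary):

    import java.util.Random;

    public class DredgeSim {
        public static void main(String[] args) {
            Random rng = new Random(42);
            int trials = 100_000, subsets = 20, hits = 0;
            for (int t = 0; t < trials; t++) {
                for (int s = 0; s < subsets; s++) {
                    // Under the null, each subset's p-value is Uniform(0,1).
                    if (rng.nextDouble() < 0.05) { hits++; break; }
                }
            }
            // Analytic answer: 1 - 0.95^20 ~ 0.64.
            System.out.printf("P(at least one p<.05 in %d subsets) ~ %.3f%n",
                              subsets, (double) hits / trials);
        }
    }

The break matters: we count trials with at least one false positive, not the total number of false positives.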

Comment author: GuySrinivasan 01 August 2013 04:00:09PM *  4 points [-]

(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.

HT Hacker News

Comment author: pinyaka 01 August 2013 12:19:42PM *  8 points [-]

Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry, and have already gotten a question about why the name differs.

Comment author: [deleted] 02 August 2013 11:47:06AM 1 point [-]

I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something and didn't worry too much about that.

Comment author: Douglas_Knight 05 August 2013 08:19:36PM 1 point [-]

This is extremely common, though the link pinyaka gave has a column for "doing business as," which should say GiveWell, but is left blank.

Comment author: CAE_Jones 01 August 2013 02:04:09AM 1 point [-]

A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.

I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be expected to solve without many more resources. I also recall that little if anything came of it.

Something tells me we're doing it wrong.

Comment author: Benito 31 July 2013 11:29:01PM *  1 point [-]

Starting to write introductions to LW for friends; here's my fast-track.

Please comment with thoughts here (or there).

Comment author: Adele_L 01 August 2013 12:17:52AM *  1 point [-]

I got a 'page not found' error when I clicked on that link because of the period at the end.

Comment author: Benito 01 August 2013 06:02:48AM *  1 point [-]

Fixed.

Comment author: niceguyanon 31 July 2013 11:02:44PM 5 points [-]

If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?

Comment author: Username 02 August 2013 05:02:51PM *  1 point [-]

You can train yourself in one of the industries you expect to thrive. This could either be the high-tech route of programming and developing the machines, or a job that never goes away, like plumbing/carpentry/welding. All of these can earn six figures; it's a matter of the type of work you like doing.

Comment author: gwern 31 July 2013 11:21:44PM *  7 points [-]

Acquire as much capital as you can, presumably. If labor's share of economic growth is falling, then capital's share must be rising. The topic has come up before, but I'm not sure anyone had more concrete advice than index funds - it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC, and it's very hard to pick the winners.

Comment author: Jayson_Virissimo 31 July 2013 11:25:18PM 4 points [-]

Or land.

Comment author: NancyLebovitz 31 July 2013 06:49:23AM 6 points [-]

As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.

Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?

Comment author: Adele_L 31 July 2013 02:38:27AM 4 points [-]

What is the function of the karma awards page?

Comment author: Nornagest 31 July 2013 03:46:53AM *  4 points [-]

There's been some discussion about incentivizing people to do useful things for the community by putting up karma bounties, thus removing some of the uncertainty inherent in upvotes. The most comprehensive thread I could find is here; two years old, but LW development grinds slow.

That's my best guess, anyway.

Comment author: Adele_L 31 July 2013 03:57:45AM 1 point [-]

Ok, thanks! Seems like an interesting plan, I hope it can get implemented.

Comment author: Prismattic 31 July 2013 02:02:38AM 30 points [-]

Ugh. I am generally in the unsympathetic-to-PUA-thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters evidence that is inconvenient for one's priors is a healthy habit to be in...

Recently I added the following (truthful) text to my OkCupid profile:

Note, July 2013 -- I can't claim to be in a relationship yet, but I have had a couple of dates with someone who had me totally enthralled within 30 minutes of meeting her. I'm flattered by the wave of other letters that have come in the past month, but I've put responding to anyone else on hold while I devote myself to worshiping the ground she walks on.

Having noted that I am a) unavailable and b) getting lots of competing offers - a high-status combination - the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.

Comment author: Viliam_Bur 10 August 2013 09:32:24PM 2 points [-]

in three days, the number of women rating my profile highly has gone from 61 to 113.

There is this competing hypothesis: that the women upvoted you for being honest with them, or for being faithful to the lady you wrote about. (As opposed to just trying to bed as many women as possible.)

So... how about the number of women contacting you -- has it increased, decreased, or remained the same? Perhaps that could provide some evidence to discriminate between the "he is unavailable, therefore attractive" and "he is unavailable, upvoted for not wasting my hopes" hypotheses.

Comment author: [deleted] 06 August 2013 11:21:27AM 1 point [-]

in three days, the number of women rating my profile highly has gone from 61 to 113.

Wait a moment... How long did it take to go from 0 to 61? How long had you gone without logging into OkC before writing that? Maybe the increase is due to more people finding your profile when looking for people "Online today" or "Online this week"?

Comment author: Prismattic 06 August 2013 10:35:46PM -1 points [-]

Alas, there are no loopholes here. 0-61 took almost exactly a year (it would have been more like 10 months, but you lose the votes of people who deactivate their profiles), and I was logging in at least weekly, usually more, during that time.