
Open thread, July 29-August 4, 2013

3 Post author: David_Gerard 29 July 2013 10:26PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Of course, for "every Monday", the last one should have been dated July 22-28. *cough*

Comments (381)

Comment author: John_Maxwell_IV 06 August 2013 04:32:57PM 1 point [-]

Recent effective altruism job openings (all close within the next 10 days):

More info.

Comment author: David_Gerard 04 August 2013 03:27:05PM 2 points [-]
Comment author: Epiphany 03 August 2013 04:26:19AM *  1 point [-]

I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is narrower than that. I was wondering if anyone has explored this in depth, or happens to know a term for "perverse incentives that cause people to do unethical things" (regardless of whether it's part of economics or some other subject), as I can't seem to find one.

Comment author: NancyLebovitz 04 August 2013 11:59:37PM *  5 points [-]

Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management has a fair amount about the limits of incentive plans.

From memory: incentives can work for work that's well-defined and can be done by one person. Otherwise, the result is people gaming the system and not cooperating with each other.

I don't remember whether the book covered something I heard about in the 70s or 80s about a car company which had incentives for teams assembling cars rather than an assembly line.

I was told about a shop owned by partners which had an incentive system for bringing in sales for the shifts the partners worked. The result was that the partners wouldn't tell customers to come back if it might be on someone else's shift.

Comment author: shminux 03 August 2013 05:25:33AM -2 points [-]

perverse incentives that cause people to do unethical things

For example...?

Comment author: wedrifid 03 August 2013 03:55:51PM *  5 points [-]

For example...?

For example allocating funds to fire departments based on how many fires they put out. That encourages them to stop putting work into fire prevention and, at the extreme, creates an incentive for outright arson.

The medical system. (Does that even need explaining?)

Comment author: Zaine 04 August 2013 02:31:02AM *  0 points [-]

I gather Australia's medical system is just as notoriously bad as America's (as per Yvain's excoriations)?

Finland's healthcare system, and to a lesser extent the NHS, seem to mostly have proper incentives in place, as uncured folk mean less capacity for the system to treat people. Surely medical care the world over isn't guided by perverse incentives? That is more a question than an assertion.

Comment author: Jayson_Virissimo 03 August 2013 02:56:39AM 1 point [-]

I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. If you are interested, please let me know what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC study hall, etc.).

About the course:

An introduction to fundamental data types, algorithms, and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Specific topics covered include: union-find algorithms; basic iterable data types (stack, queues, and bags); sorting algorithms (quicksort, mergesort, heapsort) and applications; priority queues; binary search trees; red-black trees; hash tables; and symbol-table applications.
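To give a taste of the first topic on that list, here is a minimal union-find (weighted quick-union with path compression) sketched in Python. The course itself works in Java; this is just an illustration of the idea, not course material:

```python
class UnionFind:
    """Weighted quick-union with path compression."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.size = [1] * n           # tree sizes, for weighting

    def find(self, p):
        # Walk up to the root, flattening the path as we go.
        while p != self.parent[p]:
            self.parent[p] = self.parent[self.parent[p]]  # path compression
            p = self.parent[p]
        return p

    def union(self, p, q):
        rp, rq = self.find(p), self.find(q)
        if rp == rq:
            return
        # Attach the smaller tree under the larger one.
        if self.size[rp] < self.size[rq]:
            rp, rq = rq, rp
        self.parent[rq] = rp
        self.size[rp] += self.size[rq]

    def connected(self, p, q):
        return self.find(p) == self.find(q)


uf = UnionFind(10)
uf.union(0, 1)
uf.union(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 9))  # False
```

The weighting plus path compression is what makes the operations effectively constant-time, which is roughly where the course's "scientific performance analysis" comes in.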

Recommended Background:

All you need is a basic familiarity with programming in Java. This course is primarily aimed at first- and second-year undergraduates interested in engineering or science, along with high school students and professionals with an interest (and some background) in programming.

Suggested Readings:

Although the lectures are designed to be self-contained, students wanting to expand their knowledge beyond what we can cover in a 6-week class can find a much more extensive coverage of this topic in our book Algorithms (4th Edition), published by Addison-Wesley.

Course Format:

There will be two lectures (75 minutes each) each week. The lectures are each broken into about 4-6 pieces, separated by interactive quiz questions to help you process and understand the material. In addition, there will be a problem set and a programming assignment each week, and there will be a final exam.

Comment author: gwern 03 August 2013 02:23:59AM 10 points [-]

Question: where can I upload jailbroken PDFs so that they are public & Google-visible?

For a job, I compiled ~100MB of lipreading research, some of it extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host the PDFs on gwern.net indefinitely, I feel it would be a massive waste to simply delete them.

I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.

(crosspost from Google+)

Comment author: gwern 26 May 2014 02:22:56AM 0 points [-]

I'm currently trying http://pdf.yt/ for PDF hosting. It seems to talk the talk.

Comment author: Douglas_Knight 13 September 2013 05:45:51PM 2 points [-]

wordpress.com has a 3GB quota, and PDFs are visible to Google.

Comment author: gwern 13 September 2013 06:31:40PM 0 points [-]

Interesting. I am giving it a try at http://gwern0.wordpress.com/ . We'll see in a month if any of the PDFs show up in Google.

Comment author: Douglas_Knight 14 September 2013 09:37:10PM 1 point [-]

Where are the links to the documents?

Comment author: gwern 14 September 2013 10:22:36PM 0 points [-]

I don't know. I uploaded the PDFs and 'attached' them to a post. I'm not sure what I'm supposed to do beyond that.

Comment author: Douglas_Knight 14 September 2013 11:30:08PM *  1 point [-]

How to use wordpress to upload and publicize files:

Files show up at gwern0.files.wordpress.com/2013/09/original_name.
There's also an "attachment page" at gwern0.wordpress.com/?attachment_id=##, but only after you publish the associated post; the file itself is world-readable immediately after upload, just secret.

To get wordpress to populate the post with links:

  1. Edit post
  2. "add media"
  3. (upload files via "upload files" pane)
  4. choose "media files" pane, if necessary
  5. select all files
  6. click "insert into post" at bottom.
Comment author: gwern 15 September 2013 06:24:43PM 0 points [-]

I see, thanks. It looks like that works - I see PDF links in both posts now.

Comment author: Douglas_Knight 15 September 2013 10:06:32PM 1 point [-]

OK, now I can find the links, but can google? It's not supposed to follow links from LW. I think WP advertises new accounts somewhere, but I don't think it's worth much. I suggest you link to it from gwern.net and/or google plus. Also that you link to your google drive public folder.

(I predict that if you don't link to the WP page, google will eventually find it and index it, but not index the pdfs. So if someone searches for the title of the article, google will produce the hit, but google scholar won't have it. And "eventually" might be more than a month.)

Comment author: gwern 13 October 2013 10:25:35PM 0 points [-]

So, I just opened up the WP blog and did Scholar searches for 3 or 4 of the lipreading PDFs. Not a single hit.

Comment author: gwern 15 September 2013 10:16:30PM 0 points [-]

We'll see in a month.

Comment author: Douglas_Knight 14 September 2013 10:38:00PM 0 points [-]

Let me get back to you about wordpress, but I wonder if this explains why google drive didn't work for you, when it did work for WB? Google could find everything on the google drive, unlike wp, but maybe they only look via links.

Comment author: hg00 07 August 2013 05:24:38AM 1 point [-]

Scribd is an abomination I despise.

Hm? As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents. They have to make money somehow. Would you rather they insert full-page ads in documents, the way YouTube now plays ads before video clips?

Anyway, one idea is to find people who run sites on topics related to the PDFs and suggest that they upload them to their sites. Should increase the google juice of both the documents and the sites of those who upload them, so win/win, right?

Comment author: gwern 07 August 2013 11:31:25PM 2 points [-]

As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents.

Money which they have zero right to collect and which breaks the implied contract they had with their previous users who uploaded those documents.

And their interface is butt-ugly with PDFs completely unreadable in their HTML version - but of course they don't let you download the PDFs because they're all behind the Scribd paywall.

Hosting documents. A pretty simple task, one would think, and yet Scribd manages to do it both scuzzily and poorly.

They have to make money somehow.

A fully-general excuse. But they are not owed a living.

Comment author: DanielLC 03 August 2013 03:57:02AM 1 point [-]

I'd guess Google Drive.

You could get a website that points to wherever the download actually is.

Comment author: gwern 03 August 2013 02:39:46PM 1 point [-]

That's one of the suggestions on G+ too. I didn't think that they would show up in Google proper and get indexed, but someone said they had for him, so maybe I will go with that. (Even if it doesn't work, I can always redownload and upload somewhere else, presumably.)

Comment author: Omid 02 August 2013 03:24:38PM *  2 points [-]

What's the most credible way to set up an information bounty?

Comment author: Qiaochu_Yuan 02 August 2013 10:14:35PM 2 points [-]

What's an information bounty? What kind of information are you looking for?

Comment author: Omid 03 August 2013 04:56:47AM 1 point [-]

Sorry, I guess the proper term is "truth bounty". The Truth Seal originally offered to arbitrate truth bounties, but it quickly went defunct.

Comment author: linkhyrule5 02 August 2013 08:04:07AM 5 points [-]

Waffled between putting this here and putting this in the Stupid Questions thread:

Why is the default assumption that a superintelligence of any type will populate its light cone?

I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).

But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has the opportunity - but not at its maximum speed, nor did entire population centers move. The top few percent of adventurous or less-affluent people leave, and that is all.

On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.

There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.

So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.

Comment author: DanielLC 03 August 2013 03:26:45AM 3 points [-]

Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it. The longer you wait, the smaller that portion is. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.

Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.

Comment author: Oscar_Cunningham 02 August 2013 11:22:48AM *  4 points [-]

It's because we want to secure as many resources as possible, before the aliens get to them.

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

Comment author: Lumifer 02 August 2013 08:04:12PM 1 point [-]

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?

Comment author: Oscar_Cunningham 02 August 2013 08:36:52PM 1 point [-]

It's totally possible, but they'd have to have a good reason for staying hidden for the reason nyan_sandwich gives.

Comment author: [deleted] 02 August 2013 08:16:15PM 1 point [-]

Most valuable of those resources is free energy. The sun is burning that into low grade light and heat at an incredible rate.

Comment author: Lumifer 02 August 2013 08:41:20PM 2 points [-]

So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?

Comment author: DanielLC 03 August 2013 03:23:38AM 1 point [-]

I suspect using them is more likely. They certainly aren't going to just let them keep wasting fuel. Not unless they have the opportunity to prevent even more waste. For example, they will send out probes to other systems before worrying too much about this system.

Comment author: Oscar_Cunningham 03 August 2013 12:16:39AM 1 point [-]

extinguishing stars

Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. For example collapsing it into a black hole wouldn't be what we want, since the energy would be wasted.

Would star lifting be enough to slow the burning of a star to a standstill?

Comment author: [deleted] 02 August 2013 10:10:58PM 8 points [-]

Seems prudent to do.

Unless it values the existence of stars more than it values other things it could do with that energy.

Comment author: Nisan 04 August 2013 04:03:26PM 4 points [-]

Upvoted for being the first instance I've seen of someone describing extinguishing all the stars in the night sky as being prudent.

Comment author: linkhyrule5 02 August 2013 07:35:35PM 0 points [-]

Hm. Point.

Comment author: wadavis 02 August 2013 02:59:09PM 1 point [-]

Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.

Comment author: linkhyrule5 02 August 2013 07:32:32PM 1 point [-]

Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.

Comment author: wadavis 02 August 2013 08:19:43PM *  3 points [-]

Davy Jones: One Soul is not equal to another

Jack Sparrow: Aha! So we've established my proposal is sound in principle, now we're just haggling over price.

-- Pirates of the Caribbean: Dead Man's Chest

Or in this case, scope instead of price.

Jokes aside, the point is that the sponsored settlement of the prairies had an influence on the negotiations of the Canada / U.S.A. border. If a human civilization believed it might face future competition with aliens for territory in space, it would make sense to secure as much as possible as a Schelling point in negotiations / conflicts.

Comment author: linkhyrule5 03 August 2013 12:32:34AM 1 point [-]

Point granted.

... and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on it, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.

Comment author: niceguyanon 01 August 2013 09:31:30PM *  1 point [-]

My priors tell me the odds that statistical arbitrage opportunities in online poker could net someone with an IQ of 100 $100k a year are less than 2%. And likely to diminish quickly as the years go by.

A few reasons include: bots are confirmed to be winning players in full ring and NL games; online poker is mature and has better players; rake; and the new "fish" to grinder ratio is getting smaller.

Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?

Comment author: Duke 04 August 2013 02:06:26AM 0 points [-]

Depending on your current skill level, I'd think that the less-than-2% likelihood is a generous estimate. Online poker was a bubble back in the early to mid 00's. Presently, edges are razor thin and only a very elite group are making $100k+/year.

Players are highly skilled--and getting better all the time--and able to populate multiple tables simultaneously (as opposed to live poker where you can play only a single table at a time); rake is high; online poker legality is hazy in many parts of the world; transferring money off the site is problematic; you'll be paying taxes on your winnings; and, like you mentioned, fish are drying up.

Botting, player collusion and hacking certainly have negative effects on the game but it is unclear to what extent.

If you're an American and live near a casino, you're more likely to win $100k playing there in games with at least a $5 big blind. But, generally, playing poker for a living is a bitch for a lot of reasons, namely that you'll be spending a lot of your life in a casino with no windows. Also, statistical variance is difficult to handle emotionally--assuming that you become a winning player to begin with. For every story you read about some guy living high on his poker winnings, there are countless others who went broke and now are either hopeless degenerates scrounging around casinos or working square jobs.

If you do not have an obvious marketable skill set worth $100k/yr, might I suggest getting into sales of some sort? Generally, the barriers to entry are low, and while the success rates are small, the upper bounds of earning potential are very large.

Comment author: Tenoke 03 August 2013 08:57:07AM -1 points [-]

It's true and has been for years (since the early 00's boom). Except that bots are (to my knowledge) not really a big problem, while the separation of countries from the general pool of players (e.g. US players being able to play only with US players) is. This is why I stopped playing in 2010.

Comment author: gothgirl420666 01 August 2013 06:16:38PM 1 point [-]

Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.

Right now I'm using CopyWriter, which is pretty good, but it has two problems 1) no spell check and 2) no autosave. Mostly I just use Evernote and Google Docs though.

Any suggestions?

Comment author: Lumifer 01 August 2013 07:52:42PM 3 points [-]

WordPad is the built-in Windows light word processor. Other alternatives that come to mind are SciTE and Notepad++

Comment author: PECOS-9 01 August 2013 05:31:23PM 6 points [-]

I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.

Is there a well-known term for the kind of error that pre-registration of scientific studies is meant to avoid? I mean the error where an experiment is designed to test something like "This drug cures the common cold," but then when the results show no effect, the researchers repeatedly redo the analysis on smaller slices of the data from the experiment, until eventually they have the result "This drug cures the common cold in males aged 40-60, p<.05," when of course that result is just due to random chance (because if you do the statistical tests on 20 subsets of the data, chances are one of them will show an effect with p<.05).

It's similar to the file drawer effect, except it's within a single experiment, not many.
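The mechanism described above is easy to demonstrate with a quick simulation: generate data where the drug does nothing, test 20 arbitrary subgroups, and see how often at least one of them comes out "significant". This is a hypothetical sketch (using a simple z-test with known variance for convenience, not any particular study's analysis):

```python
import math
import random

random.seed(0)

def z_test_p(xs, ys):
    """Two-sided z-test for equal means, assuming known sigma = 1."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def one_null_experiment(n_subgroups=20, n_per_group=30):
    # The drug has no effect: treatment and control are drawn from
    # the same distribution. Test each of 20 arbitrary subgroups
    # and report the smallest p-value found.
    pvals = []
    for _ in range(n_subgroups):
        treat = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        pvals.append(z_test_p(treat, control))
    return min(pvals)

trials = 2000
false_positives = sum(one_null_experiment() < 0.05 for _ in range(trials))
print(f"Some subgroup hit p<.05 in {false_positives / trials:.0%} of null experiments")
# Theory predicts roughly 1 - 0.95**20 ≈ 64%, despite the drug doing nothing.
```

So a researcher who slices the null data twenty ways will "find" an effect about two times in three, which is exactly the error pre-registration is meant to rule out.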

Comment author: DanielLC 03 August 2013 03:28:12AM 0 points [-]
Comment author: GuySrinivasan 01 August 2013 04:00:09PM *  4 points [-]

(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.

HT Hacker News

Comment author: pinyaka 01 August 2013 12:19:42PM *  8 points [-]

Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.

Comment author: Nisan 04 August 2013 03:53:59PM *  0 points [-]

I don't know, but I encourage you to ask them if you don't get an answer here.

Comment author: [deleted] 02 August 2013 11:47:06AM 1 point [-]

I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something and didn't worry too much about that.

Comment author: Douglas_Knight 05 August 2013 08:19:36PM 1 point [-]

This is extremely common, though the link pinyaka gave has a column for "doing business as," which should say GiveWell, but is left blank.

Comment author: CAE_Jones 01 August 2013 02:04:09AM 1 point [-]

A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.

I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be expected to solve without many more resources. I also recall that little if anything came of it.

Something tells me we're doing it wrong.

Comment author: [deleted] 01 August 2013 01:32:56AM *  0 points [-]

This is a call for Less Wrong users who do not wish to personally identify as rationalists, or do not perfectly relate to the community at a cultural level:

What do you use Less Wrong for? And, what are some reasons for why you do not identify as a rationalist? Are there some functions that you wish the community would provide which it otherwise does not?

Comment author: Zaine 01 August 2013 02:04:28AM *  1 point [-]

Being 'part of a community' and having a term that defines one's identity are two different conditions. In the former, one's participation in a community is merely another aspect to one's personality or character, which can be all-expansive.

In the latter, one is tied to others who share the identifier. Even if 'rationalist' just means one who subscribes to the importance of instrumental and epistemic rationality in daily life, accepting and embracing that or any identifier can have negatives. The former condition, representing a choice rather than a fact of identity, is absent those negatives while retaining the positive aspects of communal connection.

Exempli gratia:
One is trying to appeal to some high status figure. This high-status figure encounters a 'rationalist', and perceives them as low-status. If One has identified themselves as also being a rationalist, then the high-status person's perception of the 'rationalist' may taint their perception of One.
If One has instead identified themselves as being part of a certain community, to which this 'rationalist' may also claim affiliation, One can claim that while they find the community worthwhile for many pursuits, not all who flock to the community are representative of its worth.

If someone thinks this a losing strategy, please speak up, as it's generally applicable. Notable exceptions to its applicability include claiming oneself as identifiable by their association with a friend group or extended family, as in, "I am James Potter, Marauder," rather than, "I am James Potter, member of the Marauders"; and, "I am a Potter," rather than the simple, "My name is James Potter."

Comment author: CAE_Jones 01 August 2013 01:50:50AM 2 points [-]

I think of "rationalist" as "one who applies rationality to real life". By that definition, I've identified as rationalist since age 2 at the latest (I said identified, not "been any good at it").

LW culture is hard to grasp. Politics is a minefield, there's apparently a terrible feminism problem, and there seem to be two not-so-distinct factions: people who want more instrumental rationality, and people who get annoyed by this and only want to discuss philosophy. You have to read lots of things not optimized for keeping readers from falling asleep (I'm not talking about the sequences; I actually stay awake through those) in order to have the necessary background to participate in many discussions; I'm quite terrified of missteps (I make them quite often).

However, I know what I'm reading will be thoroughly vetted for truthfulness most of the time, and in spite of the utter failure to demonstrate rationality superpowers, applying science and reasoning to reality for good results is encouraged and seemingly the main thrust of the whole site. It's obviously far from optimal, otherwise we'd have tons of success stories rather than something trying very hard not to be a technoCult, but those aren't really detraction enough given the absence of a better alternative.

That, and solving CAPTCHAs is quite inconvenient, so I'm kinda selective about where I register; I registered here instead of Reddit, and that means this is the only place I'm going to be able to talk about HPMoR. :P

(Also, I like emoticons an awful lot considering that I can't see them. I haven't encountered any emoticons on LW. In any other comment, I would have been much more wary of using one. ??? )

Comment author: Benito 31 July 2013 11:29:01PM *  1 point [-]

Starting to write introductions to LW for friends; here's my fast-track.

Please comment with thoughts here (or there).

Comment author: Adele_L 01 August 2013 12:17:52AM *  1 point [-]

I got a 'page not found' error when I clicked on that link because of the period at the end.

Comment author: Benito 01 August 2013 06:02:48AM *  1 point [-]

Fixed.

Comment author: niceguyanon 31 July 2013 11:02:44PM 5 points [-]

If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?

Comment author: Omid 03 August 2013 05:14:44AM 0 points [-]

Move to a socialist country.

Comment author: Username 02 August 2013 05:02:51PM *  1 point [-]

You can train yourself in one of the industries you expect to thrive. This could either be the high-tech route of being the one programming and developing the machines, or it could be a job that never goes away, like plumbing/carpentry/welding. All of these can earn six figures; it's a matter of the type of work you like doing.

Comment author: gwern 31 July 2013 11:21:44PM *  7 points [-]

Acquire as much capital as you can, presumably. If the share of economic growth for labor is falling, that of capital must be rising. The topic has come up before but I'm not sure anyone had more concrete advice than index funds - it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC and it's very hard to pick the winners.

Comment author: Jayson_Virissimo 31 July 2013 11:25:18PM 4 points [-]

Or land.

Comment author: Tenoke 31 July 2013 11:12:29AM *  13 points [-]

After a short discussion on irc regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it. Please don't let my potential harm discourage you.

Comment author: Oscar_Cunningham 12 August 2014 06:30:11PM 0 points [-]

Did anything come of this in the end? Were any of the basilisks harmful or otherwise interesting?

Comment author: Tenoke 12 August 2014 07:07:57PM 0 points [-]

I got some responses, but I wouldn't say they were.

Comment author: wedrifid 01 August 2013 06:08:44AM 11 points [-]

After a short discussion on irc regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it.

We almost need a list for this. This makes half a dozen people I've seen making the same declaration.

Please don't let my potential harm discourage you.

Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you and even less on your preferences. If they believe that the basilisk is worthy of the name, they will expect giving it to you to result in you spreading it to others and thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with ebola.

Comment author: Rukifellth 31 July 2013 11:24:33PM *  0 points [-]

You magnificent, magnanimous son of a bitch.

Comment author: Benito 31 July 2013 11:29:39PM 3 points [-]

Well that escalated quickly.

Comment author: Rukifellth 31 July 2013 11:33:19PM 1 point [-]

I think a level of gaiety and excitement is appropriate given the subject.

Comment author: Username 31 July 2013 08:24:41PM *  3 points [-]

Could you post how many you receive and your realistic estimation on whether any are actually dangerous? Without specifics of course. (If you take these things seriously, I suppose you should have a dead-man's switch.)

Though for the record I think the LW policy of forbidding discussion of basilisks is ridiculous - a big banner at the top of a post saying, for example, 'Warning - Information Hazard to those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with outright banning of discussion about specific basilisks/medusas, especially seeing as LW is one of the only places where one could have a meaningful conversation about them.

Comment author: HungryHippo 31 July 2013 04:46:10PM *  1 point [-]

Please let us know if you receive anything interesting.

Comment author: pinyaka 31 July 2013 03:38:38PM 5 points [-]

You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?

Comment author: Tenoke 31 July 2013 03:43:49PM *  6 points [-]

Memetic/Information Hazards - the term comes from here. Basically, anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count, for example, as I can just never build a bomb, or just use other instructions, etc.

Warning: Could be dangerous to look into it

Comment author: HungryHippo 31 July 2013 04:47:28PM 1 point [-]

Do you of anyone claiming to be in possession of such a fact?

Comment author: RichardKennaway 01 August 2013 11:48:48AM 3 points [-]

Eliezer is in possession of a fact that he considers to be highly dangerous to anyone who knows it, and who does not have sufficient understanding of exotic decision theory to avoid being vulnerable to it. This is the original basilisk that drew LessWrong's attention to the idea. Whether he is right is disputed (but the disputation cannot take place here).

In HPMOR, he has fictionally presented another basilisk: Harry cannot tell some other wizards, including Dumbledore, about the true Patronus spell, because that knowledge would render them incapable of casting the Patronus at all, leaving them vulnerable to having their minds eaten by Dementors.

Comment author: Rukifellth 31 July 2013 11:28:48PM 0 points [-]

I know one.

Also I think you're missing the word "know"

Comment author: Tenoke 31 July 2013 05:05:12PM -1 points [-]

I know some basilisks, yes, although there is nothing I regard as actually dangerous. However, sharing things like this publicly is considered bad etiquette on LessWrong.

Comment author: pinyaka 02 August 2013 07:08:57PM 1 point [-]

If it's not dangerous, how does it constitute a hazard?

Comment author: MixedNuts 01 August 2013 09:58:35AM -1 points [-]

Can you send me yours? Please PM me here or on IRC. I already know the most famous one here.

Comment author: Rukifellth 31 July 2013 11:57:41PM *  0 points [-]

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.
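(For readers unfamiliar with the convention: rot13 is the standard light obfuscation used on LessWrong for spoilers and potentially sensitive text. It shifts each letter 13 places, so applying it twice restores the original. A minimal sketch using Python's built-in codec:)

```python
import codecs

# rot13 maps each letter 13 places along the alphabet; since the
# alphabet has 26 letters, applying it twice returns the original
# text, so the same function both encodes and decodes.
def rot13(text: str) -> str:
    return codecs.encode(text, "rot_13")

print(rot13("basilisk"))         # onfvyvfx
print(rot13(rot13("basilisk")))  # basilisk
```

Non-letter characters pass through unchanged, which is why rot13 works as a spoiler guard rather than real encryption: anyone can trivially reverse it, but no one reads it by accident.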

Comment author: wedrifid 01 August 2013 06:14:05AM *  1 point [-]

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.

Not just glib reassurance. There is also the outright mockery of those who advocate taking (the known pseudo-examples of) them seriously.

Comment author: Rukifellth 01 August 2013 10:40:13AM 0 points [-]

I can't imagine that anyone is advocating taking them seriously.

Comment author: Lumifer 31 July 2013 04:44:58PM 6 points [-]

Memetic/Information Hazards

They really should be called Medusas -- since it's you looking at them, not them looking at you.

Comment author: Rukifellth 31 July 2013 11:39:24PM 2 points [-]

I think they both need to make eye contact.

Comment author: Tenoke 31 July 2013 05:06:17PM -1 points [-]

Yup, Medusa is what some blogposts use to describe them.

Comment author: Rukifellth 03 August 2013 01:50:51AM 0 points [-]

Which blogposts are these?

Comment author: sixes_and_sevens 31 July 2013 02:00:14PM 0 points [-]

Can you tell us what you're trying to achieve with this?

Comment author: Tenoke 31 July 2013 02:16:39PM 5 points [-]

Interested in the responses since I actually think I can learn some useful things if anyone actually shares something good. Also, I assign significantly less than 1% chance that anyone will actually tell me anything 'dangerous' - for example I think roko's is as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens if that's your fear.

Comment author: sixes_and_sevens 31 July 2013 02:38:54PM 0 points [-]

It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Comment author: David_Gerard 31 July 2013 10:02:16PM -1 points [-]

The Motif of Harmful Sensation is a common fictional trope, but real-life examples are pretty much nonexistent (excepting, e.g., subjects with particular mental susceptibilities such as depression or OCD).