Open thread, July 29-August 4, 2013
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Of course, for "every Monday", the last one should have been dated July 22-28. *cough*
Comments (381)
I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives, especially ones that cause people to do unethical things. I was wondering if anyone has explored this in depth or happens to know a term for "perverse incentives that cause people to do unethical things", (regardless of whether it's part of economics or some other subject), as I can't seem to find one.
Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management has a fair amount about the limits of incentive plans.
From memory: incentives can work for work that's well-defined and can be done by one person. Otherwise, the result is people gaming the system and not cooperating with each other.
I don't remember whether the book covered something I heard about in the 70s or 80s about a car company which had incentives for teams assembling cars rather than an assembly line.
I was told about a shop owned by partners which had an incentive system for bringing in sales for the shifts the partners worked. The result was that the partners wouldn't tell customers to come back if it might be on someone else's shift.
For example...?
For example allocating funds to fire departments based on how many fires they put out. That encourages them to stop putting work into fire prevention and, at the extreme, creates an incentive for outright arson.
The medical system. (Does that even need explaining?)
Not as rationalist as Harry Potter, but in the right direction.
After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way, please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it. Please don't let my potential harm discourage you.
You magnificent, magnanimous son of a bitch.
Well that escalated quickly.
I think a level of gaiety and excitement is appropriate given the subject.
Can you tell us what you're trying to achieve with this?
I'm interested in the responses, since I think I can actually learn some useful things if anyone shares something good. Also, I assign significantly less than a 1% chance that anyone will actually tell me anything 'dangerous' - for example, I think Roko's is as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens, if that's your fear.
It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.
I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.
The Motif of Harmful Sensation is a common fictional trope, but real-life examples are pretty much nonexistent. (Excepting, e.g., a subject with given mental susceptibilities such as depression or OCD.)
And even more obviously, epilepsy. Yet, I don't understand why you would except them.
'You see, X does not exist, since I choose to ignore all the cases in which X does exist; I hope you'll agree that this argument is watertight once you grant my premises.'
Gwern, this thread is about the Basilisk. Conflating that with epilepsy is knowing equivocation. Don't be dense, thanks.
No denser than thou, David:
Who was it who brought up the Motif of Harmful Sensation, which is not limited to Roko's basilisk? Who was it who brought up "given mental susceptibilities" in order to define away examples of depression or OCD? Thou, David, thou.
The fictional trope is of one you wouldn't expect to be harmful. That's the literary point of it, and of the Basilisk: the surprise factor.
And surely the animators who made that Pokemon episode expected it to be harmful and they made those kids seize because they're simply evil.
No denser than thou, David.
I think David has a point here.
The cases you two have mentioned of sensory hazards all affect people who have identifiable susceptibilities that those people usually know about in advance and that affect relatively small minorities.
Somebody might have a high confidence that they are non-depressed, non-OCD, non-epileptic, etc. Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?
But this is a different question. You have quietly redefined the question "are there harmful sensations to people?" - to which the answer is overwhelmingly, resoundingly, yes, there absolutely are - to 'are there harmful sensations to a newly redefined subset of people which we will immediately update if anyone produces further examples, so actually what I meant all along was "are there harmful sensations which we don't yet know about?"'
Or to put it more simply: 'Can you provide an example of a harmful sensation we don't yet know about?' Well... If I could produce a harmful sensation, you and David would simply say something like 'ah, well, I guess we now have a recognized medical problem, because look, we [commit suicide / collapse in convulsions / cease functioning / become obsessed with useless actions] if you expose us to X! That's a pretty serious psychiatric problem! But, are there examples of sensory hazards that apply to people who do not have a recognized medical problem?'
To which I can only shake my head no.
I hear you and I'm not trying to play the definition game or wriggle out of this. The way I conceptualized the question -- which I think the original poster had in mind and what I think is relevant to hazard risk assessment -- is more like one of these:
A) "What fraction of the public is seriously vulnerable to sensory hazards",
B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."
My hunch is that the answers are "less than 20%" and "close to zero." The example of epilepsy didn't shift my beliefs about either; epilepsy is rare and is rarely adult-onset for the non-elderly.
So you're asking, what new medical sensory hazards may be developed in the future.
Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...
There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).
I think that most of the general examples have been mentioned: religion, among others, which has the rather mildly harmful "fear of hell" and drives its own propagation.
I think that any majorly harmful hazard which the general population was susceptible to would cause them all to shortly win Darwin Awards and remove themselves from the gene pool.
As such, we only have minority groups which are vulnerable to specific stimuli.
The Typical Mind Fallacy is strong with this one.
It's a good thing that isn't a mortal sin! Oh no wait.
Other people expressed a similar view and since I don't mind, I can at least help with satisfying people's curiosity in a way that would cause minimal harm. However, I have found nothing worth talking about after some fairly extensive google searches so I am currently trying to think if there is anyone knowledgeable that I can e-mail (already have a few people on the list) or if there are any good search terms that I haven't tried yet.
It's probably worth clarifying what you consider a basilisk, as that might reduce any unpleasant-yet-irrelevant submissions.
Some basilisks are potentially contagious.
Please give me examples.
Ever seen one of those "If you don't forward this email to five friends, your (relation) will DIE!!1!!!one!" emails?
I think the most obvious semi-basilisk example is certain strains of religion. Insofar as such a religion makes you believe you might go to hell, and that all your friends are going to hell, it will make you feel bad and also make you want to spread it to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual basilisk consequences, but in essence these are meme complexes that contain elements demanding you spread the whole complex. If someone is in possession of such a concept but has defeated it, or is in some way immune, it may still be correct for them not to tell you, for fear that you are not and will spread it to others once it has worked its will on you.
What do Christians do with the idea of "you're not spreading His Word fast enough"? It would be the same kind of scenario if there's nothing restraining Christian evangelical obligation.
Depends on the sect and person.
Please let us know if you receive anything interesting.
Could you post how many you receive and your realistic estimation on whether any are actually dangerous? Without specifics of course. (If you take these things seriously, I suppose you should have a dead-man's switch.)
Though for the record I think the LW policy on not being able to discuss basilisks is ridiculous - a big banner at the top of a post saying for example 'Warning - Information Hazard to those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with outright banning of discussion about specific basilisks/medusas, especially seeing as LW is one of the only places where one could have a meaningful conversation about them.
We almost need a list for this. This makes half a dozen people I've seen making the same declaration.
Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you, and even less on your preferences. If they believe the basilisk is worthy of the name, they will expect that giving it to you will result in you spreading it to others, thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.
You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?
Memetic/Information Hazards - the term comes from here. Basically anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count for example as I can just never build a bomb or just use other instructions etc.
Warning: Could be dangerous to look into it
They really should be called Medusas -- since it's you looking at them, not them looking at you.
I think they both need to make eye contact.
Yup, Medusa is what some blogposts use to describe them.
Do you know of anyone claiming to be in possession of such a fact?
I know some basilisks, yes, although there is nothing I regard as actually dangerous. However, sharing things like this publicly is considered bad etiquette on LessWrong.
Can you send me yours? Please PM me here or on IRC. I already know the most famous one here.
I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.
Not just glib reassurance. There is also the outright mockery of those who advocate taking (the known pseudo-examples of) them seriously.
I can't imagine that anyone is advocating taking them seriously.
If it's not dangerous, how does it constitute a hazard?
I know one.
Also I think you're missing the word "know"
Eliezer is in possession of a fact that he considers to be highly dangerous to anyone who knows it, and who does not have sufficient understanding of exotic decision theory to avoid being vulnerable to it. This is the original basilisk that drew LessWrong's attention to the idea. Whether he is right is disputed (but the disputation cannot take place here).
In HPMOR, he has fictionally presented another basilisk: Harry cannot tell some other wizards, including Dumbledore, about the true Patronus spell, because that knowledge would render them incapable of casting the Patronus at all, leaving them vulnerable to having their minds eaten by Dementors.
Recent effective altruism job openings (all close within the next 10 days):
Careers analyst at 80,000 Hours
Director of Communications, Director of Community and Director of Research at Giving What We Can
Researcher and Community Manager at Effective Animal Altruism
More info.
I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. Please let me know if you are interested and what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC studyhall, etc... ) if you are.
About the course:
Recommended Background:
Suggested Readings:
Course Format:
Question: where can I upload jailbroken PDFs so that they are public & Google-visible?
For a job, I compiled ~100MB of lipreading research, some of them extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host indefinitely the PDFs on gwern.net, I feel it would be a massive waste to simply delete them.
I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.
(crosspost from Google+)
wordpress.com has a 3GB quota, and PDFs are visible to Google.
Hm? As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents. They have to make money somehow. Would you rather them insert full-page ads in documents the way YouTube now plays ads before video clips?
Anyway, one idea is to find people who run sites on topics related to the PDFs and suggest that they upload them to their sites. Should increase the google juice of both the documents and the sites of those who upload them, so win/win, right?
Money which they have zero right to collect and which breaks the implied contract they had with their previous users who uploaded those documents.
And their interface is butt-ugly with PDFs completely unreadable in their HTML version - but of course they don't let you download the PDFs because they're all behind the Scribd paywall.
Hosting documents. A pretty simple task, one would think, and yet Scribd manages to do it both scuzzily and poorly.
A fully-general excuse. But they are not owed a living.
I'd guess Google Drive.
You could get a website that points to wherever the download actually is.
That's one of the suggestions on G+ too. I didn't think that they would show up in Google proper and get indexed, but someone said they had for him, so maybe I will go with that. (Even if it doesn't work, I can always redownload and upload somewhere else, presumably.)
What's the most credible way to set up an information bounty?
What's an information bounty? What kind of information are you looking for?
Sorry, I guess the proper term is "truth bounty". The Truth Seal originally offered to arbitrate truth bounties, but it quickly went defunct.
Waffled between putting this here and putting this in the Stupid Questions thread:
Why is the default assumption that a superintelligence of any type will populate its light cone?
I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).
But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity - but not at its maximum speed, nor did entire population centers move. The top few percent of adventurous or less-affluent people leave, and that is all.
On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion.) In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.
There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.
So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.
Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it. The longer you wait, the smaller that portion is. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.
Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.
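The "leave early" point can be made quantitative. Below is a rough, stdlib-only toy calculation, assuming a flat ΛCDM universe with illustrative parameters (H0 ≈ 70 km/s/Mpc, so a Hubble distance of roughly 14 Glyr, with Ωm = 0.3 and ΩΛ = 0.7); the function names and exact numbers are my own illustration, not anything from the thread:

```python
import math

# Assumed toy cosmology: flat LambdaCDM with illustrative parameters.
HUBBLE_DIST_GLYR = 14.0   # Hubble distance c/H0 in billions of light-years
OMEGA_M, OMEGA_L = 0.3, 0.7

def reachable_comoving_glyr(a_depart, steps=100_000):
    """Comoving distance (Glyr) a light-speed probe can ever reach if it
    departs when the scale factor is a_depart: chi = c * integral_t^inf dt/a(t).
    Substituting u = 1/a' turns the infinite integral into a finite one:
    chi = (c/H0) * integral_0^{1/a_depart} du / sqrt(Om*u^3 + OL)."""
    upper = 1.0 / a_depart
    du = upper / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du  # midpoint rule
        total += du / math.sqrt(OMEGA_M * u**3 + OMEGA_L)
    return HUBBLE_DIST_GLYR * total

now = reachable_comoving_glyr(1.0)    # a probe launched today
later = reachable_comoving_glyr(2.0)  # launched after the universe doubles in size
print(f"leave now:   {now:.1f} Glyr")
print(f"leave later: {later:.1f} Glyr")  # strictly smaller: waiting forfeits volume
```

Under these assumed parameters, a probe that waits until the universe has doubled in size can reach only roughly half the comoving distance of one launched today, which is the whole force of the "send probes as soon as possible" argument.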
It's because we want to secure as many resources as possible, before the aliens get to them.
I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.
So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?
It's totally possible, but they'd have to have a good reason for staying hidden for the reason nyan_sandwich gives.
Most valuable of those resources is free energy. The sun is burning that into low grade light and heat at an incredible rate.
So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?
I suspect using them is more likely. They certainly aren't going to just let them keep wasting fuel. Not unless they have the opportunity to prevent even more waste. For example, they will send out probes to other systems before worrying too much about this system.
Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. For example collapsing it into a black hole wouldn't be what we want, since the energy would be wasted.
Would star lifting be enough to slow the burning of a star to a standstill?
Seems prudent to do.
Unless it values the existence of stars more than it values other things it could do with that energy.
Upvoted for being the first instance I've seen of someone describing extinguishing all the stars in the night sky as being prudent.
Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.
Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.
-- Pirates of the Caribbean: Dead Man's Chest
Or in this case, scope instead of price.
Jokes aside, the point is that the sponsored settlement of the prairies had an influence on the negotiations of the Canada/U.S.A. border. If a human civilization believed that it might face future competition with aliens for territory in space, it would make sense for it to secure as much as possible, as a Schelling point in negotiations/conflicts.
Point granted.
... and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on it, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.
My priors tell me that the probability of statistical arbitrage opportunities in online poker netting 100k a year is less than 2% for someone with an IQ of 100, and likely to diminish quickly as the years go by.
A few reasons include: bots are confirmed to be winning players, in full ring and NL games. Online poker is mature and has better players. Rake. The new 'fish' to grinder ratio is getting smaller.
Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?
It's true and has been for years (since the early '00s boom). Except that bots (to my knowledge) are not really a big problem, while the separation of countries (e.g. US players being able to play only with US players) from the general pool of players is. This is why I stopped playing in 2010.
Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.
Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check and 2) no autosave. Mostly I just use Evernote and Google Docs though.
Any suggestions?
WordPad is the built-in Windows light word processor. Other alternatives that come to mind are SciTE and Notepad++.
I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.
(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.
HT Hacker News
Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.
I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something and didn't worry too much about that.
This is extremely common, though the link pinyaka gave has a column for "doing business as," which should say GiveWell, but is left blank.
A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.
I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be expected to solve without many more resources. I also recall that little if anything came of it.
Something tells me we're doing it wrong.
Starting to write introductions to LW for friends; here's my fast-track.
Please comment with thoughts here (or there).
I got a 'page not found' error when I clicked on that link because of the period at the end.
Fixed.
If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?
You can train yourself in one of the industries you expect to thrive. This could either be the high-tech route of being the one programming and developing the machines, or it could be a job that never goes away, like plumbing/carpentry/welding. All of these can earn six figures; it's a matter of the type of work you like doing.
Acquire as much capital as you can, presumably. If the share of economic growth for labor is falling, that of capital must be rising. The topic has come up before but I'm not sure anyone had more concrete advice than index funds - it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC and it's very hard to pick the winners.
Or land.
As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.
Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?
What is the function of the karma awards page?
There's been some discussion about incentivizing people to do useful things for the community by putting up karma bounties, thus removing some of the uncertainty inherent in upvotes. The most comprehensive thread I could find is here; two years old, but LW development grinds slow.
That's my best guess, anyway.
Ok, thanks! Seems like an interesting plan, I hope it can get implemented.
Ugh. I am generally in the unsympathetic-to-PUA thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters inconvenient evidence for one's priors is a healthy habit to be in...
Recently I added the following (truthful) text to my OkCupid! profile:
Having noted that I am a) unavailable and b) getting lots of competing offers (a high-status combination), the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.
There is a competing hypothesis: that the women upvoted you for being honest with them, or for being faithful to the lady you wrote about. (As opposed to just trying to bed as many women as possible.)
So... how about the number of women contacting you -- has it increased, decreased, or remained the same? Perhaps that could provide some evidence to discriminate between the "he is unavailable, therefore attractive" and "he is unavailable, upvoted for not wasting my hopes" hypotheses.
Wait a moment... How long did it take to go from 0 to 61? How long had you gone without logging into OkC before writing that? Maybe the increase is due to more people finding your profile when looking for people “Online today” or “Online this week”?
Alas, there are no loopholes here. 0-61 took almost exactly a year (it would have been more like 10 months, but you lose the votes of people who deactivate their profiles), and I was logging in at least weekly, usually more, during that time.