You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Open thread, July 29-August 4, 2013

3 Post author: David_Gerard 29 July 2013 10:26PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Of course, for "every Monday", the last one should have been dated July 22-28. *cough*

Comments (381)

Comment author: Prismattic 31 July 2013 02:02:38AM 30 points [-]

Ugh. I am generally in the unsympathetic-to-PUA thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters inconvenient evidence for one's priors is a healthy habit to be in...

Recently I added the following (truthful) text to my OkCupid! profile:

Note, July 2013 -- I can't claim to be in a relationship yet, but I have had a couple of dates with someone who had me totally enthralled within 30 minutes of meeting her. I'm flattered by the wave of other letters that have come in the past month, but I've put responding to anyone else on hold while I devote myself to worshiping the ground she walks on.

Having noted that I am a) unavailable and b) getting lots of competing offers (a high-status combination), the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.

Comment author: Eliezer_Yudkowsky 31 July 2013 02:31:18AM 19 points [-]

+1 for acknowledging the inconvenient (without regard to subject matter).

Comment author: SilasBarta 01 August 2013 07:50:30PM 7 points [-]

+1 for a (+1 for acknowledging the inconvenient) on a subject you dislike discussion of.

Comment author: RomeoStevens 31 July 2013 07:55:10AM 9 points [-]

OTOH I wouldn't at all be shocked to find out that profiles rated highly and profiles most often responded to are significantly different sets. Signalled preferences vs. revealed preferences, yada yada.

Comment author: Matt_Simpson 31 July 2013 04:57:19PM 6 points [-]

Funny, I read your post and my initial reaction was that this evidence cuts against PUA. (Now I'm not sure whether it supports PUA or not, but I lean towards support).

PUA would predict that this phrase

...while I devote myself to worshiping the ground she walks on.

is unattractive.

Comment author: [deleted] 31 July 2013 05:26:23PM *  9 points [-]

I dunno, in the context it sounds clearly tongue-in-cheek -- though you usually can't countersignal to people who don't know you (see also).

Comment author: [deleted] 31 July 2013 10:12:21AM 6 points [-]

“People will be more likely to (say they) like you once you're in a relationship with someone else” isn't something only people in the sympathetic-to-PUA thinking camp usually say.

Comment author: [deleted] 31 July 2013 05:24:47PM 3 points [-]

Note also that the same action may be interpreted as a sexual advance if the recipient is available (or at least there's no common knowledge to the contrary) and as a sincere compliment for its own sake otherwise; therefore, if someone is willing to do the former but not the latter for whatever reason (e.g. irrational fear of creep- or slut-shaming due to ethanol deficiency)...

Comment author: Viliam_Bur 10 August 2013 09:32:24PM 2 points [-]

in three days, the number of women rating my profile highly has gone from 61 to 113.

There is this competing hypothesis, that the women upvoted you for being honest with them, or for being faithful to the lady you wrote about. (As opposed to just trying to bed as many women as possible.)

So... how about the number of women contacting you -- has it increased, decreased, or remained the same? Perhaps that could provide some evidence to discriminate between the "he is unavailable, therefore attractive" and "he is unavailable, upvoted for not wasting my hopes" hypotheses.

Comment author: [deleted] 06 August 2013 11:21:27AM 1 point [-]

in three days, the number of women rating my profile highly has gone from 61 to 113.

Wait a moment... How long did it take to go from 0 to 61? How long hadn't you logged into OkC before writing that? Maybe the increase is due to more people finding your profile when looking for people “Online today” or “Online this week”?

Comment author: Lumifer 30 July 2013 01:57:48AM *  29 points [-]

An interesting story -- about science, what gets published, and what the incentives for scientists are. But really it is about whether you ought to believe published research.

The summary has three parts (I am quoting from the story).

Part 1: We were inspired by the fast growing literature on embodiment that demonstrates surprising links between body and mind (Markman & Brendl, 2005; Proffitt, 2006) to investigate embodiment of political extremism. Participants from the political left, right and center (N = 1,979) completed a perceptual judgment task in which words were presented in different shades of gray. Participants had to click along a gradient representing grays from near black to near white to select a shade that matched the shade of the word. We calculated accuracy: How close to the actual shade did participants get? The results were stunning. Moderates perceived the shades of gray more accurately than extremists on the left and right (p = .01). Our conclusion: political extremists perceive the world in black-and-white, figuratively and literally. Our design and follow-up analyses ruled out obvious alternative explanations such as time spent on task and a tendency to select extreme responses.

Part 2: Before writing and submitting, we paused. ... We conducted a direct replication while we prepared the manuscript. We ran 1,300 participants, giving us .995 power to detect an effect of the original effect size at alpha = .05.

Part 3: The effect vanished (p = .59).
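As a rough illustration of where a power figure like the .995 in Part 2 comes from, here is a minimal sketch using a normal approximation to a two-sided, two-sample test. The effect size d = 0.25 and the 650-per-group split are assumptions for illustration only; the story does not report the original effect size.

```python
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test (normal approximation).

    d           : standardized effect size (Cohen's d)
    n_per_group : sample size in each group
    """
    # Noncentrality of the test statistic under the alternative hypothesis
    delta = d * (n_per_group / 2) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    # Probability the statistic lands beyond either critical value
    return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

# Hypothetical numbers: 1,300 participants split into two groups of 650,
# with an assumed effect size of d = 0.25.
print(round(two_sample_power(0.25, 650), 3))  # ≈ 0.995
```

Note the design trade-off this exposes: power climbs steeply with sample size, which is why a replication with N = 1,300 leaves almost no room to blame a null result on bad luck.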

Comment author: Vaniver 30 July 2013 09:38:01PM 19 points [-]

I've noticed a few times how surprisingly easy it is to be in the upper echelon of some narrow area with a relatively small amount of expenditure (for an upper-middle-class American professional). This is easy to see in various entertainment hobbies: an American professional adult who puts, say, 10% of his salary into Legos will have a massive collection by the standards of most people who own Legos. Similarly, putting 10% of a professional's salary into gadgets means buying a new one or two every month.

I recently came across an article on political donations and saw the same effect: to be in the top .01% of American political donors, it only takes about $11k an election cycle (more in presidential years, less in legislative-only years). Again, at 10% of income, that only takes an income of ~$55k a year (since the cycles occur every two years), which is comparable to the median American salary (and lower than the starting salaries of most of my friends who graduated with STEM bachelor's degrees).

It's not clear to me what percentage of people do this. It's the sort of thing you could only do for a few narrow niches, since buying a ton of Legos impedes your ability to buy a bunch of gadgets, and it seems like most people go for broad niches instead of narrow ones. If most people spend 10% of their income on clothes, say, then spending 10% of yours on clothes means you need to be in the top 1% of income-earners to be in the top 1% of clothes-buyers.

I know a handful of people in the LW sphere give a startlingly high percentage of their income to MIRI and are near the top of MIRI supporters. They probably also end up in the top percentile of charitable givers, but I don't have numbers on hand for that.

I'm curious if this is a worthwhile pattern to emulate. I currently do this for art collection in a narrow subfield, and noticed the benefits of being at the top percentage of expenditure mostly by accident, but don't have a good sense of how those benefits compare to marginal value comparisons between different potential hobbies. (Actually, now that I think about this, this might just be a special case of the general "specialization pays off" heuristic, where it may be better to have one extreme hobby than dabble in twenty things, but this may not be obvious when moving from twenty hobbies to nineteen hobbies.)

Comment author: Metus 30 July 2013 11:23:02PM 3 points [-]

Some random points that came to my mind. The Pareto principle: 80% of the effect comes from 20% of the expenditure. So if we take the figure of 10,000h to mastery, 2,000h will already put you far ahead of the average Joe. The tighter the niche you choose, the less competition there will be, so sheer probability dictates that you are more likely to be in a higher percentile of the distribution.

Overall, it seems to be better to be extremely invested in one niche, and to take a low interest in a couple of others for social purposes at least, than to dabble moderately in a lot of them. What are the 'benefits' you allude to?

Finally, people spending a little bit on a lot of hobbies may be a symptom of an S-shaped response curve to money spent. The first few dollars increase pleasure a lot. Then you are just throwing money at it without obvious return, so you forgo the opportunity cost and get your high elsewhere. But should you for any reason get past this hypothetical plateau, you again reach an interval of high return, maybe even higher than in the beginning, and spend your money there.

Comment author: Vaniver 31 July 2013 01:38:33AM 8 points [-]

What are the 'benefits' you alude to?

Mostly access to exceptional people / opportunities, and admiration / social status. For example, become a major donor to a wildlife rescue center, and you get invited to play with the tigers. I would be surprised if major MIRI donors who live in the Bay Area don't get invited to dinner parties / similar social events with MIRI people.

For the status question, I think it's better to be high status in a narrow niche than medium status in many niches. It's not clear to me how the costs compare, though.

Comment author: spqr0a1 31 July 2013 09:05:26PM *  2 points [-]

Activity in many niches could credibly signal high status in some circles, by making available many insights with short inferential distance to the general public (outside any of your niches), allowing one to seem very experienced/intelligent.

Moreover, the benefits to being medium status in several hobby groups and the associated large number of otherwise unrelated social connections may be greater than readily apparent. https://en.wikipedia.org/wiki/Social_network#Structural_holes

Comment author: Vaniver 01 August 2013 05:30:29AM 2 points [-]

Moreover, the benefits to being medium status in several hobby groups and the associated large number of otherwise unrelated social connections may be greater than readily apparent.

Agreed. It seems like there are several general-purpose hobby groups that seem to be particularly adept at serving this role, of which churches are the most obvious example.

Comment author: tim 30 July 2013 01:54:40AM 12 points [-]

So according to this article a large factor in rising tuition costs in American universities is attributable to increases in administration and overhead costs. For example,

Over the past four decades, though, the number of full-time professors or “full-time equivalents”—that is, slots filled by two or more part-time faculty members whose combined hours equal those of a full-timer—increased slightly more than 50 percent. That percentage is comparable to the growth in student enrollments during the same time period. But the number of administrators and administrative staffers employed by those schools increased by an astonishing 85 percent and 240 percent, respectively.

Certainly some of these increases are attributable to the need for more staff supporting new technological infrastructure, such as network/computer administration, but those needs don't explain the magnitude of the increases seen.

The author also highlights examples of excess and waste in administrative spending such as large pay hikes for top administrators in the face of budget cuts and the creation of pointless committees. How much these incidents contribute to the cost of tuition is somewhat questionable as the evidence is essentially a large list of anecdotes.

Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about university that stops this from happening?

Possible explanations (based on an extremely basic understanding of economics, please correct),

  1. The author notes that the boards of trustees tend to be ill-prepared for making the kinds of decisions that might lead to a trimming of the fat. However, for this to be the reason (or at least a large part of it), boards would have to be almost universally incompetent; otherwise the few universities that took such action would have a market advantage over those that didn't.

  2. Maybe, for whatever reason, it's difficult for universities to grow past a certain point. If the market is already saturated with demand and universities are unable to expand capacity, then they have no incentive to lower tuition. However, you would still expect lots of new universities to pop up as a result (which may or may not be the case, as I couldn't find good statistics on this).

  3. The situation we find ourselves in appears to fit well with the signaling model of education. That is, college isn't about learning, it's about signaling your worth to potential employers via an expensive piece of paper. If this were the case it would be hard for a new or non-prestigious institution to break into the market or increase their market share even if the actual education was of high quality and inexpensive relative to competitors. In fact, under this model, more expensive schools may be preferred simply because they signal a higher level of prestige.

  4. Maybe I have been fooled by a misleading article that overblows the level of waste and inefficiency in American universities, and it would actually be quite difficult to run a modern educational institution without a comparable level of bureaucratic expenditure. There are parts of the article that do strike me as hyperbolic, but I've yet to come across a coherent argument contending that current tuition levels are necessary, and several that posit the opposite.

Comment author: Randaly 30 July 2013 09:16:23PM *  9 points [-]

Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about university that stops this from happening?

We do see competition.

ETA: Two additional points:

  • A lot of the spending/waste is on prestige projects like new buildings, rather than on administrators.

  • If you're wondering why nobody is challenging the top schools, I have three responses:

1) It would require too high an initial investment. 2) It would require attracting top students, which is more difficult given scholarships and lack of reputation. 3) This college is trying to do so.

Comment author: beoShaffer 30 July 2013 02:40:41AM 8 points [-]

Also worth considering is the idea that increased administration is needed to deal with new regulations and/or norms. For example, many schools have added positions dealing with diversity, sexual assault, and disability accommodations.

Comment author: gwern 03 August 2013 02:23:59AM 10 points [-]

Question: where can I upload jailbroken PDFs so that they are public & Google-visible?

For a job, I compiled ~100MB of lipreading research, some of them extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host indefinitely the PDFs on gwern.net, I feel it would be a massive waste to simply delete them.

I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.

(crosspost from Google+)

Comment author: Douglas_Knight 13 September 2013 05:45:51PM 2 points [-]

wordpress.com has a 3GB quota, and PDFs are visible to Google.

Comment author: hg00 07 August 2013 05:24:38AM 1 point [-]

Scribd is an abomination I despise.

Hm? As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents. They have to make money somehow. Would you rather they insert full-page ads in documents, the way YouTube now plays ads before video clips?

Anyway, one idea is to find people who run sites on topics related to the PDFs and suggest that they upload them to their sites. Should increase the google juice of both the documents and the sites of those who upload them, so win/win, right?

Comment author: gwern 07 August 2013 11:31:25PM 2 points [-]

As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents.

Money which they have zero right to collect and which breaks the implied contract they had with their previous users who uploaded those documents.

And their interface is butt-ugly with PDFs completely unreadable in their HTML version - but of course they don't let you download the PDFs because they're all behind the Scribd paywall.

Hosting documents. A pretty simple task, one would think, and yet Scribd manages to do it both scuzzily and poorly.

They have to make money somehow.

A fully-general excuse. But they are not owed a living.

Comment author: DanielLC 03 August 2013 03:57:02AM 1 point [-]

I'd guess Google Drive.

You could get a website that points to wherever the download actually is.

Comment author: gwern 03 August 2013 02:39:46PM 1 point [-]

That's one of the suggestions on G+ too. I didn't think that they would show up in Google proper and get indexed, but someone said they had for him, so maybe I will go with that. (Even if it doesn't work, I can always redownload and upload somewhere else, presumably.)

Comment author: RolfAndreassen 29 July 2013 10:42:54PM 10 points [-]

Open comment thread, Monday July 29th

If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

Comment author: Dorikka 29 July 2013 10:52:22PM 5 points [-]

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

In some cases, true iff you point it out in advance.

Comment author: [deleted] 30 July 2013 01:05:51PM 2 points [-]

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

Also the n-th time for n >> 1.

Comment author: pinyaka 01 August 2013 12:19:42PM *  8 points [-]

Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.

Comment author: [deleted] 02 August 2013 11:47:06AM 1 point [-]

I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something, and didn't worry too much about it.

Comment author: Douglas_Knight 05 August 2013 08:19:36PM 1 point [-]

This is extremely common, though the link pinyaka gave has a column for "doing business as," which should say GiveWell, but is left blank.

Comment author: pragmatist 30 July 2013 06:28:54AM *  8 points [-]

Any LW readers living in India? I recently moved here (specifically, New Delhi) from the United States and I'm interested in the possibility of a local meet-up.

Comment author: Ben_LandauTaylor 30 July 2013 02:05:31PM 26 points [-]

The usual suggestion for cases like this is to unilaterally announce a meetup in a public place, and bring a book in case no one shows up. Best case: awesome people doing awesome things. Worst case: you spend a couple hours reading.

Comment author: peter_hurford 30 July 2013 08:13:20PM 7 points [-]

In the past, people like Eliezer Yudkowsky and, I think, Luke Muehlhauser have argued that MIRI has a medium probability of success. What is this probability estimate based on, and how is success defined? I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.

Comment author: NotInventedHere 01 August 2013 12:02:20PM 1 point [-]

Do you have a permalink to any of those instances? It would be helpful to know what they defined medium as.

Comment author: peter_hurford 01 August 2013 01:25:30PM 4 points [-]

see 1, 2, 3, 4, and 5.

Comment author: Fhyve 30 July 2013 04:38:04AM 7 points [-]

Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.

Comment author: PECOS-9 01 August 2013 05:31:23PM 6 points [-]

I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.

Is there a well-known term for the kind of error that pre-registration of scientific studies is meant to avoid? I mean the error where an experiment is designed to test something like "This drug cures the common cold," but then, when the results show no effect, the researchers repeatedly do the analysis on smaller slices of the data from the experiment, until eventually they have the result "This drug cures the common cold in males aged 40-60, p<.05," when of course that result is just due to random chance (because if you do the statistical tests on 20 subsets of the data, chances are one of them will show an effect with p<.05).

It's similar to the file drawer effect, except it's within a single experiment, not many.
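The inflation described above is easy to simulate. The sketch below (all numbers hypothetical) runs many null experiments, tests 20 disjoint subgroups in each, and counts how often at least one subgroup comes out "significant" at p < .05 even though there is no real effect anywhere.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_experiments = 1000   # simulated drug trials with NO real effect
n_subgroups = 20       # post-hoc slices of the data (e.g. males aged 40-60)
n_per_arm = 30         # subjects per arm within each subgroup

false_positives = 0
for _ in range(n_experiments):
    # every subgroup test compares two samples drawn from the SAME distribution
    found = any(
        ttest_ind(rng.normal(size=n_per_arm),
                  rng.normal(size=n_per_arm)).pvalue < 0.05
        for _ in range(n_subgroups)
    )
    false_positives += found

# With 20 roughly independent tests at alpha = .05, the chance of at least
# one "significant" subgroup is 1 - 0.95**20, close to 0.64.
print(false_positives / n_experiments)
```

This is exactly why corrections for multiple comparisons (or pre-registering which single subgroup you will test) matter: without them, a "significant" subgroup finding in a null experiment is the expected outcome, not a surprise.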

Comment author: NancyLebovitz 31 July 2013 06:49:23AM 6 points [-]

As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.

Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?

Comment author: ESRogs 30 July 2013 04:10:56AM 6 points [-]

I have a question about the Simulation Argument.

Suppose that it's some point in the future, and we're able to run conscious simulations of our ancestors. We're considering whether or not to run such a simulation.

We are also curious about whether we are in a simulation ourselves, and we know that knowledge that civilizations like ours run ancestor simulations would be evidence for the proposition that we ourselves are in a simulation.

Could the choice at this point whether or not to run a simulation be used as a form of acausal control over the probability that we ourselves are living in a simulation?

Comment author: Jayson_Virissimo 30 July 2013 05:08:27AM 4 points [-]

Taboo "acausal control."

Comment author: ESRogs 30 July 2013 09:57:29AM 4 points [-]

Hmm, okay, to put it another way -- if we avoid running ancestor simulations for the purpose of maximizing the probability that we are not in a simulation, is it valid to, based on this fact, increase our credence in not being in a simulation?

Comment author: linkhyrule5 30 July 2013 10:42:27PM 2 points [-]

I think so. If we decided not to run a simulation, any would-be-simulators analogous to us would also choose not to run a simulation, so you've eliminated a bunch of worlds where simulations are possible.

Comment author: shminux 30 July 2013 05:35:54AM 4 points [-]

The most you can say is that all reflectively consistent ancestors would behave the same way you do. Wasn't there a Greg Egan's story about it?

Comment author: komponisto 30 July 2013 08:58:51AM *  6 points [-]

Wasn't there a Greg Egan's story about it?

English tip: the possessive ending " 's " carries an implicit "the". Thus "Greg Egan's story" means "the story of Greg Egan", not just "a story of Greg Egan". (This is unlike the corresponding construction in, for example, German.) Instead of the above, you wanted to write:

Wasn't there a Greg Egan story about it?

(This particular mistake occurs often among non-native-speakers, and indeed is a dead giveaway of one's status as such, so it's worth saying something about.)

Comment author: [deleted] 30 July 2013 01:03:27PM *  1 point [-]

English tip: the possessive ending " 's " carries an implicit "the".

(Except in constructs like “girls' school” or “a ten minutes' walk”.)

Comment author: komponisto 30 July 2013 04:34:10PM 0 points [-]

You're right about "girls' school", but "a ten minutes' walk" is wrong (should be "a ten-minute walk" or "ten minutes' walk").

Comment author: Tenoke 30 July 2013 10:27:38AM *  2 points [-]

No. It is unreasonable to think that all simulations are ancestral anyway. Even if no one runs ancestral simulations, people will still run simulations of other possible worlds for a variety of reasons, and we will likely be in one of those. And anyway, as soon as you can make a complete ancestral simulation (without knowing of any way to do so without giving consciousnesses/qualia/whatever to the simulated), you can be >99% sure that you live in a simulation, no matter if you run anything yourself or not.

Comment author: NancyLebovitz 30 July 2013 04:57:42PM 8 points [-]

I strongly recommend not using "stupid". It's less distracting to just point out mistakes without using insults.

Comment author: Tenoke 30 July 2013 07:53:01PM 1 point [-]

Changed to "unreasonable", if that helps.

Comment author: Ben_LandauTaylor 30 July 2013 08:50:55PM 3 points [-]

That is less insulting, and therefore an improvement. A version that's not even a little insulting might look something like "Not all simulations are ancestral." That approach expresses disagreement with the original claim, but doesn't connote anything about the person who made it.

Comment author: Tenoke 30 July 2013 09:08:46PM 1 point [-]

However, your version completely skips what I am actually saying - that I think that whole line of thinking is bad.

Comment author: linkhyrule5 02 August 2013 08:04:07AM 5 points [-]

Waffled between putting this here and putting this in the Stupid Questions thread:

Why is the default assumption that a superintelligence of any type will populate its light cone?

I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).

But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity - but not at its maximum speed, nor have entire population centers moved. The top few percent of adventurous or less-affluent people leave, and that is all.

On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.

There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.

So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.

Comment author: Oscar_Cunningham 02 August 2013 11:22:48AM *  4 points [-]

It's because we want to secure as many resources as possible, before the aliens get to them.

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

Comment author: Lumifer 02 August 2013 08:04:12PM 1 point [-]

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?

Comment author: Oscar_Cunningham 02 August 2013 08:36:52PM 1 point [-]

It's totally possible, but they'd have to have a good reason for staying hidden for the reason nyan_sandwich gives.

Comment author: [deleted] 02 August 2013 08:16:15PM 1 point [-]

Most valuable of those resources is free energy. The sun is burning that into low grade light and heat at an incredible rate.

Comment author: Lumifer 02 August 2013 08:41:20PM 2 points [-]

So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?

Comment author: [deleted] 02 August 2013 10:10:58PM 8 points [-]

Seems prudent to do.

Unless it values the existence of stars more than it values other things it could do with that energy.

Comment author: Nisan 04 August 2013 04:03:26PM 4 points [-]

Upvoted for being the first instance I've seen of someone describing extinguishing all the stars in the night sky as being prudent.

Comment author: DanielLC 03 August 2013 03:23:38AM 1 point [-]

I suspect using them is more likely. They certainly aren't going to just let them keep wasting fuel. Not unless they have the opportunity to prevent even more waste. For example, they will send out probes to other systems before worrying too much about this system.

Comment author: Oscar_Cunningham 03 August 2013 12:16:39AM 1 point [-]

extinguishing stars

Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. For example collapsing it into a black hole wouldn't be what we want, since the energy would be wasted.

Would star lifting be enough to slow the burning of a star to a standstill?

Comment author: wadavis 02 August 2013 02:59:09PM 1 point [-]

Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.

Comment author: linkhyrule5 02 August 2013 07:32:32PM 1 point [-]

Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.

Comment author: wadavis 02 August 2013 08:19:43PM *  3 points [-]

Davy Jones: One Soul is not equal to another

Jack Sparrow: Aha! So we've established my proposal is sound in principle, now we're just haggling over price.

-- Pirates of the Caribbean: Dead Man's Chest

Or in this case, scope instead of price.

Jokes aside, the point is that the sponsored settlement of the prairies had an influence on the negotiations of the Canada / U.S.A. border. If a human civilization believed that it might face future competition with aliens for territory in space, it would make sense to secure as much as possible as a Schelling point in negotiations / conflicts.

Comment author: linkhyrule5 03 August 2013 12:32:34AM 1 point [-]

Point granted.

... and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on it, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.

Comment author: DanielLC 03 August 2013 03:26:45AM 3 points [-]

Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it. The longer you wait, the less that is. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.

Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.

Comment author: niceguyanon 31 July 2013 11:02:44PM 5 points [-]

If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?

Comment author: gwern 31 July 2013 11:21:44PM *  7 points [-]

Acquire as much capital as you can, presumably. If the share of economic growth for labor is falling, that of capital must be rising. The topic has come up before but I'm not sure anyone had more concrete advice than index funds - it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC and it's very hard to pick the winners.

Comment author: Jayson_Virissimo 31 July 2013 11:25:18PM 4 points [-]

Or land.

Comment author: Username 02 August 2013 05:02:51PM *  1 point [-]

You can train yourself in one of the industries you expect to thrive. This could either be the high-tech route of being the one programming and developing the machines, or it could be a job that never goes away, like plumbing/carpentry/welding. All of these can earn six figures; it's a matter of the type of work you like doing.

Comment author: sixes_and_sevens 30 July 2013 12:20:58PM 5 points [-]

Warning: politics, etc., etc.

What do conservative political traditions squabble over?

My upbringing and social circles are moderately left-wing. There's a well-observed failure mode in these circles, not entirely dissimilar to what's discussed in Why Our Kind Can't Cooperate, where participants sabotage cooperation by going out of their way to find things to disagree about, presumably for moral posturing and virtue-signalling reasons.

In recent years I have become fairly sceptical of intrinsic differences between political groups, which leads me to my opening question: what do conservative political traditions squabble over? I find it hard to imagine what form this sort of self-sabotaging moral posturing might take. Can anyone who grew up on the other side of the fence offer any insight?

Comment author: palladias 30 July 2013 04:51:21PM 13 points [-]

We used to nutshell it as Trads vs Libertarians in college. Here are the relevant strawmen each group has of the other. (Hey, you asked what the fights look like!)

Trads see libertarians as: Just as prone to utopian thinking as those wretched liberals, or else shamelessly callous. Either they really do believe that people will just be naturally good without laws or institutions (what piffle!) or they just don't care about the casualties and trust that they themselves will rise to the top of their brutal, anarchic meritocracy. Not to mention that some of them could be more accurately described as libertines and just want an excuse for license.

Libertarians see trads as: Hidebound sticks-in-the-mud. They'd rather have people following arbitrary rules than thinking critically. They despise modernity, but don't actually have a positive vision of what they want instead (they're prone to ruefully shaking their heads and saying "Everything went downhill after the 1950s, or the American Revolution, or the Fall of Man"). By proposing ridiculous schemes (a surprising number have monarchist sympathies!) and washing their hands of governance in a show of 'epistemological modesty' and 'subsidiarity', they wriggle out of putting principles into practice.

Comment author: Randy_M 30 July 2013 04:00:40PM *  2 points [-]

(entirely based on recent USA politics) My instinct is to say conservatives do less jockeying for status and have more substantive disagreements with each other (not without vitriol, of course). I think this is true, but likely not as much as it seems to me.

One main conservative divide is over how much to use the state to influence the country towards traditional institutions versus staying with a libertarian framework. Social conservatives vs. fiscal conservatives. Generally the first group still wants to work within the democratic process, and sees left groups as wanting to appeal to judges to find novel interpretations of existing laws (e.g., conservatives amending the state constitution to define marriage vs. liberals finding existing non-discrimination amendments to apply more broadly than they were likely intended).

Social conservatives will want ordered, controlled immigration vs. the open, almost unregulated immigration of fiscal conservatives (probably justice vs. pragmatism), though both will affirm legal immigration and both will likely want to reduce direct incentives for immigrants (e.g., welfare).

A mirror of this in foreign policy is libertarian isolationism vs hawkish/neo-con interventionism, the latter falling out of favor lately, as anger fades and war weariness sets in (or more charitably, people learn lessons and modify their theories).

There are other divisions that I don't think fall along the same lines. Another broad category is how radically to enact change. There is a bit of fundamental tension in a "conservative" philosophy in that at some point after losing a battle there is almost an obligation to conserve the victories of your opponents while fighting their next expansion. (By analogy, picture two nations fighting over borders, where A wants to annex B, but B has an ideological goal to keep the borders set in place by each most recent treaty. Hence, I suspect, the rise of internet Reactionaries who want to do more than draw new lines in the sand.)

For example, all conservatives are going to be in favor of free markets, but some may differ on the needed level of intervention by regulators or quasi-governmental groups like the Fed, where those in favor of less are viewed as more conservative but may be called "out of the mainstream" or such. There are some who self-identify as conservatives and argue for expanded state-business cooperation/interference, such as GW Bush proposing TARP.

Another division, perhaps more petty, is over how much to compromise and work with liberals/Democrats vs. standing on, and losing with, principles. Some argue that if Republicans articulate a conservative vision and do not sell out, people will embrace it; some argue that people probably won't, but that conservatives should then let them get what they want by electing Democrats, so that policies [conservatives view as] inevitable failures are not painted with a bipartisan brush and can serve as an object lesson; others argue that politics is messy, and we have to compromise to get the best policies we can while working together with the other side. Optimism vs. pessimism vs. pragmatism.

Despite being overly long, I don't know if this answers your question or says anything non-obvious, as you seem to be asking for more petty disputes. I think those tend to be a magnification of a difference along some of the axes mentioned above into not just a quantitative difference but an unbridgeable qualitative one. But there are fundamental disagreements, such that one can't say "I'm more conservative than you because I want more x than you" and expect it to hold sway and earn status points across the ideology. Well, maybe lower taxes.

Comment author: Lumifer 30 July 2013 08:44:05PM *  5 points [-]

The left-to-right political axis is a very poor tool for looking at political goal/values/theories/opinions/etc.

First, to even talk about it you need to specify at least the locality. "Left" (or, say, "liberal") in the US means something different from what "left" (or "liberal") means in Europe. I'd wager it means something different yet in China, Russia, India...

Second, one dimension is clearly inadequate for political analysis. For example consider a very important (IMHO) concept in politics: statism. Is the American left statist? Well, kinda. They are statist economically but not culturally. Is the American right statist? Well, kinda. They are statist morally but not economically. I'm, of course, speaking in crude generalizations here.

Comment author: Alejandro1 30 July 2013 09:08:01PM 0 points [-]

At the most basic level, the definitions are that the right wing wants to keep things as they are and the left wing wants to change them. There is one way to do the first, and innumerable ways to do the second. This probably accounts for a large part of the effect you observe.

(There are, of course, many exceptions to the given definition; for example, conservatives wanting to eliminate government programs that are currently part of the status quo. But in this case, they are likely to frame this as a return to a previous state when those programs didn't exist, which is still a well-defined Schelling point. Right-wingers who do not fit this categorization, such as extreme libertarians calling for a minimal state that has never existed, are known to squabble among themselves as much as left-wingers.)

Comment author: Randaly 31 July 2013 01:26:56AM 2 points [-]

the right wing wants to keep things as they are

This is not actually accurate. On virtually any issue you can think of, the right-wing consensus supports changes in government policy. This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office.

Comment author: Randy_M 31 July 2013 09:29:50PM 1 point [-]

"This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office." The arguments could be rhetorical, and hence not demonstrative of the extent of the truth of such a proposition. This is weak evidence without some discussion of how those arguments are put forth.

Comment author: Transfuturist 30 July 2013 02:35:17AM *  5 points [-]

I believe I've encountered a problem with either Solomonoff induction or my understanding of Solomonoff induction. I can't post about it in Discussion, as I have less than 20 karma, and the stupid questions thread is very full (I'm not even sure if it would belong there).

I've read about SI repeatedly over the last year or so, and I think I have a fairly good understanding of it. Good enough to at least follow along with informal reasoning about it, at least. Recently I was reading Rathmanner and Hutter's paper, and Legg's paper, due to renewed interest in AIXI as the theoretical "best intelligence," and the Arcade Learning Environment used to test the computable Monte Carlo AIXI approximation. Then this problem came to me.

Solomonoff Induction uses the length of the description of the smallest Turing machine that outputs a given bitstring. I saw this as a problem. Say AIXI was reasoning about a fair coin. It would guess before each flip whether it would come up heads or tails. Because Turing machines are deterministic, AIXI cannot make hypotheses involving randomness. To model the fair coin, AIXI would come up with increasingly convoluted Turing machines, attempting to compress a bitstring that approaches Kolmogorov randomness as its length approaches infinity. Meanwhile, AIXI would be punished and rewarded randomly. This is not a satisfactory conclusion for a theoretical "best intelligence." So is the italicized statement a valid issue? An AI that can't delay reasoning about a problem by at least labeling it "sufficiently random, solve later" doesn't seem like a good AI, particularly in the real world where chance plays a significant part.

Naturally, Eliezer has already thought of this, and wrote about it in Occam's Razor:

The formalism of Solomonoff Induction measures the "complexity of a description" by the length of the shortest computer program which produces that description as an output. To talk about the "shortest computer program" that does something, you need to specify a space of computer programs, which requires a language and interpreter. Solomonoff Induction uses Turing machines, or rather, bitstrings that specify Turing machines. What if you don't like Turing machines? Then there's only a constant complexity penalty to design your own Universal Turing Machine that interprets whatever code you give it in whatever programming language you like. Different inductive formalisms are penalized by a worst-case constant factor relative to each other, corresponding to the size of a universal interpreter for that formalism.

In the better (IMHO) versions of Solomonoff Induction, the computer program does not produce a deterministic prediction, but assigns probabilities to strings. For example, we could write a program to explain a fair coin by writing a program that assigns equal probabilities to all 2^N strings of length N. This is Solomonoff Induction's approach to fitting the observed data. The higher the probability a program assigns to the observed data, the better that program fits the data. And probabilities must sum to 1, so for a program to better "fit" one possibility, it must steal probability mass from some other possibility which will then "fit" much more poorly. There is no superfair coin that assigns 100% probability to heads and 100% probability to tails.

Does this warrant further discussion, if at least to validate or refute this claim? I don't think Eliezer's proposal for a version of SI that assigns probabilities to strings is strong enough; it doesn't describe what form the hypotheses would take. Would hypotheses in this new description be universal nondeterministic Turing machines, with the aforementioned probability distribution summed over the nondeterministic outputs?
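To make the quoted fair-coin example concrete, here is a toy sketch (my own illustration, not anything from the cited papers) of scoring hypotheses by the probability they assign to the data rather than by a deterministic output. The uniform "fair coin" model gives every N-bit string probability 2^-N; a model that steals probability mass for heads fits a balanced string strictly worse:

```python
import math

def fair_coin_logprob(bits):
    # Uniform model: every N-bit string gets probability 2^-N,
    # so log2-probability is just -N.
    return -float(len(bits))

def biased_coin_logprob(bits, p_heads=0.9):
    # A model betting heavily on heads "steals" probability mass
    # from tails-heavy strings, so it fits balanced data worse.
    heads = sum(bits)
    tails = len(bits) - heads
    return heads * math.log2(p_heads) + tails * math.log2(1 - p_heads)

flips = [1, 0, 1, 1, 0, 0, 1, 0]  # a typical-looking fair-coin outcome
assert fair_coin_logprob(flips) > biased_coin_logprob(flips)
```

Neither model predicts any individual flip, but the uniform model assigns the observed string more probability mass, which is all the probabilistic variant of Solomonoff induction asks of a hypothesis.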

Comment author: Qiaochu_Yuan 30 July 2013 04:14:36AM *  6 points [-]

Hypotheses in this description are probabilistic Turing machines. These can be cashed out to programs in a probabilistic programming language.

I think it's going too far to call this a "problem with Solomonoff induction." Solomonoff induction makes no claims; it's just a tool that you can use or not. Solomonoff induction as a mathematical construct should be cleanly separated from the claim that AIXI is the "best intelligence," which is wrong for several reasons.

Comment author: Adele_L 30 July 2013 04:29:25AM *  4 points [-]

the stupid questions thread is very full

Might be worth having those more often too; the last one was very popular, and had lots of questions that open threads don't typically attract.

Because Turing machines are deterministic, AIXI cannot make hypotheses involving randomness. To model the fair coin, AIXI would come up with increasingly convoluted Turing machines, attempting to compress a bitstring that approaches Kolmogorov randomness as its length approaches infinity. Meanwhile, AIXI would be punished and rewarded randomly.

Just a naïve thought, but maybe it would come up with MWI fairly quickly because of this. (I can imagine this being a beisutsukai challenge – show a student radioactive decay, and see how long it takes them to come up with MWI.) A probabilistic one is probably better for the other reasons brought up, though.

Comment author: David_Gerard 30 July 2013 12:21:47PM 2 points [-]

the stupid questions thread is very full

Might be worth having those more often too; the last one was very popular, and had lots of questions that open threads don't typically attract.

Someone want to start one the day after tomorrow? Run it monthly or something? Let's see what happens.

Comment author: GuySrinivasan 01 August 2013 04:00:09PM *  4 points [-]

(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.

HT Hacker News

Comment author: Adele_L 31 July 2013 02:38:27AM 4 points [-]

What is the function of the karma awards page?

Comment author: Nornagest 31 July 2013 03:46:53AM *  4 points [-]

There's been some discussion about incentivizing people to do useful things for the community by putting up karma bounties, thus removing some of the uncertainty inherent in upvotes. The most comprehensive thread I could find is here; two years old, but LW development grinds slow.

That's my best guess, anyway.

Comment author: wallowinmaya 30 July 2013 10:52:11PM *  3 points [-]

Does anyone else have problems with the appearance of Less Wrong? My account is somehow at the bottom of the site and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else. I think.

Comment author: fubarobfusco 03 August 2013 06:06:00PM 3 points [-]

When reporting problems with a user interface, it often helps to post screenshots. On the web, you can use an image-hosting service such as imgur to make them accessible to people who read your comment.

Comment author: Tenoke 31 July 2013 11:12:29AM *  13 points [-]

After a short discussion on IRC regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it. Please don't let my potential harm discourage you.

Comment author: wedrifid 01 August 2013 06:08:44AM 11 points [-]

After a short discussion on IRC regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it.

We almost need a list for this. This makes half a dozen people I've seen making the same declaration.

Please don't let my potential harm discourage you.

Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you and even less on your preferences. If they believe that the basilisk is worthy of the name, they will expect giving it to you to result in you spreading it to others and thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.

Comment author: pinyaka 31 July 2013 03:38:38PM 5 points [-]

You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?

Comment author: Tenoke 31 July 2013 03:43:49PM *  6 points [-]

Memetic/Information Hazards - the term comes from here. Basically, anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count, for example, as I can just never build a bomb or use other instructions, etc.

Warning: Could be dangerous to look into it

Comment author: Lumifer 31 July 2013 04:44:58PM 6 points [-]

Memetic/Information Hazards

They really should be called Medusas -- since it's you looking at them, not them looking at you.

Comment author: Rukifellth 31 July 2013 11:39:24PM 2 points [-]

I think they both need to make eye contact.

Comment author: Username 31 July 2013 08:24:41PM *  3 points [-]

Could you post how many you receive and your realistic estimation on whether any are actually dangerous? Without specifics of course. (If you take these things seriously, I suppose you should have a dead-man's switch.)

Though for the record I think the LW policy on not being able to discuss basilisks is ridiculous - a big banner at the top of a post saying for example 'Warning - Information Hazard to those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with outright banning of discussion about specific basilisks/medusas, especially seeing as LW is one of the only places where one could have a meaningful conversation about them.

Comment author: HungryHippo 31 July 2013 04:46:10PM *  1 point [-]

Please let us know if you receive anything interesting.

Comment author: Rukifellth 31 July 2013 11:24:33PM *  0 points [-]

You magnificent, magnanimous son of a bitch.

Comment author: Benito 31 July 2013 11:29:39PM 3 points [-]

Well that escalated quickly.

Comment author: Rukifellth 31 July 2013 11:33:19PM 1 point [-]

I think a level of gaiety and excitement is appropriate given the subject.

Comment author: sixes_and_sevens 31 July 2013 02:00:14PM 0 points [-]

Can you tell us what you're trying to achieve with this?

Comment author: Tenoke 31 July 2013 02:16:39PM 5 points [-]

Interested in the responses, since I think I can learn some useful things if anyone actually shares something good. Also, I assign significantly less than 1% chance that anyone will actually tell me anything 'dangerous' - for example, I think Roko's is as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens, if that's your fear.

Comment author: sixes_and_sevens 31 July 2013 02:38:54PM 0 points [-]

It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Comment author: Tenoke 31 July 2013 02:53:22PM 3 points [-]

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Other people expressed a similar view and since I don't mind, I can at least help with satisfying people's curiosity in a way that would cause minimal harm. However, I have found nothing worth talking about after some fairly extensive google searches so I am currently trying to think if there is anyone knowledgeable that I can e-mail (already have a few people on the list) or if there are any good search terms that I haven't tried yet.

Comment author: David_Gerard 31 July 2013 10:02:16PM -1 points [-]

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much 0. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Comment author: FourFire 01 August 2013 09:08:54AM *  2 points [-]

I think that most of the general examples have been mentioned: religion among others, which has the rather mildly harmful "fear of hell" and its own propagation.

I think that any majorly harmful hazard which the general population was susceptible to would cause them all to shortly win Darwin Awards and remove themselves from the genepool.

As such we only have minority groups which are vulnerable to specific stimuli.

Comment author: Leonhart 05 August 2013 07:17:33PM *  1 point [-]

the rather mildly harmful "fear of hell"

The Typical Mind Fallacy is strong with this one.

remove themselves from the genepool

It's a good thing that isn't a mortal sin! Oh no wait.

Comment author: gwern 31 July 2013 10:10:27PM 5 points [-]

(Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

And even more obviously, epilepsy. Yet, I don't understand why you would except them.

'You see, X does not exist, since I choose to ignore all the cases in which X does exist; I hope you'll agree that this argument is watertight once you grant my premises.'

Comment author: asr 31 July 2013 10:29:49PM *  4 points [-]

I think David has a point here.

The cases you two have mentioned of sensory hazards all affect people who have identifiable susceptibilities that those people usually know about in advance and that affect relatively small minorities.

Somebody might have a high confidence that they are non-depressed, non-OCD, non-epileptic, etc. Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

Comment author: gwern 31 July 2013 11:18:26PM 4 points [-]

Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

But this is a different question. You have quietly redefined the question "are there harmful sensations to people?" - to which the answer is overwhelmingly, resoundingly, yes, there absolutely are - to 'are there harmful sensations to a newly redefined subset of people which we will immediately update if anyone produces further examples, so actually what I meant all along was "are there harmful sensations which we don't yet know about?"'

Or to put it more simply: 'Can you provide an example of a harmful sensation we don't yet know about?' Well... If I could produce a harmful sensation, you and David would simply say something like 'ah, well, I guess we now have a recognized medical problem, because look, we [commit suicide / collapse in convulsions / cease functioning / become obsessed with useless actions] if you expose us to X! That's a pretty serious psychiatric problem! But, are there examples of sensory hazards that apply to people who do not have a recognized medical problem?'

To which I can only shake my head no.

Comment author: asr 01 August 2013 04:56:19AM 2 points [-]

I hear you and I'm not trying to play the definition game or wriggle out of this. The way I conceptualized the question -- which I think the original poster had in mind and what I think is relevant to hazard risk assessment -- is more like one of these:

A) "What fraction of the public is seriously vulnerable to sensory hazards",

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

My hunch is that the answers are "less than 20%" and "close to zero." The example of epilepsy didn't shift my beliefs about either; epilepsy is rare and is rarely adult-onset for the non-elderly.

Comment author: gwern 01 August 2013 02:41:20PM 9 points [-]

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

So you're asking, what new medical sensory hazards may be developed in the future.

Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...

epilepsy is rare and is rarely adult-onset for the non-elderly.

There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).

Comment author: David_Gerard 02 August 2013 11:51:40AM *  -1 points [-]

Gwern, this thread is about the Basilisk. Conflating that with epilepsy is knowing equivocation. Don't be dense, thanks.

Comment author: gwern 02 August 2013 02:35:27PM *  0 points [-]

No denser than thou, David:

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much 0. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Who was it who brought up the Motif of Harmful Sensation, which is not limited to Roko's basilisk? Who was it who brought it up in order to define away examples of depression or OCD? Thou, David, thou.

Comment author: drethelin 31 July 2013 01:25:13PM 0 points [-]

Some basilisks are potentially contagious.

Comment author: Tenoke 31 July 2013 01:27:22PM 8 points [-]

Please give me examples.

Comment author: drethelin 31 July 2013 08:22:34PM 7 points [-]

I think the most obvious semi-basilisk example is certain strains of religion. Insofar as it makes you believe you might go to hell, and all your friends are going to hell, these religions will make you feel bad and also make you want to spread them to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual basilisk consequences, but in essence there are meme complexes that contain elements that demand you spread the whole complex. If someone's in possession of such a concept but has defeated it or is in some way immune, it may still be correct for them not to tell you, for fear you are not and will spread it to others once it has worked its will on you.

Comment author: linkhyrule5 31 July 2013 09:54:20PM 1 point [-]

Ever seen one of those "If you don't forward this email to five friends, your (relation) will DIE!!1!!!one!" emails?

Comment author: Omid 02 August 2013 03:24:38PM *  2 points [-]

What's the most credible way to set up an information bounty?

Comment author: Qiaochu_Yuan 02 August 2013 10:14:35PM 2 points [-]

What's an information bounty? What kind of information are you looking for?

Comment author: Omid 03 August 2013 04:56:47AM 1 point [-]

Sorry, I guess the proper term is "truth bounty." The Truth Seal originally offered to arbitrate truth bounties, but it quickly went defunct.

Comment author: NancyLebovitz 30 July 2013 03:35:51PM 2 points [-]

You can't act on any object. You change its environment, and the object will flow.

Kate Stone, TED talk, paper with electronics

This seems like an interesting half truth since you can't change the environment without acting on objects. However, it's possible that the environment is a richer tool of influence than acting directly, and also possible that people are less apt to resent the environment for not doing what they want, therefore less likely to try to force it.

Comment author: linkhyrule5 29 July 2013 11:56:00PM 2 points [-]

Random idea for the Lobian obstacle that turned out not to work, but I decided to post anyway on the off chance someone can salvage it:

Inspired by the human brain's bicameral structure: split the system into two, A and B. A has ((B proves C) -> C), B has ((A proves C) -> C). A, trusting B, can build B' as strong as B; B, trusting A, can build A' as strong as A.

Obvious flaw: A has ((B proves ((A proves C) -> C)) -> ((A proves C) -> C)), so A has ((A proves C) -> C), and vice versa.
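Writing the flaw out in standard provability-logic notation (my transcription, with $\Box_A$ abbreviating "A proves"):

```latex
\begin{align*}
B &\vdash \Box_A C \to C
  && \text{(B's trust schema, an axiom of B)}\\
A &\vdash \Box_B(\Box_A C \to C)
  && \text{(A can verify that B proves B's own axioms)}\\
A &\vdash \Box_B(\Box_A C \to C) \to (\Box_A C \to C)
  && \text{(A's trust schema, instantiated at } \Box_A C \to C\text{)}\\
A &\vdash \Box_A C \to C
  && \text{(modus ponens)}
\end{align*}
```

Löb's theorem then turns the last line into $A \vdash C$ for arbitrary $C$, so A proves everything.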

Comment author: David_Gerard 04 August 2013 03:27:05PM 2 points [-]
Comment author: Epiphany 03 August 2013 04:26:19AM *  1 point [-]

I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives, especially ones that cause people to do unethical things. I was wondering if anyone has explored this in depth or happens to know a term for "perverse incentives that cause people to do unethical things", (regardless of whether it's part of economics or some other subject), as I can't seem to find one.

Comment author: NancyLebovitz 04 August 2013 11:59:37PM *  5 points [-]

Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management has a fair amount about the limits of incentive plans.

From memory: incentives can work for work that's well-defined and can be done by one person. Otherwise, the result is people gaming the system and not cooperating with each other.

I don't remember whether the book covered something I heard about in the 70s or 80s about a car company which had incentives for teams assembling cars rather than an assembly line.

I was told about a shop owned by partners which had an incentive system for bringing in sales for the shifts the partners worked. The result was that the partners wouldn't tell customers to come back if it might be on someone else's shift.

Comment author: John_Maxwell_IV 06 August 2013 04:32:57PM 1 point [-]

Recent effective altruism job openings (all close within the next 10 days):

More info.

Comment author: Jayson_Virissimo 03 August 2013 02:56:39AM 1 point [-]

I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. Please let me know if you are interested and what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC studyhall, etc... ) if you are.

About the course:

An introduction to fundamental data types, algorithms, and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Specific topics covered include: union-find algorithms; basic iterable data types (stacks, queues, and bags); sorting algorithms (quicksort, mergesort, heapsort) and applications; priority queues; binary search trees; red-black trees; hash tables; and symbol-table applications.

Recommended Background:

All you need is a basic familiarity with programming in Java. This course is primarily aimed at first- and second-year undergraduates interested in engineering or science, along with high school students and professionals with an interest (and some background) in programming.

Suggested Readings:

Although the lectures are designed to be self-contained, students wanting to expand their knowledge beyond what we can cover in a 6-week class can find a much more extensive coverage of this topic in our book Algorithms (4th Edition), published by Addison-Wesley.

Course Format:

There will be two lectures (75 minutes each) each week. The lectures are each broken into about 4-6 pieces, separated by interactive quiz questions to help you process and understand the material. In addition, there will be a problem set and a programming assignment each week, and there will be a final exam.
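For anyone wondering what the first topic looks like in practice, here's a minimal sketch of weighted quick-union with path compression in Java (my own illustration of the technique, not the course's reference code):

```java
// Union-find: weighted quick-union with path compression.
public class UnionFind {
    private final int[] parent; // parent[i] = parent of i in its tree
    private final int[] size;   // size[i] = number of nodes rooted at i
    private int count;          // number of connected components

    public UnionFind(int n) {
        parent = new int[n];
        size = new int[n];
        count = n;
        for (int i = 0; i < n; i++) {
            parent[i] = i; // each element starts in its own component
            size[i] = 1;
        }
    }

    // Follow parent links to the root, compressing the path as we go.
    public int find(int p) {
        while (p != parent[p]) {
            parent[p] = parent[parent[p]]; // path halving
            p = parent[p];
        }
        return p;
    }

    public boolean connected(int p, int q) {
        return find(p) == find(q);
    }

    // Merge the components containing p and q.
    public void union(int p, int q) {
        int rootP = find(p), rootQ = find(q);
        if (rootP == rootQ) return;
        // Attach the smaller tree under the larger to keep trees shallow.
        if (size[rootP] < size[rootQ]) {
            int t = rootP; rootP = rootQ; rootQ = t;
        }
        parent[rootQ] = rootP;
        size[rootP] += size[rootQ];
        count--;
    }

    public int count() {
        return count;
    }
}
```

With both weighting and path compression, each operation runs in nearly constant amortized time, which is why the course treats it as a model case study in performance analysis.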

Comment author: niceguyanon 01 August 2013 09:31:30PM *  1 point [-]

My prior is that the chance of someone with an IQ of 100 finding statistical arbitrage opportunities in online poker sufficient to net $100k a year is less than 2%, and likely to diminish quickly as the years go by.

A few reasons include: bots are confirmed to be winning players in both full-ring and NL games; online poker has matured and has better players; rake; and the ratio of new "fish" to grinders is getting smaller.

Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker that bring in new fish?

Comment author: gothgirl420666 01 August 2013 06:16:38PM 1 point [-]

Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.

Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check, and 2) no autosave. Mostly I just use Evernote and Google Docs, though.

Any suggestions?

Comment author: Lumifer 01 August 2013 07:52:42PM 3 points [-]

WordPad is Windows' built-in lightweight word processor. Other alternatives that come to mind are SciTE and Notepad++.