Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of how to do so most effectively.

The traditional wisdom says that if you want to do research, you should get a job at a university. But for the most part, the system seems to be set up so that you first spend a long time working for someone else, researching their ideas, after which you can lead your own group - but then most of your time will be spent applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.

I suspect I would have more time to actually dedicate to research, and could get to it quicker, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend one week each month doing paid work and three weeks doing research, assuming of course that I can pull off the freelance journalism part.

What (dis)advantages does this have compared to the traditional model?

Some advantages:

  • Can spend more time on actual research.
  • A lot more freedom with regard to what kind of research one can pursue.
  • Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
  • Easier to take time off from research if feeling stressed out.

Some disadvantages:

  • Harder to network effectively.
  • Need to get around journal paywalls somehow.
  • Journals might be biased against freelance researchers.
  • Easier to take time off from research if feeling lazy.
  • Harder to combat akrasia.
  • It might actually be better to spend some time doing research under others before doing it on your own.

EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.


I believe that most people hoping to do independent academic research vastly underestimate both the amount of prior work done in their field of interest, and the advantages of working with other very smart and knowledgeable people. Note that it isn't just about working with other people, but with other very smart people. That is, there is a difference between "working at a university / research institute" and "working at a top university / research institute". (For instance, if you want to do AI research in the U.S., you probably want to be at MIT, Princeton, Carnegie Mellon, Stanford, CalTech, or UC Berkeley. I don't know about other countries.)

Unfortunately, my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years (I mention these two fields because I assume they are the most popular on LW, and also because I know the most about them). And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems... (read more)

9Danny_Hintze13y
This might not even be a significant problem when the time does come around. High fluid intelligence only lasts for so long, and thus using more crystallized intelligence later on in life to guide research efforts rather than directly performing research yourself is not a bad strategy if the goal is to optimize for the actual research results.
4jsteinhardt13y
Those are roughly my thoughts as well, although I'm afraid that I only believe this to rationalize my decision to go into academia. While the argument makes sense, there are definitely professors who express frustration with their position. What does seem like pretty sound logic is that if you could get better results without a research group, you wouldn't form a research group. So you probably won't run into the problem of achieving suboptimal results from administrative overhead (you could always just hire fewer people), but you might run into the problem of doing work that is less fun than it could be. Another point is that plausibly some other profession (corporate work?) would have less administrative overhead per unit of efficiency, but I don't actually believe this to be true.
7nhamann13y
Could you point me towards some articles here? I fully admit I'm unaware of most of this progress, and would like to learn more.

A good overview would fill up a post on its own, but some relevant topics are given below. I don't think any of it is behind a paywall, but if it is, let me know and I'll link to another article on the same topic. In cases where I learned about the topic by word of mouth, I haven't necessarily read the provided paper, so I can't guarantee the quality for all of these. I generally tried to pick papers that either gave a survey of progress or solved a specific clearly interesting problem. As a result you might have to do some additional reading to understand some of the articles, but hopefully this is a good start until I get something more organized up.

Learning:

Online concept learning: rational rules for concept learning [a somewhat idealized situation but a good taste of the sorts of techniques being applied]

Learning categories: Bernoulli mixture model for document classification (a brief illustrative sketch follows below), spatial pyramid matching for images

Learning category hierarchies: nested Chinese restaurant process, hierarchical beta process

Learning HMMs (hidden Markov models): HDP-HMMs this is pretty new so the details haven't been hammered out, but the article should give you a taste of how people are approaching th... (read more)
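As a concrete taste of one of the topics listed above - the Bernoulli mixture model for document classification - here is a minimal illustrative sketch (my addition, not part of the original comment) of fitting such a mixture to a binary document-term matrix with EM. The function name, smoothing constants and synthetic data are all made up for illustration.

    import numpy as np

    def fit_bernoulli_mixture(X, n_components=2, n_iter=50, seed=0):
        # X: (n_docs, n_terms) binary matrix. Returns mixing weights, per-component
        # term probabilities, and per-document responsibilities.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(n_components, 1.0 / n_components)           # mixing weights
        theta = rng.uniform(0.25, 0.75, size=(n_components, d))  # P(term present | component)
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each document.
            log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
            log_p -= log_p.max(axis=1, keepdims=True)             # stabilise before exponentiating
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights and term probabilities, lightly smoothed.
            nk = resp.sum(axis=0)
            pi = nk / n
            theta = (resp.T @ X + 1e-3) / (nk[:, None] + 2e-3)
        return pi, theta, resp

    # Toy usage: cluster synthetic binary "documents" drawn from two vocabularies.
    rng = np.random.default_rng(1)
    vocab_a, vocab_b = rng.random(30) < 0.3, rng.random(30) < 0.3
    docs = np.vstack([rng.random((20, 30)) < 0.8 * vocab_a,
                      rng.random((20, 30)) < 0.8 * vocab_b]).astype(float)
    weights, term_probs, resp = fit_bernoulli_mixture(docs, n_components=2)
    print(resp.argmax(axis=1))  # hard cluster assignment per document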

2Perplexed13y
I'm not planning to do AI research, but I do like to stay no more than ~10 years out of date regarding progress in fields like this. At least at the intelligent-outsider level of understanding. So, how do I go about getting and staying almost up-to-date in these fields? Is MacKay's book a good place to start on machine learning? How do I get an unbiased survey of cognitive science? Are there blogs that (presuming you follow the links) can keep you up to date on what is getting a buzz?
3jsteinhardt13y
I haven't read MacKay myself, but it looks like it hits a lot of the relevant topics. You might consider checking out Tom Griffiths' website, which has a reading list as well as several tutorials.
1sark13y
We should try to communicate with long letters (snail mail) more. Academics seem to have done that a lot in the past. From what I have seen these exchanges seem very productive, though this could be a sampling bias. I don't see why there aren't more 'personal communication' cites, except for them possibly being frowned upon.
1jsteinhardt13y
Why use snail mail when you can use skype? My lab director uses it regularly to talk to other researchers.
3sark13y
Because it is written. Which makes it good for communicating complex ideas. The tradition behind it also lends it an air of legitimacy. Researchers who don't already have a working relationship with each other will take each other's letters more seriously.
3jsteinhardt13y
Upvoted for the good point about communication. Not sure I agree with the legitimacy part (what is p(Crackpot | Snail Mail) compared to p(Crackpot | Email)? I would guess higher).
2Sniffnoy13y
What I'm now wondering is, how does using email vs. snail mail affect the probability of using green ink, or its email equivalent...
1sark13y
Heh you are probably right. It just seemed strange to me how researchers cannot just communicate with each other as long as they have the same research interests. My first thought was that it might have been something to do with status games, where outsiders are not allowed. I suppose some exchanges require rapid and frequent feedback. But then, like you mentioned, wouldn't Skype do?
1jsteinhardt13y
I'm not sure what the general case looks like, but the professors who I have worked with (who all have the characteristic that they do applied-ish research at a top research university) are both constantly barraged by more e-mails than they can possibly respond to. I suspect that as a result they limit communication to sources that they know will be fruitful. Other professors in more theoretical fields (like pure math) don't seem to have this problem, so I'm not sure why they don't do what you suggest (although some of them do). And I am not sure that all professors run into the same problem as I have described, even in applied fields.
0Desrtopa13y
"In the past" as in before they had alternative methods of long distance communication, or after?

(Shrugs.)

Your decision. The Singularity Institute does not negotiate with terrorists.

WFG, please quit with the 'increase existential risk' idea. Allowing Eliezer to claim moral high ground here makes the whole situation surreal.

A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically. They are, after all, the group who is doing harm (to you according to your values). Opposing them makes sense (given your premises.)


After several years as a post-doc I am facing a similar choice.

If I understand correctly you have no research experience so far. I'd strongly suggest completing a doctorate because:

  • you can use that time to network and establish a publication record
  • most advisors will allow you as much freedom as you can handle, particularly if you can obtain a scholarship so you are not sucking their grant money. Choose your advisor carefully.
  • you may well get financial support that allows you to work full time on your research for at least 4 years with minimal accountability
  • if you want, you can practice teaching and grant applications to taste how onerous they would really be
  • once you have a doctorate and some publications, it probably won't be hard to persuade a professor to offer you an honorary (unpaid) position which gives you an institutional affiliation, library access, and maybe even a desk. Then you can go ahead with freelancing, without most of the disadvantages you cite.

You may also be able to continue as a post-doc with almost the same freedom. I have done this for 5 years. It cannot last forever, though, and the longer you go on, the more people will expect you to devote yourself to grant applications, teaching and management. That is why I'm quitting.

5Kaj_Sotala13y
Huh. That's a fascinating idea, one which had never occurred to me. I'll have to give this suggestion serious consideration.

Ron Gross's The Independent Scholar's Handbook has lots of ideas like this. A lot of the details in it won't be too useful, since it is mostly about history and the humanities, but quite a bit will be. It is also a bit old to cover more recent developments, since there was almost no internet in 1993.

4James_Miller13y
Or become a visiting professor in which you teach one or two courses a year in return for modest pay, affiliation and library access.

Dude, don't be an idiot. Really.

I'm putting the finishing touches on a future Less Wrong post about the overwhelming desirability of casually working in Australia for 1-2 years vs "whatever you were planning on doing instead". It's designed for intelligent people who want to earn more money, have more free time, and have a better life than they would realistically be able to get in the US or any other 1st world nation without a six-figure, part-time career... something which doesn't exist. My world saving article was actually just a prelim for this.

Are you going to accompany the "this is cool" part with a "here's how" part? I estimate that would cause it to influence an order of magnitude more people, by removing an inconvenience that looks at least trivial and might be greater.

4David_Gerard13y
I'm now thinking of why Australian readers should go to London and live in a cramped hovel in an interesting place. I feel like I've moved to Ankh-Morpork.
1Mardonius13y
Simple! Tell them they too can follow the way of Lu-Tze, The Sweeper! For is it not said, "Don't knock a place you've never been to"
3erratio13y
As someone already living in Australia and contemplating a relocation to the US for study purposes, I would be extremely interested in this article
1David_Gerard13y
Come to England! It's small, cramped and expensive! The stuff here is amazing, though. (And the GBP is taking a battering while the AUD is riding high.)
0Desrtopa13y
I was under the impression that England was quite difficult to emigrate to?
0David_Gerard13y
My mother's English, so I'm British by paperwork. Four-year working or study visas for Australians without a British parent are not impossible and can also be converted to a working one or even permanent residency if whatever hoops are in place at the time happen to suit.
2diegocaleiro13y
Hope face. Let's see if you can beat my next 2 years in Brazil..... I've been hoping for something to come along (trying to defeat my status quo bias) but it has been really hard to find something comparable. In fact, if this comment is upvoted enough, I might write a "How to be effective from wherever you are currently outside 1st world countries" post...... because if only I knew, life would be just, well, perfect. I assume many other latinos, africans, filipinos, and slavic fellows feel the same way!
0lukeprog13y
Louie? I was thinking about this years ago and would love to know more details. Hurry up and post it! :)
0katydee13y
Color me very interested!

What's frustrating is that I would have had no idea it was deleted, and would just have assumed it wasn't interesting to anyone, had I not checked after reading the above. I'd much rather be told to delete the relevant portions of the comment - let's at least have precise censorship!

Wow. Even the people being censored don't know it. That's kinda creepy!

his comment led me to discover that quite a long comment I made a little bit ago had been deleted entirely.

How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?

I think it's a standard tool: trollish comments simply look ignored, as far as the trolls can tell. But I think it's impolite to delete comments made in good faith without notification and usable guidelines for cleaning up and reposting. (Hint hint.)

5Jack13y
I only made one comment on the subject and I was rather confused that it was being ignored. I also knew I might have said too much about the Roko post and actually included a sentence saying that if I crossed the line I'd appreciate being told to edit it instead of having the entire thing deleted. So I just checked that one comment in particular. If other comments of mine have been deleted I wouldn't know about it, though this was the only comment in which I have discussed the Roko post.
5[anonymous]13y
I doubt that this is a deliberate feature.

Consider taking a job as a database/web developer at a university department. This gets you around journal paywalls, and is a low-stress job (assuming you have or can obtain above-average coding skills) that leaves you plenty of time to do your research. (My wife has such a job.) I'm not familiar with freelance journalism at all, but I'd still guess that going the software development route is lower risk.

Some comments on your list of advantages/disadvantages:

  • Harder to network effectively. - I guess this depends on what kind of research you want to do. For the areas I've been interested in, networking does not seem to matter much (unless you count participating in online forums as networking :).
  • Journals might be biased against freelance researchers. - I publish my results online, informally, and somehow they've usually found an interested audience. Also, the journals I'm familiar with require anonymous submissions. Is this not universal?
  • Harder to combat akrasia. - Actually, might be easier.

A couple other advantages of the non-traditional path:

  • If you get bored you can switch topics easily.
  • I think it's crazy to base one's income on making research progress. How do you stay o
... (read more)

Well I guess this is our true point of disagreement. I went to the effort of finding out a lot, went to SIAI and Oxford to learn even more, and in the end I am left seriously disappointed by all this knowledge. In the end it all boils down to:

"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"

I feel quite strongly that this knowledge is not a worthy thing to have sunk 5 years of my life into getting. I don't know, XiXiDu, you might prize such knowledge, including all the specifics of how that works out exactly.

If you really strongly value the specifics of this, then yes you probably would on net benefit from the censored knowledge, the knowledge that was never censored because I never posted it, and the knowledge that I never posted because I was never trusted with it anyway. But you still probably won't get it, because those who hold it correctly infer that the expected value of releasing it is strongly negative from an altruist's perspective.

The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.

I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.

TL;DR: I want to give you a hug.

-4Roko13y
I disagree with this argument. Pretty strongly. No selfish incentive to speak of.

most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed,

So? They're just kids!

(or)

He glanced over toward his shoulder, and said, "That matter to you?"

Caw!

He looked back up and said, "Me neither."

4Roko13y
I mean, I guess I shouldn't complain that this doesn't bother you, because you are, in fact, helping me by doing what you do and being very good at it - but that doesn't stop it being demotivating for me! I'll see what I can do regarding quant jobs.
1[anonymous]13y
I liked the first response better.
5katydee13y
This isn't meant as an insult, but why did it take you 5 years of dedicated effort to learn that?
6Roko13y
Specifics. Details. The lesson of science is that details can sometimes change the overall conclusion. Also some amount of nerdiness, meaning that the statements about human nature weren't obvious to me.
4timtyler13y
That doesn't sound right to me. Indeed, it sounds as though you are depressed :-( Unsolicited advice over the public internet is rather unlikely to help - but maybe focus for a bit on what you want - and the specifics of how to get there.
3Jack13y
Upvoted for the excellent summary!
5katydee13y
I'm curious about the "future is an impending train wreck" part. That doesn't seem particularly accurate to me.
3Roko13y
Maybe it will all be OK. Maybe the trains fly past each other on separate tracks. We don't know. There sure as hell isn't a driver though. All the inside-view evidence points to bad things, with the exception that Big Worlds could turn out nicely. Or horribly.
-1timtyler13y
Perhaps try this one: The Rational Optimist: How Prosperity Evolves

The largest disadvantage to not having, essentially, an apprenticeship is the stuff you don't learn.

Now, if you want to research something where all you need is a keen wit, and there's not a ton of knowledge for you to pick up before you start... sure, go ahead. But those topics are few and far between. (EDIT: oh, LW-ish stuff. Meh. Sure, then, I guess. I thought you meant researching something hard >:DDDDD

No, but really, if smart people have been doing research there for 50 years and we don't have AI, that means that "seems easy to make progress" is a dirty lie. It may mean that other people haven't learned much to teach you, though - you should put some actual effort (get responses from at least two experts) into finding out if this is the case.)

Usually, an apprenticeship will teach you:

  • What needs to be done in your field.

  • How to write, publicize and present your work. The communication protocols of the community. How to access the knowledge of the community.

  • How to use all the necessary equipment, including the equipment that builds other equipment.

  • How to be properly rigorous - a hard one in most fields, you have to make it instinctual rather than just known.

  • The subtle tricks an experienced researcher uses to actually do research - all sorts of things you might not have noticed on your own.

  • And more!

Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.

Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on. (FHI costs $400k/year, which isn't such a huge amount as to be unattainable by Kaj or a few Kaj-like entities collaborating)

5shokwave13y
Sounds like a good bet even if you are brilliant. Make money, use money to produce academic institute, do your research in concert with academics at your institute. This solves all problems of needing to be part of academia, and also solves the problem of academics doing lots of unnecessary stuff - at your institute, academics will not be required to do unnecessary stuff.

Maybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies' fate is decided then.

25 galaxies per second. Wow.

8PeerInfinity13y
I'm surprised that no one has asked Roko where he got these numbers from. Wikipedia says that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel.

But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies, multiplied by the distance to the edge of the observable universe, divided by the speed of light?

calculating...

Wikipedia says that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years short scale, i.e. 4.6 × 10^10 light years) in any direction. Google Calculator says 80 billion galaxies / 46 billion light years = 1.73 galaxies per year, or 5.48 × 10^-8 galaxies per second. So no, that's not it.

If I'm going to allow my mind to be blown by this number, I would like to know where the number came from.
2Caspian13y
I also took a while to understand what was meant, so here is my understanding of the meaning.

Assumptions:

  • There will be a singularity in 100 years.
  • If the proposed research is started now, it will be a successful singularity, e.g. friendly AI.
  • If the proposed research isn't started by the time of the singularity, it will be an unsuccessful (negative) singularity, but still a singularity.
  • The probability of the successful singularity linearly decreases with the time when the research starts, from 100 percent now to 0 percent in 100 years' time.

A 1 in 80 billion chance of saving 80 billion galaxies is equivalent to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
2Roko13y
I meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.
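To make the arithmetic explicit, here is a minimal back-of-envelope sketch in Python (my addition, not part of the original thread); it assumes the 80 billion galaxy count and 100-year horizon quoted above, plus Roko's suggested order-of-magnitude discount for lightspeed reachability.

    # Rough check of the "25 galaxies per second" figure, under the thread's assumptions.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds
    galaxies = 80e9                            # observable-universe estimate quoted above
    years_to_singularity = 100                 # assumption from Roko's comment

    rate = galaxies / (years_to_singularity * SECONDS_PER_YEAR)
    print(round(rate, 1))                      # ~25.3 galaxies per second of delay

    # With the order-of-magnitude reachability discount:
    print(round(rate / 10, 1))                 # ~2.5 galaxies per second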
0FAWS13y
Hmm, by the second wikipedia link there is no basis for the 80 billion galaxies since only a relatively small fraction of the observable universe (4.2%?) is reachable if limited by the speed of light, and if not the whole universe is probably at least 10^23 times larger (by volume or by radius?).
5shokwave13y
Guh. Every now and then something reminds me of how important the Singularity is. Time to reliable life extension is measured in lives per minute, time to Singularity is measured in galaxies per second.
1MartinB13y
Now that's a way to eat up your brain.
1Roko13y
Well conservatively assuming that each galaxy supports lives at 10^9 per sun per century (1/10th of our solar system), that's already 10^29 lives per second right there. And assuming utilization of all the output of the sun for living, i.e. some kind of giant spherical shell of habitable land, we can add another 12 orders of magnitude straight away. Then if we upload people that's probably another 10 orders of magnitude. Probably up to 10^50 lives per second, without assuming any new physics could be discovered (a dubious assumption). If instead we assume that quantum gravity gives us as much of an increase in power as going from newtonian physics to quantum mechanics did, we can pretty much slap another 20 orders of magnitude onto it, with some small probability of the answer being "infinity".
1XFrequentist13y
In what I take to be a positive step towards viscerally conquering my scope neglect, I got a wave of chills reading this.
0[anonymous]13y
What's your P of "the fate of all 80 billion galaxies will be decided on Earth in the next 100 years"?
0Vladimir_Nesov13y
About 10% (if we ignore existential risk, which is a way of resolving the ambiguity of "will be decided"). Multiply that by opportunity cost of 80 billion galaxies.
1David_Gerard13y
Could you please detail your working to get to this 10% number? I'm interested in how one would derive it, in detail.
0Vladimir_Nesov13y
I read the question as asking about the probability that we'll be finishing an FAI project in the next 100 years. Dying of an engineered virus doesn't seem like an example of "deciding the fate of 80 billion galaxies", although it does determine that fate. FAI looks really hard. Improvements in mathematical understanding to bridge comparable gaps in understanding can take at least many decades. I don't expect a reasonable attempt at actually building an FAI anytime soon (crazy potentially world-destroying AGI projects go in the same category as engineered viruses). One possible shortcut is ems, that effectively compress the required time, but I estimate that they probably won't be here for at least 80 more years, and then they'll still need time to become strong enough and break the problem. (By that time, biological intelligence amplification could take over as a deciding factor, using clarity of thought instead of lots of time to think.)
-1[anonymous]13y
My question has only a little bit to do with the probability that an AI project is successful. It has mostly to do with P(universe goes to waste | AI projects are unsuccessful). For instance, couldn't the universe go on generating human utility after humans go extinct?
2ata13y
How? By coincidence? (I'm assuming you also mean no posthumans, if humans go extinct and AI is unsuccessful.)
2[anonymous]13y
Aliens. I would be pleased to learn that something amazing was happening (or was going to happen, long "after" I was dead) in one of those galaxies. Since it's quite likely that something amazing is happening in one of those 80 billion galaxies, shouldn't I be pleased even without learning about it? Of course, I would be correspondingly distressed to learn that something horrible was happening in one of those galaxies.
0Roko13y
Some complexities regarding "decided" since physics is deterministic, but hand waving that aside, I'd say 50%.
1[anonymous]13y
With high probability, many of those galaxies are already populated. Is that irrelevant?
-1Roko13y
I disagree. I claim that the probability of >50% of the universe being already populated (using the space of simultaneity defined by a frame of reference comoving with earth) is maybe 10%.
-1[anonymous]13y
"Already populated" is a red herring. What's the probability that >50% of the universe will ever be populated? I don't see any reason for it to be sensitive to how well things go on Earth in the next 100 years.
1Roko13y
I think it is likely that we are the only spontaneously-created intelligent species in the entire 4-manifold that is the universe, space and time included (excluding species which we might create in the future, of course).
1[anonymous]13y
I'm curious to know how likely, and why. But do you agree that aliens are relevant to evaluating astronomical waste?
0timtyler13y
That seems contrary to the Self-Indication Assumption (http://en.wikipedia.org/wiki/Self-Indication_Assumption). Do you have a critique - or a supporting argument?
5Roko13y
Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about "anthropics", e.g. that if your fishing net has holes 2 inches big, don't expect to catch fish smaller than 2 inches wide.)
3timtyler13y
I don't think you can really avoid anthropic ideas - or the universe stops making sense. Some anthropic ideas can be challenging - but I think we have got to try. Anyway, you did the critique - but didn't go for a supporting argument. I can't think of very much that you could say. We don't have very much idea yet about what's out there - and claims to know such things just seem over-confident.
1Roko13y
Basically Rare Earth seems to me to be the only tenable solution to Fermi's paradox.
0timtyler13y
Fermi's paradox implying no aliens surely applies within-galaxy only. Many galaxies are distant, and intelligent life forming there concurrently (or long before us) is quite compatible with it not having arrived on our doorsteps yet - due to the speed of light limitation. If you think we should be able to at least see life in distant galaxies, then, in short, not really - or at least we don't know enough to say yea or nay on that issue with any confidence yet.
0Roko13y
The Andromeda Galaxy is 2.5 million light-years away. The universe is about 1250 million years old. Therefore that's not far enough away to protect us from colonizing aliens travelling at 0.5c or above.
2timtyler13y
The universe is about 13,750 million years old. The Fermi argument suggests that - if there were intelligent aliens in this galaxy, they should probably have filled it by now - unless they originated very close to us in time - which seems unlikely. The argument applies much more weakly to galaxies, because they are much further away, and they are separated from each other by huge regions of empty space. Also, the Andromeda Galaxy is just one galaxy. Say only one galaxy in 100 has intelligent life - and the Andromeda Galaxy isn't among them. That bumps the required distance to be travelled up to 10 million light years or so. Even within this galaxy, the Fermi argument is not that strong. Maybe intelligent aliens formed in the last billion years, and haven't made it here yet - because space travel is tricky, and 0.1c is about the limit. The universe is only about 14 billion years old. For some of that time there were not too many second-generation stars. The odds are against there being aliens nearby - but they are not that heavily stacked. For other galaxies, the argument is much, much less compelling.
0[anonymous]13y
There are strained applications of anthropics, like the doomsday argument. "What happened here might happen elsewhere" is much more innocuous.
1[anonymous]13y
There are some more practical and harmless applications as well. In Nick Bostrom's Anthropic Bias, for example, there is an application of the Self-Sampling Assumption to traffic analysis.
1timtyler13y
Bostrom says: "Cars in the next lane really do go faster"
0Vladimir_Nesov13y
I agree.
2[anonymous]13y
Even Nick Bostrom, who is arguably the leading expert on anthropic problems, rejects SIA for a number of reasons (see his book Anthropic Bias). That alone is a pretty big blow to its credibility.
0timtyler13y
That is curious. Anyway, the self-indication assumption seems fairly straight-forwards (as much as any anthropic reasoning is, anyway). The critical material from Bostrom on the topic I have read seems unpersuasive. He doesn't seem to "get" the motivation for the idea in the first place.
0Kevin13y
If you think there is a significant probability that an intelligence explosion is possible or likely, then that question is sensitive to how well things go on Earth in the next 100 years.
4[anonymous]13y
However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They'll make whatever use of the 80 billion galaxies that they can--will they be wasting them? If Earth wins by a hair, or by a lot, we'll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.
-1timtyler13y
That seems like a rather exaggerated sense of importance. It may be a fun fantasy in which the fate of the entire universe hangs in the balance in the next century - but do bear in mind the disconnect between that and the real world.
5shokwave13y
Out of curiosity: what evidence would convince you that the fate of the entire universe does hang in the balance?
2Manfred13y
No human-comparable aliens, for one. Which seems awfully unlikely, the more we learn about solar systems.
-2timtyler13y
"Convince me" - with some unspecified level of confidence? That is not a great question :-| We lack knowlegde of the existence (or non-existence) of aliens in other galaxies. Until we have such knowledge, our uncertainty on this matter will necessarily be high - and we should not be "convinced" of anything.
1shokwave13y
What evidence would convince you, with 95% confidence, that the fate of the universe hangs in the balance in this next century on Earth? You may specify evidence such as "strong evidence that we are completely alone in the universe" even if you think it is unlikely we will get such evidence.
-2timtyler13y
I did get the gist of your question the first time - and answered accordingly. The question takes us far into counter-factual territory, though.
1shokwave13y
I was just curious to see if you rejected the fantasy on principle, or if you had other reasons.
1Larks13y
Unfortunately, FHI seems to have filled the vacancies it advertised earlier this month.
1Alexandros13y
Are you talking about these? (http://www.fhi.ox.ac.uk/news/2010/vacancies) This seems odd, the deadline for applications is on Jan 12th.
0Larks13y
Oh yes - strange, I swear it said no vacancies...
0Roko13y
Sure, so this favors the "Create a new James Martin" strategy.
[-][anonymous]13y100

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence):

But I have no reason to doubt Eliezer's honesty or intelligence in forming those expectations.

Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.

3[anonymous]13y
That's definitely a fair objection, and I'll answer: I personally trust Eliezer's honesty, and he is obviously much smarter than myself. However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly. I agree. The above paragraph is my objection.
2JGWeissman13y
The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it. If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.
6[anonymous]13y
That's definitely the root of the problem. In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong. I don't think he's got a hidden agenda; I'm concerned about his mistakes. Though I'm not astute enough to point them out, I think the LW community as a whole is.
5JGWeissman13y
I have a response to this that I don't actually want to say, because it could make the idea more dangerous to those who have heard about it but are currently safe due to not fully understanding it. I find that predicting that this sort of thing will happen makes me reluctant to discuss this issue, which may explain why of those who are talking about it, most seem to think the banning was wrong. Given that there has been one banned post. I think that his mistakes are much less of a problem than overwrought concern about his mistakes.
1[anonymous]13y
If you have a reply, please PM me. I'm interested in hearing it.
1JGWeissman13y
Are you interested in hearing it if it does give you a better understanding of the dangerous idea that you then realize is in fact dangerous?
0[anonymous]13y
It may not matter anymore, but yes, I would still like to hear it.
0JGWeissman13y
In this case, the same point has been made by others in this thread.
2Vladimir_Nesov13y
Why do you believe that? FAI is full of potential for dangerous ideas. In its full development, it's an idea with the power to rewrite 100 billion galaxies. That's gotta be dangerous.
[-][anonymous]13y140

Let me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.

Pretty much correct in this case. Roko's original post was, in fact, wrong; correctly programmed FAIs should not be a threat.

(FAIs shouldn't be a threat, but a theory to create a FAI will obviously have at least potential to be used to create uFAIs. FAI theory will have plenty of dangerous ideas.)

6XiXiDu13y
I want to highlight at this point how you think about similar scenarios: That isn't very reassuring. I believe that if you had the choice of either letting a paperclip maximizer burn the cosmic commons or torturing 100 people, you'd choose to torture 100 people. Wouldn't you? They are always a threat to some beings - for example, beings who oppose CEV or other AIs. Any FAI that would run a human version of CEV would be a potential existential risk to any alien civilisation. If you accept all this possible oppression in the name of what is subjectively friendliness, how can I be sure that you don't favor torture for some humans that support CEV, in order to ensure it? After all, you already allow for the possibility that many beings are being oppressed or possibly killed.
4wedrifid13y
This seems to be true and obviously so.
-1Vladimir_Nesov13y
Narrowness. You can parry almost any statement like this, by posing a context outside its domain of applicability.
0[anonymous]13y
Another pointless flamewar. This part makes me curious though: There are two ways I can interpret your statement: a) you know a lot more about decision theory than you've disclosed so far (here, in the workshop and elsewhere); b) you don't have that advanced knowledge, but won't accept as "correct" any decision theory that leads to unpalatable consequences like Roko's scenario. Which is it?
8Vladimir_Nesov13y
From my point of view, and as I discussed in the post (this discussion got banned with the rest, although it's not exactly on that topic), the problem here is the notion of "blackmail". I don't know how to formally distinguish that from any other kind of bargaining, and the way in which Roko's post could be wrong that I remember required this distinction to be made (it could be wrong in other ways, but that I didn't notice at the time and don't care to revisit). (The actual content edited out and posted as a top-level post.)
2cousin_it13y
(I seem to have a talent for writing stuff, then deleting it, and then getting interesting replies. Okay. Let it stay as a little inference exercise for onlookers! And please nobody think that my comment contained interesting secret stuff; it was just a dumb question to Eliezer that I deleted myself, because I figured out on my own what his answer would be.) Thanks for verbalizing the problems with "blackmail". I've been thinking about these issues in the exact same way, but made no progress and never cared enough to write it up.
4Perplexed13y
Perhaps the reason you are having trouble coming up with a satisfactory characterization of blackmail is that you want a definition with the consequence that it is rational to resist blackmail and therefore not rational to engage in blackmail. Pleasant though this might be, I fear the universe is not so accommodating.

Elsewhere VN asks how to unpack the notion of a status quo, and tries to characterize blackmail as a threat which forces the recipient to accept less utility than she would have received in the status quo. I don't see any reason in game theory why such threats should be treated any differently than other threats. But it is easy enough to define the 'status quo'. The status quo is the solution to a modified game - modified in such a way that the time between moves increases toward infinity and the current significance of those future moves (be they retaliations or compensations) is discounted toward zero. A player who lives in the present and doesn't respond to delayed gratification or delayed punishment is pretty much immune to threats (and to promises).
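A toy numerical illustration of that last point (my own sketch, not from the comment): a victim who discounts a delayed punishment by a factor delta pays up only while the discounted punishment exceeds the demand, so in the delta-goes-to-zero limit that defines the status quo above, the threat simply stops working. All numbers here are made up.

    def victim_pays(demand, punishment_cost, delta):
        # The victim pays iff paying is cheaper than the delayed punishment,
        # whose present significance is discounted by delta.
        return demand < delta * punishment_cost

    demand, punishment_cost = 10.0, 100.0
    for delta in (1.0, 0.5, 0.11, 0.0):   # delta -> 0 is the status-quo limit described above
        print(delta, victim_pays(demand, punishment_cost, delta))
    # Once delta falls below demand / punishment_cost (here 0.1), refusing dominates,
    # so a blackmailer who anticipates this gains nothing by issuing the threat.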
7David_Gerard13y
On RW it's called Headless Chicken Mode, when the community appears to go nuts for a time. It generally resolves itself once people have the yelling out of their system. The trick is not to make any decisions based on the fact that things have gone into headless chicken mode. It'll pass. [The comment this is in reply to was innocently deleted by the poster, but not before I made this comment. However, I think I'm making a useful point here, so would prefer to keep this comment.]
2Jack13y
This is certainly the case with regard to the kind of decision theoretic thing in Roko's deleted post. I'm not sure if it is the case with all ideas that might come up while discussing FAI.

An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.

The down side is that you won't be able to network quite as effectively as if you were at a more prestigious university and the pay isn't quite as good.

3utilitymonster13y
Don't forget about the ridiculous levels of teaching you're responsible for in that situation. Lots worse than at an elite institution.
3Jordan13y
Not necessarily. I'm not referring to no-research universities, which do have much higher teaching loads (although still not ridiculous; teaching 3 or 4 classes a semester is hardly strenuous). I'm referring to research universities that aren't in the top 100, but which still push out graduate students. My undergrad alma mater, Kansas University, for instance. Professors teach 1 or 2 classes a semester, with TA support (really, when you have TAs, teaching is not real work). They are still expected to do research, but the pressure is much less than at a top 50 school.

I pointed out to Roko by PM that his comment couldn't be doing his cause any favors, but did not ask him to delete it, and would have discouraged him from doing so.

2waitingforgodel13y
I can't be sure, but it sounded from his comment like he'd gotten a stronger message from someone high up in SIAI -- though of course, I probably like that theory because of the Bayesian Conspiracy aspects. Would you mind PM'ing me (or just posting) the message you sent? Also, does the above fit with your experiences at SIAI? I find it hard, but not impossible, to believe that Roko just described something akin to standard hiring procedure, and would very much like to hear an inside (and presumably saner) account.
9MichaelAnissimov13y
Most people who actually work full-time for SIAI are too busy to read every comments thread on LW. In some cases, they barely read it at all. The wacky speculation here about SIAI is very odd -- a simple visit in most cases would eliminate the need for it. Surely more than a hundred people have visited our facilities in the last few years, so plenty of people know what we're really like in person. Not very insane or fanatical or controlling or whatever generates a good comic book narrative.
6Nick_Tarleton13y
PMed the message I sent. Certainly not anything like standard hiring procedure.
8waitingforgodel13y
Thanks Nick. Please pardon my prying, but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc. I've seen evidence of fanaticism, but have always been confused about what the source is (did they start that way, or were they molded?). Basically, I would very much like to know what your experience has been as you've gotten closer to SIAI. I'm sure I'm not the only (past, perhaps future) donor who would appreciate the air being cleared about this.

Please pardon my prying,

No problem, and I welcome more such questions.

but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc.

No; if anything, I see explicit advocacy, as Carl describes, against natural emergent fanaticism (see below), and people becoming less fanatical to the extent that they're influenced by group norms. I don't see emergent individual fanaticism generating significant unhealthy group dynamics like these. I do see understanding and advocacy of indirect utilitarianism as the proper way to 'shut up and multiply'. I would be surprised if I saw any of the specific things you mention clearly going on, unless non-manipulatively advising people on how to live up to ideals they've already endorsed counts. I and others have at times felt uncomfortable pressure to be more altruistic, but this is mostly pressure on oneself — having more to do with personal fanaticism and guilt than group dynamics, let alone deliberate manipulation — and creating a sense of pressure is generally recognized as harmf... (read more)

7Larks13y
I was there for a summer and don't think I was ever even asked to donate money.
0waitingforgodel13y
Ahh. I was trying to ask about Cialdini-style influence techniques.
6Roko13y
Very little, if any.
0wedrifid13y
What exactly is Roko's cause by your estimation? I wasn't aware he had one, at least in the secretive sense.
2Nick_Tarleton13y
I meant SIAI.

But for the most part, the system seems to be set up so that you first spend a long time working for someone else, researching their ideas, after which you can lead your own group - but then most of your time will be spent applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.

This depends on the field, university, and maybe country. In many cases, doing your own research is the main focus f... (read more)

I would choose that knowledge if there was the chance that it wouldn't find out about it. As far as I understand your knowledge of the dangerous truth, it just increases the likelihood of suffering, it doesn't make it guaranteed.

I don't understand your reasoning here -- bad events don't get a "flawless victory" badness bonus for being guaranteed. A 100% chance of something bad isn't much worse than a 90% chance.

0XiXiDu13y
I said that I wouldn't want to know it if a bad outcome was guaranteed. But if it would make a bad outcome possible, but very-very-unlikely to actually occur, then the utility I assign to knowing the truth would outweigh the very unlikely possibility of something bad happening.
0Roko13y
No, dude, you're wrong

One big disadvantage is that you won't be interacting with other researchers from whom you can learn.

Research seems to be an insiders' game. You only ever really see the current state of research in informal settings like seminars and lab visits. Conference papers and journal articles tend to give strange, skewed, out-of-context projections of what's really going on, and books summarise important findings long after the fact.

4Danny_Hintze13y
At the same time however, you might be able to interact with researchers more effectively. For example, you could spend some of those research weeks visiting selected labs and seminars and finding out what's up. It's true that this would force you to be conscientious about opportunities and networking, but that's not necessarily a bad thing. Networks formed with a very distinct purpose are probably going to outperform those that form more accidentally. You wouldn't be as tied down as other researchers, which could give you an edge in getting the ideas and experiences you need for your research, while simultaneously making you more valuable to others when necessary. (For example, imagine if one of your important research contacts needs two weeks of solid help on something. You could oblige, whereas others with less fluid obligations could not.)

The compelling argument for me is that knowing about bad things is useful to the extent that you can do something about them, and it turns out that people who don't know anything (call them "non-cognoscenti") will probably free-ride their way to any benefits of action on the collective-action problem that is at issue here, whilst avoiding drawing any particular attention to themselves ==> avoiding the risks.

Vladimir Nesov doubts this prima facie, i.e. he asks "how do you know that the strategy of being a completely inert player is best?".

-- to which I answer, "if you want to be the first monkey shot into space, then good luck" ;D

-1timtyler13y
This is the "collective-action-problem" - where the end of the world arrives - unless a select band of heroic messiahs arrive and transport everyone to heaven...? That seems like a fantasy story designed to manipulate - I would council not getting sucked in.

No, this is the "collective-action-problem" - where the end of the world arrives - despite a select band of decidedly amateurish messiahs arriving and failing to accomplish anything significant.

You are looking at those amateurs now.

-3timtyler13y
The END OF THE WORLD is probably the most frequently-repeated failed prediction of all time. Humans are doing spectacularly well - and the world is showing many signs of material and moral progress - all of which makes the apocalypse unlikely.

The reason for the interest here seems obvious - the Singularity Institute's funding is derived largely from donors who think it can help to SAVE THE WORLD. The world must first be at risk to enable heroic Messiahs to rescue everyone.

The most frequently-cited projected cause of the apocalypse: an engineering screw-up. Supposedly, future engineers are going to be so incompetent that they accidentally destroy the whole world. The main idea - as far as I can tell - is that a bug is going to destroy civilisation. Also - as far as I can tell - this isn't the conclusion of analysis performed on previous engineering failures - or on the effects of previous bugs - but rather is wild extrapolation and guesswork.

Of course it is true that there may be a disaster, and the END OF THE WORLD might arrive. However there is no credible evidence that this is likely to be a probable outcome. Instead, what we have appears to be mostly a bunch of fear mongering used for fundraising aimed at fighting the threat.

That gets us into the whole area of the use and effects of fear mongering. Fearmongering is a common means of psychological manipulation, used frequently by advertisers and marketers to produce irrational behaviour in their victims. It has been particularly widely used in the IT industry - mainly in the form of fear, uncertainty and doubt. Evidently, prolonged and widespread use is likely to help to produce a culture of fear. The long-term effects of that are not terribly clear - but it seems to be dubious territory. I would counsel those using fear mongering for fund-raising purposes to be especially cautious of the harm this might do. It seems like a potentially dangerous form of meme warfare. Fear targets circuits in the human brai

Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?

Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?

Not rhetorical questions - I'd actually like to read your answers.

2multifoliaterose13y
I laughed aloud upon reading this comment; thanks for lifting my mood.
1timtyler13y
Tim on global warming: http://timtyler.org/end_the_ice_age/ - 1-line summary: I am not too worried about that either. Global warming is far more the subject of irrational fear-mongering than machine intelligence is.

It's hard to judge how at risk the world was from nuclear weapons during the cold war. I don't have privileged information about that. After Japan, we have not had nuclear weapons used in anger or war. That doesn't give much in the way of actual statistics to go on. Whatever estimate is best, confidence intervals would have to be wide. Perhaps ask an expert on the history of the era about this question.

The END OF THE WORLD is not necessarily an idea that benefits those who embrace it. If you consider the stereotypical END OF THE WORLD placard carrier, they are probably not benefitting very much personally. The benefit associated with the behaviour accrues mostly to the END OF THE WORLD meme itself. However, obviously, there are some people who benefit. 2012 - and all that.

The probability of the END OF THE WORLD soon - if it is spelled out exactly what is meant by that - is a real number which could be scientifically investigated. However, whether the usual fundraising and marketing campaigns around the subject illuminate that subject more than they systematically distort it seems debatable.
3Desrtopa13y
This is a pretty optimistic way of looking at it, but unfortunately it's quite unfounded. Current scientific consensus is that we've already released more than enough greenhouse gases to avert the next glacial period. Melting the ice sheets and thus ending the ice age entirely is an extremely bad idea if we do it too quickly for global ecosystems to adapt.
-2timtyler13y
We don't even really understand what causes the glacial cycles yet. This is an area where there are multiple competing hypotheses; I list four of these on my site. So, since we don't yet have a confident understanding of the mechanics involved, we don't yet know what it would take to prevent them. Here's what Dyson says on the topic: I do not believe this is contrary to any "scientific consensus" on the topic. Where is this supposed "scientific consensus" of which you speak? Melting the ice caps is inevitably an extremely slow process - due to thermal inertia. It is also widely thought to be a runaway positive feedback cycle - and so probably a phenomenon whose rate would be difficult to control.
1Desrtopa13y
Melting of the icecaps is now confirmed to be a runaway positive feedback process pretty much beyond a shadow of a doubt. Within the last few years, melting has occurred at a rate that exceeded the upper limits of our projection margins. Have you performed calculations on what it would take to avert the next glacial period on the basis of any of the competing models, or did you just assume that ice ages are bad, so preventing them is good and we should thus work hard to prevent reglaciation? There's a reason why your site is the first and possibly only result in online searches for support of preventing glaciation, and it's not because you're the only one to think of it.
1timtyler13y
There are others who share my views - e.g.: * http://www.theregister.co.uk/2007/08/14/freeman_dyson_climate_heresies/ * http://www.guardian.co.uk/environment/2002/dec/05/comment.climatechange * http://www.stanford.edu/~moore/Boon_To_Man.html * http://www.telegraph.co.uk/news/uknews/1563054/Global-warming-is-good-and-is-not-our-fault.html
2Desrtopa13y
Why is glacial melting being difficult to control a point in favor of increasing greenhouse gas emissions? It's true that climate change models are limited in their ability to project climate change accurately, although they're getting better all the time. Unfortunately, the evidence currently suggests that they're undershooting actual warming rates even at their upper limits. The pro-warming arguments on your site essentially boil down to "warm earth is better than cold earth, so we should try to warm the earth up." Regardless of the relative merits of a warmer or colder planet though, rapid change of climate is a major burden on ecosystems. Flooding and forest fires are relatively trivial effects; it's mass extinction events that are a real matter of concern.
0timtyler13y
That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature. You talk as though I am denying warming is happening. HUH? Right. So, if you want a stable climate, you need to end the yo-yo glacial cycles - and end the ice age. A stable climate is one of the benefits of doing that. I have a section entitled "Climate stability" in my essay. To quote from it: * http://timtyler.org/end_the_ice_age/
2Desrtopa13y
I have no idea how you got that out of my question. It's obvious why runaway positive feedback cycles would be hard to control; the question I asked is why this in any way supports global warming not being dangerous. That was not something I meant to imply. My point is that you seem to have decided that it's better for our earth to be warm than cold, and thus that it's good to approach that state, but not done any investigation into whether what we're doing is a safe means of accomplishing that end; rather you seem to have assumed that we cannot do too much.

Most of the species on earth today have survived through multiple glaciation periods. Our ecosystems have that plasticity, because those species that were not able to cope with the rapid cooling periods died out. Global warming could lead to a stable climate, but it's also liable to cause massive extinction in the process as climate zones shift in ways that they haven't in millions of years, at a rate far outside the tolerances of many ecosystems.

When it comes to global climate, there are really no "better" or "worse" states. Species adapt to the way things are. Cretaceous organisms are adapted to Cretaceous climates, Cenozoic organisms are adapted to Cenozoic climates, and either would have problems dealing with the other's climate. Humans more often suffer problems from being too cold than too hot, but we've scarcely had time to evolve since we left near-equatorial climates. We're adapted to be comfortable in hotter climates than the ones in which most people live today, but the species we rely on are mostly adapted to deal with the climates they're actually in, with cooling periods lying within the tolerances of ecosystems that have been forced to deal with them recently in their evolutionary history.
0timtyler13y
There most certainly are - from the perspective of individuals, groups, or species.
2Desrtopa13y
From the perspective of species, "better" is generally "maintain ecosystem status quo" and "worse" is everything else, except for cases where they come out ahead due to competitors suffering more heavily from the changes.
1timtyler13y
For most possible changes, a good rule of thumb is that half the agents affected do better than average, and half do worse than average. Fitness is relative - and that's just what it means to consider an average value. I go into all this in more detail at: http://timtyler.org/why_everything_is_controversial/
0Desrtopa13y
Roughly half of agents may have a better than average response to the change, but when rapid ecosystem changes occur, the average species response is negative. Particularly when accompanied by other forms of ecosystem pressure (which humanity is certainly exerting) rapid changes in climate tend to be accompanied by extinction spikes and decreases in species diversity.
0timtyler13y
I am not sure I am following. You are saying that such changes are bad - because they drive species towards extinction? If you look at: http://alife.co.uk/essays/engineered_future/ ...you will see that I expect the current mass extinction to intensify tremendously. However, I am not clear about how or why that would be bad. Surely it is a near-inevitable result of progress.
0Desrtopa13y
Rapid change drives species to extinction at a rate liable to endanger the function of ecosystems we rely on. Massive extinction events are in no way an inevitable consequence of improving the livelihoods of humans, although I'm not optimistic about our prospects of actually avoiding them. Loss of a large percentage of the species on earth would hurt us, both in practical terms and as a matter of widely shared preference. As a species, we would almost certainly survive anthropogenic climate change even if it caused a runaway mass extinction event, but that doesn't mean that it's not an outcome that would be better to avoid if possible. Frankly, I don't expect legislation or social agitation ever to have an adequate impact in halting anthropogenic global warming; unless we come up with some really clever hack, the battle is going to be lost, but that doesn't mean that we shouldn't be aware of what we stand to lose, and take notice if any viable means of avoiding it arises.
1timtyler13y
The argument suggesting that we should move away from the "cliff edge" of reglaciation is that it is dangerous hanging around there - and we really don't want to fall off. You seem to be saying that we should be cautious about moving too fast - in case we break something. Very well, I agree entirely - so: let us study the whole issue while moving as rapidly away from the danger zone as we feel is reasonably safe.
4Desrtopa13y
As I already noted, as best indicated by our calculations we have already overshot the goal of preventing the next glaciation period. Moving away from the danger zone at a reasonably safe pace would mean a major reduction in greenhouse gas emissions.
6timtyler13y
We don't know that. The science of this isn't settled. The Milankovitch hypothesis of glaciation is more band-aid than theory. See: http://en.wikipedia.org/wiki/Milankovitch_cycles#Problems CO2 apparently helps - but even that is uncertain. I would want to see a very convincing case that we are far enough from the edge for the risk of reglaciation to be over before advocating hanging around on the reglaciation cliff-edge. Short of eliminating the ice caps, it is difficult to imagine what would be convincing. Those ice caps are potentially major bad news for life on the planet - and some industrial CO2 is little reassurance - since that could relatively quickly become trapped inside plants and then buried.
4Desrtopa13y
The global ice caps have been around for millions of years now. Life on earth is adapted to climates that sustain them. They do not constitute "major bad news for life on this planet." Reglaciation would pose problems for human civilization, but the onset of glaciation occurs at a much slower rate than the warming we're already subjecting the planet to, and as such even if raising CO2 levels above what they've been since before the glaciations began in the Pleistocene were not enough to prevent the next round, it would still be a less pressing issue. On a geological time scale, the amount of CO2 we've released could quickly be trapped in plants and buried, but with the state of human civilization as it is, how do you suppose that would actually happen quickly enough to be meaningful for the purposes of this discussion?
1timtyler13y
The ice age is a pretty major problem for the planet. Huge ice sheets obliterate most life on the northern hemisphere continents every 100 thousand years or so. Re: reglaciation being slow - the last reglaciation looked slower than the last melt. The one before that happened at about the same speed. However, they both look like runaway positive feedback processes. Once the process has started it may not be easy to stop. Thinking of reglaciation as "not pressing" seems like a quick way to get reglaciated.

Humans have got to intervene in the planet's climate and warm it up in order to avoid this disaster. Leaving the climate alone would be a recipe for reglaciation. Pumping CO2 into the atmosphere may have saved us from disaster already, may save us from disaster in the future, may merely be a step in the right direction - or may be pretty ineffectual. However, it is important to realise that humans have got to take steps to warm the planet up - otherwise our whole civilisation may be quickly screwed.

We don't know that industrial CO2 will protect us from reglaciation - since we don't yet fully understand the latter process - though we do know that reglaciation devastates the planet like clockwork, and so has an astronomical origin. The atmosphere has a CO2 decay function with an estimated half-life of somewhere between 20 and 100 years. The CO2 wouldn't vanish overnight - but a lot of it could go pretty quickly if civilisation problems resulted in a cessation of production.
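A minimal sketch of what such a half-life would imply, assuming for illustration that the excess CO2 decays as a single exponential (a simplification; the real carbon cycle has several timescales):

```latex
% Fraction of an excess CO2 pulse remaining after time t, given half-life t_{1/2}:
\[
  f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = e^{-t \ln 2 / t_{1/2}}
\]
% Example: with t_{1/2} = 50 years, f(100) = (1/2)^2 = 0.25, so about a quarter
% of the excess remains after a century; the 20-year and 100-year ends of the
% quoted range give roughly 3% and 50% respectively.
```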
2NancyLebovitz13y
If reglaciation starts, could it be stopped by sprinkling coal dust on some of the ice?
0timtyler13y
Hopefully - if we have enough of a civilisation at the time. Reglaciation seems likely to only really be a threat after a major disaster or setback - I figure. Otherwise, we can just adjust the climate controls. The chances of such a major setback may seem slender - but perhaps are not so small that we can afford to be blasé about the matter. What we don't want is to fall down the stairs - and then be kicked in the teeth. I discuss possible therapeutic interventions at: http://timtyler.org/tundra_reclamation/ The main ones listed are planting northerly trees and black ground sheets.
0[anonymous]13y
We don't know a great many things, but what to do right now we must decide right now, based on whatever we happen to know. (This addresses the reason for Desrtopa's comment, not any problem with your comment on a topic about which I'm completely ignorant.)
0timtyler13y
If you are concerned about loss of potentially valuable information in the form of species extinction, global warming seems like total fluff. Look instead to habitat destruction and decimation, farming practices, and the redistribution of pathogens, predators and competitors by humans.
0Desrtopa13y
I do look at all these issues. I've spoken at conferences about how they receive too little attention relative to the danger they pose. That doesn't mean that global warming does not stand to cause major harm, and going on the basis of the content of your site, you don't seem to have invested adequate effort into researching the potential dangers, only the potential benefits.
-6timtyler13y
-1timtyler13y
Global warming seems a lot less dangerous than reglaciation. Actually, I expect us to master climate control fairly quickly. That is another reason why global warming is a storm in a teacup. However, the future is uncertain. We might get unlucky - and be hit by a fair-sized meteorite. If that happens, reglaciation is about the last thing we would want for dessert.
3nshepperd13y
"Fairly quickly"? What if we don't? Do you expect reglaciation to occur within the next 100 years, 200 years? If not we can wait until we have the knowledge to pull off climate control safely. (And if we do get hit by an asteroid, the last thing we probably want is runaway climate change started when we didn't know what we were doing either.)
-3timtyler13y
If things go according to plan, we get climate control - and then need to worry little about either warming or reglaciation. The problem is things not going according to plan. Indeed. The "runaway climate change" we are scheduled for is reglaciation. The history of the planet is very clear on this topic. That is exactly what we don't want. A disaster followed by glaciers descending over the northern continents could make a mess of civilisation for quite a while. Warming, by contrast, doesn't represent a significant threat - living systems including humans thrive in warm conditions.
4Desrtopa13y
Living systems including humans also thrive in cold conditions. Most species on the planet today have persisted through multiple glaciation periods, but not through pre-Pleistocene levels of warmth or rapid warming events. Plus, the history of the Pleistocene, in which our record of glaciation exists, contains no events of greenhouse gas release and warming comparable to the one we're in now; this is not business as usual on the track to reglaciation. Claiming that the history of the planet is very clear that we're headed for reglaciation is flat-out misleading. Last time the world had CO2 levels as high as they are now, it wasn't going through cyclical glaciation.
-2timtyler13y
Most species on the planet are less than 2.5 million years old?!? I checked and found: "The fossil record suggests an average species lifespan of about five million years" and "Average species lifespan in fossil record: 4 million years." (search for sources). So, I figure your claim is probably factually incorrect. However, isn't it a rather meaningless statistic anyway? It depends on how often lineages speciate. That actually says very little about how long it takes to adapt to an environment.
2Desrtopa13y
The average species age is necessarily lower than the average species duration. Additionally, the fossil record measures species in paleontological terms, a paleontological "species" is not a species in biological terms, but a group which cannot be distinguished from each other by fossilized remains. Paleontological species duration sets the upper bound on biological species duration; in practice, biological species duration is shorter. Species originating more than 2.5 million years ago which were not capable of enduring glaciation periods would have died out when they occurred. The origin window for species without adaptations to cope is the last ten thousand years. Any species with a Pleistocene origin or earlier has persisted through glaciation periods.
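A minimal sketch of the age-versus-duration point, comparing each species' present age with its own eventual total duration (and ignoring fossil-sampling complications):

```latex
% A species i that originated at time b_i and will go extinct at time d_i,
% observed today at time t with b_i <= t < d_i, satisfies
\[
  \mathrm{age}_i = t - b_i \;<\; d_i - b_i = \mathrm{duration}_i ,
\]
% so averaging over any set of currently living species,
\[
  \overline{\mathrm{age}} \;<\; \overline{\mathrm{duration}} .
\]
% Hence a ~4-5 Myr average duration in the fossil record does not by itself
% tell us the average age of the species alive today.
```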
0Vaniver13y
Allow me to try: There are positive feedback cycles which appear to be going in runaway mode. Why is this evidence for "things are going to get better" rather than "things are going to get worse"? Your argument as a whole - "we need to get above this variability regime into a stable regime" - answers why the runaway positive feedback loop would be desirable, but does not convincingly establish (in the part I've read, at least; you may do this elsewhere) that the part above the current variability is actually a stable attractor, instead of us shooting up to Venus's climate (or something less extreme but still regrettable for humans).
0timtyler13y
Well, we already know what the planet is like when it is not locked into a crippling ice age. Ice-cap free is how the planet has spent the vast majority of its history. We have abundant records about that already.
0timtyler13y
That's the whole "ice age: bad / normal planet: good" notion. I figure a planet locked into a crippling era of catastrophic glacial cycles is undesirable.
1Vladimir_Nesov13y
So the real problem here is weakness of arguments, since they lack explanatory power by being able to "explain" too much.
7Roko13y
Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: current best-guess would be that it would be a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't at all look like how I would design a superstimulus for fear.
4timtyler13y
It typically has the feature that you, all your relatives, friends and loved-ones die - probably enough for most people to seriously want to avoid it. Michael Vassar talks about "eliminating everything that we value in the universe". Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up the apocalypse that they think would be the most scary one. Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.
3wedrifid13y
Yes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)
2Roko13y
I think you're trying to fit the facts to the hypothesis. The negative singularity, in my opinion, is at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family. And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church, which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see). It is also not well-optimized to be believable.
4XiXiDu13y
It doesn't work. Jehovah's Witnesses don't even believe in a hell, and they are gaining a lot of members each year and donations are on the rise. Donations are not even mandatory; you are just asked to donate if possible. The only incentive they use is positive incentive. People will do everything for their country if it asks them to give their life. Suicide bombers also do not blow themselves up because of negative incentive, but because their families are promised help and money. Also some believe that they will enter paradise. Negative incentive makes many people reluctant. There is much less crime in the EU than in the U.S., which has the death penalty. Here you get out of jail after max. ~20 years, and there's almost no violence in jails either.
0wedrifid13y
I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still? This got a wry smile out of me. :)
0Roko13y
(t(positive singularity) | positive singularity) I'm going to say 75 years for that. But really, this is becoming very much total guesswork. I do know that AGI -ve singularity won't happen in the next 2 decades and I think one can bet that it won't happen after that for another few decades either.
0wedrifid13y
It's still interesting to hear your thoughts. My hunch is that the -ve --> +ve step is much harder than the 'singularity' step, so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there and my guesswork is even more guess-like than yours! If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)
0Roko13y
Many plausible ways to S^+ involve something odd or unexpected happening. WBE might make computational political structures, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate. Suffice it to say that FAI doesn't have to come via the expected route of someone inventing AGI and then waiting until they invent "friendliness theory" for it.
-1timtyler13y
Church and cute puppies are likely worse causes, yes. I listed animal charities in my "Bad causes" video. I don't have their budget at my fingertips - but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous - but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. "3-laws-safe" slogans will be printed. I note that Google's recent Chrome ad was full of data destruction images - and ended with the slogan "be safe". Some of this is potentially good. However, some of it isn't - and is more reminiscent of the Daisy ad.

To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?

Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:

  • it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.

  • it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

  • it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.

6Roko13y
A moment's googling finds this: http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf ($863 444) I leave it to readers to judge whether Tim is flogging a dead horse here.
5wedrifid13y
Not the sort of thing that could, you know, give you nightmares?
5Roko13y
The sort of thing that could give you nightmares is more like the stuff that is banned. This is different than the mere "existential risk" message.
-6timtyler13y
2NancyLebovitz13y
I think your disapproval of animal charities is based on circular logic, or at least an unproven premise. You seem to be saying that animal causes are unworthy recipients of human effort because animals aren't humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don't think it's proven that people only like animals because the animals are super-stimuli. I could be mistaken, but I think that a more abstract utilitarian approach grounds out in some sort of increased enjoyment of life, or else it's an effort to assume a universe's-eye view of what's ultimately valuable. I'm inclined to trust the former more. What's your line of argument for supporting charities that help people?
1timtyler13y
I usually value humans much more than I value animals. Given a choice between saving a human or N non-human animals, N would normally have to be very large before I would even think twice about it. Similar values are enshrined in law in most countries.
1wedrifid13y
To the extent that the law accurately represents the values of the people it governs, charities are not necessary. Values enshrined in law are by necessity irrelevant. (Noting by way of pre-emption that I do not require that laws should fully represent the values of the people.)
0timtyler13y
I do not agree. If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.
0wedrifid13y
And yet this is not contrary to my point. Charity operates, and only needs to operate, in areas that laws do not already create a solution for. If there were a law specifying that dying kids get trips to Disneyland and visits by pop stars, then there wouldn't be a "Make A Wish Foundation".
0timtyler13y
You said the law was "irrelevant" - but there's a sense in which we can see consensus human values about animals by looking at what the law dictates as punishment for their maltreatment. That is what I was talking about. It seems to me that the law has something to say about the issue of the value of animals relative to humans. For the most part, animals are given relatively few rights under the law. There are exceptions for some rare ones. Animals are routinely massacred in huge numbers by humans - including some smart mammals like pigs and dolphins. That is a broad reflection how relatively-valuable humans are considered to be.
0shokwave13y
And once it's enshrined in law, it no longer matters whether citizens think killing a human is worse or better than killing a dog. I think that is what wedrifid was noting.
0multifoliaterose13y
You may be interested in Alan Dawrst's essays on animal suffering and animal suffering prevention.
1Airedale13y
I believe the numbers are actually higher than $200,000. SIAI's 2008 budget was about $500,000. 2006 was about $400,000 and 2007 was about $300,000 (as listed further in the linked thread). I haven't researched to see if gross revenue numbers or revenue from donations are available. Curiously, Guidestar does not seem to have 2009 numbers for SIAI, or at least I couldn't find those numbers; I just e-mailed a couple people at SIAI asking about that. That being said, even $500,000, while not trivial, seems to me a pretty small budget.
0timtyler13y
Sorry, yes, my bad. $200,000 is what they spent on their own salaries.
9steven046113y
I wonder what fraction of actual historical events a hostile observer taking similar liberties could summarize to also sound like some variety of "a fantasy story designed to manipulate".
-1timtyler13y
I don't know - but believing inaction is best is rather common - and there are pages all about it - e.g.: http://en.wikipedia.org/wiki/Learned_helplessness

Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, so I could wo... (read more)

Kaj, why don't you add the option of getting rich in your 20s by working in finance, then paying your way into research groups in your late 30s? PalmPilot guy, uh Jeff Hawkins essentially did this. Except he was an entrepreneur.

3Kaj_Sotala13y
That doesn't sound very easy.
7wedrifid13y
Sounds a heck of a lot easier than doing an equivalent amount of status grabbing within academic circles over the same time. Money is a lot easier to game and status easier to buy.

There is the minor detail that it really helps not to hate each and every individual second of your working life in the process. A goal will only pull you along to a certain degree.

(Computer types know all the money is in the City. I did six months of it. I found the people I worked with and the people for whose benefit I worked to be excellent arguments for an unnecessarily bloody socialist revolution.)

2wedrifid13y
For many people that is about halfway between the Master's and PhD degrees. ;) If only being in a university were a guarantee of an enjoyable working experience.
1Roko13y
Curious, why did it bother you that you disliked the people you worked with? Couldn't you just be polite to them and take part in their jokes/social games/whatever? They're paying you handsomely to be there, after all? Or was it a case of them being mean to you?
2David_Gerard13y
No, just loathsome. And the end product of what I did and finding the people I was doing it for loathsome.
3Roko13y
I dunno, "loathsome" sounds a bit theoretical to me. Can you be specific?
5CronoDAS13y
One of my brother's co-workers at Goldman Sachs has actively tried to sabotage his work. (Goldman Sachs runs on a highly competitive "up or out" system; you either get promoted or fired, and most people don't get promoted. If my brother lost his job, his coworker would be more likely to keep his.)
3Roko13y
I don't understand: he tried to sabotage his co-worker's work, or his own?
8sfb13y
CronoDAS's Brother's Co-worker tried to sabotage CronoDAS's Brother's work.
0TheOtherDave13y
"Hamlet, in love with the old man's daughter, the old man thinks."
2David_Gerard13y
Not without getting political. Fundamentally, I didn't feel good about what I was doing. And I was just a Unix sysadmin. This was just a job to live, not a job taken on in the furtherance of a larger goal.
3Roko13y
Agreed. Average Prof is a nobody at 40, average financier is a millionaire. shrugs
0Hul-Gil13y
The average financier is a millionaire at 40?! What job is this, exactly?
2sark13y
Thank you for this. This was a profound revelation for me.
6Manfred13y
Upvoted for comedy.
4Roko13y
Also, you can get a PhD in a relevant mathy discipline first, thereby satisfying the condition of having done research. And the process of dealing with the real world enough to make money will hopefully leave you with better anti-akrasia tactics, better ability to achieve real-world goals, etc. You might even be able to hire others.
2Roko13y
I don't think you need to be excessively rich. $1-4M ought to be enough. Edit: oh, I forgot, you live in Scandinavia, with a taxation system so "progressive" that it has an essential singularity at $100k. Might have to move to the US.
1Kaj_Sotala13y
I'm afraid that's not really an option for me, due to various emotional and social issues. I already got horribly homesick during just a four month visit.
3Vaniver13y
Alaska might be a reasonable Finland substitute, weather-wise, but the other issues will be difficult to resolve (if you're moving to the US to make a bunch of money, Alaska is not the best place to do it). One of my favorite professors was a Brazilian who went to graduate school at the University of Rochester. Horrified (I used to visit my ex in upstate New York, and so was familiar with the horrible winters that take up 8 months of the year without the compensations that convince people to live in Scandinavia), I asked him how he liked the transition - and he said that he loved it, and it was the best time of his life. I clarified that I was asking about the weather, and he shrugged and said that in academia, you absolutely need to put the ideas first. If the best place for your research is Antarctica, that's where you go. The reason why I tell this story is that this is what successful professors look like, and only one tenth of the people who go to graduate school end up as professors. If you would be outcompeted by this guy instead of this guy, keep that in mind when deciding you want to enter academia. And if you want to do research outside of academia, doing that well requires more effort than research done inside of academia.
1Kaj_Sotala13y
It's not the weather: I'd actually prefer a warmer climate than Finland has. It's living in a foreign culture and losing all of my existing social networks. I don't have a problem with putting in a lot of work, but to be able to put in a lot of work, my life needs to be generally pleasant otherwise, and the work needs to be at least somewhat meaningful. I've tried the "just grit your teeth and toil" mentality, and it doesn't work - maybe for someone else it does, but not for me.
5Vaniver13y
The first part is the part I'm calling into question, not the second. Of course you need to be electrified by your work. It's hard to do great things when you're toiling instead of playing. But your standards for general pleasantness are, as far as I can tell, the sieve for a lot of research fields. As an example, it is actually harder to be happy on a grad student/postdoc salary; instead of it being shallow to consider that a challenge, it's shallow-mindedness to not recognize that that is a challenge. It is actually harder to find a mate and start a family while you're an itinerant academic looking for tenure. (Other examples abound; two should be enough for this comment.) If you're having trouble leaving your network of friends to go to grad school / someplace you can get paid more, then it seems likely that you will have trouble with the standard academic life or standard corporate life. While there are alternatives, those tend not to play well with doing research, since the alternative tends to take the same kind of effort that you would have put into research. I should comment that I think a normal day job plus research on the side can work out, but it should be treated like writing a novel on the side - essentially, the way creative literary types play the lottery.
1diegocaleiro13y
It's living in a foreign culture and losing all of my existing social networks.

Of course it is! I am in the same situation. Just finished undergrad in philosophy. But here life is completely optimized for happiness:

1) No errands
2) Friends filtered through 15 years for intelligence, fun, beauty, awesomeness.
3) Love, commitment, passion, and just plain sex with the one, and the others.
4) Deep knowledge of the free culture available
5) Ranking high in the city (São Paulo's) social youth hierarchy
6) Cheap services
7) Family and acquaintances network.
8) Freedom timewise to write my books
9) Going to the park 10 min walking
10) Having been to, and having friends who were in, the US, and knowing for a fact that life just is worse there....

This is how much fun I have; the list's impact is the only reason I'm considering not going to study, get FAI faster, get anti-ageing faster. If only life were just a little worse...... I would be in a plane towards posthumanity right now. So how good does a life have to be for you to be forgiven for not working on what really matters? Help me folks!
1Roko13y
Well, you wanna make an omelet, you gotta break some eggs!

Conditioning on yourself deeming it optimal to make a metaphorical omelet by breaking metaphorical eggs, metaphorical eggs will deem it less optimal to remain vulnerable to metaphorical breakage by you than if you did not deem it optimal to make a metaphorical omelet by breaking metaphorical eggs; therefore, deeming it optimal to break metaphorical eggs in order to make a metaphorical omelet can increase the difficulty you find in obtaining omelet-level utility.

5JGWeissman13y
Many metaphorical eggs are not [metaphorical egg]::Utility maximizing agents.
2Clippy13y
True, and to the extent that is not the case, the mechanism I specified would not activate.
1Strange713y
Redefining one's own utility function so as to make it easier to achieve is the road that leads to wireheading.
4Clippy13y
Correct. However, the method I proposed does not involve redefining one's utility function, as it leaves terminal values unchanged. It simply recognizes that certain methods of achieving one's pre-existing terminal values are better than others, which leaves the utility function unaffected (it only alters instrumental values). The method I proposed is similar to pre-commitment for a causal decision theorist on a Newcomb-like problem. For such an agent, "locking out" future decisions can improve expected utility without altering terminal values. Likewise, a decision theory that fully absorbs such outcome-improving "lockouts" so that it outputs the same actions without explicit pre-commitment can increase its expected utility for the same utility function.
1Larks13y
Do you have any advice for getting into quant work? (I'm a second-year maths student at Oxford; don't know much about the City.)
8[anonymous]13y
An advice sheet for mathematicians considering becoming quants. It's not a path that interests me, but if it was I think I'd find this useful.
0katydee13y
Are there any good ways of getting rich that don't involve selling your soul?
8RHollerith13y
Please rephrase without using "selling your soul".

Are there any good ways of getting rich that don't involve a Faustian exchange with Lucifer himself?

3Alicorn13y
Pfft. No good ways.
3katydee13y
Without corrupting my value system, I suppose? I'm interested in getting money for reasons other than my own benefit. I am not fully confident in my ability to enter a field like finance without either that changing or me getting burned out by those around me.
5gwern13y
As well ask if there are hundred-dollar bills lying on sidewalks. EDIT: 2 days after I wrote this, I was walking down the main staircase in the library and, lying on the central landing, highly contrasted against the floor, in completely clear view of 4 or 5 people who walked past it, was a dollar bill. I paused for a moment, reflecting on the irony that sometimes there are free lunches - and picked it up.

This thread raises the question of how many biologists and medical researchers are on here. Due to our specific cluster I expect a strong leaning towards the IT people. So AI research gets over-proportional recognition, while medical research, including direct life extension, falls by the wayside.

Speaking as someone who is in grad school now: even with prior research, the formal track of grad school is very helpful. I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard. It may be that people have more flexibility in math than in other areas. Certainly my anecdotal impression is that people in some areas such as biology don't have this degree of freedom. I'm also learning more about how to research and how to present my results. Those seem to be the largest advantages. Incidentally, my impression is that for grad school, at least in many areas, taking a semester or two off if very stressed isn't treated that badly if one is otherwise doing productive research.

0Matt_Simpson13y
I'm in grad school in statistics and am in the same boat. It doesn't seem that difficult to do research on something you're interested in while still in grad school. In a nutshell, choose your major professor wisely. (And make sure the department is large enough that there are plenty of options.)

The above deleted comment referenced some details of the banned post. With those details removed, it said:

(Note, this comment reacts to this thread generally, and other discussion of the banning)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared.

I realize that you are describing how people generally react to this sort of thing, but this knee-jerk stupid reaction is one of the misapplied heuristics we ought to be able to notice and overcome.

So far, one p

... (read more)
4David_Gerard13y
Strange LessWrong software fact: this showed up in my reply stream as a comment consisting only of a dot ("."), though it appears to be a reply to a reply to me.
0JGWeissman13y
It also shows up on my user page as a dot. Before I edited it to be just a dot, it showed up in your comment stream and my user page with the original complete content.

There is a big mismatch here between "sending an email to a blogger" and "increase existential risk by one in a million". All of the strategies for achieving existential risk increases that large are either major felonies, or require abusing a political office as leverage. When you first made the threat, I got angry at you on the assumption that you realized this. But if all you're threatening to do is send emails, well, I guess that's your right.

LW is a place where you'll get useful help on weeding out mistakes in your plan to blow up the world, it looks like.

For Epistemic Rationality!

That reminds me of the joke about the engineer in the French revolution.

1waitingforgodel13y
Are you joking? Do you have any idea what a retarded law can do to existential risks?
7jimrandomh13y
P(law will pass|it is retarded && its sole advocate publicly described it as retarded) << 10^-6
-10waitingforgodel13y

(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)

  • EDIT: Roko reveals that he was actually never asked to delete his comment! Disregard parts of the rest of this comment accordingly.

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for th

... (read more)

Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as t... (read more)

3waitingforgodel13y
First off, great comment -- interesting, and complex. But, some things still don't make sense to me... Assuming that what you described led to:

1. How did precommitting enter into it?
2. Are you prepared to be tortured or raped for the cause? Have you precommitted to it?
3. Have other SIAI people you know of talked about this with you, have other SIAI people precommitted to it?
4. What do you think of others who do not want to be tortured or raped for the cause?

Thanks, wfg

I find this whole line of conversation fairly ludicrous, but here goes:

Number 1. Time-inconsistency: we react differently to an immediate certainty of some bad than to a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others (a small arithmetic sketch of this equivalence follows after this comment). In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.

Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts ... (read more)
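A minimal arithmetic sketch of the equivalence referred to in point 1 above (H and b are illustrative placeholders for the harm and the per-unit benefit, not figures from the comment):

```latex
% Option A: certain harm H in exchange for a benefit of 10,000 b to others.
% Option B: a 1/10,000 chance of harm H in exchange for a benefit of b.
\[
  \frac{\mathbb{E}[\text{harm}_A]}{\text{benefit}_A} = \frac{H}{10{,}000\,b}
  \qquad
  \frac{\mathbb{E}[\text{harm}_B]}{\text{benefit}_B} = \frac{H/10{,}000}{b} = \frac{H}{10{,}000\,b}
\]
% The expected harm per unit of benefit is identical, yet many people accept B
% (the aid-worker case) while refusing A - the certainty/probability asymmetry
% the comment describes.
```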

8waitingforgodel13y
This sounds very sane, and makes me feel a lot better about the context. Thank you very much. I very much like the idea that top SIAI people believe that there is such a thing as too much devotion to the cause (and, I'm assuming, actively talk people who are above that level down as you describe doing for Roko). As someone who has demonstrated impressive sanity around these topics, you seem to be in a unique position to answer these questions with an above-average level-headedness:

1. Do you understand the math behind the Roko post deletion?
2. What do you think about the Roko post deletion?
3. What do you think about future deletions?

Do you understand the math behind the Roko post deletion?

Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.

What do you think about the Roko post deletion?

I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.

Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.

It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (w... (read more)

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

That sounds like it'd generate some perverse incentives to me.

6TheOtherDave13y
Just to be clear: he recognizes this by comparison with the alternative of privately having the poster delete it themselves, rather than by comparison to not-deleting. Or at least that was my understanding. Regardless, thanks for a breath of clarity in this thread. As a mostly disinterested newcomer, I very much appreciated it.
2CarlShulman13y
Well, if counterfactually Roko hadn't wanted to take it down I think it would have been even more of a mistake to delete it, because then the author would have been peeved, not just the audience/commenters.
6TheOtherDave13y
Which is fine. But Eliezer's comments on the subject suggest to me that he doesn't think that. More specifically, they suggest that he thinks the most important thing is that the post not be viewable, and if we can achieve that by quietly convincing the author to take it down, great, and if we can achieve it by quietly deleting it without anybody noticing, great, and if we can't do either of those then we achieve it without being quiet, which is less great but still better than leaving it up. And it seemed to me your parenthetical could be taken to mean that he agrees with you that deleting it would be a mistake in all of those cases, so I figured I would clarify (or let myself be corrected, if I'm misunderstanding).
-3waitingforgodel13y
I should have taken this bet
6Eliezer Yudkowsky13y
Your post has been moved to the Discussion section, not deleted.
0[anonymous]13y
Looking at your recent post, I think Alicorn had a good point.
2TimFreeman13y
I agree with your main point, but the thought experiment seems to be based on the false assumption that the risk of being raped or murdered is smaller than 1 in 10K if you stay at home. Wikipedia guesstimates that 1 in 6 women in the US are on the receiving end of attempted rape at some point, so someone who goes to a place with a 1 in 10K chance of being raped or murdered has probably improved their personal safety. To make a better thought experiment, I suppose you have to talk about the marginal increase in the rape or murder rate from working in the poor country compared to staying home, and perhaps you should stick to murder since the rape rate is so high.
0wedrifid13y
You lost me at 'ludicrous'. :)
5waitingforgodel13y
but he won me back by answering anyway <3
0[anonymous]13y
How so?
2Bongo13y
Thanks!
1multifoliaterose13y
Great comment Carl!
8Nick_Tarleton13y
Roko was not requested to delete his comment. See this parallel thread. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks) Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.
1Bongo13y
Roko said he was asked. You didn't ask him but maybe someone else did?
4Nick_Tarleton13y
Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.
6Roko13y
I can confirm that I was not asked to delete the comment but did so voluntarily.

I think you are too trigger-happy.

-9waitingforgodel13y
0Perplexed13y
I'm wondering whether you, Nick, have learned anything from this experience - something perhaps about how attempting to hide something is almost always counterproductive? Of course, Roko contributed here by deleting the message; you didn't create this mess by yourself. But you sure have helped. :)

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)

This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.

9David_Gerard13y
The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested. If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead. This is rather disappointing, as if CEV is possible then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is. Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now.

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.

The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.

Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting... (read more)

2David_Gerard13y
It wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small. I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process. I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out. Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI. I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.
6Nick_Tarleton13y
"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.
3David_Gerard13y
OK. I still don't understand it, but I now feel my lack of understanding more clearly. Thank you! (I suppose "what do people really want?" is a large philosophical question, not just undefined but subtle in its lack of definition.)
6Roko13y
I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this. Besides, I think that XiXiDu et al. are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.
1timtyler13y
What?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.
2Roko13y
I think that you don't realize just how bad the situation is. You want that silken sheet. Rude awakening, methinks. Also, open source is not necessarily good for FAI in any case.
5XiXiDu13y
I don't think that you realize how bad it is. I'd rather have the universe be paperclipped than support the SIAI, if that means that I might be tortured for the rest of infinity!

To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.

Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.

Thank you.

5XiXiDu13y
But imagine there was a person a level above yours who set out to create some safeguards for an AGI. That person would tell you that you can be sure that the safeguards they plan to implement will benefit everyone. Are you just going to believe that? Wouldn't you be worried and demand that their project be supervised? You are in a really powerful position, because you are working for an organisation that might influence the future of the universe. Is it really weird to be skeptical and to ask for reassurance about their objectives?
0[anonymous]13y
Logical rudeness is the error of rejecting an argument for reasons other than disagreement with it. Does your "I don't think so" mean that you in fact believe that SIAI (possibly) plans to increase the probability of you or someone else being tortured for the rest of eternity? If not, what does this statement mean?

I removed that sentence. I meant that I didn't believe that the SIAI plans to harm someone deliberately, although I believe that harm could be a side-effect and that they would rather harm a few beings than allow some paperclip maximizer to take over.

You can call me a hypocrite because I'm in favor of animal experiments to support my own survival. But I'm not sure if I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks, but I don't like such decisions. I'd rather have the universe end now, or have everyone turned into paperclips, than have to torture beings (especially if I am the being).

I feel uncomfortable that I don't know what will happen because there is a policy of censorship being favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe will do anything to get what they want. That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

2Vladimir_Nesov13y
I apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).
-1timtyler13y
Better yet, you could use a kind of doublethink - and then even actually mean it. Here is W. D. Hamilton on that topic:

* Discriminating Nepotism - as reprinted in Narrow Roads of Gene Land, Volume 2: Evolution of Sex, p. 356.
-2timtyler13y
In TURING'S CATHEDRAL, George Dyson writes: I think many people would like to be in that group - if they can find a way to arrange it.
2shokwave13y
Unless AI was given that outcome (cheerful, contented people etc) as a terminal goal, or that circle of people was the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly. If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are currently not being built using multi-agent environment evolutionary methods, so any kind of 'social cooperation' mechanism will not arise.
-5timtyler13y
0katydee13y
That doesn't really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of "cheerful, contented, intellectually and physically well-nourished people."
0sketerpot13y
This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we speculate, we should note those assumptions. People often think of an AI as being essentially human-like in its values, which is problematic.
-2timtyler13y
It's a fair description of today's more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same - but with even greater wealth and power inequalities. However, I would certainly also counsel caution if extrapolating this out more than 20 years or so.
3[anonymous]13y
Currently, there are no entities in physical existence which, to my knowledge, have the ability to torture anyone for the rest of eternity. You intend to build an entity which would have that ability (or if not for infinity, for a googolplex of subjective years). You intend to give it a morality based on the massed wishes of humanity - and I have noticed that other people don't always have my best interests at heart. It is possible - though unlikely - that I might so irritate the rest of humanity that they wish me to be tortured forever. Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability. It may well be that you are also raising my expected reward enough for that to be more than counterbalanced, but that's not what you're saying - any support for SIAI will, unless I'm completely misunderstanding, raise the probability of infinite torture for some individuals.
7Eliezer Yudkowsky13y
See the "Last Judge" section of the CEV paper. As Vladimir observes, the alternative to SIAI doesn't involve nothing new happening.
4[anonymous]13y
That just pushes the problem along a step. IF the Last Judge can't be mistaken about the results of the AI running AND the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe. That's very far from saying there's a zero probability.
4ata13y
If the Last Judge peeks at the output and finds that it's going to decide to torture people, that doesn't imply abandoning FAI, it just requires fixing the bug and trying again.
4Vladimir_Nesov13y
Just because AGIs have capability to inflict infinite torture, doesn't mean they have a motive. Also, status quo (with regard to SIAI's activity) doesn't involve nothing new happening.
8[anonymous]13y
I explained that he is planning to supply one with a possible motive (namely that the CEV of humanity might hate me or people like me). It is precisely because of this that the problem arises. A paperclipper, or any other AGI whose utility function had nothing to do with humanity's wishes, would have far less motive to do this - it might kill me, but it really would have no motive to torture me.
-5timtyler13y
7XiXiDu13y
How so? I've just reread some of your comments on your now-deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea has spread to many people by now, don't you think it would be acceptable to discuss the matter before one or more people take it seriously or even consider implementing it deliberately?
0Roko13y
I don't think it is a good idea to discuss it. I think that the costs outweigh the benefits. The costs are very big. Benefits marginal.
5Perplexed13y
Ok by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.
2waitingforgodel13y
I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.
4XiXiDu13y
If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm more in the future.
9David_Gerard13y
But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.
3XiXiDu13y
It is not as public as you think. If it were, then people like waitingforgodel wouldn't ask about it. I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

3David_Gerard13y
Precisely. The place to hide a needle is in a large stack of needles. The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".
8wedrifid13y
I prefer to outsource my needle-keeping security to Clippy in exchange for allowing certain 'bending' liberties from time to time. :)
5David_Gerard13y
Upvoted for LOL value. We'll tell Clippy the terrible, no good, very bad idea with reasons as to why this would hamper the production of paperclips. "Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"
4wedrifid13y
Brilliant.
0[anonymous]13y
Frankly, Clippy would be better than the Forbidden Idea. At least Clippy just wants paperclips.
3TheOtherDave13y
Of course, if Clippy were clever he would then offer to sell SIAI a commitment to never release the UFAI in exchange for a commitment to produce a fixed number of paperclips per year, in perpetuity. Admittedly, his mastery of human signaling probably isn't nuanced enough to prevent that from sounding like blackmail.
7David_Gerard13y
I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?" That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea. Citation needed. Citation needed.
6XiXiDu13y
I sent you another PM.
4David_Gerard13y
Hmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid. It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH." (I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)
4waitingforgodel13y
I wish I could upvote twice
0[anonymous]13y
A kind of meta-question: is there any evidence suggesting that one of the following explanations of the recent deletion is better than another?

* That an LW moderator deleted Roko's comment.
* That Roko was asked to delete it, and complied.
* That Roko deleted it himself, without prompting.
-8waitingforgodel13y

Depending on what you're planning to research, lack of access to university facilities could also be a major obstacle. If you have a reputation for credible research, you might be able to collaborate with people within the university system, but I suspect that making the original break-in would be pretty difficult.

For those curious: we do agree, but he went to quite a bit more effort in showing that than I did (and is similarly more convincing).

No, the rationale for deletion was not based on the possibility that his exact, FAI-based scenario could actually happen.

4wedrifid13y
What was the grandparent?
5ata13y
Hm? Did my comment get deleted? I still see it.
3komponisto13y
I noticed you removed the content of the comment from the record on your user page. I would have preferred you not do this; those who are sufficiently curious and know about the trick of viewing the user page ought to have this option.
2Vladimir_Nesov13y
Only if you disagree with the correctness of the moderator's decision.
1komponisto13y
Disagreement may be only partial. One could agree to the extent of thinking that viewing of the comment ought to be restricted to a more narrowly-filtered subset of readers.
0Vladimir_Nesov13y
Yes, this is a possible option, depending on the scope of the moderator's decision. Banning comments from a discussion, even if they are backed up and publicly available elsewhere, is still an effective tool in shaping the conversation.
1ata13y
There was nothing particularly important or interesting in it, just a question I had been mildly curious about. I didn't think there was anything dangerous about it either, but, as I said elsewhere, I'm willing to take Eliezer's word for it if he thinks it is, so I blanked it. Let it go.
9komponisto13y
I know why you did it. My intention is to register disagreement with your decision. I claim it would have sufficed to simply let Eliezer delete the comment, without you yourself taking additional action to further delete it, as it were. I could do without this condescending phrase, which unnecessarily (and unjustifiably) imputes agitation to me.
3ata13y
Sorry, you're right. I didn't mean to imply condescension or agitation toward you; it was written in a state of mild frustration, but definitely not at or about your post in particular.
2wedrifid13y
Weird. I see: How does Eliezer's delete option work exactly? It stays visible to the author? Now I'm curious.
0ata13y
Yes, I've been told that it was deleted but that I still see it since I'm logged in. In that case I won't repeat what I said in it, partly because it'll just be deleted again but mainly because I actually do trust Eliezer's judgment on this. (I didn't realize that I was saying more than I was supposed to.) All I'll say about it is that it did not actually contain the question that Eliezer's reply suggests he thought it was asking, but it's really not important enough to belabor the point.
2waitingforgodel13y
yep, this one is showing as deleted
1[anonymous]13y
What? Did something here get deleted?

No, I think you're nitpicking to dodge the question, and looking for a more convenient world.

I think at this point it's clear that you really can't be expected to give a straight answer. Well done, you win!

-6Vladimir_Nesov13y
[-][anonymous]13y30

While it's not geared specifically towards individuals trying to do research, the (Virtual) Employment Open Thread has relevant advice for making money with little work.

If you had a paper that was good enough to get published if you were a professor then the SIAI could probably find a professor to co-author with you.

Google Scholar has greatly reduced the benefit of having access to a college library.

8sketerpot13y
That depends on the field. Some fields are so riddled with paywalls that Google Scholar is all but useless; others, like computer science, are much more progressive.

What (dis)advantages does this have compared to the traditional model?

I think this thread perfectly illustrates one disadvantage of doing research in an unstructured environment. It is so easy to become distracted from the original question by irrelevant but bright and shiny diversions. Having a good academic adviser cracking the whip helps to keep you on track.

855 comments so far, with no sign of slowing down!

you have to be very clever to come up with a truly dangerous thought -- and if you do, and still decide to share it, he'll delete your comments

This is a good summary.

Of course, what he actually did was not delete the thread

Eh what? He did, and that's what the whole scandal was about. If you mean that he did not successfully delete the thread from the whole internet, then yes.

Also see my other comment.

Yeah, I thought about that as well. Trying to suppress it made it much more popular and gave it a lot of credibility. If they decided to act in such a way deliberately, that would be fascinating. But that sounds like one crazy conspiracy theory to me.

7David_Gerard13y
I don't think it gave it a lot of credibility. Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence? Have you even heard of the Old Testament, or Harlan Ellison? Do you think your AI will avoid reading either?" Note, not the idea itself, but that SIAI took the idea so seriously it suppressed it and keeps trying to. This does not make SIAI look more credible, but less, because it looks strange.

These are the people running a site about refining the art of rationality; that makes discussion of this apparent spectacular multi-level failure directly on-topic. It's also become a defining moment in the history of LessWrong and will be in every history of the site forever. Perhaps there's some Xanatos retcon by which this can be made to work.
4XiXiDu13y
I just have a hard time believing that people who write essays like this could be so wrong. That's why I allow for the possibility that they are right and that I simply do not understand the issue. Can you rule out that possibility? And if that were the case, what would it mean to spread it even further? You see, that's my problem.
7David_Gerard13y
Indeed. On the other hand, humans frequently use intelligence to do much stupider things than they could have done without that degree of intelligence. Previous brilliance means that future strange ideas should be taken seriously, but not that the future ideas must be even more brilliant because they look so stupid. Ray Kurzweil is an excellent example - an undoubted genius of real achievements, but also now undoubtedly completely off the rails and well into pseudoscience. (Alkaline water!)
2timtyler13y
Ray on alkaline water: http://glowing-health.com/alkaline-water/ray-kurzweil-alkaine-water.html
2David_Gerard13y
See, RationalWiki is a silly wiki full of rude people. But one thing we know a lot about is woo. That reads like a parody of woo.
2[anonymous]13y
Scary.
-1shokwave13y
I don't think that's credible. Eliezer has focused much of his intelligence on avoiding "brilliant stupidity", orders of magnitude more so than any Kurzweil-esque example.
3David_Gerard13y
So the thing to do in this situation is to ask them: "excuse me wtf are you doin?" And this has been done. So far there's been no explanation, nor even acknowledgement of how profoundly stupid this looks. This does nothing to make them look smarter. Of course, as I noted, a truly amazing Xanatos retcon is indeed not impossible.
5TheOtherDave13y
There is no problem. If you observe an action (A) that you judge so absurd that it casts doubt on the agent's (G) rationality, then your confidence (C1) in G's rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A's absurdity should decrease. So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened. Then you can ask yourself whether, after that modification, you still trust G's rationality enough to believe that there exist good reasons for A.

The only reason it feels like a problem is that human brains aren't good at this. It sometimes helps to write it all down on paper, but mostly it's just something to practice until it gets easier.

In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independent of each other. That is: what's your evidence? Are C1 and C2 at all calibrated to observed events?

If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem.

If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you're unaware of (more or less as you're doing now)... not because "you can't rule out the possibility" but because it seems more likely than the alternatives. Again, no problem.

And the fact that other people don't end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don't have perfect trust in everyone's perfect Bayesianness. Again, no problem... you simply disagree.

Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said, which ge
5Vaniver13y
Another thing: rationality is best expressed as a percentage, not a binary. I might look at the virtues and say "wow, I bet this guy only makes mistakes 10% of the time! That's fantastic!"- but then when I see something that looks like a mistake, I'm not afraid to call it that. I just expect to see fewer of them.
0timtyler13y
What issue? The forbidden one? You are not even supposed to be thinking about that! For penance, go and say 30 "Hail Yudkowskys"!
2shokwave13y
You could make a similar comment about cryonics: "Everyone I can think of who isn't a cryonics project member or LW regular who's read [hypothetical cryonics proposal] has immediately thought 'that's ridiculous. You're seriously considering this possibility?'" "People think it's ridiculous" is not always a good argument against it.

Consider that whoever made the decision probably made it according to consequentialist ethics; the consequences of people taking the idea seriously would be worse than the consequences of censorship. As many consequentialist decisions tend to, it failed to take into account the full consequences of breaking with deontological ethics ("no censorship" is a pretty strong injunction). But LessWrong is maybe the one place on the internet you could expect not to suffer for breaking from deontological ethics.

Again, strange from a deontologist's perspective. If you're a deontologist, okay, your objection to the practice has been noted. The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship. Nothing strange there.

You appear to be downgrading SIAI's credibility because it takes an idea seriously that you don't - I don't think you have enough evidence to conclude that they are reasoning imperfectly.

I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.

There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within their rights of reasonable discussion not to be convinced. There are a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.

0shokwave13y
By the time a person can grasp the chain of inference, and by the time they are consequentialist and Aumann-agreement-savvy enough for it to work on them, they probably wouldn't be considered outsiders. I don't know if there's a way around that. It is unfortunate.
7David_Gerard13y
To generalise your answer: "the inferential distance is too great to show people why we're actually right." This does indeed suck, but is indeed not reasonably avoidable. The approach I would personally try is furiously seeding memes that make the ideas that will help close the inferential distance more plausible. See selling ideas in this excellent post.
5TheOtherDave13y
For what it's worth, I gather from various comments he's made in earlier posts that EY sees the whole enterprise of LW as precisely this "furiously seeding memes" strategy. Or at least that this is how he saw it when he started; I realize that time has passed and people change their minds. That is, I think he believes/ed that understanding this particular issue depends on understanding FAI theory depends on understanding cognition (or at least on dissolving common misunderstandings about cognition) and rationality, and that this site (and the book he's working on) are the best way he knows of to spread the memes that lead to the first step on that chain. I don't claim here that he's right to see it that way, merely that I think he does. That is, I think he's trying to implement the approach you're suggesting, given his understanding of the problem.
4David_Gerard13y
Well, yes. (I noted it as my approach, but I can't see another one to approach it with.) Which is why throwing LW's intellectual integrity under the trolley like this is itself remarkable.
3TheOtherDave13y
Well, there's integrity, and then there's reputation, and they're different. For example, my own on-three-minutes-thought proposed approach is similar to Kaminsky's, though less urgent. (As is, I think, appropriate... more people are working on hacking internet security than on, um, whatever endeavor it is that would lead one to independently discover dangerous ideas about AI. To put it mildly.) I think that approach has integrity, but it won't address the issues of reputation: adopting that approach for a threat that most people consider absurd won't make me seem any less absurd to those people.
6David_Gerard13y
However, discussion of the chain of reasoning is on-topic for LessWrong (discussing a spectacularly failed local chain of reasoning and how and why it failed), and continued removal of bits of the discussion does constitute throwing LessWrong's integrity in front of the trolley.
3Vaniver13y
There are two things going on here, and you're missing the other, important one. When a Bayesian consequentialist sees someone break a rule, they perform two operations: reduce the credibility of the person breaking the rule by the damage done, and increase the probability that the rule-breaking was justified by the credibility of the rule-breaker. It's generally a good idea to do the credibility-reduction first. Keep in mind that credibility is constructed out of actions (and, to a lesser extent, words), and that people make mistakes. This sounds like captainitis, not wisdom.
0Jack13y
Aside: Why would it matter?
-2Vaniver13y
You have three options, since you have two adjustments to do and you can use old or new values for each (but only three because you can't use new values for both).* Adjusting credibility first (i.e. using the old value of the rule's importance to determine the new credibility, then the new value of credibility to determine the new value of the credibility's importance) is the defensive play, and it's generally a good idea to behave defensively.

For example, let's say your neighbor Tim (credibility .5) tells you that there are aliens out to get him (prior probability 1e-10, say). If you adjust both using the old values, you get that Tim's credibility has dropped massively, but your belief that aliens are out to get Tim has risen massively. If you adjust the action first (where the 'rule' is "don't believe in aliens having practical effects"), your belief that aliens are out to get Tim rises massively - and then your estimate of Tim's credibility drops only slightly. If you adjust Tim's credibility first, you find that his credibility has dropped massively, and thus when you update the probability that aliens are out to get Tim it only bumps up slightly.

*You could iterate this a bunch of times, but that seems silly.
1Jack13y
Er, any update that doesn't use the old values for both is just wrong. If you use new values you're double-counting the evidence.
0Vaniver13y
I suppose that could be the case - I'm trying to unpack what exactly I'm thinking of when I think of 'credibility.' I can see strong arguments for either approach, depending on what 'credibility' is. Originally I was thinking of something along the lines of "prior probability that a statement they make will be correct", but as soon as you know the content of the statement, that's not really relevant - and so now I'm imagining something along the lines of "how much I weight unlikely statements made by them," or, more likely for a real person, "how much effort I put into checking their statements."

And so for the first one, it doesn't make sense to update the credibility - if someone previously trustworthy tells you something bizarre, you weight it highly. But for the second one, it does make sense to update the credibility first - if someone previously trustworthy tells you something bizarre, you should immediately become more skeptical of that statement and subsequent ones.
4Will_Sawin13y
But no more skeptical than is warranted by your prior probability. Let's say that if aliens exist, a reliable Tim has a 99% probability of saying they do. If they don't, he has a 1% probability of saying they do. An unreliable Tim has a 50/50 shot in either situation. My prior was 50/50 reliable/unreliable, and 1,000,000/1 don't exist/exist, so the prior weights are:

* reliable, exist: 1
* unreliable, exist: 1
* reliable, don't exist: 1,000,000
* unreliable, don't exist: 1,000,000

Updated weights after he says they do exist:

* reliable, exist: 0.99
* unreliable, exist: 0.5
* reliable, don't exist: 10,000
* unreliable, don't exist: 500,000

So we now believe approximately 50 to 1 that he's unreliable, and 510,000 to 1.49, or about 342,000 to 1, that they don't exist. This is what you get if you compute each of the new values from the old ones.
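For anyone who wants to check the arithmetic, here is a minimal sketch in Ruby (the hypothesis labels and numbers are simply the ones from the comment above; nothing else is assumed):

```ruby
# Joint prior weights over (Tim reliable?, aliens exist?), and the likelihood
# of Tim saying "they exist" under each joint hypothesis.
prior = {
  [:reliable,   :exist] => 1.0,
  [:unreliable, :exist] => 1.0,
  [:reliable,   :no]    => 1_000_000.0,
  [:unreliable, :no]    => 1_000_000.0,
}
likelihood = {
  [:reliable,   :exist] => 0.99,
  [:unreliable, :exist] => 0.50,
  [:reliable,   :no]    => 0.01,
  [:unreliable, :no]    => 0.50,
}

# One Bayesian update, computed entirely from the old values:
# posterior weight = prior weight * likelihood.
posterior = prior.map { |h, w| [h, w * likelihood[h]] }.to_h

unreliable = posterior.select { |(r, _), _| r == :unreliable }.values.sum
reliable   = posterior.select { |(r, _), _| r == :reliable   }.values.sum
no_aliens  = posterior.select { |(_, e), _| e == :no    }.values.sum
aliens     = posterior.select { |(_, e), _| e == :exist }.values.sum

puts "unreliable : reliable ~ #{(unreliable / reliable).round(1)} : 1"  # ~50.0 : 1
puts "don't exist : exist  ~ #{(no_aliens / aliens).round} : 1"         # ~342282 : 1
```

Running it reproduces the roughly 50-to-1 and 342,000-to-1 odds quoted above.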
0Vaniver13y
Thanks for working that out - that made clearer to me what I think I was confused about before. What I was imagining by "update credibility based on their statement" was configuring your credibility estimate to the statement in question - but rather than 'updating', that's just doing a lookup to figure out what Tim's credibility is for this class of statements.

Looking at shokwave's comment again with a clearer mind: when you estimate the chances that the decision-maker is irrational, I feel you need to include the fact that you disagree with them now (my original position of playing defensively), instead of just looking at your past. Why? Because it reduces the chances you get stuck in a trap - if you agree with Tim on propositions 1-10 and disagree on proposition 11, you might say "well, Tim might know something I don't, I'll change my position to agree with his." Then, when you disagree on proposition 12, you look back at your history and see that you agree with Tim on everything else, so maybe he knows something you don't. Now, even though you changed your position on proposition 11, you probably did decrease Tim's credibility - maybe you have stored "we agreed on 10 (or 10.5 or whatever) of 11 propositions."

So, when we ask "does SIAI censor rationally?" it seems like we should take the current incident into account before we decide whether or not to take their word on their censorship. It's also rather helpful to ask that narrower question, instead of "is SIAI rational?", because general rationality does not translate to competence in narrow situations.
1shokwave13y
This is a subtle part of Bayesian updating. The question "does SIAI censor rationally?" is different from "was SIAI's decision to censor this case made rationally?" (it is different because in the second case we have some weak evidence that it was not - i.e., that we as rationalists would not have made the decision they did).

We used our prior for "SIAI acts rationally" to derive the probability of "SIAI censors rationally" (as you astutely pointed out, general rationality is not perfectly transitive), and then used "SIAI censors rationally" as our prior for the calculation of "did SIAI censor rationally in this case". After our calculation, "did SIAI censor rationally in this case" is necessarily going to be lower in probability than our prior "SIAI censors rationally." Then, we can re-assess "SIAI censors rationally" in light of the fact that one of the cases of rational censorship has a higher level of uncertainty (now, our resolved disagreement is weaker evidence that SIAI does not censor rationally). That will revise "SIAI censors rationally" downwards - but not down to the level of "did SIAI censor rationally in this case".

To use your Tim's-propositions example, you would want your estimation of proposition 12 to depend not only on how much you disagreed with him on prop 11, but also on how much you agreed with him on props 1-10. Perfect-Bayesian-Aumann-agreeing isn't binary about agreement; it would continue to increase the value of "stuff Tim knows that you don't" until it's easier to reduce the value of "Tim is a perfect Bayesian reasoner about aliens" - in other words, at about prop 13-14 the hypothesis "Tim is stupid with respect to aliens existing" would occur to you, and at prop 20 "Tim is stupid WRT aliens" and "Tim knows something I don't WRT aliens" would be equally likely.
3timtyler13y
It was left up for ages before the censorship. The Streisand effect is well known. Yes, this is a crazy kind of marketing stunt - but also one that shows Yu'El's compassion for the tender and unprotected minds of his flock - his power over the other participants - and one that adds to the community folklore.

See, that doesn't make sense to me. It sounds more like an initiation rite or something... not a thought experiment about quantum billionaires...

I can't picture EY picking up the phone and saying "delete that comment! wouldn't you willingly be tortured to decrease existential risk?"

... but maybe that's a fact about my imagination, and not about the world :p

I am doing something similar, except working as a freelance software developer. My mental model is that in both the traditional academic path and the freelance path, you are effectively spending a lot of your time working for money. In academia, the "dirty work" is stuff like teaching, making PowerPoint presentations (ugh), keeping your supervisor happy, jumping through random formatting hoops to get papers published, and then going to conferences to present the papers. For me, the decisive factor is that software development is actually quite fun, while academic money work is brain-numbing.

How hard is it to live off the dole in Finland? Also, non-academic research positions in think tanks and the like (including, of course, SIAI).

5Kaj_Sotala13y
Not very hard in principle, but I gather it tends to be rather stressful, with things like payments occasionally not arriving when they're supposed to. Also, I couldn't avoid the feeling of being a leech, justified or not. Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of any other than the SIAI, and their resources are limited.
3[anonymous]13y
Many people would steal food to save the lives of the starving, and that's illegal. Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.
0[anonymous]13y
I wonder how far some people would go for the cause. For Kaj, clearly, leeching off an already wasteful state is too far. I was once criticized by a senior SingInst member for not being prepared to be tortured or raped for the cause. I mean not actually, but, you know, in theory. Precommitting to being prepared to make a sacrifice that big. shrugs
3wedrifid13y
Forget 'the cause' nonsense entirely. How far would you go just to avoid personally getting killed? How much torture per chance that your personal contribution at the margin will prevent your near-term death?
1Eugine_Nier13y
Could we move this discussion somewhere where we don't have to constantly worry about it getting deleted?

I'm not aware that LW moderators have ever deleted content merely for being critical of or potentially bad PR for SIAI, and I don't think they're naive enough to believe deletion would help. (Roko's infamous post was considered harmful for other reasons.)

1waitingforgodel13y
"Harmful for other reasons" still has a chilling effect on free speech... and given that those reasons were vague but had something to do with torture, it's not unreasonable to worry about deletion of replies to the above question.
4Bongo13y
The reasons weren't vague. Of course this is just your assertion against mine since we're not going to actually discuss the reasons here.
1wedrifid13y
There doesn't seem to be anything censor-relevant in my question, and for my part I tend to let Big Brother worry about his own paranoia and just go about my business. In any case, while the question is an interesting one to me, it doesn't seem important enough to create a discussion somewhere else. At least not until I make a post.

Putting aside presumptions of extreme altruism, just how much contribution to FAI development is rational? To what extent does said rational contribution rely on Newcomb-like reasoning? How much would a CDT agent contribute on the expectation that his personal contribution will make the difference and save his life?

On second thoughts, maybe the discussion does interest me sufficiently. If you are particularly interested in answering me, feel free to copy and paste my questions elsewhere and leave a back-link. ;)
-4waitingforgodel13y
I think you/we're fine -- just alternate between two tabs when replying, and paste it to the rationalwiki if it gets deleted. Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient. Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in a given situation it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for the correctness of EY's decision. My argument for the correctness of EY's decision is here and here.)

4wedrifid13y
This is possible but by no means assured. It is also possible that he simply didn't choose to write a full evaluation of consequences in this particular comment.
3Vladimir_Golovin13y
Upvoted. This just helped me get unstuck on a problem I've been procrastinating on.
1xamdam13y
Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).
1Vladimir_Nesov13y
The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It's difficult (for me) to estimate whether it's so.
0xamdam13y
I suspect it's also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of "having an agenda", as there is significant opportunity to do harm either way.
1waitingforgodel13y
Very much agree btw
-2red7513y
Shouldn't AI researchers precommit to not building an AI capable of this kind of acausal self-creation? This would lower the chances of disaster both causally and acausally. And please define how you tell moral heuristics and moral values apart. E.g., which is "don't change moral values of humans by wireheading"?
-6waitingforgodel13y
6Vladimir_Nesov13y
Following is another analysis. Consider a die that was tossed 20 times, and each time it fell even side up. It's not surprising merely because it's a low-probability event: you wouldn't be surprised by most other combinations that are equally improbable under the hypothesis that the die is fair. You are surprised because a pattern you see suggests that there is an explanation for your observations that you've missed. You notice your own confusion.

In this case, you look at the event of censoring a post (topic), and you're surprised; you don't understand why that happened. And then your brain pattern-matches all sorts of hypotheses that are not just improbable, but probably meaningless cached phrases, like "It's convenient", or "To oppose freedom of speech", or "To manifest dictatorial power". Instead of leaving the choice of a hypothesis to the stupid intuitive processes, you should notice your own confusion, and recognize that you don't know the answer. Acknowledging that you don't know the answer is better than suggesting an obviously incorrect theory, if much more probability is concentrated outside that theory, where you can't suggest a hypothesis.
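A quick arithmetic check of the "equally improbable" point (a minimal sketch in Ruby; it looks only at the even/odd pattern of a fair die, which is all the argument needs):

```ruby
# Under a fair die, "all 20 tosses even" has probability (1/2)**20 --
# exactly the same as any other specific even/odd pattern of length 20.
p_all_even = 0.5**20
puts p_all_even       # => ~9.54e-07
puts 1 / p_all_even   # => 1048576.0, i.e. one chance in 2**20
```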
3waitingforgodel13y
Since we're playing the condescension game, following is another analysis: you read a (well-written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to; you focused on your first impression and held to it.
1Vladimir_Nesov13y
I'm not. Seriously. "Whenever convenient" is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem. Please unpack the references. I don't understand.
3waitingforgodel13y
Sorry, it looks like we're suffering from a bit of cultural crosstalk. Slogans, much like ontological arguments, are designed to make something of an illusion in the mind -- a lever to change the way you look at the world. "Whenever convenient" isn't there as a statement of belief, so much as a prod to get you thinking... "How much do I trust that EY knows what he's doing?" You may as well argue with Nike: "Well, I can hardly do everything..." (re: Just Do It)

That said, I am a rationalist... I just don't see any harm in communicating to the best of my ability. I linked you to this thread, where I did display some biases, but also decent evidence for not having the ones you're describing... which I take to be roughly what you'd expect of a smart person off the street.
2Vladimir_Nesov13y
I can't place this argument at all in relation to the thread above it. Looks like a collection of unrelated notes to me. Honest. (I'm open to any restatement; don't see what to add to the notes themselves as I understand them.)
4waitingforgodel13y
The whole post you're replying to comes from your request to "Please unpack the references". Here's the bit with references, for easy reference: The first part of the post you're replying to ("Sorry, it looks... best of my ability") maps to "You read a.. irrational" in the quote above, and this tries to explain the problem as I understand it: that you were responding to a slogan's words, not its meaning. Explained its meaning. Explained how "Whenever convenient" was a pointer to the "Do I trust EY?" thought. Gave a backup example via the Nike slogan. The last paragraph in the post you're replying to tried to unpack the "you focused... held to it" from the above quote.
-1Vladimir_Nesov13y
I see. So the "writer" in the quote is you. I didn't address your statement per se, more the general disposition of the people who state ridiculous things as an explanation for the banning incident, but your comment did make the same impression on me. If you correctly disagree that it applies to your intended meaning, good, you didn't make that error, and I don't understand what did cause you to make that statement, but I'm not convinced by your explanation so far. You'd need to unpack "Distrusting EY" to make it clear that it doesn't fall in the same category of ridiculous hypotheses.
2shokwave13y
The Nike slogan is "Just Do It", if it helps.
0Vladimir_Nesov13y
Thanks. It doesn't change the argument, but I'll still delete that obnoxious paragraph.
2Eugine_Nier13y
I believe EY takes this issue very seriously.
2waitingforgodel13y
Ahh. Are you aware of any other deletions?
5XiXiDu13y
Here... I'd like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bioweapons. Would you stay away from censoring such information in favor of free speech?

The subject matter here has a somewhat different nature, one that fits a "more people, more probable" pattern. The question is whether it is better to discuss it, so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn't mean that people cannot work on it, but rather just a few people in private. It is very likely that those people who already know about it are the most likely to solve the issue anyway. The general public would probably only add noise and make it much more likely to happen by simply knowing about it.
7TheOtherDave13y
Step 1. Write down the clearest non-dangerous articulation of the boundaries of the dangerous idea that I could. If necessary, make this two articulations: one that is easy to understand (in the sense of answering "is what I'm about to say a problem?") even if it's way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test. Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous that doesn't itself violate those boundaries.

Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that's more than just me, run my boundary articulation(s) past the group and edit as appropriate.

Step 3. Create a mechanism whereby people can request to be added to dangerous-idea (e.g., sending dangerous-idea-request).

Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).

Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.

That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn't.
6David_Gerard13y
A real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky - he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it. And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage. Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren't the only person in the whole world smart enough to find the flaw.
2Eugine_Nier13y
Yes, several times other posters have brought up the subject and had their comments deleted.
0Bongo13y
I hadn't seen a lot of stubs of deleted comments around before the recent episode, but you say people's comments had gotten deleted several times. So, have you seen comments being deleted in a special way that doesn't leave a stub?
2Eugine_Nier13y
Comments only leave a stub if they have replies that aren't deleted.
-6waitingforgodel13y
1[anonymous]13y
Hard to say. Probably a lot if I could precommit to it in advance, so that once it had begun I couldn't change my mind. There are many complicating factors, though.
-1waitingforgodel13y
Am I the only one who can honestly say that it would depend on the day?

There's a TED talk I once watched about how Republicans reason on five moral channels and Democrats only reason on two. They were (roughly):

1. harm/care
2. fairness/reciprocity
3. in-group/out-group
4. authority
5. purity/scarcity/correctness

According to the talk, Democrats reason with primarily the first two and Republicans with all of them. I took this to mean that Republicans were allowed to do moral calculus that Democrats could not... for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)... If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to. Republicans therefore have it much easier in rationalizing self-serving motives.

(As an aside, it's interesting to note that Democrats must have started with more than just the two when they were young. "Mommy said not to" is a very good reason to do something when you're young. It seems that they must have grown out of it.)

After watching the TED talk, I was reflecting on how it seems that smart people (myself sadly included) let relatively minor moral problems stop them from doing great things... and on how if I were just a little more Republican (in the five-channel moral reasoning sense) I might be able to be significantly more successful. The result is a WFG that cycles in and out of 2-channel/5-channel reasoning.

On my 2-channel days, I'd have a very hard time hurting another person to save myself. If I saw them, and could feel that human connection, I doubt I could do much more than I myself would be willing to endure to save another's life (perhaps two hours assuming hand-over-a-candle level of pain -- permanent disfigurement would be harder to justify, but if it was relatively minor).

On my 5-channel days, I'm
2Eugine_Nier13y
First let me say that as a Republican/libertarian I don't entirely agree with Haidt's analysis. In any case, the above is not quite how I understand Haidt's analysis. My understanding is that Democrats have no way to categorically say that punching (or even killing) a baby is wrong. While they can say it's wrong because, as you said, it causes harm and isn't fair, they can always override that judgement by coming up with a reason why not punching and/or killing the baby would also cause harm. (See the philosophy of Peter Singer for an example.) Republicans, on the other hand, can invoke the sanctity of life.
3waitingforgodel13y
Sure, agreed. The way I presented it only showed very simplistic reasoning. Let's just say that, if you imagine a Democrat that desperately wants to do x but can't justify it morally (punch a baby, start a somewhat shady business, not return a lost wallet full of cash), one way to resolve this conflict is to add Republican channels to his reasoning. It doesn't always work (sanctity of life, etc), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)
-1Eugine_Nier13y
So I've noticed. See the discussion following this comment for an example. On the other hand, at other times Democrats take positions that Republicans find horrific, e.g., euthanasia, abortion, Peter Singer's position on infanticide.
7David_Gerard13y
Peter Singer's media-touted "position on infanticide" is an excellent example of why even philosophers might shy away from talking about hypotheticals in public. You appear to have just become Desrtopa's nightmare.
1Eugine_Nier13y
My problem with Singer is that his "hypotheticals" don't appear all that hypothetical.
1Eugine_Nier13y
What specifically are you referring to? (I haven't been following Desrtopa's posts.)
4David_Gerard13y
It's evident you really need to read the post. He can't get people to answer hypotheticals in almost any circumstances and thought this was a defect in the people. Approximately everyone responded pointing out that in the real world, the main use of hypotheticals is to use them against people politically. This would be precisely what happened with the factoid about Singer.
4waitingforgodel13y
Thanks for the link -- very interesting reading :)
0[anonymous]13y
Here I was thinking it was, well, nearly the opposite of that! :)
3Eugine_Nier13y
Given the current economic situation in Europe, I'm not sure that's a good long-term strategy. Also, I suspect spending too long on the dole may cause you to develop habits that'll make it harder to work a paying job.
[-][anonymous]13y10

Don't vote this down under the default viewing threshold, please!

Oh, and I'm reposting it here just in case WFG tries to delete it later:

Agree except for the 'terrorism' and 'allegedly' part.

I just emailed a right-wing blogger some stuff that probably isn't good for the future. Not sure what the increase was, hopefully around 0.0001%.

I'll write it up in more detail and post a top-level discussion thread after work.

-wfg

Ordinarily I'd consider that a violation of netiquette, but under these exact circumstances...

Reposting comments deleted by the authors or by moderators will be considered hostile behavior and interfering with the normal and intended behavior of the site and its software, and you will be asked to leave the Less Wrong site.

-- Eliezer Yudkowsky, Less Wrong Moderator.

This decree is ambiguous enough to be seen as threatening people not to repost their banned comments (made in good faith but including too much forbidden material by accident) even after removing all objectionable content. I think this should be clarified.

Consider that clarification made; no such threat is intended.

5[anonymous]13y
Does this decree have a retrospective effect? And what about the private message system?
0XiXiDu13y
Does this only apply to comments? The reason I ask by replying to this old comment is that I noticed that you can't delete posts. If you delete a post it is no longer listed and the name of the author disappears, but it is still available: it can be linked to using the original link and can be found via the search function. If you want to delete a post you first have to edit it to remove its content manually.
-13waitingforgodel13y

I'll ignore your bit about bad conduct -- if you want to move slowly then ask, otherwise you can't complain about moving fast.

What would not ignoring that bit look like? Your answer seems to cover the problem fully.

It doesn't matter how fast or slow you move; it matters which points you currently expect agreement or disagreement on. If you expect disagreement on a certain point, you can't use it as a settled point in conversation, with other elements of the conversation depending on it, unless it's a hypothetical. Otherwise you are shouting, not arguing.

0waitingforgodel13y
Perhaps. I bet you use the same "fast moving explanation" with your friends to speed up communication. Either way, please explain why you think it's not irrational.
3Vladimir_Nesov13y
That isn't a normative argument. If I do indeed follow a certain behavior, that doesn't make it less of an error. There could be valid reasons for the behavior not being an error, but my following it isn't one of them.
3waitingforgodel13y
The hope with pointing out that you do it would be that you'd remember why you do it... it's friggin' effective. (Also, my other reply makes the reasoning for this more explicit.)
3Vladimir_Nesov13y
I try not to. For achievement of what purpose? There certainly is a place for communication of one's position, but not where the position is already known, and is being used as an argument.
5waitingforgodel13y
Time savings. Imagine that Alice and Bob go into a room and Alice explains an interesting idea to Bob. They leave the room together after time t. If a method exists whereby Alice and Bob can instead leave the room in time t-x, with Bob knowing the same information, then this method of communicating can be said to yield a time savings of x compared to the previous method.
0Vladimir_Nesov13y
It's not an answer to my question about the purpose; it's a restatement of another aspect of it: efficiency. (Maybe you mean "stopping an argument is sometimes in order"? I agree, but the fact that an argument didn't proceed still doesn't warrant assuming as settled in the conversation things that your interlocutor doesn't agree with.)
3waitingforgodel13y
I should start waiting before replying to you :p Why not? My view is that you take a normal conversation/discussion and then proceed as fast as you can toward resolution (being careful that it's the same resolution).
0Vladimir_Nesov13y
I don't understand this. What's "resolution"? You can't assume a bottom line and then rationalize your way towards it faster. You are not justified in agreeing with me faster than it takes you to understand and accept the necessary arguments. Sometimes you'll even find an error along the way that will convince me that my original position was wrong.
6wedrifid13y
Not implied by grandparent.
-1Vladimir_Nesov13y
Likely not; this phrase was given as an example of a hypothesis that I can see and that's probably not right. I can't find a reasonable hypothesis yet.
0Vladimir_Nesov13y
Comment on downvoting of the parent: In this comment, I explained why I wrote what I wrote in the grandparent comment. How should I interpret the downvotes? Discouragement of explaining one's actions? (Probably not!) A signal that I shouldn't have made that statement in the grandparent the way I did? (But then the grandparent should've been downvoted, and this comment possibly upvoted for clarification.) Punishment for inferred lying about my motives? (If that's how you read me, please say so; I never lie about things like this. I can make errors, though, and I'd like to understand them.)

Restoring comment above this, for posterity:

How? That is, what tool allowed you to restore the now deleted comments? Browser cache or something more impressive?

2waitingforgodel13y
To be more specific, when I saw that comment I assumed Roko was about to delete it and opened up a second browser window. I caught your comment with the script, because I've been half sure that EY would delete this thread all day...
1wedrifid13y
Ahh, gotcha. I like the script by the way... ruby! That is my weapon of choice these days. What is the nokogiri library like? I do a fair bit of work with html automation but haven't used that particular package.
2waitingforgodel13y
It's pretty nice, just a faster version of _why's Hpricot... or a Ruby version of jQuery if you're into that :) What tools do you use for HTML automation?
1wedrifid13y
hpricot, celerity, eventmachine, unadorned regex from time to time, a custom built http client and server that I built because the alternatives at the time didn't do what I needed them to do.
0waitingforgodel13y
That's awesome, thanks. I'm embarrassed to say I hadn't heard of Celerity -- I'm excited to try it as a turbo-charger for Watir tests :) On past occasions I admired you for the sort of extreme skepticism that Harry's heading toward in Ch 63 of MoR (not trusting anyone's motives and hence not toeing the LW party line). Glad to see that you're also a Rubyist! I know it's a long shot... but does that make you a Haskeller as well?
5Perplexed13y
toeing the line, not towing

Thanks. Direct feedback is always appreciated. No need for you to tiptow.

0wedrifid13y
Wow. That's my new thing learned for the day.
2wedrifid13y
Why, thank you. I think. :P It sounds like I'll enjoy catching up on the recent MoR chapters. I lost interest there for a bit when the cringe density got a tad too high, but I expect it will be well worth reading when I get back into it. Afraid not. Sure, it's on a list of things to learn when I have time to waste, but it is below learning more LISP. I do suspect I would enjoy Haskell. I tend to use similar techniques where they don't interfere too much with being practical.
2waitingforgodel13y
Paul Graham's On Lisp is free and an incredible LISP reference. He builds a mind-bending prolog compiler/lisp macro toward the end. Well worth the read. Paul Graham's ANSI Common Lisp is good preliminary reading, and also recommended. Paul Graham's essays are virulent memes, and are not recommended :p
2wedrifid13y
But fun! :)
2ata13y
Most likely either browser cache or a left-open browser tab containing the comments, being that the formatting of the line "FormallyknownasRoko | 07 December 2010 07:39:36PM* | 1 point[-]" suggests it was just copied and pasted.
1waitingforgodel13y
Pretty much
1waitingforgodel13y
A bit of both. I don't maintain a mirror of lesswrong or anything, but I do use a script to make checking for such things easier. I'd be interested to know what you were hoping for in the way of "more impressive" though :)
0waitingforgodel13y
Note that the script is pretty rough -- some false positives, and it wouldn't count a "[redacted]" edit as deletion (though it would cache the content). It's more to avoid rescanning the page while working, etc.

I think for many kinds of research, working in groups drastically increases the efficacy of individual effort, due to specialization, etc.

Are you trying to get in on AI research?

2Kaj_Sotala13y
Right now, my main interest is mostly in a) academic-paper versions of the things SIAI has been talking about informally, and b) theoretical cognitive science stuff which may or may not be related to AI.

Ah, you remind me of me from a while back. When I was an elementary schooler, I once replied to someone asking "would you rather be happy or right" with "how can I be happy if I can't be right?" But these days I've moderated somewhat, and I feel that there is indeed knowledge that can be harmful.

[-][anonymous]13y00

Could someone upvote the parent please? This is really quite important.

The context was a discussion of hypothetical sacrifices one would make for utilitarian humanitarian gain, drawn not just from one but from several different conversations.

5waitingforgodel13y
Care to share a more concrete context?
2Roko13y
That is the context in as concrete a way as is possible - discussing what people would really be prepared to sacrifice, versus making signallingly-useful statements. I responded that I wasn't even prepared to say that I would make {sacrifice=rape, being tortured, forgoing many years of good life, being humiliated etc}.
5waitingforgodel13y
Okay, you can leave it abstract. Here's what I was hoping to have explained: why were you discussing what people would really be prepared to sacrifice? ... and not just the surface level of "just for fun," but also considering how these "just for fun" games get started, and what they do to enforce cohesion in a group.
4David_Gerard13y
Big +1. Every cause wants to be a cult. Every individual (or, realistically, as many as possible) must know how to resist this for a group with big goals not to go off the rails.
1Roko13y
The context was the distinction between signalling-related speech acts and real values.
0[anonymous]13y
Did you see Carl Shulman's explanation?
2wedrifid13y
They actually had multiple conversations about hypothetical sacrifices they would make for utilitarian humanitarian gain? That's... adorable!

Nothing to do with "the ugly".

[-][anonymous]13y00

The Roko one.

I wonder whether it was Roko or big brother who deleted the nearby comments. One of them included the keywords 'SIAI' and 'torture' so neither would surprise me.

What about becoming a blogger at a place like ScienceBlogs?

Alternatively, if you're willing to live very ascetically, what about emailing/asking philanthropists/millionaires with ideas, and asking them to fund them? (perhaps with a probationary period if necessary). What about emailing professors?

Theoretically, if you had VERY supportive+tolerant parents/friends (rare, but they exist on the Internet), you could simply ask to live with them, and to do research in their house as well.

[-][anonymous]13y00

Well obviously I can't say that. We should just end the thread.

[-][anonymous]13y00

No, you shouldn't restore this; I've been asked to remove it, as it could potentially be damaging.

[-][anonymous]13y00

No, just that people should be prepared to make significant sacrifices on consequentialist/utilitarian grounds. I sort of agreed at the time.

Now I think that it is somehow self-contradictory to make utilitarian sacrifices for a set of people who widely condemn utilitarian arguments, i.e. humanity at large.

[-][anonymous]13y00

Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.

Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on.

Why do you think it's not irrational?

It's a difficult question that a discussion in comments won't do justice to (too much effort required from you also, not just me). Read the posts if you will (see the "Decision theory" section).

Also keep in mind this comment: we are talking about what justifies attaining a very weak belief, and this isn't supposed to feel like agreement with a position; in fact, it should feel like confident disagreement. Most of the force of the absurd decision is created by the moral value of the outcome, not by its probability.

(I o... (read more)

3waitingforgodel13y
I understand your argument re: very weak belief... but it seems silly. How is this different from positing a very small chance that a future dictator will nuke the planet unless I mail a $10 donation to Greenpeace?
4Desrtopa13y
Do you have any reason to believe that it's more likely that a future dictator, or anyone else, will nuke the planet if you don't send a donation to Greenpeace than if you do?
2Vladimir_Nesov13y
I agree that you are not justified in seeing a difference, unless you understand the theory of acausal control to some extent and agree with it. But when you are considering a person who agrees with that theory, and makes a decision based on it, agreement with the theory fully explains that decision; this is a much better explanation than most of the stuff people are circulating here. At that point, disagreement about the decision must be resolved by arguing about the theory, but that's not easy.
6David_Gerard13y
You appear to be arguing that a bad decision is somehow a less bad decision if the reasoning used to get to it was consistent ("carefully, correctly wrong"). No, because the decision is tested against reality. Being internally consistent may be a reason for doing something that it is obvious to others is just going to be counterproductive - as in the present case - but it doesn't grant a forgiveness pass from reality. That is: in practical effects, sincere stupidity and insincere stupidity are both stupidity. You even say this above ("There is only one proper criterion to anyone's actions, goodness of consequences"), making your post here even stranger. (In fact, sincere stupidity can be more damaging, as in my experience it's much harder to get the person to change their behaviour or the reasoning that led to it - they tend to cling to it and justify it when the bad effects are pointed out to them, with more justifications in response to more detail on the consequences of the error.) Think of it as a trolley problem. Leaving the post up is a bad option; the consequences of removing it are then the question: which is actually worse and results in the idea propagating further? If you can prove in detail that a decision theory concludes removing it will make it propagate less, you've just found where the decision theory fails. Removing the forbidden post propagated it further, and made both the post itself and the circumstances of its removal objects of fascination. It has also diminished the perceived integrity of LessWrong, as we can no longer be sure posts are not being quietly removed as well as loudly; this also diminished the reputation of SIAI. It is difficult to see either of these as working to suppress the bad idea.
8wedrifid13y
More importantly, it removed LessWrong as a place where FAI and decision theory can be discussed in any depth beyond superficial advocacy.
3David_Gerard13y
The problem is more than the notion that secret knowledge is bad - it's that secret knowledge increasingly isn't possible, and increasingly isn't knowledge. If it's science, you almost can't do it on your own and you almost can't do it as a secret. If it's engineering, your DRM or other constraints will last precisely as long as no-one is interested in breaking them. If it's politics, your conspiracy will last as long as you aren't found out and can insulate yourself from the effects ... that one works a bit better, actually.
-1[anonymous]13y
The forbidden topic can be tackled with math.
0Vladimir_Nesov13y
I don't believe this is true to any significant extent. Why do you believe that? What kind of questions are not actually discussed that could've been discussed otherwise?
8wedrifid13y
You are serious?

* What qualifies as a 'Friendly' AI?
* If someone is about to execute an AI running 'CEV', should I push a fat man on top of him and save five people from torture? What about an acausal fat man? :)
* (How) can acausal trade be used to solve the cooperation problem inherent in funding FAI development? If I recall, this topic was one that was explicitly deleted. Torture was mostly just a superficial detail.

... just from a few seconds' brainstorming. These are the kinds of questions that cannot be discussed without, at the very least, significant bias due to the threat of personal abuse and censorship if you are not careful. I am extremely wary of even trivial inconveniences.
0Vladimir_Nesov13y
Yes. This doesn't seem like an interesting question, where it intersects the forbidden topic. We don't understand decision theory well enough to begin usefully discussing this. Most directions of discussion about this useless question are not in fact forbidden and the discussion goes on. We don't formally understand even the usual game theory, let alone acausal trade. It's far too early to discuss its applications.
5wedrifid13y
It wasn't Vladimir_Nesov's interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing. They are topics that are at least as relevant as such things as 'Sleeping Beauty' that people have merrily prattled on about for decades. That you support censorship of certain ideas by no means requires you to exhaustively challenge every possible downside to said censorship. Even if the decision were wise and necessary, there are allowed to be disappointing consequences. That's just how things are sometimes. The zeal here is troubling.
1Vladimir_Nesov13y
What do you mean by "decide"? Whether they are interested in that isn't influenced by my decisions, and I can well think about whether they are, or whether they should be (i.e. whether there is any good to be derived from that interest). I opened this thread by asking, You answered this question, and then I said what I think about that kind of questions. It wasn't obvious to me that you didn't think of some other kind of questions that I find important, so I asked first, not just rhetorically. What you implied in this comment seems very serious, and it was not my impression that something serious was taking place as a result of the banning incident, so of course I asked. My evaluation of whether the topics excluded (that you've named) are important is directly relevant to the reason your comment drew my attention.
-2Vladimir_Nesov13y
On downvoting of the parent comment: I'm actually surprised this comment got downvoted. It doesn't have as long an inferential depth as this one that got downvoted worse, and it looks to me quite correct. Help me improve; say what's wrong.
0Vladimir_Nesov13y
The other way around. I don't "support censorship"; instead, I don't see that there are downsides worth mentioning (besides the PR hit), and as a result I disagree that censorship is important. Of course this indicates that I generally disagree with arguments for the harm of the censorship (those I have so far understood), and so I argue with them (just as with any other arguments I disagree with on a topic I'm interested in). No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).
5wedrifid13y
No, yielding and the lack thereof is not the indicator of zeal of which I speak. It is the sending out of your soldiers so universally that they reach even into the territory of others' preferences. That critical line between advocacy of policy and the presumption that others must justify their very thoughts (what topics interest them and how their thoughts are affected by the threat of public shaming and censorship) is crossed. The lack of boundaries is a telling sign according to my model of social dynamics.
-4Vladimir_Nesov13y
It was not my intention to discuss whether something is interesting to others. If it wasn't clear, I do state so here explicitly. You were probably misled by the first part of this comment, where I objected to your statement that I shouldn't speculate about what others are interested in. I don't see why not, so I objected, but I didn't mean to imply that I did speculate about that in the relevant comment. What I did state is that I myself don't believe that conversational topic is important, and the motivation for that remark is discussed in the second part of the same comment. Besides, asserting that the topic is not interesting to others is false as a point of simple fact, and that would be the problem, not the pattern of its alignment with other assertions. Are there any other statements that you believe I endorse ("in support of censorship") and that you believe are mistaken?
0Vladimir_Nesov13y
On severe downvoting of the parent: What are that comment's flaws? Tell me and I'll try to correct them. (They must be obvious to warrant a -4.)
-1Vladimir_Nesov13y
(Should I lump everything in one comment, or is the present way better? I find it more clear if different concerns are extracted as separate sub-threads.)
8steven046113y
It's not just more clear, it allows for better credit assignment in cases where both good and bad points are made.
0wedrifid13y
Steven beat me to it - this way works well. Bear in mind, though, that I wasn't planning to engage with this subject too deeply, simply because it furthers no goal that I am committed to and is interesting only inasmuch as it can spawn loosely related tangents.
-1Vladimir_Nesov13y
That some topics are excluded is tautological, so it's important what kinds of topics were excluded. Thus, stating "nor is it your place to decide what things others are interested in discussing" seems to be equivalent to stating "censorship (of any kind) is bad!", which is not very helpful in the discussion of whether it's in fact bad. What's the difference you intended?
5wedrifid13y
You do see the irony there I hope...
4XiXiDu13y
Would you have censored the information? If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum? Would you be interested in discussing it?
9wedrifid13y
No, for several reasons. I have made no secret of the fact that I don't think Eliezer processes perceived risks rationally, and I think this applies in this instance. This is not a claim that censorship is always a bad idea - there are other obvious cases where it would be vital. Information is power, after all. Only if there is something interesting to say on the subject. Or any interesting conversations to be had on the various related subjects that the political bias would interfere with. But the mere fact that Eliezer forbids it doesn't make it more interesting to me. In fact, the parts of Roko's posts that were most interesting to me were not even the same parts that Eliezer threw a tantrum over. As far as I know Roko has been bullied out of engaging in such conversation even elsewhere, and he would have been the person most worth talking to about that kind of counterfactual. Bear in mind that the topic has moved from the realm of abstract philosophy to politics. If you make any mistakes, demonstrate any ignorance, or even say things that can be conceivably twisted to appear as such, then expect that it will be used against you here to undermine your credibility on the subject. People like Nesov and jimrandomh care, and care aggressively. Post away; if I have something to add then I'll jump in. But warily.
6XiXiDu13y
I am not sure if I understand the issue and if it is as serious as some people obviously perceive it to be. Because if I indeed understand it, then it isn't as dangerous to talk about in public as it is portrayed to be. But that would mean that there is something wrong with otherwise smart people, which is unlikely? So should I conclude that it is more likely that I simply do not understand it? What irritates me is that people like Nesov are saying that "we don't formally understand even the usual game theory, let alone acausal trade", yet they care aggressively about censoring the topic. I've been told before that it is due to people getting nightmares from it. If that is the reason, then I do not think censorship is justified at all.
4wedrifid13y
I wouldn't rule out the possibility that you do not fully understand it and they are still being silly. ;)
1XiXiDu13y
How about the possibility that you do not understand it and that they are not silly? Do you think it could be serious enough to have nightmares about it and to censor it as far as possible, but that you simply don't get it? How likely is that possibility?
5wedrifid13y
Why would you even ask me that? Clearly I have considered the possibility (given that I am not a three-year-old), and equally clearly my answering you would not make much sense. :) But the question of trusting people's nightmares is an interesting one. I tend to be of the mind that if someone has that much of an anxiety problem prompted by a simple abstract thought, then it is best to see that they receive the appropriate medication and therapy. After that has been taken care of I may consider their advice.
1XiXiDu13y
I wasn't quite sure. I don't know how to conclude that they are silly and you are not. I'm not just talking about Nesov but also Yudkowsky. You concluded that they are all wrong about their risk estimations and act silly. Yudkowsky explicitly stated that he does know more. But you conclude that they don't know more, that they are silly. Yes, I commented before saying that it is not the right move to truncate your child's bed so that monsters won't fit under it but rather explain that it is very unlikely for monsters to hide under the bed.
0wedrifid13y
You can't. Given the information you have available, it would be a mistake for you to make such a conclusion. Particularly given that I have not even presented arguments or reasoning on the core of the subject, what with the censorship and all. :) Indeed. Which means that not taking his word for it constitutes disrespect. Once the child grows up a bit you can go on to explain to them that even though there are monsters out in the world, being hysterical doesn't help either in detecting monsters or in fighting them. :)
5David_Gerard13y
As I noted, it's a trolley problem: you have the bad alternative of doing nothing, and then there's the alternative of doing something that may be better and may be worse. This case observably came out worse, and that should have been trivially predictable by anyone who'd been on the net for a few years. So the thinking involved in the decision, and the ongoing attempts at suppression, admit of investigation. But yes, it could all be a plot to get as many people as possible thinking really hard about the "forbidden" idea, with this being such an important goal as to be worth throwing LW's intellectual integrity in front of the trolley for.
-2Vladimir_Nesov13y
Caring "to censor the topic" doesn't make sense, it's already censored, and already in the open, and I'm not making any actions regarding the censorship. You'd need to be more accurate in what exactly you believe, instead of reasoning in terms of vague affect. Regarding lack of formal understanding, see this comment: the decision to not discuss the topic, if at all possible, follows from a very weak belief, not from certainty. Lack of formal understanding expresses lack of certainty, but not lack of very weak beliefs.
8XiXiDu13y
If an organisation that is working on a binding procedure for an all-powerful dictator to implement on the scale of the observable universe tried to censor information that could directly affect me for the rest of time in the worst possible manner, I would have a very weak belief that their causal control is much more dangerous than the acausal control between me and their future dictator. So you don't care if I post it everywhere and send it to everyone I can?
2jimrandomh13y
For what it's worth, I've given up on participating in these arguments. My position hasn't changed, but arguing it was counterproductive, and extremely frustrating, which led to me saying some stupid things.
0Vladimir_Nesov13y
No, I don't (or, alternatively, you could possibly unpack this in a non-obvious way to make it hold). I suppose it just so happens that this was the topic I engaged with yesterday, and a similar "care aggressively" characteristic can probably be seen in any other discussion I engage in.
6wedrifid13y
I don't dispute that, and this was part of what prompted the warning to XiXi. When a subject is political and your opponents are known to use aggressive argumentative styles, it is important to take a lot of care with your words - give nothing that could potentially be used against you. The situation is analogous to the recent discussion of refraining from responding to the trolley problem. If there is the possibility that people may use your words against you in the future, STFU unless you know exactly what you are doing!
0Vladimir_Nesov13y
No irony. You don't construct complex machinery out of very weak beliefs, but caution requires taking very weak beliefs into account.
1wedrifid13y
The irony is present and complex machinery is a red herring.
0Vladimir_Nesov13y
Well then, I don't see the irony; show it to me.
-1Vladimir_Nesov13y
Here, I'm talking about factual explanation, not normative estimation. The actions are explained by holding a certain belief, better than by alternative hypotheses. Whether they were correct is a separate question. You'd need to explain this step in more detail. I was discussing a communication protocol; where does "testing against reality" enter that topic?
2David_Gerard13y
Ah, I thought you were talking about whether the decision solved the problem, not whether the failed decision was justifiable in terms of the theory. I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect. As such, whether the decision was justifiable is less interesting except in terms of revealing the thinking processes of the person doing the justification (clinginess to pet decision theory, etc).
1Vladimir_Nesov13y
"Belief in the decision being a failure is an argument against adequacy of the decision theory", is simply a dual restatement of "Belief in the adequacy of the decision theory is an argument for the decision being correct".
1David_Gerard13y
This statement appears confusing to me: you appear to be saying that if I believe strongly enough in the forbidden post having been successfully suppressed, then censoring it will not have in fact caused it to propagate widely, nor will it have become an object of fascination and caused a reputational hit to LessWrong and hence SIAI. This, of course, makes no sense. I do not understand how this matches with the effects observable in reality, where these things do in fact appear to have happened. Could you please explain how one tests this result of the decision theory, if not by matching it against what actually happened? That being what I'm using to decide whether the decision worked or not. Keep in mind that I'm talking about an actual decision and its actual results here. That's the important bit.
0shokwave13y
If you believe that "decision is a failure" is evidence that the decision theory is not adequate, you believe that "decision is a success" is evidence that the decision theory is adequate. Since a decision theory's adequacy is determined by how successful its decisions are, you appear to be saying "if a decision theory makes a bad decision, it is a bad decision theory" which is tautologically true. Correct me if I'm wrong, but Vladimir_Nesov is not interested in whether the the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant. The decision was made by a certain decision theory. The factual question "was the decision-maker holding to this decision theory in making this decision?" is entirely unrelated to the question "should the decision-maker hold to this decision theory given that it makes bad decisions?". To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
1David_Gerard13y
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov's energetic equivocation), then it is important to examine the limits of the decision theory. If, as I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist) and is therefore more robust, then this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it may have been regarded previously. Assuming the decision was made by robust TDT.
7wedrifid13y
The decision you refer to here... I'm assuming it is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.) If so, for your purposes you can safely consider 'TDT/CDT' irrelevant. While acausal (TDT-ish) reasoning is at play in establishing a couple of the important premises, they are not relevant to the reasoning that you actually seem to be criticising. I.e., the problems you refer to here are not the fault of TDT or of abstract reasoning at all - just plain old human screw-ups with hasty reactions.
2David_Gerard13y
That's the one, that being the one specific thing I've been talking about all the way through. Vladimir Nesov cited acausal decision theories as the reasoning here and here - if not TDT, then a similar local decision theory. If that is not the case, I'm sure he'll be along shortly to clarify. (I stress "local" to note that they suffer a lack of outside review or even notice. A lack of these things tends not to work out well in engineering or science either.)
4wedrifid13y
Good, that had been my impression. Independently of anything that Vladimir may have written, it is my observation that the 'TDT-like' stuff was mostly relevant to the question "is it dangerous for people to think X?" Once that has been established, the rest of the decision making, what to do after already having reached that conclusion, was for the most part just standard unadorned human thinking. From what I have seen (including your references to reputational self-sabotage by SIAI) you were more troubled by the latter parts than the former. Even if you do care about the more esoteric question "is it dangerous for people to think X?", I note that 'garbage in, garbage out' applies here as it does elsewhere. (I just don't like to see TDT unfairly maligned. Tarnished by association, as it were.)
2Vladimir_Nesov13y
See section 7 of the TDT paper (you'll probably have to read from the beginning to familiarize yourself with the concepts). It doesn't take Omega to demonstrate that CDT errs; it takes a mere ability to predict the dispositions of agents to any small extent to get out of CDT's domain, and humans do that all the time. From the paper:
1jimrandomh13y
I wouldn't use this situation as evidence for any outside conclusions. Right or wrong, the belief that it's right to suppress discussion of the topic entails also believing that it's wrong to participate in that discussion or to introduce certain kinds of evidence. So while you may believe that it was wrong to censor, you should also expect a high probability of unknown unknowns that would mess up your reasoning if you tried to take inferential steps from that conclusion to somewhere else.
7David_Gerard13y
I haven't been saying I believed it was wrong to censor (although I do think that it's a bad idea in general). I have been saying I believe it was stupid and counterproductive to censor, and that this is not only clearly evident from the results, but should have been trivially predictable (certainly to anyone who'd been on the Internet for a few years) before the action was taken. And if the LW-homebrewed Timeless Decision Theory, lacking in outside review, was used to reach this bad decision, then TDT was disastrously inadequate (not just slightly inadequate) for application to a non-hypothetical situation, and it lessens the expectation that TDT will be adequate for future non-hypothetical situations. And that this should also be obvious.

Yes, the attempt to censor was botched and I regret the botchery. In retrospect I should not have commented or explained anything, just PM'd Roko and asked him to take down the post without explaining himself.

9David_Gerard13y
This is actually quite comforting to know. Thank you. (I still wonder WHAT ON EARTH WERE YOU THINKING at the time, but you'll answer as and when you think it's a good idea to, and that's fine.) (I was down the pub with ciphergoth just now and this topic came up ... I said the Very Bad Idea sounded silly as an idea; he said it wasn't as silly as it sounded to me with my knowledge. I can accept that. Then we tried to make sense of the idea of CEV as a practical and useful thing. I fear that if I want a CEV process applicable by humans I'm going to have to invent it. Oh well.)
4Roko13y
And I would have taken it down. Most importantly, my bad for not asking first.
2wedrifid13y
It is evidence for said conclusions. Do you mean, perhaps, that it isn't evidence that is strong enough to draw confident conclusions on its own? To follow from the reasoning, the embedded conclusion must be 'you should expect a higher probability'. The extent to which David should expect a higher probability of unknown unknowns is dependent on the deference David gives to the judgement of the conscientious non-participants when it comes to the particular kind of risk assessment and decision making - i.e. probably less than Jim does. (With those two corrections in place the argument is reasonable.)
1Vladimir_Nesov13y
I agree, and in this comment I remarked that we were assuming this statement all along, albeit in a dual presentation.
6waitingforgodel13y
If you're interested, we can also move forward as I did over here by simply assuming EY is right, and then seeing if banning the post was net positive
1Vladimir_Nesov13y
It's not "moving forward", it's moving to a separate question. That question might be worth considering, but isn't generally related to the original one. Why would the assumption that EY was right be necessary to consider that question? I agree that it was net negative, specifically because the idea is still circulating, probably with more attention drawn to it than would happen otherwise. Which is why I started commenting on my hypothesis about the reasons for EY's actions, in an attempt to alleviate the damage, after I myself figured it out. But that it was in fact net negative doesn't directly argue that given the information at hand when the decision was made, it had net negative expectation, and so that the decision was incorrect (which is why it's a separate question, not a step forward on the original one).
9waitingforgodel13y
I like the precision of your thought. All this time I thought we were discussing whether blocking future censorship by EY was a rational thing to do -- but that's not what we were discussing at all. You really are in it for the details -- if we could find a way of estimating around hard problems to solve the above question, that's only vaguely interesting to you -- you want to know the answers to these questions. At least that's what I'm hearing. It sounds like the above was your way of saying you're in favor of blocking future EY censorship, which gratifies me. I'm going to do the following things in the hope of gratifying you:

1. Write up a post on Less Wrong about developing political muscles. I've noticed several other posters seem less than savvy about social dynamics, so perhaps a crash course is in order. (I know that there are certainly several in the archives; I guarantee I'll bring several new insights [with references] to the table.)
2. Reread all your comments, and come back at these issues tomorrow night with a more exact approach.

Please accept my apology for what I assume seemed a bizarre discussion, and thanks for thinking like that. Night!
-1Vladimir_Nesov13y
I didn't address that question at all, and in fact I'm not in favor of blocking anything. I came closest to that topic in this comment.
7wedrifid13y
More than enough information about human behavior was available at the time. Negative consequences of the kind observed were not remotely hard to predict.
1Vladimir_Nesov13y
Yes, quite likely. I didn't argue with this point, though I myself don't understand human behavior enough for that expectation to be obvious. I only argued that the actual outcome isn't a strong reason to conclude that it was expected.
3waitingforgodel13y
I'm slowly moving through the sequences; I'll comment back here if/when I finish the posts as well. In the meantime, I've been told that if you can't explain something simply then you don't really understand it... wanna take a fast and loose whack? Edit: did you drastically edit your comment?
4Vladimir_Nesov13y
I believe this is utter nonsense, a play on the meaning of the word "explain". If explaining is to imply understanding by the recipient, then clearly fast explaining of a great many things is not possible; otherwise education wouldn't be necessary. Creating an illusion of understanding, or equivalently a shallow understanding, might be manageable of course; the less educated and rational the victim, the easier it is.
5waitingforgodel13y
Interesting. I've found that intuitive explanations for relatively complex things are generally easier than a long, exact explanation. Basically, fast explanations use hardware-accelerated paths to understanding (social reasoning, toy problems that can be played with, analogies), and then leave it to the listener to bootstrap themselves. If you listen to the way that researchers talk, it's basically analogies and toy problems, with occasional blackboard sessions if they're mathy. It's hard to understand matrix inversion by such a route, which I think you're saying is roughly what's required to understand why you believe this censorship to be rational. But, for the record, it ain't no illusory understanding when I talk fast with a professor or fellow grad student.
1Vladimir_Nesov13y
Certainly easier, but they don't give comparable depth of understanding or justify comparable certainty in statements about the subject matter. Also, the dichotomy is false, since detailed explanations are ideally accompanied by intuitive explanations to improve understanding. What we were talking about instead is when you have only a fast informal explanation, without the detail. It's because they already have the rigor down. See this post by Terence Tao.
0Vladimir_Nesov13y
Yes, I do that, sorry. What I consider an improvement over the original.
-1David_Gerard13y
Your answer actually includes "you should try reading the sequences."
1Vladimir_Nesov13y
The reference to the sequences is not the one intended (clarified by explicitly referring to the "Decision theory" section in the grandparent comment).

Ensuring that is part of being a rationalist; if EY, Roko, and Vlad (apparently Alicorn as well?) were bad at error-checking and Vaniver was good at it, that would be sufficient to say that Vaniver is a better rationalist than E R V (A?) put together.

8David_Gerard13y
Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so. "For the computer security community, the moral is obvious: if you are designing a system whose functions include providing evidence, it had better be able to withstand hostile review." -- Ross Anderson, RISKS Digest vol. 18 no. 25. Until a clever new thing has had decent outside review, it just doesn't count as knowledge yet.
-3shokwave13y
That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.
4David_Gerard13y
That's pretty much a circular argument. How's the third-party verifiable evidence look?
0shokwave13y
I dunno. Do the Sequences smell like bullshit to you? edit: this is needlessly antagonistic. Sorry.
6David_Gerard13y
Mostly not - but then I am a human full of cognitive biases. Has anyone else in the field paid them any attention? Do they have any third-party notice at all? We're talking here about somewhere north of a million words of closely-reasoned philosophy with direct relevance to that field's big questions, for example. It's quite plausible that it could be good and have no notice, because there's not that much attention to go around; but if you want me to assume it's as good as it would be with decent third-party tyre-kicking, I think I can reasonably ask for more than "the guy that wrote it and the people working at the institute he founded agree, and hey, do they look good to you?" That's really not much of an argument in favour. Put it this way: I'd be foolish to accept cryptography with that little outside testing as good, and here you're talking about operating system software for the human mind. It needs more than "the guy who wrote it and the people who work for him think it's good" for me to assume that.
2shokwave13y
Fair enough. It is slightly more than Vaniver has going in their favour, to return to my attempt to balance their rationality against each other.
0TheOtherDave13y
Upvoted to zero because of the edit.
5Manfred13y
I haven't read fluffy (I have named it fluffy), but I'd guess it's an equivalent of a virus in a monoculture: every mode of thought has its blind spots, and so to trick respectable people on LW, you only need an idea that sits in the right blind spots. No need for general properties like "only infectious to stupid people." Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention, but it's still entirely possible. This also explains the apparent resistance of outsiders, without need for Eliezer to be lying when he says he thinks fluffy was wrong.
0shokwave13y
Could also be that outsiders are resistant because they have blind spots where the idea is infectious, and respectable people on LW are respected because they do not have the blind spots - and so are infected. I think these two views are actually the same, stated as inverses of each other. The term blind spot is problematic.
0Manfred13y
I think the term blind spot is accurate, unless (and I doubt it) Eliezer was lying when he later said fluffy was wrong. What fits the bill isn't a correct scary idea, but merely a scary idea that fits into what the reader already thinks. Maybe fluffy is a correct scary idea, and your allocation of blind spots (or discouraging of the use of the term) is correct, but secondhand evidence points towards fluffy being incorrect but scary to some people.
0Alicorn13y
I'm curious about why you think this.
3Manfred13y
Honestly? Doesn't like to argue about quantum mechanics. That I've seen :D Your posts seem to be about noticing where things fit into narratives, or introspection, or things other than esoteric decision theory speculations. If I had to come up with an idea that would trick Eliezer and Vladimir N into thinking it was dangerous, it would probably be barely plausible decision theory with a dash of many worlds.
0Jack13y
I was also surprised by your reaction to the argument. In my case this was due to the opinions you've expressed on normative ethics.
0Alicorn13y
How are my ethical beliefs related?
0Jack13y
Answered by PM