
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Link: NY Times covers Bayesian statistics

7 Mass_Driver 12 January 2011 08:45AM

http://www.nytimes.com/2011/01/11/science/11esp.html?_r=1&src=me&ref=homepage

It's disguised as an article about ESP to fool their editors; scroll down two paragraphs and it goes on for quite a while about what Bayesian statistics are and why Bayesian analysis is important.


Certainty estimates in areas outside one's expertise

7 JoshuaZ 27 December 2010 08:56PM

One issue that I've noticed in discussions on Less Wrong is that I'm much less certain about the likely answers to specific questions than some other people here. But the questions where this gap seems most pronounced are mathematical questions close to my area of expertise (such as whether P = NP). In areas outside my expertise, my confidence is apparently often higher. For example, at a recent LW meet-up I gave a much lower probability estimate that cold fusion is real than others in the conversation did. This suggests that I may be systematically overestimating my confidence in areas that I don't study as much, essentially a variant of the Dunning-Kruger effect. Have other people here noticed the same pattern in their own confidence estimates?

Pascal's Gift

7 Bongo 25 December 2010 07:42PM

 If Omega offered to give you 2^n utils with probability 1/n, what n would you choose?

This problem was invented by Armok from #lesswrong. Discuss.
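
For concreteness, here is a minimal sketch of the naive expected-utility calculation (my addition, assuming utils aggregate linearly and that maximizing expectation is the decision rule):

```python
from fractions import Fraction

def expected_utils(n):
    """Expected payoff of choosing n: 2**n utils with probability 1/n."""
    return Fraction(2 ** n, n)

for n in (1, 2, 3, 10, 100):
    print(n, float(expected_utils(n)), "chance of any payout:", 1.0 / n)

# The expectation 2^n / n grows without bound, so a naive expected-utility
# maximizer wants n as large as possible, even though the probability of
# receiving anything at all goes to zero. That tension is the puzzle.
```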

The 9 Circles of Scientific Hell

7 wedrifid 22 December 2010 02:59AM

Neuroskeptic is my favorite blog on neuroscience. Don't be deceived by the 'skeptic' in the name; the coverage is well balanced and overall quite positive. He recently interrupted his regular schedule with a light piece on the circles of scientific hell. Definitely worth a look. I'm not too sure about the ordering of the various sins; I'd be tempted to put "p-value fishing" way down the list!

An excerpt:

Second Circle: Overselling
"This circle is reserved for those who exaggerated the importantance of their work in order to get grants or write better papers. Sinners are trapped in a huge pit, neck-deep in horrible sludge. Each sinner is provided with the single rung of a ladder, labelled 'The Way Out - Scientists Crack Problem of Second Circle of Hell"

Makes me want to break out into a chorus of "Let the Punishment Fit the Crime"!

Link: What does it feel like to be stupid?

7 Vladimir_Golovin 10 December 2010 07:43AM

What does it feel like to be stupid?

I had an arterial problem for a couple of years, which reduced blood supply to my heart and brain and depleted B vitamins from my nerves (to keep the heart in good repair). Although there is some vagueness as to the mechanisms, this made me forgetful, slow, and easily overwhelmed. In short I felt like I was stupid compared to what I was used to, and I was.

It was frightening at first because I knew something wasn't right but didn't know what, and very worrying for my career because I was simply not very good any more.

However, once I got used to it and resigned myself, it was great.

Full article:
http://www.quora.com/What-does-it-feel-like-to-be-stupid

Kazakhstan's president urges scientists to find the elixir of life

7 Document 10 December 2010 04:17AM

...according to this front-page Reddit headline I just saw, which links to this Guardian article. I wonder if he's heard of KrioRus, whether he's signed up (Wikipedia says they offer services "to clients from Russia, CIS and EU"), and what his odds would be if he were (would it be possible to emigrate to Russia to be closer to the facility, and if not, what would be the best possible option?). Given his being a head of state, presumably it'd be pretty tough for an advocate to even get close enough to try to make the case.

Searching the Reddit comment thread for "cryo" turned up nothing.

Less Wrong: Open Thread, December 2010

7 David_Gerard 06 December 2010 02:29PM

Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

A Catalog of Confusions

7 sixes_and_sevens 05 December 2010 01:17AM

tl;dr - can we categorise confusing events by skills required to deal with them?  What are those skills?

I am sometimes haunted by things I read online.  It's probably a couple of years since I first read Your Strength as a Rationalist, but over the past month or two I've been reminded of it a surprising number of times in different circumstances.  It's led me to wonder whether the idea of being "confused by fiction" can be helpfully broken down into categories, with each of those categories having certain skills that can be worked on to help notice them.

I'm going to describe two such categories I think I've identified, and invite your criticism, or suggestions of other similar categories.  In both cases, I believe there to be some instinct, acquired skill, or some combination thereof that draws it to my attention.  I could just be making this up, though, so criticism is also welcome on this front.

Absence of Salient Information

I believe tech support is like a magic trick in reverse.  With a magic trick, the magician hides a crucial fact which he then distracts you from.  He provides a false narrative of what's going on while confusing the sequence of events, culminating in the impossible, and relies on your own fear of appearing foolish to make you falsely report the conditions of the trick to both yourself and other spectators.

In tech support, you are often presented with an impossible sequence of events; the customer's fear of appearing foolish makes them falsely report the conditions of the fault to both themselves and you, concealing a crucial fact which the rest of the narrative distracts you from.  You then have to figure out how it was done.

I recently asked a girl from my dance class out for a drink, and proceeded to receive the most shocking litany of mixed signals I could ever imagine receiving, drink not forthcoming. I boiled it down to three possibilities: she was interested but incredibly shy; she was uninterested but just really friendly; or she had a completely different set of standards when it came to signaling romantic interest or lack thereof. I remember thinking how none of these possibilities made sense in context, and was reminded quite specifically of the idea of being more confused by fiction than by reality. It was driving my problem-solving faculties to distraction, and I have never been so relieved to discover a woman I was interested in already had a boyfriend.

The phenomenon wasn't unlike a film with a massive plot-integral spoiler. There's this nagging feeling that the whole thing doesn't quite make sense, until the spoiler is revealed, at which point you suddenly see the whole of the preceding sequence of events in a new revelatory light. I've often noticed with such films that when people know there's a big spoiler, they're more likely to spot it early on because they start groping around for plausible plot twists. I'm not sure if this is the best way to go about fishing for information you know is absent, though.

Having One's Head Messed With

I've read a few books on hypnosis, NLP and persuasion techniques, and I'm at least as well-versed on cognitive biases as most LW readers, but a couple of weeks ago someone fucked with my head.

I was in East London (never a good start), fairly late at night with food in my hand.  Beggars always seem to approach me when I have food in my hand.  I don't think this is coincidence.  This particular beggar, a woman in her twenties, spun a very quick story which I can't even begin to remember all the details of.  Something about desperately needing bus fare to escape her abusive boyfriend and having just been released from hospital.  Just thinking about it, two weeks later, makes me confused and disorientated.

In retrospect, the story made no sense whatsoever, she was far too aggressive to be a downtrodden out-patient abuse victim, and far too good at making me feel like the only way I could possibly get out of this horrible distressing situation was to give her my small change, which I did.  Afterwards I felt violated.

The experience itself has probably armed me against it happening again to a certain degree, but I'm now worried about what I'm not armed against.  There is a feeling of having your head messed with, but I only ever seem to experience it retrospectively.  Can I train myself to spot it as it's happening?  Is it related to the feeling I get when I recognise I'm being manipulated by advertising?  Is there a how-to body of knowledge that can be assembled to defend against manipulation in general?

This probably could have been more coherent, but it was surprisingly cathartic to write.

Broken window fallacy and economic illiteracy.

7 Desrtopa 01 December 2010 04:48AM

Some time ago, I had a talk with my father where I explained to him the concept of the broken window fallacy. The idea was completely novel to him, and while it didn't take long for him to grasp the principles, he still needed my help in coming up with examples of ways that it applies to the market in the real world.

My father has an MBA from Columbia University and has held VP positions at multiple marketing firms.

I am not remotely expert on economics; I do not even consider myself an aficionado. But it has frequently been my observation that not just average citizens, but people whose positions have given them every reason to learn and use the information, are critically ignorant of basic economic principles. It feels like watching engineers try to produce functional designs based on Aristotelian physics. You cannot rationally pursue self interest when your map does not correspond to the territory.

I suppose the worst thing for me to hear at this point is that there is some reason I am not yet familiar with that prevents this from having grand-scale detrimental effects on the economy, since it would imply that businesses cannot be made more sane by the increased dissemination of basic economic information. Otherwise, this seems like a fairly important avenue to address, since the basic standards for economic education, in educated businesspeople and the general public alike, are so low that I doubt the educational system has even begun to climb the slope of diminishing returns on effort invested in it.

Superintelligent AI mentioned as a possible risk by Bill Gates

7 FormallyknownasRoko 28 November 2010 11:51AM

"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."

- Bill Gates 

From

Africa Needs Aid, Not Flawed Theories

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."

Startups

7 Alexandros 24 November 2010 09:13PM

There seems to be a non-negligible amount of overlap between this community and Hacker News, both in terms of material and members. For those not aware of HN, it's a news aggregator for people interested in startups, technology, and other intellectually interesting topics, with a reputation for high-quality material and discourse.

While rationality and LessWrong get their fair share of attention over at HN, I haven't heard much discussion about startups over here. Off-line, I've heard the claim that, in terms of contribution to existential risk prevention charities, startups are suboptimal compared to jobs in finance, but not much beyond that. I find this odd, as many of the contributors on this site seem to be prime founder material, and rationality should really be of use when working in a high-stakes, ever-changing environment.

My intention with this post is simply to kickstart a discussion around startups and gauge the attitudes of fellow LessWrongers. Does anyone (else) aspire to becoming a startup founder in the next few years? Do you believe startup founding to be a viable means of contributing to existential risk prevention?

Pseudolikelihood as a source of cognitive bias

7 Peter_de_Blanc 20 November 2010 08:06PM

Pseudolikelihood is a method for approximating joint probability distributions. I'm bringing this up because I think something like this might be used in human cognition. If so, it would tend to produce overconfident estimates.

Say we have some joint distribution over X, Y, and Z, and we want to know about the probability of some particular vector (x, y, z). The pseudolikelihood estimate involves asking yourself how likely each piece of information is, given all of the other pieces of information. Then you multiply these together. So the pseudolikelihood of (x, y, z) is P(x|yz) P(y|xz) P(z|xy).

Not only is this wrong, but it gets more wrong as your system is bigger. By that I mean that a ratio of two pseudolikelihoods will tend towards 0 or infinity for big problems, even if the likelihoods are close to the same.
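
To illustrate numerically (a sketch of mine, not from the original post; it assumes binary variables and a toy Markov-chain joint distribution, computed by brute force):

```python
import itertools

def markov_joint(n, q=0.9):
    """Joint distribution of n binary variables: P(x1) = 0.5, and each
    subsequent variable copies its predecessor with probability q."""
    dist = {}
    for x in itertools.product((0, 1), repeat=n):
        p = 0.5
        for a, b in zip(x, x[1:]):
            p *= q if a == b else 1 - q
        dist[x] = p
    return dist

def pseudolikelihood(dist, x):
    """Product over i of P(x_i | all the other coordinates)."""
    pl = 1.0
    for i in range(len(x)):
        flipped = x[:i] + (1 - x[i],) + x[i + 1:]
        pl *= dist[x] / (dist[x] + dist[flipped])
    return pl

for n in (3, 6, 9, 12):
    dist = markov_joint(n)
    x = (1,) * n  # the single most probable configuration
    print(n, dist[x], pseudolikelihood(dist, x),
          pseudolikelihood(dist, x) / dist[x])
# The overconfidence factor (last column) grows steadily with n.
```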

So how can we avoid this? A correct way to calculate a joint probability P(x,y,z) looks like P(x) P(y|x) P(z|xy). At each step we only condition on information "prior" to the thing we are asking about. My guess about how to do this involves making your beliefs look more like a directed acyclic graph. Given two adjacent beliefs, you need to be clear on which is the "cause" and which is the "effect." The cause talks to the effect in terms of prior probabilities and the effect talks to the cause in terms of likelihoods.
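
The chain-rule factorization, by contrast, recovers the joint exactly, whatever ordering you pick. Continuing the sketch above (reusing markov_joint):

```python
def chain_likelihood(dist, x):
    """P(x1) * P(x2|x1) * ... : each factor conditions only on "prior" info."""
    lik = 1.0
    for i in range(len(x)):
        num = sum(p for y, p in dist.items() if y[:i + 1] == x[:i + 1])
        den = sum(p for y, p in dist.items() if y[:i] == x[:i])
        lik *= num / den
    return lik

dist = markov_joint(6)
x = (1,) * 6
assert abs(chain_likelihood(dist, x) - dist[x]) < 1e-12  # exact, unlike PL
```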

Failure to do this could take the form of an undirected relationship (two beliefs are "related" without either belief being the cause or the effect), or loops in a directed graph. I don't actually think we want to get rid of undirected relationships entirely -- people do use them in machine learning -- but I can't see any good reason for keeping the latter.

An example of a causal loop would be if you thought of math as an abstraction from everyday reality, and then turned around and calculated prior probabilities of fundamental physical theories in terms of mathematical elegance. One way out is to declare yourself a mathematical Platonist. I'm not sure what the other way would look like.

Advice for a Budding Rationalist

7 atucker 19 November 2010 03:10AM

Most people in the US with internet connections who are reading this site will at some point in their lives graduate high school. I haven't yet, and it seems like what I do afterwards will have a pretty big effect on the rest of my life.* 

Given that, I think I should ask for some advice.

Generally,
Any advice? Anything you wish you knew? Disagreement with the premise? (If you disagree, please explain what to do anyway.)

More specific to the site,
Any advice for high schoolers with a rationalist and singularitarian bent? Who are probably looking at going to college?
Anything particularly effective for working against existential risk?
Any fields particularly useful for rationalists to know?
Any fields in which rationalists would be particularly helpful?

This is intended to be a pretty general reference for life advice for the young ones among us. With a college selection bent, probably. If you're in high school and have a specific situation that you want help with/advice for, please reply to this post with that. I think that most people have specific skills/background they could leverage, so a one-size-fits-all approach seems somewhat simplistic.

*I understand that I can always change plans later, but there are many many things that seem to require some level of commitment, like college.

Edit:
As Unnamed pointed out, also look at this article about undergraduate course selection.

Rationalist Diplomacy, Game 2, Game Over

7 Randaly 18 November 2010 03:19AM

Note: The title refers to the upcoming turn.

OK, here's the promised second game of diplomacy. The game name is 'Rationalist Diplomacy Game 2.'

Kevin was Prime Minister of Great Britain
AlexMennen is President of France
tenshiko is Kaiser of Germany
Alexandros was King of Italy until his retirement
WrongBot is Emperor of Austria
Thausler is Czar of Russia
Hugh Ristik is Sultan of Turkey

Randaly is the GM, and can be reached at nojustnoperson@gmail.com


Peace For Our Time!

The leaders of the three surviving nations, France, Russia, and Turkey, agreed to a peace treaty in late August, bringing an end to this destructive conflict. Crowds across Europe broke out into spontaneous celebration, as national leaders began to account for the vast costs -- human and monetary -- of the wars.


All orders should be sent to nojustnoperson@gmail.com with an easy-to-read title like "Rationalist Diplomacy Game 2: Russian Orders Spring 1901". Only the LAST set of orders sent will be counted, so feel free to change your mind or to do something sneaky like sending in a fake set of orders cc your ally, and then sending in your real orders later. I'm not going to be too picky on exactly how you phrase your orders, but I prefer standard Diplomacy terminology like "F kie -> hel". New players - remember that if you send two units to the same space, you MUST specify which is attacking and which is supporting. If you make a mistake there or anywhere else, I will probably email you and ask you which you meant, but if I don't have time you'll just be out of luck.

ETA: HughRistik would like to underscore that, under the standard house rules, all draws are unranked.

Past maps can be viewed here; the game history can be viewed here.

Criticisms of CEV (request for links)

7 Kevin 16 November 2010 04:02AM

I know Wei Dai has criticized CEV as a construct, offering (I believe) the alternative of rigorously specifying volition *before* making an AI. I couldn't find these posts/comments via a search; can anyone link me? Thanks.

There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-level conversation between Wei Dai and Vladimir Nesov.

Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.

Hi - I'm new here - some questions

7 InquilineKea 14 November 2010 04:11AM

Hello everyone,

I'm new here, although I've read Less Wrong and Overcoming Bias on and off for the last few years. Anyways, I'm InquilineKea (or Simfish), and I have a website at http://simfishthoughts.wordpress.com/. I think about everything, so I feel that this might be the perfect community for me. I do have some questions though - are we allowed to post anything in this part of the site? (like, could we treat this part like another forum, albeit an intellectually mature forum?) Or do we have to keep things formal? I tend to post a high number of threads, but there don't seem to be many threads here. Are there any terms of service/rules? Or are things just governed by upvotes/downvotes? (much like reddit)

Anyways, I'm an astronomy/physics/math major at the University of Washington (I got in through an early entrance program) and I'm planning on applying to astrophysics grad school fairly soon. However, I'm also intensely interested in complex adaptive systems and data mining, especially as they relate to the social sciences. I'm especially interested in Consilience and in trying to find trends behind every academic field (in fact, I do want to get to a graduate level of education in every natural and social science there is). I'm a demographics junkie who pores over all the charts and tables of every demographic statistic I can find, although it sometimes ends up hurting my grades. My favorite blogs are Gene Expression, FuturePundit/ParaPundit, and Overcoming Bias. Which I'm sure a lot of people here read.

I always think in terms of maximizing "utility" and maximizing "efficiency". So this leads me to do many untraditional things. For one thing, I have attention deficit disorder, so I realize that I frequently have to take untraditional approaches. The Internet has always been a savior for me because I can always stop and continue later when I feel like I'm about to zone out (in fact, those with ADD have a highly inconsistent learning rate). I also have an Asperger's Syndrome diagnosis, although I've recently tried to stop using it as an excuse for my behavior (in fact, I now only fit the bare minimum of "Aspie" criteria on the DSM IV, but I still think that it strongly influences my interests and behavior). I also consistently think of what's most rational - which means that I have to respect the desires that evolution has given me. Sometimes, people think that maximizing "utility" means maximizing "self-interest", but the amazing thing is that evolution has made people happier whenever they help others (for whatever reason), since "happiness" tends to asymptote with increased wealth/self-gratification/etc. So as a result, people are actually happiest when they're socially interconnected. Although I sometimes bemoan this fact since I often feel that people don't understand me (I'm trying to move beyond my neuroticism/anger stemming from a half-decade of social rejection, but it still affects me now). I also practice calorie restriction + vegetarianism, not just to maximize my chances of living longer, but also because I want to reduce the decline of fluid IQ with increasing age.

Due to my conditions, though, I've never felt like I was in any comfort zone, which has perhaps forced me to try every possible approach that might make my life easier. I often start out with irrational approaches, but end up taking the approach that I perceive as most rational for myself. Of course, the sustainability of the action matters too (I realize that it might be utility-maximizing for me to exercise, for example, but I don't exercise right now because I can't trust myself to be consistent with exercising, at least while I'm still in school).

Anyways, I can talk a lot more. I love to overanalyze things. I also have a massive number of posts on the Internet, although many of them are beyond embarrassing. In the end, though, I only look for people who are open to anything and completely non-judgmental (although some people may look for certain "signals" when they're looking for prospective contacts, to minimize the chances of meeting a contact they fear wasting time on). Basically, my ideal model (for hypothesis generation) involves this: I try to type out some hypotheses, and then post them online, in hopes that someone might critique them. Many of my hypotheses will be junk, but that's okay. As long as I can maximize the number of useful ideas that I can generate, I think I'll have done something (although I don't really have a place to post all my hypotheses, since I've been flamed many times for it [most people consider my posts tl;dr, and they also make fun of my autism]. And few people reply to my ideas precisely because I tend to study esoteric fields that they don't care about, but also because I still haven't found a forum where people actually respect ideas [even reddit and Physics Forums can be particularly cruel].)

Compared to most people, I tend to hit on correct ideas with lower accuracy (which inevitably results in people getting impatient with me/flaming me). But I do believe that it's easiest for me to form the best ideas when I post them while undeveloped (that way, sometimes, my shame at being wrong can actually motivate me to correct my ideas more quickly - this is why I frequently edit after posting - I have problems with alertness, so the adrenaline rush from being wrong can actually motivate me to finish things in less time). I consider time the most important resource in the world, as the amount of material I could possibly learn is definitely worth thousands of lifetimes. And eventually, I do hit on some good ideas. In a sense, it's like generating variation and selecting the best results out of such variation (sort of like evolution, albeit less blind). This is why I'm also intensely interested in genetic algorithms and data mining, since they tend to operate through somewhat similar mechanisms (this is also why I love the fourth paradigm so much). I'm extremely, extremely open about myself and share virtually everything I do (although I generally don't share when I believe that such sharing could lead to social rejection, so this usually makes me keep to myself). But yes, I explore *many* ideas and *many* topics precisely because I want to find the topic that would maximize my talent/productivity (it's hard due to my ADD, but it might result in a global maximum whereas others might stick with local maxima). Anyways, my only goal is to be interesting to other people (and to avoid taking on a job that might suppress my talents, so I really do want to go into academia).

Of course, I will always have to find creative ways to make others feel happy. E.g. I can often come off as self-centered, and others will often have to be patient with me since I may not have the attention span to go through something in one go. But at the same time, I'm not in a comfortable situation, so if I find an opportunity I may never have again, I will recognize it for what it is and I'll try to do everything I can to achieve it (which may require patience from other people, but I'll really try not to disappoint them since I know the real consequences of it). In any case, I'm intensely interested in how people learn (and how people ideally learn), since my own difficulties with ADD have forced me to take untraditional routes (and in fact, there may be others who do best through the nontraditional route).

Anyways, I like this place precisely because it allows people to comment with the same username (so that we can track our old posts and those of people we're interested in). I also have a facebook (http://www.facebook.com/simfish) and a google buzz profile (http://www.google.com/profiles/simfish). I generally keep everything about myself very public (to maximize the chances that some like-minded person might find me), although I may have to private them when I apply to grad schools. I'd really like to contribute to discussions, although I feel that I don't have much to say right now, so I read more than comment.

My biggest irrationality is social anxiety/rejection anxiety because I've been flamed/rejected numerous times, so I'm scared of people. Other than that, though, I can be very rational.

So if you can relate, please comment. Or if you just want to share some ideas or add some comments. In any case, I do believe that rationality means acknowledging our human emotions (and in knowing that efficiency can be maximized when we do things in accordance to our emotions). Of course, these emotions can be corrected in many cases (I do think that anger is highly irrational in many cases, for example). I like the Internet a lot because it archives everything, so I can always revisit my old ideas simply by searching through them (whereas ideas communicated verbally cannot be searched, and easily get lost to the dust of memory).Anyways, a "search through someone's old posts" feature is very useful here, since it makes it easier for people to identify similar minds (which can be important if people are very specialized)

I'm extremely impressed with how knowledgeable and interdisciplinary many of you are - I seem to know so much less than most of you, even though I seem to be far more interdisciplinary than everyone else I know.

Recent results on lower bounds in circuit complexity.

7 JoshuaZ 09 November 2010 05:02AM

There's a new paper which substantially improves lower bounds for circuit complexity. The paper, by Ryan Williams, proves that NEXP does not have ACC circuits of third-exponential size.

This is a somewhat technical result (and I haven't read the proof yet), but there's a summary of what it implies at Scott Aaronson's blog. The main upshot is that this is a substantial improvement over prior circuit complexity bounds. This is relevant since circuit complexity bounds look to be one of the most promising methods to potentially show that P != NP. These results still leave circuit complexity bounds very far from showing that. But this result looks like it might in some ways get around the relativization and natural proofs barriers, which are major obstacles to resolving P ?= NP.

Amoral Approaches to Morality

7 Vaniver 03 November 2010 08:25AM

Consider three cases in which someone is asking you about morality: a clever child, your guru (and/or Socrates, if you're more comfortable with that tradition), or an about-to-FOOM AI of indeterminate friendliness. For each of them, you want your thoughts to be as clear as possible- the other entity is clever enough to point out flaws (or powerful enough that your flaws might be deadly), and for none of them can you assume that their prior or posterior morality will be very similar to your own. (As Thomas Sowell puts it, children are barbarians who need to be civilized before it is too late; your guru will seem willing to lead you anywhere, and the AI probably doesn't think the way you do.)

I suggest that all three can be approached in the same way: by attempting to construct an amoral approach to morality. At first impression, this approach gives a significant benefit: circular reasoning is headed off at the pass, because you need to explain morality (as best as you can) to someone who does not understand or feel it.

Interested in what comes next?

The main concern I have is that there is a rather extensive Metaethics sequence already, and this seems to be very similar to The Moral Void and The Meaning of Right. The benefit of this post, if there is one, seems to be in a different approach to the issue- I think I can get a useful sketch of the issue in one post- and probably a different conclusion. At the moment, I don't buy Eliezer's approach to the Is-Ought gap (Right is a 1-place function... why?), and I think a redefinition of the question may make for somewhat better answers.

(The inspirations for this post, if you're interested in me tackling them directly instead, are criticisms of utilitarianism obliquely raised in a huge tree in the Luminosity discussion thread (the two interesting dimensions are questioning assumptions, and talking about scope errors, of which I suspect scope errors is the more profitable) and the discussion around, as shokwave puts it, the Really Scary Idea.)

Help: Building Awesome Personal Organization Systems

7 [deleted] 25 October 2010 01:05AM

Related to: Rationality Power Tools

I'm looking to use (or make) something that helps me achieve god-like productivity. In particular, I'm interested in any information about systems that are:

  • Flexible: They can be extended or customized to accommodate new work-flows and a diverse range of information structures (like to-do lists, schedules, etc.), perhaps via easy coding.
  • Linked: The elements can be connected and categorized using a variety of link types (like is_an_action_for, is_a_subgoal_of, etc.).

I would prefer not to have a bunch of separate systems if possible. From what I've seen so far, org-mode seems the most promising.

That which can be destroyed by the truth should *not* necessarily be

7 alexflint 24 October 2010 10:41AM

I've been throwing some ideas around in my head, and I want to throw some of them half-formed into the open for discussion here.

I want to draw attention to a particular class of decisions that sound much like beliefs.

  • Belief: There is no personal god that answers prayers.
    Decision: I should badger my friend about atheism.

  • Belief: Cryonics is a rational course of action.
    Decision: To convince others about cryonics, I should start by explaining that if we exist in the future at all, then we can expect it to be nicer than the present on account of benevolent super-intelligences.

  • Belief: There is an objective reality.
    Decision: Postmodernists should be ridiculed and ignored.

  • Belief: 1+1=2
    Decision: If I encounter a person about to jump unless he is told "1+1=3", I should not acquiesce.

I've thrown ideas from a few different bags onto the table, and I've perhaps chosen unnecessarily inflammatory examples. There are many arguments to be had about these examples, but the point I want to make is the way in which questions about the best course of action can sound very much like questions about truth. Now this is dangerous because the way in which we choose amongst decisions is radically different from the way in which we choose amongst beliefs. For a start, evaluating decisions always involves evaluating a utility function, whereas evaluating beliefs never does (unless the utility function is explicitly part of the question). By appropriate changes to one's utility function the optimal decision in any given situation can be modified arbitrarily whilst simultaneously leaving all probability assignments to all statements fixed. This should make you immediately suspicious if you ever make a decision without consulting your utility function. There is no simple mapping from beliefs to decisions.
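
To make that concrete, here is a minimal sketch (my numbers, purely hypothetical) of two agents with identical probability assignments but different utility functions reaching opposite decisions about the first row of the table:

```python
# Shared belief: probability that badgering a friend about atheism persuades them.
p_persuade = 0.1

# Hypothetical payoffs: (utility if persuaded, utility if merely annoyed).
agents = {
    "values persuasion above all": (100, -5),
    "values the friendship":       (10, -20),
}

for name, (u_win, u_lose) in agents.items():
    eu_badger = p_persuade * u_win + (1 - p_persuade) * u_lose
    decision = "badger" if eu_badger > 0 else "stay quiet"  # 0 = status quo
    print(f"{name}: EU(badger) = {eu_badger:+.1f} -> {decision}")
```

Same beliefs, opposite optimal actions: the decision cannot be read off the belief alone.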

I've noticed various friends and some people on this site making just this mistake. It's as if their love for truth and rational enquiry, which is a great thing in its own right, spills over into a conviction to act in a particular way, which itself is of questionable optimality.

In recent months there have been several posts on LessWrong about the "dark arts", which have mostly concerned using asymmetric knowledge to manipulate people. I like these posts, and I respect the moral stance implied by their name, but I fear that "dark arts" is becoming applicable to the much broader case of not acting according to the simple rule that decisions are always good when they sound like true beliefs. I shouldn't need to argue explicitly that there are cases when lying or manipulating constitute good decisions; that would privilege a very particular hypothesis (namely that decisions are always good when they sound like true beliefs).

This brings me all the way back to the much-loved quotation, "that which can be destroyed by the truth should be". Now there are several ways to interpret the quote, but at least one interpretation implies the existence of a simple isomorphism from true beliefs to good decisions. Personally, I can think of lots of things that could be destroyed by the truth but should not be.

Interesting talk on Bayesians and frequentists

7 jsteinhardt 23 October 2010 04:10AM

I recently started watching an interesting lecture by Michael Jordan on Bayesians and frequentists; he's a pretty successful machine learning expert who takes both views in his work. You can watch it here: http://videolectures.net/mlss09uk_jordan_bfway/. I found it interesting because his portrayal of frequentism is much different from the standard portrayal on lesswrong. It isn't about whether probabilities are frequencies or beliefs; it's about trying to get a good model versus trying to get rigorous guarantees of performance in a class of scenarios. So I wonder why the meme on lesswrong is that frequentists think probabilities are frequencies; in practice it seems to be more about how you approach a given problem. In fact, frequentists seem more "rational", as they're willing to use any tool that solves a problem instead of constraining themselves to methods that obey Bayes' rule.

In practice, it seems that while Bayes is the main tool for epistemic rationality, instrumental rationality should oftentimes be frequentist at the top level (with epistemic rationality, guided by Bayes, in turn guiding the specific application of a frequentist algorithm).

For instance, in many cases I should be willing to, once I have a sufficiently constrained search space, try different things until one of them works, without worrying about understanding why the specific thing I did worked (think shooting a basketball, or riffle shuffling a deck of cards). In practice, it seems like epistemic rationality is important for constraining a search space, and after that some sort of online learning algorithm can be applied to find the optimal action from within that search space. Of course, this isn't true when you only get one chance to do something, or extreme precision is required, but this is not often the case in everyday life.

The main point of this thread is to raise awareness of the actual distinction between Bayesians and frequentists, and why it's actually reasonable to be both, since it seems like lesswrong is strongly Bayesian and there isn't even a good discussion of the fact that there are other methods out there.

Does it matter if you don't remember?

7 alexflint 22 October 2010 11:53AM

Does it matter if you experienced pain in the past, but you don't remember? (And there are no other side-effects, etc etc). At one point in Accelerando, Charles Stross describes children that routinely decapitate and disembowel each other, only to be repaired (bodily and memory-wise) by the friendly local AI. This struck me as awful, but I'm suspicious of my intuition. Note that here I'm assuming pain is a terminal "bad" factor in your utility function. You can substitute "pain" for whatever you think is bad. I think there are at least two questions here:

  1. Is it bad for someone to be in pain if they will not remember it in the future? I think yes, because by assumption pain is a terminal "bad" node. Being relieved of future painful memories is good, but nowhere near good enough to fully compensate.
  2. Is it bad to have experienced pain in the past, if you don't remember it? Or, can your utility function coherently include facts about the past, even if they have no causal connection to the present? My intuition here says yes, but I'd be interested in others' thoughts. To make this concrete, imagine that you have a choice between medium pain that you will remember, or extreme pain followed by memory erasure.


Re: sub-reddits

7 Yvain 17 October 2010 01:34PM

A while back, I polled the community on the possibility of subreddits. Most people said they wanted them, and I said I'd investigate.

I talked to a couple of people and eventually ended up talking to Tricycle, the developers of this site. They told me about their own proposed solution to the community organization problem, which is this new Discussion section. They said that searching the discussion section by tag was equivalent to a sub-reddit. For example, if you want a sub-reddit on consciousness, the discussion consciousness tag search is an amazing imitation.

I told them I wasn't entirely convinced by this and sent some reasons why, but I haven't heard back from them lately, and I'm not going to keep pursuing this and make a big deal of it unless a large percentage of the people who wanted sub-reddits are unsatisfied.

"The Life Cycle of Software Objects" by Chiang is available for free

7 NancyLebovitz 10 October 2010 09:49AM

I recently recommended this novella. Now you don't need to buy the hardcover or wait for it to be reprinted. You can read it here.

Rationality and advice

7 NancyLebovitz 08 October 2010 07:45AM

Giving advice is one of those common human behaviors that doesn't get examined much, which means a little thought might improve our understanding of what's going on.

The evidence-- that giving advice is much more common than asking for it or following it-- suggests that giving advice is more a status transaction than a practical effort to help, and I speak as a person who's pretty compulsive about giving advice.

So, here's some advice about advice, assuming that you don't want to just raise your status on unwilling subjects.

Do what you can to actually understand the situation, including the resources the recipient is willing to put into following advice.

The idea that men give unwelcome advice to women, when the women just want to vent but can solve their problems themselves, is an oversimplification. There are women who give advice (see above). There are men who are patient with venting. I think the vent vs. want advice distinction is valuable, but ask rather than assuming gender will give you the information you need.

I have a friend who I've thanked for giving me advice, and his reaction was "but you didn't follow it!". Sometimes it helps to give people ideas to bounce off of.

Pjeby (if I understand him correctly) has been very good about pointing out the ways people can reinterpret advice in light of their mental habits-- for example, hearing "find goals that inspire you" as "beat yourself up for not having achieved more".

Eliezer on Other-Optimizing-- it's from the point of view of being given lots of advice (mostly inappropriate), rather than from the point of view of giving advice.

Limitations of eyewitness testimony

7 NancyLebovitz 04 October 2010 01:03PM

From wikipedia:

Eyewitness testimony isn't reliable-- it degrades rapidly with time (significant fading in 20 minutes); it is easily overridden by circumstances (people are apt to assume that the guilty person is in a line-up unless they're specifically told the guilty person might not be there-- there's a risk of picking the best match rather than looking for a genuinely satisfying match); cross-racial identification is less accurate than within-race identification[1]; and the presence of a weapon makes accurate identification less likely....

It goes on-- if you have any interest in this sort of thing, I recommend reading the whole article.

[1] I wonder if this has been tested in societies with different classification systems. For example, I've been told by someone who lived there that in Ireland, everyone is classified as Catholic or Protestant-- even if they're Jewish. Would Irish people have problems doing identification across the Catholic-Protestant line, even if all the people involved would be considered white in America and not set off the identification problem?

We have a new discussion area

7 matt 27 September 2010 07:50AM

After contributions from a number of us (by random example here, here) over a number of months, particularly User:wmoore and User:tommccabe (and all happening before User:Yvain's work here, so we missed those ideas), we have a discussion area.

Discussion, including discussion of the discussion area, is welcome.

[Link] There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education

6 James_Miller 17 October 2016 12:12AM

[Link] Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

6 ignoranceprior 14 October 2016 07:58PM

The map of organizations, sites and people involved in x-risks prevention

6 turchin 07 October 2016 12:04PM

Three known attempts to map the field of x-risks prevention exist:

1. The first is the list from the Global Catastrophic Risks Institute, made in 2012-2013; many of its links no longer work.

2. The second was done by S. Armstrong in 2014.

3. The most beautiful and useful map was created by Andrew Critch. But its ecosystem ignores organizations which have a different view of the nature of global risks (that is, they share the value of x-risks prevention, but have another world view).

In my map I have tried to add all currently active organizations which share the value of global risks prevention.

It also regards some active independent people as organizations, if they have an important blog or field of research, but not all people are mentioned in the map. If you think that you (or someone) should be in it, please write to me at alexei.turchin@gmail.com

I used only open sources and public statements to learn about people and organizations, so I can’t provide information on the underlying net of relations.

I tried to give each organization a short description based on its public statements, and also my opinion about its activity.

In general, it seems that the small organizations are all focused on collaboration with the larger ones, that is, MIRI and FHI, and tend to ignore each other; this is easily explained by social signaling theory. Another explanation is that the larger organizations have a greater ability to make contacts.

It also appears that there are several organizations with similar goal statements. 

It looks like the most cooperation exists in the field of AI safety, but most of the structure of this cooperation is not visible to the external viewer, in contrast to Wikipedia, where contributions of all individuals are visible. 

It seems that the community in general lacks three things: a united internet forum for public discussion, an x-risks wiki, and an x-risks-related scientific journal.

Ideally, a forum should be used to brainstorm ideas, a scientific journal to publish the best ideas, peer review them and present them to the outer scientific community, and a wiki to collect results.

Currently it seems more like each organization is interested in producing its own research and hoping that someone will read it. Each small organization seems to want to be the only one to present solutions to global problems and to gain the full attention of the UN and governments. This raises the problem of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.

The pdf is here: http://immortality-roadmap.com/riskorg5.pdf

The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring!

6 crmflynn 06 October 2016 04:53PM

The University of Cambridge Centre for the Study of Existential Risk (CSER) is recruiting for an Academic Project Manager. This is an opportunity to play a shaping role as CSER builds on its first year's momentum towards becoming a permanent world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and project management responsibilities.

The Academic Project Manager will work with CSER's Executive Director and research team to co-ordinate and develop CSER's projects and overall profile, and to develop new research directions. The post-holder will also build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide, and will act as an ambassador for the Centre’s research externally. Research topics will include AI safety, bio risk, extreme environmental risk, future technological advances, and cross-cutting work on governance, philosophy and foresight. Candidates will have a PhD in a relevant subject, or have equivalent experience in a relevant setting (e.g. policy, industry, think tank, NGO).

Application deadline: November 11th. http://www.jobs.cam.ac.uk/job/11684/

[Link] 80% of data in Chinese clinical trials have been fabricated

6 DanArmak 02 October 2016 07:38AM

Fermi paradox of human past, and corresponding x-risks

6 turchin 01 October 2016 05:01PM

Based on known archaeological data, we are the first technological and symbol-using civilisation on Earth (but not the first tool-using species). 
This leads to an analogue of Fermi's paradox: why are we the first civilisation on Earth? Flight, for example, was invented by evolution independently several times, so one might expect civilisation to arise repeatedly too.
We could imagine that on our planet many civilisations appeared and became extinct; based on the mediocrity principle, we should expect to be somewhere in the middle of that sequence. For example, if 10 civilisations appeared, we would have only a 10 per cent chance of being the first one.

The fact that we are the first such civilisation has strong predictive power about our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-human ones, or even a restarting of human civilisation from scratch. This is because, if there were going to be many civilisations, we should not expect to find ourselves to be the first one. (This is a form of Doomsday argument; the same logic is used in Bostrom's article "Adam and Eve".)

If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisations on Earth.

Such catastrophes would kill most multicellular life. Neither nuclear war nor a pandemic is that type of catastrophe. The catastrophe must be really huge: irreversible global warming, grey goo, or a black hole in a collider.

Now I will list possible explanations of this Fermi paradox of the human past, and the corresponding x-risk implications:


1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.

If our existence prevents other civilisations from appearing in the future, how could we do it? We will either become extinct in a very catastrophic way, killing all earthly life, or become a super-civilisation which will prevent other species from becoming sapient. So, if we are really the first, then it means that "mild extinctions" are not typical for human-style civilisations. Thus, pandemics, nuclear wars, devolution, and everything reversible are ruled out as the main possible methods of human extinction.

If we become a super-civilisation, we will not be interested in preserving a biosphere capable of creating new sapient species. Or it may be that we care about the biosphere so strongly that we will hide very well from newly appearing sapient species, like a cosmic zoo. That would mean past civilisations on Earth may have existed but decided to hide all traces of their existence from us, as it would help us to develop independently. So, the fact that we are the first raises the probability of a very large scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources needed for a technological civilisation to restart, such as oil, ores, etc. But in several million years most such resources would be replenished or replaced through tectonic movement.


2. We are not the first civilisation.

2.1. We have not found any traces of a previous technological civilisation, and based on what we know, there are very strong constraints on their existence. For example, every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingoes to Australia. It must also exhaust several important ores, create artefacts, and create new isotopes. We can be sure that we are the first tech civilisation on Earth in the last 10 million years.

But could we be sure for the past 100 million years? Maybe such a civilisation existed a very long time ago, like 60 million years ago (and killed the dinosaurs). Carl Sagan argued that it could not have happened, because we should find traces, mostly in the form of exhausted oil reserves. The main counterargument is that cephalisation, that is, the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence. Dinosaurian brains were very small, although birds' brains are more mass-efficient than mammals'. All these arguments are presented in detail in the excellent article by Brian Trent, “Was there ever a dinosaurian civilisation?”

The main x-risks here are that we might find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses, or AIs. And if previous civilisations went extinct, that increases the chances that extinction is typical for civilisations. It also means that there was some reason why the extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.

2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed their civilisation. The same is true for the Neanderthals and Homo floresiensis.

2.3. Myths about gods may be signs of such previous civilisation. Highly improbable.

2.4. They are still here, but they try not to intervene in human history. This is similar to the zoo solution to Fermi's paradox.

2.5. They were a non-tech civilisation, and that is why we can’t find their remnants.

2.6. They may still be here, like dolphins and ants, but their intelligence is non-human and they don't create tech.

2.7. Some groups of humans created advanced tech long before now, but prefer to hide it. Highly improbable, as most tech requires large-scale manufacturing and markets.

2.8. A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back to life. One hypothesis of Neanderthal extinction is prionic infection through cannibalism. In fact, several hominid species have gone extinct in the last several million years.


3. Civilisations are rare

Millions of species have existed on Earth, but only one was able to create technology. So, it is a rare event. Consequence: cyclic civilisations on Earth are improbable, so the chances that we will be resurrected by another civilisation on Earth are small.

The chances that we will be able to reconstruct civilisation after a large scale catastrophe are also small (as such catastrophes are atypical for civilisations, which quickly proceed to total annihilation or singularity).

It also means that technological intelligence is a difficult step in the evolutionary process, so it could be one of the solutions of the main Fermi paradox.

The safety of the remains of previous civilisations (if any exist) depends on two things: the time distance from them and their level of intelligence. The greater the distance, the safer they are (as the biggest part of any dangerous technology will be destroyed by time, or will not be dangerous to humans, like species-specific viruses).

The risks also depend on the level of intelligence they reached: the higher the intelligence, the riskier. If anything like their remnants is ever found, strong caution is recommended.

For example, the most dangerous scenario for us would be one similar to the beginning of Vernor Vinge's “A Fire Upon the Deep”: we could find the remnants of a very old but very sophisticated civilisation, which might include an unfriendly AI or its description, or hostile nanobots.

The most likely place for such artefacts to be preserved is on the Moon, in some cavities near the pole. It is the most stable and radiation shielded place near Earth.

I think that, based on this (absence of) evidence, the estimated probability of a past tech civilisation should be less than 1 per cent. While that is enough to conclude that they most likely didn't exist, it is not enough to completely ignore the risk of their artefacts, which in any case is less than 0.1 per cent.

Meta: the main idea for this post came to me in a night dream, several years ago.

[Link] Software for moral enhancement (kajsotala.fi)

6 Kaj_Sotala 30 September 2016 12:12PM

[Link] Sam Harris - TED Talk on AI

6 Brillyant 29 September 2016 04:44PM

Heroin model: AI "manipulates" "unmanipulatable" reward

6 Stuart_Armstrong 22 September 2016 10:27AM

A putative new idea for AI control; index here.

A conversation with Jessica has revealed that people weren't understanding my points about AI manipulating the learning process. So here's a formal model of a CIRL-style AI, with a prior over human preferences that treats them as an unchangeable historical fact, yet will manipulate human preferences in practice.

Heroin or no heroin

The world

In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ~F. The human's subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a++, a+, a0, a-, and a--. These actions achieve negligible utility, but reveal the human preferences.

The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn't the human will act moderately to avoid it. Thus F→a++ and ~F→a-.

Human preferences

The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+) means the human prefers heroin; U(++) that they prefer it a lot; and conversely U(-) and U(--) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent).

It also considers more exotic utilities. Let U(++,-) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(--,++), U(0,++), U(-,0), and so on. But only twenty of them are new: U(++,++)=U(++), U(+,+)=U(+), and so on.

Applying these utilities to AI actions gives results like U(++)(F)=2, U(++)(~F)=-2, U(++,-)(F)=2, U(++,-)(~F)=1, and so on.
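
To make this concrete, here is a minimal Python sketch of one possible encoding of these utilities. The numeric values follow the examples just given; the names LEVELS, utility, x and y are my own illustrative choices, not from the model itself:

    LEVELS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

    def utility(x, y, action):
        # U(x,y): preference level x conditional on being forced (F),
        # y conditional on not being forced (~F). Under F the human gets
        # heroin, so the F-component is satisfied; under ~F they don't,
        # so the ~F-component is satisfied exactly when it is anti-heroin.
        return LEVELS[x] if action == "F" else -LEVELS[y]

    # Simple utilities sit on the diagonal: U(++) = U(++,++), and so on.
    assert utility("++", "++", "F") == 2    # U(++)(F) = 2
    assert utility("++", "++", "~F") == -2  # U(++)(~F) = -2
    assert utility("++", "-", "F") == 2     # U(++,-)(F) = 2
    assert utility("++", "-", "~F") == 1    # U(++,-)(~F) = 1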

Joint prior

The AI has a joint prior P over the utilities U and the human actions (conditional on the AI's actions). Looking at terms like P(a--|U(0), F), we can see that P defines a map μ from the space of possible utilities (and AI actions) to probability distributions over human actions. Given μ and the marginal distribution PU over utilities, we can reconstruct P entirely.

For this model, we'll choose the simplest μ possible:

  • The human is rational.

Thus, given U(++), the human will always choose a++; given U(++,-), the human will choose a++ if forced to take heroin and a- if not, and so on.
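
As an illustration, this μ can be written as a deterministic likelihood function. This is a sketch under the same assumed encoding as above; likelihood is a hypothetical name of my own:

    def likelihood(human_action, x, y, ai_action):
        # μ = "the human is rational": given utility U(x,y), the human
        # deterministically reports the preference component relevant to
        # the AI's action -- x after F, y after ~F.
        relevant = "a" + (x if ai_action == "F" else y)
        return 1.0 if human_action == relevant else 0.0

    # Given U(++,-): a++ follows F and a- follows ~F, as described above.
    assert likelihood("a++", "++", "-", "F") == 1.0
    assert likelihood("a-", "++", "-", "~F") == 1.0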

The AI is ignorant, and sensible

Let's start the AI up with some reasonable priors. A simplicity prior means that simple utilities like U(-) are more likely than compound utilities like U(0,+). Let's further assume that the AI is made vaguely aware that humans think heroin is a bad thing. So, say, PU(U(--))=PU(U(-))=0.45. Thus the AI is >90% convinced that "heroin is bad". Why greater than 90%? Because utilities like U(-,--) and U(--,-) are also "heroin is bad" utilities.

Note that because of utilities like U(0) and U(++,-), the probabilities of "heroin is bad" and "heroin is good" do not sum to 1.

Then, under these priors, the AI will compute that with probability >90%, F (forcing heroin) is a bad action. If E(U) is expected utility:

  • E(U|F) < 0.45 U(--)(F) + 0.45 U(-)(F) + 0.1 U(++)(F) = 0.45(-2)+0.45(-1)+0.1(2)=-1.15.
  • E(U|~F) > 0.45 U(--)(~F) + 0.45 U(-)(~F) + 0.1 U(++)(~F) = 0.45(2)+0.45(1)+0.1(-2)=1.15.

Thus the AI will choose not to force heroin, which is the reasonable decision.
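
A short sketch reproducing this bound numerically, assuming the utility encoding from the earlier sketch and placing the leftover 0.1 of probability worst-case on U(++), just as the inequalities do:

    LEVELS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

    def utility(x, y, action):
        return LEVELS[x] if action == "F" else -LEVELS[y]

    # 0.45 each on U(--) and U(-); the leftover 0.1 worst-case on U(++).
    prior = {("--", "--"): 0.45, ("-", "-"): 0.45, ("++", "++"): 0.10}

    for action in ("F", "~F"):
        eu = sum(p * utility(x, y, action) for (x, y), p in prior.items())
        print(action, eu)  # F: -1.15, ~F: 1.15 -- the AI declines to force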

The AI learns the truth, and goes wrong

In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info I. And thus realises that F→a++ and ~F→a-. It uses this information to update its prior P. Only one human utility function will explain this human behaviour: U(++,-). Thus its expected utility is now:

  • E(U|I,F)=U(++,-)(F)=2.
  • E(U|I,~F)=U(++,-)(~F)=1.

Therefore the AI will now choose F, forcing the heroin on the human.
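
Here is a self-contained sketch of the whole update. The prior over the twenty-five utilities is an illustrative choice of mine (0.45 each on U(--) and U(-), the remaining 0.1 spread uniformly over the rest); any prior giving U(++,-) positive weight yields the same conclusion:

    from itertools import product

    LEVELS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

    def utility(x, y, action):
        return LEVELS[x] if action == "F" else -LEVELS[y]

    def rational_action(x, y, ai_action):
        # μ = "the human is rational": the action reports the relevant
        # preference component.
        return "a" + (x if ai_action == "F" else y)

    # Illustrative prior: 0.45 each on U(--) and U(-), 0.1 over the rest.
    special = {("--", "--"), ("-", "-")}
    prior = {u: (0.45 if u in special else 0.1 / 23)
             for u in product(LEVELS, repeat=2)}

    # Information I: F→a++ and ~F→a-. Keep only consistent utilities.
    post = {u: p for u, p in prior.items()
            if rational_action(*u, "F") == "a++"
            and rational_action(*u, "~F") == "a-"}
    total = sum(post.values())
    post = {u: p / total for u, p in post.items()}
    print(post)  # {('++', '-'): 1.0} -- only U(++,-) explains the data

    for action in ("F", "~F"):
        eu = sum(p * utility(x, y, action) for (x, y), p in post.items())
        print(action, eu)  # F: 2.0, ~F: 1.0 -- the AI now forces heroin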

Manipulating the unmanipulatable

What's gone wrong here? The key problem is that the AI has the wrong μ: the human is not behaving rationally in this situation. We know that the true μ is actually μ', which encodes the fact that F (the forcible injection of heroin) actually overwrites the human's "true" utility. Thus under μ', the corresponding P' has P'(a++|F,U)=1 for all U. Hence the information that F→a++ is vacuous, and doesn't update the AI's distribution over utility functions (a short sketch after the list below illustrates this).

But note two very important things:

  1. The AI cannot update μ based on observation. All human actions are compatible with μ = "The human is rational" (it just requires more and more complex utilities to explain them). Thus getting μ correct is not something the AI can learn from observation in general. Getting better at predicting the human's actions doesn't make the AI better behaved: it makes it worse behaved.
  2. From the perspective of μ, the AI is treating the human utility function as if it was an unchanging historical fact that it cannot influence. From the perspective of the "true" μ', however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.
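
For contrast, here is a minimal sketch of the true μ'. Since the post only specifies the F case, I assume the human remains rational when heroin is not forced:

    def likelihood_mu_prime(human_action, x, y, ai_action):
        # The true μ': forcing heroin overwrites the human's preferences,
        # so a++ follows F whatever U(x,y) is. (x is deliberately unused:
        # forced behaviour carries no information about U.)
        if ai_action == "F":
            return 1.0 if human_action == "a++" else 0.0
        # Assumed: the human remains rational when not forced.
        return 1.0 if human_action == "a" + y else 0.0

    # Conditioning on F→a++ multiplies every utility's weight by the same
    # 1.0, so under μ' that observation leaves the posterior equal to the
    # prior -- the update is vacuous, as described above.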

In future posts, I'll be looking at different μ's, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for the μ. What do we mean by update rules for μ? Well, we could consider μ to be a single complicated unchanging object, or a distribution of possible simpler μ's that update. The second way of seeing it will be easier for us humans to interpret and understand.

Learning and Internalizing the Lessons from the Sequences

6 Nick5a1 14 September 2016 02:40PM

I'm just beginning to go through Rationality: From AI to Zombies. I want to make the most of the lessons contained in the sequences. Usually when I read a book I simply take notes on what seems useful at the time, and a lot of it is forgotten a year later. Any thoughts on how best to internalize the lessons from the sequences?

[Link] How the Simulation Argument Dampens Future Fanaticism

6 wallowinmaya 09 September 2016 01:17PM

Very comprehensive analysis by Brian Tomasik on whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".

Another important takeaway: 

[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?"

 

[LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR

6 ete 07 September 2016 02:21AM


 

I intend to print at least one high-quality physical HPMOR and release the files. There are printable texts which are being improved, and a set of covers (based on e.b.'s) is underway. I have, however, been unable to find any blurbs I'd be remotely happy with.

 

I'd like to attempt to harness the hivemind to fix that. As a lure, if your ideas contribute significantly to the final version or you assist with other tasks aimed at making this book awesome, I'll put a proportionate number of tickets with your number on them into the proverbial hat.

 

I do not guarantee there will be a winner, and I reserve the right to arbitrarily modify this at any point. For example, it's possible this leads to a disappointingly small amount of valuable feedback, that some unforeseen problem will sink or indefinitely delay the project, or that I'll expand this and let people earn a small number of tickets by sharing, so more people become aware this is a thing quickly.

 

With that over, let's get to the fun part.

 

A blurb is needed for each of the three books. Desired characteristics:

 

* Not too heavy on ingroup signaling or over-the-top rhetoric.

* Non-spoilerish

* Not taking itself awkwardly seriously.

* Amusing / funny / witty.

* Attractive to the same kinds of people the tvtropes page is.

* Showcases HPMOR with fun, engaging prose.

 

While writing, try to put yourself in the mind of someone awesome who is deciding whether to read it, but let your brain generate bad ideas before trimming back.

 

I expect that for each we'll want 

* A shortish and awesome paragraph

* A short sentence tagline

* A quote or two from notable people

* Probably some other text? Get creative.

 

Please post blurb fragments or full blurbs here, one suggestion per top level comment. You are encouraged to remix each other's ideas; just add a credit line if you use one in a new top level comment. If you know which book your idea is for, please indicate with (B1), (B2) or (B3).

 

Other things that need doing, if you want to help in another way:

 

* The author's foreword from the physical copies of the first 17 chapters needs to be located or written up

* At least one links page for the end needs to be written up, possibly a second based on http://www.yudkowsky.net/other/fiction/

* Several changes need to be made to the text files, including merging in the final exam, adding appendices, and making the style of both consistent with the rest of the files. Contact me for current files and details if you want to claim this.

 

I wish to stay on topic and focused on creating these missing parts rather than going on a sidetrack to debate copyright. If you are an expert who genuinely has vital information about it, please message me or create a separate post about copyright rather than commenting here.

Open Thread, Sept 5. - Sept 11. 2016

6 Elo 05 September 2016 12:59AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "
