Comment author:wallowinmaya
01 January 2013 10:50:39PM
*
31 points
[-]
I'm thinking about writing a more comprehensive guide than Skatche's Rationalist's Guide to Psychoactive Drugs. In addition to the substances described in Skatche's guide I would discuss the risks, benefits and possible fields of applications of e.g. benzodiazepines, GHB, opioids and various research chemicals.
Is anyone interested in this kind of stuff? You don't have to comment, upvoting suffices (saves time and gives me precious karma).
And I'm a bit worried that this kind of post falls under the new censorship laws. What do those in power on LessWrong think about that?
Comment author:TimS
02 January 2013 02:09:59AM
6 points
[-]
And I'm a bit worried that this kind of post falls under the new censorship laws.
My analysis:
Do your posts look like solicitation to possess illegal drugs with intent to distribute? (Hint: for anything short of "Please tell me where to buy drugs," the answer is probably no).
Could a malicious prosecutor convince a grand jury to indict Eliezer (or others) as co-conspirators based on what you have written? (Hint: probably not).
In short, you are probably fine. But I am not a "power" on LW.
Just to be clear, I doubt this is Eliezer's thought process. But I suspect it is a fairly accurate heuristic for what is and isn't acceptable.
I agree with your analysis. However, the fact that some people are expressing concern that their comments might violate the new censorship policy suggests that others might abstain, or have already abstained, from posting valuable material to this forum, which in turn increases my credence that the censorship policy does more harm than good.
In context, this 2010 post (capture) is interesting: the current version is about deaths of tobacco company employees, but after comments it was changed from the original, which was about slowing the computer industry to slow AI progress.
The few times I raised this question in the past, my comments were met with either indifference or hostility. I will try to raise it one more time in this open thread. If you think the question deserves a downvote, could you please, in addition to downvoting me, leave a brief comment explaining your rationale for doing so? I promise to upvote all comments providing such explanations.
So, here's the question: What is the reason for defining the class of beings whose volitions are to be coherently extrapolated as the class of present human beings? Why present and not also future (or past!)? Why human and not, say, mammals, males, or friends of Eliezer Yudkowsky?
Note that the question is not: Why should we value only present people? This way of framing the problem already assumes that "we" (i.e., present human beings) are the subjects whose preferences are to be accorded relevance in the process of coherent extrapolation, and that the interests of any other being (present or future, human or nonhuman) should matter only to the extent that "we" value them. What I am asking for, rather, is a justification of the assumption that only "our" preferences matter.
Comment author:Kaj_Sotala
03 January 2013 04:12:36AM
8 points
[-]
Luke lists "Why extrapolate the values of humans alone? What counts as a human? Do values converge if extrapolated?" as an open question in So You Want to Save the World.
Would the choice to extrapolate the values of humans alone be an unjustified act of speciesism, or is it justified because humans are special in some way — perhaps because humans are the only beings who can reason about their own preferences? And what counts as a human? The problem is more complicated than one might imagine (Bostrom 2006; Bostrom & Sandberg 2011). Moreover, do we need to scan the values of all humans, or only some? These problems are less important if values converge upon extrapolation for a wide variety of agents, but it is far from clear that this is the case (Sobel 1999, Doring & Steinhoff 2009).
Of course, the premise that "humans are the only beings who can reason about their own preferences" could only justify the conclusion that some human beings are special, since there are members of the human species who lack that ability. Similar objections could be raised against any other proposed candidate property. This has long been recognized by moral philosophers.
Comment author:MTGandP
04 January 2013 06:50:15AM
*
2 points
[-]
I see no reason to restrict our preference extrapolation to presently-existing humans. CEV should extrapolate from all preferences, which includes the preferences of all sentient beings, present and future. Any attempt to place boundaries on this requires justification.
Edit: You might say, "Why not also include rocks in our consideration?" Simple: rocks don't have preferences. Sentient beings (including many non-human animals) have preferences.
Comment author:TimS
03 January 2013 02:13:00AM
2 points
[-]
I'm not sure that there is community consensus that "human beings currently living" is the right reference class. Eliezer suggests that he thinks the right reference class is all of humanity ever in this post.
If one assumes some kind of moral progress constraint and unpredictable future values, CEV(living humans) seems like something our future descendants would hate. Certainly, modern Westerners would probably hate CEV(Europeans-alive-in-1300). But I'm a moral anti-realist, so I don't believe there are constraints that cause moral progress - and I don't expect CEV(all-humans-ever) to output a morality.
Comment author:TimS
03 January 2013 03:49:14PM
*
3 points
[-]
Gwern collects some evidence against the proposition. The fact that people disagree and think morality is timeless in some sense is not particularly strong evidence when compared to results of competent historical analysis.
Of course, which historical analysis is considered credible is fairly controversial.
Part of the point of CEV is to make the extrapolation process good enough that future beings X won't hate the extrapolation of arbitrary past group Y. The extrapolation should be effective and broad enough that extrapolating from humans in different parts of history would not appreciably change the outcome. My guess would be that the extrapolation process itself would provide most of the content, the starting reference class being a minor variable.
Resolving that issue is part of the overall goal of the SI, and a huge project. I'm also a moral anti-realist, by the way. CEV should be starter-insensitive w/ respect to humans from different time periods. My reasons for why I think that this is achievable in principle would be a whole post.
Comment author:leplen
03 January 2013 07:40:11PM
1 point
[-]
I would also like to see this discussion. It isn't terribly clear to me why the extinction of the human race and its replacement with some non-human AI is an inherently bad outcome. Why keep around and devote resources to human beings, who at best can be seen as sort of a prototype of true intelligence, since that's not really what they're designed for?
While imagining our extinction at the hands of our robot overlords seems unpleasant, if you imagine a gradual cyborg evolution to a post-human world, that seems scary, but not morally objectionable. Besides the Ship of Theseus, what's the difference?
Comment author:[deleted]
19 January 2013 07:14:04PM
*
1 point
[-]
No one else seems to be giving what is IMO the correct answer; I want the values of a created FAI to match my own, extrapolated. ie moral selfishness.
I would actually prefer that the extrapolation seed be drawn only from SI supporters (or ideally just me, but that's unlikely to fly), because I'm uneasy about what happens if some of my values turn out to be memetic, and they get swamped/outvoted by a coherent extrapolated deathist or hedonist memplex. Or if you include, for example, uplifted sharks in the process.
Comment author:TimS
19 January 2013 07:32:46PM
*
0 points
[-]
I too would prefer super AI to look to my values when deciding what to implement.
But, given the existence of moral disagreement, I don't see why that deserves to be labeled Friendly. And the whole point of CEV or similar process is to figure out what is awesome for humanity. Implementing something other than what is awesome for all of humanity is not Friendly.
If deathism really is what is awesome for all humanity, I expect a FAI to implement deathism. But there's no particular reason to believe that deathism is what is awesome for humanity.
Tim, your comment highlights the potential conflict between CEV and FAI that I also mentioned previously. FAI is by definition not hostile to human beings, whereas CEV might permit, or even require, the extinction of all humanity. This may happen, for instance, if the process of coherent extrapolation shows that humans value certain superior beings more than they value themselves, and if the coexistence of humans and these beings is impossible.
When I pointed out this problem, both Kaj Sotala and Michael Anissimov replied that CEV can never condone hostile actions towards humanity because FAI is "defined as 'human-benefiting, non-human harming'". However, this reply just proves my point, namely that there is a potential internal inconsistency between CEV and FAI.
Comment author:TimS
20 January 2013 03:46:53AM
0 points
[-]
Don't look at me to resolve that conflict. I think moral extrapolation is unlikely to output anything coherent if the reference class is sufficiently large to avoid the objections I raised above. And I can't think of any other plausible candidate to produce Friendly instructions for an AI.
Slight sidetrack: By the time AI seems plausible, I think it's likely that the human race will have done enough self-modification (computer augmentation, biological engineering) that the question of what's human is going to be more difficult than it is now.
Comment author:shminux
11 January 2013 08:06:05PM
*
14 points
[-]
Just wanted to point out that many contributors to the site are afflicted by what I call "theoritis", a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field. The field in question can be psychology, neuroscience, physics, math, computer science, you name it.
It is rare that people consider the reverse situation first: what would I think of an amateur who argues with me in my area of competence? For example, if you are an auto mechanic, would you take seriously someone who tells you how to diagnose and fix car issues without ever having done any repairs? If not, why would you argue about quantum mechanics with a physicist, with a decision theorist about utility functions, or with a mathematician about first-order logic, unless that's your area of expertise? Of course, looking back at what I post about, I am no exception.
OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.
Comment author:OrphanWilde
25 January 2013 02:10:19PM
0 points
[-]
In practice the two are, in my line of work, very difficult to separate. The what is almost always the how. But both, out of practical necessity. When the client insists on a particular implementation, that's the implementation you go with.
Comment author:OrphanWilde
25 January 2013 04:40:35PM
0 points
[-]
That's part of it, but no, that's not what I'm referring to. Client necessities are client necessities.
"Encryption and file delivery need to be in separate process flows" would be closer. (This sounds high-level, but in the scripting language I do most of my work in, both of these are atomic operations.)
A relevant distinction that you are not making is between the questions that are well-understood in the expert's area and the questions that are merely associated with the expert's area (or are the expert's own inventions), where we have no particular reason to expect that the expert's position on the topic is determined by its truth and not by some accident of epistemic misfortune. The expert will probably know the content of their position very well, but won't necessarily correctly understand the motivation for that position. (On the other hand, someone sufficiently unfamiliar with the area might be unable to say anything meaningful about the question.)
Comment author:bogus
13 January 2013 12:42:39AM
*
0 points
[-]
Good point. Also, even when questions are well-understood by domain experts it still can be very effective to argue about them, since this usually leads to the clearest arguments and explanations. This is especially true since the social norms on this site highly value truth-seeking, epistemic hygiene (including basic intellectual honesty) and scholarship: in many other venues (including some blogs), anti-expertise attitudes do lead to bad outcomes, but this does not seem to apply much on LW.
Comment author:Kawoomba
11 January 2013 09:59:50PM
*
0 points
[-]
(...) a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field.
Not exactly a green amateur, so how could he have set that norm? EDIT: Retracted, you answered in another comment.
Comment author:IlyaShpitser
11 January 2013 09:46:41PM
*
4 points
[-]
Come on, Luke has a series of posts taking a shit on the entire discipline of philosophy. Luke is not an expert on philosophy. EY says he isn't happy with do(.) based causality while getting basic terminology in the field wrong, etc. EY is not an expert on causal inference. If you disagree with Larry Wasserman on a subject in stats, chances are it is you who is confused. etc. etc. Communication and scholarship norms here are just awful.
If you want to see how academic disagreements ought to play out, stroll on over to Scott's blog.
edit: To respond to the grandparent: I think the answer is adopting mainstream academic norms.
Comment author:Wei_Dai
11 January 2013 09:59:28PM
*
4 points
[-]
shminux explicitly excluded philosophy, and I wasn't aware of the other two examples you gave. Can you link to them so I can take a look? (ETA: Never mind, I think I found them. ETA2: Actually I'm not sure. Re Wasserman, are you referring to this?)
Comment author:whowhowho
25 January 2013 01:14:06PM
1 point
[-]
I couldn't agree more. Mainstream academia is a set of rationality skills, and a very case-hardened one. Adding something extra, like cognitive science, might be good, but LW omits a lot of the academic virtues -- not blowing off about things you don't know, making an attempt to answer objections, modesty, etc.
PS: Tenure is a great rationality-promoting institution because...left as an exercise to the reader.
I think philosophy does belong on the list if you are arguing some matters of philosophy but not others. There is a field common to all mathematics-heavy disciplines, namely mathematics, with huge overlaps, and there's no reason why, for example, a physicist couldn't correctly critique a philosopher's bad mathematics, even though most non-philosophers or amateur philosophers really should learn rather than argue, since a philosopher is a bit of an expert in mathematics.
Comment author:whowhowho
25 January 2013 01:04:14PM
*
0 points
[-]
OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.
I find that an odd statement. Why can't you assume by default that arguing with an expert in X is bad for all X?
For some reason, theoritis is much worse with regard to philosophy than just about anything else. Amateurs hardly ever argue with brain surgeons or particle physicists. I think part of the reason for that is that brain surgeons and particle physicists have manifest practical skills that others don't have. The "skill" of philosophy consists of stating opinions and defending them, which everyone can do to some extent. The amateurs are like people who think you can write (well, at a professional level) because you can type.
The test was a simple assessment of the subjects' ability to sit and then rise unaided from the floor. The assessment was performed in 2002 adults of both sexes and with ages ranging from 51 to 80 years. The subjects were followed-up from the date of the baseline test until the date of death or 31 October 2011, a median follow-up of 6.3 years.
Before starting the test, they were told: "Without worrying about the speed of movement, try to sit and then to rise from the floor, using the minimum support that you believe is needed."
As might be predicted, I'm putting in a little work on improving my ability at the test-- I have no idea whether this is an example of Goodhart's Law.
Comment author:Wei_Dai
02 January 2013 01:34:34PM
10 points
[-]
A couple of quick points about "reflective equilibrium":
I just recently noticed that when philosophers (and at least some LWers including Yvain) talk about "reflective equilibrium", they're (usually?) talking about a temporary state of coherence among one's considered judgments or intuitions ("There need be no assurance the reflective equilibrium is stable—we may modify it as new elements arise in our thinking"), whereas many other LWers (such as Eliezer) use it to refer to an eventual and stable state of coherence, for example after one has considered all possible moral arguments. I've personally always been assuming the latter meaning, and as a result have misinterpreted a number of posts and comments that meant to refer to the former. This seems worth pointing out in case anyone else has been similarly confused without realizing it.
I often wonder and ask others what non-trivial properties we can state about moral reasoning (i.e., besides that theoretically it must be some sort of an algorithm). One thing that I don't think we know yet is whether, for any given human, their moral judgments/intuitions are guaranteed to converge to some stable and coherent set as time goes to infinity. It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments, or none if for example their conclusions keep wandering chaotically among several basins of attraction as they review previously considered arguments. So I think the singular term "reflective equilibrium" is currently unjustified when talking about someone's eventual conclusions, and we should instead use "the possibly null set of eventual reflective equilibria". (Unless someone can come up with a pithier term that has similar connotations and denotations.)
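A toy sketch of the order-dependence worry: if an agent accepts each new argument only when it doesn't contradict something already accepted, different presentation orders can settle into different stable sets. (The "arguments" and contradiction relation here are purely hypothetical, chosen only to make the point visible.)

```python
# Toy model of order-dependent equilibria: accept each argument in turn
# unless it contradicts something already accepted. The final, stable
# belief set then depends on presentation order.
def settle(arguments, contradicts):
    accepted = []
    for a in arguments:
        if not any((a, b) in contradicts or (b, a) in contradicts
                   for b in accepted):
            accepted.append(a)
    return set(accepted)

contradicts = {("A", "B")}  # hypothetical: A and B cannot both be held
print(settle(["A", "B", "C"], contradicts))  # {'A', 'C'}
print(settle(["B", "A", "C"], contradicts))  # {'B', 'C'}
```

Both runs reach a fixed point (no further argument changes the set), yet the two fixed points disagree — a minimal version of "multiple eventual reflective equilibria".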
Comment author:Emile
02 January 2013 10:00:55PM
*
2 points
[-]
It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments
Another way to get several equilibria would be moral judgements whose "correctness" depends on whether other people share them. I find it likely that there would be some like that, since you get those in social norms and laws (like, on which side of the road you drive, or whether you should address strangers by their first or last name), and there's a bit of a fuzzy continuum between laws, social norms, and morality.
Lead and crime: arguments that lead has a lot to do with crime levels, and discussion of why this has gotten so little attention.
Just to indulge in a little evolutionary psychology..... Punishing people and helping people are both strong drives, but spending a lot of money on lead abatement (the lead from gasoline is still in the soil, and it keeps coming back-- lead paint is still a problem, too) is pretty boring.
ETA: And worse, progress with lead abatement is literally invisible (you don't have a dam or a highway so it looks like you're doing something) and the good effects take some 15 or 20 years to be obvious.
The basic point is reasonable, but there are so many things that bother me about that article.
Drum's credulity varies a lot in this article. His lowest level is about where I stand. I have to wonder if that actually reflects his beliefs, and the rest of it is forced enthusiasm meant to reflect value rather than truth; that is, he is doing an expected value calculation. Certainly, he should be applauded for scope sensitivity.
Perhaps the biggest thing that bothers me is that Drum tries to have it both ways: small amounts of lead matter and big amounts of lead matter. It seems rather unlikely that this is true. Maybe 10μg/dL has a huge effect, but if so, I doubt that 20 has double that effect, and this ruins all the analysis of the first half of the article. This is important because there is a logical trade-off between saying that past lead reduction was useful and saying future lead reduction will be useful. In particular, Drum says that Kleiman says that if the US were to eliminate lead, it would reduce crime by 10%. Did he just make up this number, or does it come out of a model? I'd like to see the model because even if he pulled the model out of thin air, it forces him to deal with the logical trade-off.
In Kleiman's book, he says that eliminating lead paint would reduce crime by 5% and attributes it to Nevin 2000. On the same page, he misquotes Nevin in a way that makes me not trust Kleiman with models. But that's OK because he has a citation, not a model. I cannot find the claim in Nevin's paper. There is a model on p19 that says that 6 points of IQ, applied to the lowest 30% of the population, could explain the past decline. And that's at a rate of 2 points of IQ for 10μg/dL, a small enough rate that I'm willing to extrapolate linearly. If you assume crime is linear in lead, the 5% number is reasonable, except for the assumption that lead explains all of the past decline. (I'm not sure Nevin actually makes this assumption because I don't think he makes a prediction about eliminating lead; in this section, I think he's just doing a reality check that the known IQ effect of lead plus the known correlation of IQ and crime is big enough to explain the whole drop in crime.)
So I am bothered by Drum's language about the effects of low levels of lead, even though the suggestion of a 10% drop in crime maybe survives the trade-off between past and future. (And how does Kleiman's 5% turn into "Kleinman's" 10%? windows vs windows+soil?)
From the first half of the article:
the field of econometrics gives researchers an enormous toolbox of sophisticated statistical techniques
Econometrics gives people enough rope to publish themselves. Plus they implement these algorithms in spreadsheets, to hide the bugs from themselves.
murder rates have always been higher in big cities than in towns and small cities
If lead explains everything, this should not always have been true. In fact, I think it was not true in 1960. The graph Drum cites starts in 1975, after most of the increase in national murder rates had already happened, but there is very little dependence on city size until later. The graph seems to me evidence against the claim that lead explains this detail. Anyhow, such bucketed graphs are a bad way to test this hypothesis. In particular, there are only 9 "big cities" and NYC has 1/3 of this population. The convergence today is probably driven just by NYC now having a lower murder rate than small cities.
Drum says that Newark's crime rate dropped 75%. That is true, but it is also true that Newark's murder rate has rebounded to its peak. I don't know how to resolve this. I usually prefer murder rates because they are harder to fake, but there are only about 80 murders in the worst years, making the data quite noisy.
That the graphs of leaded gasoline and crime match perfectly, up until the year that Nevin's first paper was published, screams publication bias.
Crack:
Trying to explain the crack epidemic in terms of childhood seems like a serious error to me. It seems very clear to me that it was contagious. How it spread and why it burnt itself out, I do not know. Regardless, one can disprove Nevin's model's claim to explain the crack epidemic, like Levitt's spreadsheet fraud before it, because it assumes that the age of criminals is constant in time. In fact, the crack epidemic involved young murderers, born after lead levels had started to decline. I think Nevin worries about this in later papers, but I don't know what he does.
Here is a suggestion for a better model for testing Nevin's hypothesis than he used in 2000: instead of lagging on some constant, create a new time series of murder by age of birth. This also corrects for the demographic problems such as the baby boom. The disadvantage is that this loses exogenous effects, such as the crack epidemic, which hit multiple ages simultaneously. Yet another time series, to avoid the problem of missing data, uses the age of the victim rather than of the perp.
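The birth-cohort reindexing suggested above can be sketched in a few lines. (The incident records here are entirely hypothetical; the point is only the transformation from year-of-crime to year-of-birth, which lines each cohort up with its own childhood lead exposure instead of lagging the whole crime series by one fixed constant.)

```python
from collections import Counter

# Hypothetical incident-level records: (year of murder, perpetrator's age).
# Reindexing each murder by the perpetrator's birth year groups crimes by
# cohort, automatically correcting for demographic shifts like the baby
# boom, rather than applying a single constant lag to the aggregate series.
incidents = [(1988, 17), (1988, 25), (1991, 18), (1991, 24), (1994, 30)]
by_birth_cohort = Counter(year - age for year, age in incidents)
print(sorted(by_birth_cohort.items()))
# Each cohort's count can then be regressed against that cohort's
# measured childhood blood-lead levels.
```

The same construction, using the victim's age instead of the perpetrator's, gives the alternate series mentioned above for cases where perpetrator data is missing.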
So Nevin fails to explain the crack epidemic, but if he just explains the big rise and the big fall, that's a big deal. Unfortunately, the presence of the crack epidemic masks the big fall. In the absence of crack, when would crime have started falling? Perhaps it would have started falling earlier, but was elevated by crack. Or perhaps all those dead or jailed young teens would have become 25 year old criminals and so the effect of crack was to speed things up, including the falling crime rate.
Comment author:[deleted]
10 January 2013 07:08:57PM
1 point
[-]
There's a lot you can do to remediate lead and the bioavailable forms of it, fortunately (been working on a garden in an urban area, and bioremediation is a chief concern) -- it doesn't just have to involve removing it. Unfortunately, it's still likely to be rather expensive and unglamorous, so it'll be a tough sell as a point of policy.
The sexy project would be to figure out how to undo the effects of lead on people years after they'd been exposed as children. I think succeeding at this would be wonderful, but I wouldn't put off cleaning up lead in the environment in the meanwhile.
Comment author:[deleted]
10 January 2013 10:36:17PM
1 point
[-]
That'd be beyond "sexy"; the effects of lead poisoning on the central nervous system are generally considered irreversible. I daresay anything that could repair that sort of brain damage would have a whole host of other applications...
Comment author:mstevens
03 January 2013 11:09:42AM
*
7 points
[-]
Random idea inspired by the politics thread: Could we make a list of high quality expressions of various positions?
People who wished to better understand other views could then refer to this list for well expressed sources.
It seems like there might be some argument about who "really" understood a given point of view best, but we could resolve debates by having eg pastafarianism-mstevens for the article on pastafarianism I like best, and pastafarianism-openthreadguy for the one openthreadguy prefers.
Comment author:drethelin
03 January 2013 11:23:55PM
3 points
[-]
Wow, that's amazingly good. It reminds me of how baffled I was by the degree to which everyone hated Ayn Rand after I read Atlas Shrugged as a teenager, and I now realize the reason is that everyone thought she was arguing against things she wasn't arguing against.
Comment author:TimS
04 January 2013 03:02:43PM
3 points
[-]
By not being formally respectable, TVtropes gets an otherwise skeptical audience (western nerds) to seriously consider certain philosophical positions that they are otherwise quite hostile to.
If LW concepts (eg mindkiller, raising the sanity line, paying rent in anticipated experience) were as popular as similarly philosophical TVtropes concepts, I think SI and CFAR leadership would be thrilled.
I was thinking about it from a different angle-- that sometimes lack of respectability leaves more room for conscientiousness.
It doesn't always work that way-- but so far tvtropes is a home for people who genuinely want to get the details of popular culture right. It seems odd, but it doesn't seem to have the problems with fraud and sloppiness that science does. Is this because people care more about popular culture than science? Or is it just that if tvtropes becomes respectable, the rewards for cheating will go up?
Comment author:TimS
04 January 2013 07:11:53PM
*
1 point
[-]
I hadn't thought of it that way - it's very plausible.
But some of the fraud in science is just lost purpose. If you need a certain number of publications to advance in your job, submitting fraudulent studies seems much more rewarding. And TVtropes doesn't have a similar issue - in part because of the lack of respectability you noted.
Comment author:JoshuaZ
02 January 2013 01:35:12AM
7 points
[-]
Is rubber part of the Great Filter? This thought occurred to me while reading Charles Mann's "1493" about the biological exchange post Columbus.
Rubber was a major part of the industrial revolution (allowing insulation of electric lines, and important in many industrial applications for preventing leaks). Rubber only arose on a single continent in a small set of species. While synthetic rubber exists, for many purposes it isn't of as high quality as natural rubber. Moreover, having the industrial infrastructure to make synthetic rubber would be extremely difficult without modern rubber. Thus, a civilization just like ours but without rubber might not have been able to go through the industrial revolution. This situation may also be relevant to Great Filter issues in our future: if civilization collapses and rubber becomes wiped out in the collapse, is this another potential barrier to returning to a functional civilization, especially if there's less available coal and oil to make synthetic rubber easily?
Comment author:gwern
02 January 2013 02:01:56AM
15 points
[-]
Rubber doesn't sound that important to me. The Wikipedia article includes all sorts of useful bits: it only went into European use in the late 1700s, at earliest, well after most datings of the Scientific and Industrial Revolutions; most rubber is now synthesized from petroleum; many uses of insulation like transoceanic telegraphs used gutta-percha which is similar but not the same as rubber (and was superior to rubber for a long time); and much use is for motor-vehicle tires, which while a key part of modern civilization, does not seem necessary for cheap long-distance transportation of either goods or humans (consider railroads).
So rubber doesn't look like a defeater. If it didn't exist, we'd have more expensive goods, we'd have considerably different transportation systems, but we'd still have modern science, we'd still have modern industry, we'd still have cheap consumer goods and international trade, and so on and so forth.
Comment author:negamuhia
01 January 2013 02:15:04PM
*
7 points
[-]
Happy New Year, LWers, I'm on a 5 month vacation from uni, and don't have a job. Also, my computer was stolen in October, cutting short my progress in self-education.
Given all this free time I have now, which of these 2 options is better?
Buy a road bicycle & start a possibly physically risky job as a freelance bike-messenger within my city (I'm that one guy from Nairobi) in order to get out of the house more, then buy a laptop and continue my self-education in programming, computer science, philosophy, etc.
or
buy a laptop, do quick and easy wordpress websites for local businesses, then buy the bike and use it for leisurely riding under no pressure? I only have money for either one or the other for now, and for some reason I'm hesitating. Maybe it's because I want to do both. This is important to me, and I'll appreciate any discussion on this. Thanks.
Comment author:dbaupp
01 January 2013 02:56:29PM
9 points
[-]
I don't have anything specific to offer, but (in theory) hard choices matter less. And if you literally can't decide between them, you can try flipping a coin to make the decision and as it is in the air, see which way you hope it will end up, and that should be your choice.
Additionally, you can try the reframing technique. Anna describes it here:
When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))
The example she gives isn't quite isomorphic to the choice you're making, but I think the technique still may be worth trying. Imagine you're currently living out one option but given the chance to take the other - how would you feel about it? And vice versa.
Comment author:negamuhia
03 January 2013 09:26:38AM
*
1 point
[-]
dbaupp, ParagonProtege, thank you both for the links and suggestions. I'm going with the laptop. Anything else I could do (naturally, there's a lot I want to do) will be kickstarted by the modest but easy(ish) money I'll get by doing ~$100 websites, as I upgrade my code-fu for Other Stuff. ;)
I also haven't cycled actively for years & I'm afraid my unfit body might conk out on me, making me unable to Do The Job once I commit. Cliff scaling is much harder than hill climbing.
From Alicorn's post, I can easily tell that after I get the laptop, the correct thing to have would be a bike, since I can ease myself back into cycling regularly. It's also weird how I saw the Other Option (buy bike, work, afford laptop, buy laptop, cut down on bike work as I increase study and laptop work hours) as just as good, even though I know I will feel like a flake if I stop riding after it gets tougher and more tiring, which is more likely than giving up on WordPress. WordPress isn't even the only option for devastatingly easy Internet work.
Comment author:ahh
10 January 2013 02:50:33AM
6 points
[-]
Can anyone recommend a good therapist in San Francisco (or nearby) who's rationalism-friendly? I have some real problems with depression and anxiety, but the last time I tried to get help the guy told me I was paying too much attention to evidence and should think more spiritually and less rationally. Uh...huh.
If you don't want to post publicly here, PM or email is fine.
I'll second drethelin; CBT is both evidence-based as a treatment method (there's evidence it works) and evidence-based in practice, meaning you don't have to believe in it or anything; you just follow the prescribed behaviors and observe the results. Really, it's highly rationalism-friendly, being mainly about noticing and combating "cognitive distortions" (e.g. generalizing from one example, inability to disconfirm, emotional reasoning, etc.). A therapist who specializes in CBT can pretty safely be assumed not to be in the habit of dragging "spirituality" into their work.
Comment author:ahh
10 January 2013 09:37:08PM
2 points
[-]
I agree that CBT is well-supported by the evidence and in general should be rationalism-friendly, but that isn't always so. The therapist I mentioned in my OP was, in fact, calling himself a CBT practitioner. So I was hoping someone knew a CBT practitioner (or one of another equally well-supported method, honestly) whom they personally liked.
Comment author:Vaniver
11 January 2013 02:26:27AM
2 points
[-]
There are a handful of CBT books that are about as effective in general as having a therapist. You might be interested in Feeling Good, The Depression Workbook, or The Anxiety Workbook. I recommend that you keep looking for social support as well.
Comment author:knb
10 January 2013 06:11:03PM
*
0 points
[-]
You might want to look at Rational-emotive behavior therapy (REBT), and the affiliated organizations' websites. There are usually a few REBT therapists in any major city.
Comment author:Qiaochu_Yuan
01 January 2013 01:40:29PM
14 points
[-]
Can someone who's familiar with Mencius Moldbug's writing briefly summarize his opinions? I've tried reading Unqualified Reservations but I find his writing long-winded. He also refers to a lot of background knowledge I just don't have, e.g. I don't know what I'm supposed to take away from him calling something Calvinist.
Comment author:[deleted]
01 January 2013 02:04:12PM
*
10 points
[-]
This is a tall order. Nearly everyone I talk to, while getting the same basic models, seems to emphasise wildly different things about them. Their updates on the matter also vary considerably, everything from utterly changing their politics to just mentally noting that you can make smart arguments for positions very divergent from the modern political consensus. Lots of people dislike his verbose style.
That is certainly the reason I haven't read all of his material so far.
I think the best way to get a summary is to discuss him with people here who have read him. They will likely learn things too. When it's too political, continue the discussion either in the politics thread or in private correspondence.
To this I would add the comment history of fellow LWer Vladimir_M, which is littered with high-quality Moldbug-like arguments on various issues. Who knows, a few new responses might coax him out of inactivity!
I recall some old sort of interesting discussion of Moldbuggian positions in which I participated as well:
Comment author:Alejandro1
01 January 2013 10:16:52PM
3 points
[-]
By the way: I was pondering Les Miserables not long ago in anticipation of the movie, and realized that both the musical and the original novel are an exact artistic/literary expression of what Moldbug calls Universalism (down to details like the family lineage from Christianity (the bishop at the beginning) to revolutionary politics). And the character of Javert summarizes perfectly Moldbuggian philosophy, e.g. "I am the law and the law is not mocked!" Would you agree?
Comment author:TimS
01 January 2013 10:43:12PM
2 points
[-]
If we take the Javert = Moldbug metaphor seriously, how should we interpret Javert's later conclusion that his earlier philosophy contains a hopeless conflict between authority-for-its-own-sake and helping people live happier lives?
Comment author:Alejandro1
01 January 2013 11:08:14PM
2 points
[-]
Well, the story is set up to favor Universalism. If Moldbug had written it, probably it would have ended with Valjean concluding that his earlier philosophy contained a hopeless conflict between rejecting authority and helping people live happier lives.
Comment author:TimS
02 January 2013 01:27:36AM
*
2 points
[-]
I'm smirking at the idea of a Moldbuggian story of the uprising of 1832. Revolutionists Get What They Deserve or some-such. :)
But I don't think that story has room for the complex characters of Hugo's story, narratively speaking. There's no room at all for Valjean, and Javert becomes simply the protagonist to the evil antagonist Enjolras.
Ultimately, you asked if canon!Javert embodies Moldbug. As I suggested above, I think the answer is no. He's a tragic figure - even Hugo would admit that > 75% of the time, the king's law points toward a just outcome. But Javert was blind to the fact that the king's law contained deep flaws.
I don't know if the passage survives the standard abridgements, but Javert writes a note to his superiors listing several minor injustices in the local prison system, immediately before killing himself. Even after conversion, Javert fails to realize that he was the only person who both (1) knew about the issues, and (2) cared about the injustice. That episode, and Javert as a character, are deeply tragic in my opinion.
And I can't imagine Moldbug caring about those issues at all. Obviously, Moldbug's choices would be different - but I don't get the impression Moldbug would think the minor injustices were even worth his attention if he were in Javert's situation.
Comment author:drethelin
01 January 2013 11:06:10PM
1 point
[-]
It's a lesson about what happens when you combine the virtuous with a pernicious system of virtue. The liberal backlash against strong authoritarianism/belief in the rule of law is one way of reacting to such a world. "The laws are evil, therefore their enforcers are evil." The other side of this is people who believe the laws are good and anyone who enforces them is good. Both views are lacking nuance. Javert is someone who has spent his life believing that he is good because he enforces the laws, which are good. He can't live with the idea that he has been "bad" all along.
Comment author:Vaniver
01 January 2013 07:08:16PM
4 points
[-]
If you've got a few hours, I found the Gentle Introduction to be sufficiently gentle, but it does have nine parts and is written in his regular style. I think the first part is strongly worth slogging through, in part because his definition of "church" is a great one. I may write a short summary of it at some point, but that's a nontrivial writing project.
Comment author:ChristianKl
04 January 2013 01:23:26PM
3 points
[-]
Moldbug has a variety of opinions that he expresses in his articles. Summarizing all of them is therefore hard. I will try to list a few.
Moldbug rejects the progressive project. That means he's opposed to most political ideas of Woodrow Wilson and the presidents after Wilson.
Moldbug rejects modern democracy. He thinks that the US military should orchestrate a coup d'état. After the coup d'état the US should split up, and every state should have its own laws.
In the ideal case, Moldbug wants the states to be run like joint-stock companies. If that isn't possible, Moldbug prefers the way Singapore and Qatar are governed to the way the US is governed. According to him, competition between a lot of states that are governed like Singapore is better than a huge federal government.
I suspect that Moldbug thinks a military coup is only a means to an end. He wants government rule on a for profit basis, with essentially no tolerance of social disorder - other than vote with your feet (i.e. leaving). This is the concept he calls "Patches."
Comment author:ChristianKl
04 January 2013 03:29:28PM
1 point
[-]
Your timeline starts too late. Moldbug rejects the Glorious Revolution.
Moldbug does reject it; however, I'm not sure that he rejects all pre-20th-century political developments.
He seems to like corporations and corporations have gotten much more legal rights than they had before the Glorious Revolution.
Comment author:[deleted]
01 January 2013 02:14:59PM
*
4 points
[-]
He also refers to a lot of background knowledge I just don't have, e.g. I don't know what I'm supposed to take away from him calling something Calvinist.
Could you please clarify: are you unsure what he means when he calls a position Calvinist (presumably Crypto-Calvinist or something like that), or are you just unsure what Calvinism is?
The short and sufficient answer to the second is that this is a designation for a bunch of Protestant Christians who historically took themselves very seriously and have a reputation for being dour. Take special note of the Five Points of Calvinism.
The short and insufficient answer to the first is: people who have ethical, political and philosophical ideas that can't be justified by their declared systems of ethics, but can be perfectly well explained if you note that the memeplexes in their heads are descended from the highbrow American Protestantism of previous centuries. He goes into several things he considers indications of this, and points out that they dislike this explanation very much and want to believe their positions are the result of pure reason or Whiggish notions of history inching towards a universal "true human morality".
The former, but thanks for your clarification on both (I imagine your clarification on the latter is a relevant connotation Moldbug wanted and that I was largely ignorant of).
Comment author:Vaniver
13 January 2013 03:47:14PM
*
5 points
[-]
Watson, the IBM AI, was fed Urban Dictionary to increase its vocabulary / help it understand slang. It started swearing at researchers, and they were unable to teach it good manners, so they deleted the offending vocabulary from its memory and added a swear filter. IBTimes.
Comment author:D_Malik
04 January 2013 08:14:18AM
*
5 points
[-]
It seems to be common knowledge that exposure to blue light lowers melatonin and reduces sleepiness, and that we can thus sleep better if we wear orange glasses or use programs like Redshift that reduce the amount of blue light emanating from the strange glowing rectangles that follow us around everywhere.
So an idea I had is that maybe wearing blue glasses might increase alertness. I've been weirdly fatigued during the day lately, even though I've been using melatonin and redshift. But does the /absolute/ magnitude of the blue light matter, or the amount of blue relative to other colours? Blue glasses would mostly have no effect on the absolute amount, but would increase the relative amount. Orange glasses decrease both so considering them isn't much help.
I tried looking for studies but I have no experience doing that and I only came up with one that actually compares bright ambient light to dim blue light; it found that dim (1 lux) blue light was better for alertness than 2-lux ambient white light.
Thoughts? Anyone better-informed about these things have comments?
Edit: For a sense of scale: lux measures illuminance (luminous flux per unit area); 50 lux is living-room lighting; a candle at 20cm is 10-15 lux; a full moon on a clear night is 0.3 to 1.0 lux. "White light" is actually only about 11% blue light (source), so the 2 lux of white light in the study is about 0.2 lux of blue, which is bad because it means that the linked study's result could be explained either by more absolute or more relative blue light.
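The comparison above can be sketched in a few lines of Python; the 11% blue fraction is taken from the comment's cited source and should be treated as an assumption:

```python
# Rough comparison of absolute blue light in the study's two conditions.
# Assumes white light is ~11% blue, per the source cited in the comment.

BLUE_FRACTION_OF_WHITE = 0.11  # assumed value from the linked source

def blue_component(lux_total, blue_fraction):
    """Illuminance (lux) contributed by blue wavelengths."""
    return lux_total * blue_fraction

white_condition = blue_component(2.0, BLUE_FRACTION_OF_WHITE)  # 2-lux white light
blue_condition = blue_component(1.0, 1.0)                      # 1-lux pure blue light

print(f"blue in white condition: {white_condition:.2f} lux")  # 0.22 lux
print(f"blue in blue condition:  {blue_condition:.2f} lux")   # 1.00 lux
```

So the "dim blue" condition actually delivered several times more absolute blue light than the "brighter white" condition, which is exactly why the study can't separate the absolute from the relative explanation.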
Comment author:wedrifid
04 January 2013 03:47:47PM
4 points
[-]
So an idea I had is that maybe wearing blue glasses might increase alertness. I've been weirdly fatigued during the day lately, even though I've been using melatonin and redshift. But does the /absolute/ magnitude of the blue light matter, or the amount of blue relative to other colours? Blue glasses would mostly have no effect on the absolute amount, but would increase the relative amount.
Unless the mechanism which causes our pupils to constrict is itself sensitive exclusively to blue light, those blue glasses will increase the absolute amount of blue light that makes it into your eyes.
Comment author:tut
04 January 2013 01:47:31PM
*
1 point
[-]
There is light therapy for people who get depressed in the winter. If I understand correctly, they nowadays use "full spectrum" (i.e. white) light, not blue light. That might have something to do with what you are talking about, and in that case it is evidence that it is not just the proportion of blue light that matters.
Comment author:Wei_Dai
13 January 2013 11:49:21PM
4 points
[-]
Do the current moderation policies allow editors to add "next in sequence" and "previous in sequence" links to posts that don't already have such links, and are there any editors willing to do this? If not, can we change the policy to allow this? And I'd like to volunteer to add such links at least to the posts that I come across (I'm already a moderator but not an editor).
The hard problem of consciousness is starting to seem slightly less impossible to me than it used to.
Specifically, I remember reading someone's dismissal of the possibility of a reductionist explanation of consciousness, something along the lines of, "What? You think someone's going to come up with an explanation of consciousness, and everyone else will slap their forehead and say, 'Of course, that's it'"?
But that kind of argument from incredulity fails because it conflates explanation (writing down or speaking an argument that other humans will hopefully understand) with understanding (whatever-it-is human brains do to model reality).
For example, there are lots of people who mistakenly think a reductionist explanation of free will is impossible, who will not magically be cured by handing them a well-written explanation of compatibilism, because in order for that to work, they would have to read and understand the argument, and whatever process the human brain uses to read and understand stuff could be flawed in such a way that most people just won't get it. Or more mundanely, it takes years to learn a technical discipline like math or chemistry. A mathematician can't just tell an arbitrary person about their ideas; one would need to study for years to understand what the words mean.
In general, none of us really know what other humans are thinking; we're just making inferences from observing their behavior. I trust the global mathematical community enough such that I believe it when I hear news that the Poincare conjecture has been proven, even though I haven't built up the skills to understand the proof. But suppose some neuroscientist somewhere has come up with an adequate explanation of consciousness, but wasn't able to convince their colleagues, because the explanation requires unusual skills for which there is no standard vocabulary and which are very hard to teach ... how would I be able to tell whether or not this has already happened?
Maybe all of this was obvious to some of you (in which case I apologize for being a slow learner), and maybe some of you have no idea what I'm trying to talk about (in which case I apologize for being a poor explainer).
Comment author:[deleted]
05 January 2013 07:12:23PM
*
4 points
[-]
The header backgrounds of Main and Discussion are similar but different. This irks me slightly.
My selfish strategy is to point it out so it irks more people and the minimal effort of changing it becomes worthwhile. Given the autism scores from the survey, I am confident that a good portion of the people reading this comment will be irked. However, I am not familiar with how changes to the design have been made in the past. I am taking this opportunity to make my first prediction on predictionbook.com.
Comment author:leplen
03 January 2013 07:11:24PM
*
9 points
[-]
So I'm fairly new to LessWrong, and have been going through some of the older posts, and I had some questions. Since commenting on 4-year-old posts was probably unlikely to answer those questions or to generate any new discussion, I thought posting here might be more appropriate. If this is not proper community etiquette, I'm happy to be corrected.
Specifically, I'm trying to evaluate how I understand and feel about this post:
The Level Above Mine
I have some very mixed feelings about this post, and the subject in general. (You might say I've noticed that I'm confused.) Sure, it's hard to evaluate reliably just how intelligent someone more intelligent than you is, just as a test that every student in a class aces doesn't let you identify which student knows the material best. But doesn't the idea of a persistent ranking system, and the concern with it, imply a belief in intelligence as a static factor?
Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'etre of LW relies on the assumption that it is possible to improve your intelligence. I would further argue that LW relies on the assumption that it is possible to recursively improve your intelligence, (i.e. learning things that help you learn better).
Is it possible that the fundamental attribution error is at work here? I mean, if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence? I'm not sure what to make of a post that discusses assessing how many standard deviations above average intelligence someone is, if I really believe that "Any given aspect of someone's disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability."
Indeed if we make fundamental attribution error when assessing someone because "we don't see their past history trailing behind them in the air", then can we not say the same for experiences that result in greater situational intelligence? Perhaps I'm straining the bounds of metaphor slightly, since problem-solving intelligence tends to be more enduring than vending-machine kicking anger, but is it so fixed that my SAT scores from the 7th grade are meaningful or worth discussing? Is it possible that what we perceive as greater intelligence, as "the level above mine" is just someone who has spent more time working on something, or working on something similar to it? What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?
The entire ranking debate, to me, sounds suspiciously like human social hierarchies, and since that's a type of irrationality humans are especially prone to, it makes me very suspicious. I know from personal experience that being considered of "above average intelligence" is a very useful social tool which I can use to create a place for myself in social hierarchies, and often that place is not only secure, but also grants me reasonably high social status. I have at various times in my life evaluated others, and granted social status accordingly, on the basis of their SAT scores and other similar measures. Is that what is going on here?
Fundamentally, I believe this question boils down to a handful of related questions:
How accurate over time is our evaluation of general intelligence?
Does our love of static hierarchies, esp. one that privileges intelligence, affect our answer to 1?
Sub-questions to #1
a. How variable is intelligence, and over what time span? Or more generally, what do we estimate are the most heavily weighted inputs to a function that describes intelligence?
b. Is there an upper bound on human intelligence?
c. Are the people whose intelligence we're evaluating operating near that bound?
d. Can we reliably distinguish between intelligence and knowledge? How?
I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.
Comment author:Kaj_Sotala
04 January 2013 07:27:33AM
*
10 points
[-]
"Intelligence" seems to consist of multiple different systems, but there are many tasks which recruit several of those systems simultaneously. That said, this doesn't exclude the possibility of a hierarchy - in some people all of those systems could be working well, in some people all of them could be working badly, and most folks would be somewhere in between. (Which would seem to match the genetic load theory of intelligence.) But of course, this is a partially ordered set rather than a pure hierarchy - different people can have the same overall score, but have different capabilities in various subtasks.
IQ in childhood is predictive of IQ scores in adulthood, but not completely reliably; adult scores are more stable. There have been many interventions which aimed to increase IQ, but so far none of them has worked out.
IQ is one of the strongest general predictors of life outcomes and work performance... but that "general" means that you can still predict performance on some specific task better via some other variable. Also, IQ is one of the best such predictors together with conscientiousness, which implies that hard work also matters a lot in life. We also know that e.g. personality type and skills matter when it comes to rationality.
I would suppose that the kinds of people referred to "the level above mine" would be some of those rare types who've had the luck of getting a high score on all important variables - a high IQ, a high conscientiousness, a naturally curious personality type, high reserves of mental energy, and so on. To what extent these various things are trainable is an open question.
Comment author:leplen
03 January 2013 10:31:59PM
1 point
[-]
Following the line of reasoning in Correspondence Bias: it's probably much more likely that someone who seems to you to "be an angry person" has just had a bad day.
According to our current understanding, significant mood altering mutations are much less common than many other more probable causes of anger. This is one of the reasons gene therapy is not typically suggested as part of treating anger management issues.
Comment author:fubarobfusco
03 January 2013 10:47:40PM
*
3 points
[-]
Wouldn't it be interesting if everyone had exactly equal hormonal tendencies toward various emotions?
"This particular episode of angry behavior is not as strong of evidence that this person has angry tendencies as my brain wants to treat it" is not the same as "Angry tendencies do not exist at all."
Comment author:Viliam_Bur
06 January 2013 10:15:31PM
*
1 point
[-]
I will start with: +1 for caring about the community etiquette
Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'etre of LW relies on the assumption that it is possible to improve your intelligence.
Intelligence (IQ) is more or less static. If you have a scientifically proven method of increasing IQ, please post it here, and I am sure many people will try it. But at this moment, LW is not about increasing human intelligence. It is about increasing human rationality -- learning a better way to use the intelligence (brain) we already have -- and about machine intelligence. A hypothetical intelligent machine could increase its intelligence by changing its code or adding new hardware. For humans, similar change would require surgery or implants beyond our current knowledge.
if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence?
How high is unnaturally high? Intelligence follows a bell curve. One in two people has an IQ above 100. Roughly one in six has an IQ above 115. About one in fifty has an IQ above 130; one in a hundred above 135; one in a thousand above 146; one in ten thousand above 156... this is all within the bell curve. It is possible to search for people with this level of intelligence. (Speaking about someone with IQ 300, that would be unnatural.)
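These tail fractions can be checked directly with Python's standard library, assuming the usual IQ scaling (normal distribution, mean 100, standard deviation 15):

```python
from statistics import NormalDist

# Standard IQ scaling: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

def fraction_above(cutoff):
    """Fraction of the population with IQ above the given cutoff."""
    return 1 - iq.cdf(cutoff)

for cutoff in (100, 115, 130, 135, 146, 156):
    above = fraction_above(cutoff)
    print(f"IQ > {cutoff}: {above:.4%} (about one in {1 / above:,.0f})")
```

Running it shows, for example, that about one person in six scores above 115 and roughly one in ten thousand above 156, so these levels are indeed rare but well within what a normal distribution predicts.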
The question is, how much real-world effect do these levels of intelligence have. Clearly, intelligence is not enough to make people smart -- a person with a high IQ can still believe and do stupid things. (This is why we usually don't obsess about IQ, and discuss rationality instead.) On the other hand, some IQ may be necessary for some outcome, or at least could make the same person get the same outcome significantly faster. (This is easier to understand by imagining people with very low IQs. Even the best rationality training is not going to make them new Einsteins.) Being faster does not seem like a critical difference, but for sufficiently complex tasks the difference between years and decades, or maybe decades and centuries, can determine whether a human is able or unable to ever complete the task.
Is it possible that what we perceive as greater intelligence, as "the level above mine" is just someone who has spent more time working on something, or working on something similar to it?
In the article, Eliezer considers the alternative explanations. (Maybe Conway had more opportunities to show his mastery. Maybe he specializes in doing something different. Maybe Conway used the time of his youth better.) But maybe... it is the difference in general intelligence. All these explanations deserve to be considered.
What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?
Depends on circumstances. Did it happen once, or does it happen all the time? Does it happen consistently in a field where both persons spent a lot of time learning? Does it happen in different fields? The prior probability of someone having higher intelligence is not so small that evidence like this couldn't change the result.
2. Does our love of static hierarchies, esp. one that privileges intelligence, affect our answer to 1? I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.
Just because we have a bias for X, it does not automatically mean non-X must be true. People do love hierarchies. People are bad at estimating their skills, or skills of others. That does not mean different people can't really have different traits.
Intelligence (IQ) is more or less static. If you have a scientifically proven method of increasing IQ, please post it here, and I am sure many people will try it. But at this moment, LW is not about increasing human intelligence. It is about increasing human rationality -- learning a better way to use the intelligence (brain) we already have -- and about machine intelligence.
Is it solid that IQ tests can distinguish between the intelligence we already have, and our ability to use that intelligence?
Comment author:saturn
17 January 2013 08:50:18AM
0 points
[-]
doesn't the idea of a persistent ranking system, and the concern with it imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset.
I'd just like to point out that a growth mindset is fully compatible with fixed intelligence. Fixed intelligence doesn't mean that growth is impossible, only that some people can grow faster than others.
Comment author:knb
10 January 2013 08:51:03PM
*
0 points
[-]
There actually are mutants with high anger levels (read about Brunner's syndrome). Less Wrong is not about improving human intelligence but rather human rationality. The two are obviously distinct.
If you are asking these basic questions about intelligence, (i.e. proposing that it can easily be changed) you simply need to read more about this topic.
Comment author:Alicorn
02 January 2013 03:40:52AM
6 points
[-]
They are repositories for quotes that resonate with and/or amuse us. It might be a little too easy to get karma that way, admittedly, but I think they are nice to have around.
Comment author:TimS
02 January 2013 03:50:06AM
4 points
[-]
Sources of karma don't bother me. It just seems like the standards for voting in that thread - both comments and replies - are really different than the rest of the site. Not looser, but different.
It seems like I'm always surprised by the vote totals there - both upvotes and downvotes - when I think I have a feel for what folks like in the rest of the site.
Comment author:ChristianKl
04 January 2013 02:24:54PM
1 point
[-]
I don't think it's a test for orthodoxy. Take the quote "To see is to forget the name of the thing one sees" (Paul Valéry), which has 13 upvotes as I write this.
The position that gets articulated in that quote isn't orthodox on LessWrong. There are a bunch of quotes that are interesting instead of just making an orthodox point.
Comment author:OrphanWilde
14 January 2013 04:05:49PM
3 points
[-]
I have a query - exactly how interested are people here in improving the efficiency of their daily lives? To wit, would a discussion about efficient toilet habits be welcome or unwelcome? (No, I'm not joking, nor am I working up to a toilet joke, I'm entirely serious.)
Comment author:[deleted]
12 January 2013 09:38:18AM
*
3 points
[-]
How do you stop suicide, for individuals and/or populations? I looked up antidepressants. They don't look so promising. Brief summary follows. Feel free to skip it.
All pharmacological antidepressants have scary side effects. All of them, individually or in combination, put you at risk for serotonin toxicity. Most increase the risk of suicide relative to no treatment. Tricyclic antidepressants are old, scary drugs, rarely prescribed. MAOIs are kind of scary; Moclobemide is one of the newer, safer MAOIs, but it has weird dietary reactions and is still not as safe as SSRIs. NDRIs, including Wellbutrin, are commonly prescribed; adverse effects include seizures and cardiovascular events, and they are less safe than SSRIs. I don't know enough. SSRIs are the most commonly prescribed; they include Zoloft, Paxil, Prozac, and Celexa; efficacy comparable to placebo; adverse effects include sexual dysfunction, nausea, high blood pressure, and lots more. SNRIs are newer than SSRIs, with comparable efficacy; they include Effexor and Cymbalta; Effexor has an especially high suicide risk. Discontinuing SSRIs or SNRIs abruptly can have adverse effects: sadness, irritability, agitation, dizziness, etc.
Comment author:Fadeway
11 January 2013 07:54:23PM
*
3 points
[-]
I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision was already changed at some point during the presentation. In regards to my original change of mind, I may also have been affected by the friend who gave me the idea.
All of this worries me, and I've started making a list of everything I know as far as pros/cons go of each choice. I want to weigh the options objectively and make a decision. I fear that, already favoring one of the two choices, I won't be objective.
How do I decrease my bias and get myself as close as possible to that awesome point at the start of a discussion where you can list pros and cons and describe the options without having yet gotten attached to any position?
Harder Choices Matter Less. Unless you expect that there is a way of improving your understanding of the problem at a reasonable cost (such as discussing the actual object level problem), the choice is now less important, specifically because of the difficulty in choosing.
Comment author:Fadeway
12 January 2013 05:34:17PM
*
0 points
[-]
From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.
As for the meat of the post, I don't think it applies as much due to the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors; for which purpose, I think, some agonizing and bias removal is worth the pain.
Hopefully I can get somewhere with the bias removal step, as opposed to getting stuck on it. (And, considering that I just learned something, I guess this can be labeled "progress"! Thanks :))
Comment author:TimS
02 January 2013 02:30:02AM
2 points
[-]
If you can appreciate why deciding whether "This sentence is a lie" is true is complicated - in any depth at all - then you will get interesting insights from GEB.
Comment author:shminux
04 January 2013 12:43:06AM
*
1 point
[-]
There is a mindset prerequisite. Some people get forever lost/bored the first time the book talks about valid mathematical statements as well-formed finite strings of symbols.
Comment author:alanog
01 January 2013 05:43:30PM
*
3 points
[-]
http://www.science20.com/hammock_physicist/rational_suckers-99998
Slightly intrigued by this article about Braess' paradox. I understand the paradox well enough, but am confused by how he uses it to criticize super-rationality. But mostly I was amused that in the same comment where he says, 'Hofstadter's "super-rationality" concept is inconsistent and illogical, and no single respectable game theorist takes it seriously,' he links to EY's The True Prisoners' Dilemma post.
Also, do people know if that claim about game theorists is true? Would most game theorists say that they would defect against copies of themselves in a one-shot PD?
Comment author:Vaniver
01 January 2013 06:48:23PM
*
3 points
[-]
Would most game theorists say that they would defect against copies of themselves in a one-shot PD?
It depends on what "against copies of themselves" means. If it means "I know the other person behaves like a game theorist, and the payoff matrix is denominated in utility," then yes. If it means "I know the other person behaves like a game theorist, but the payoff matrix is not denominated in utility because of my altruism towards a copy of myself," then no. If it means "I expect my choices to be mirrored, and the payoff matrix is denominated in utility," then no.
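The distinction between independent and mirrored choices can be made concrete with a toy payoff matrix. A rough sketch (standard textbook PD numbers, all hypothetical, and ignoring the utility-denomination subtlety):

```python
# Toy one-shot Prisoner's Dilemma payoffs for the row player,
# with the standard ordering T(5) > R(3) > P(1) > S(0).
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_reply_assuming_independence(opponent_move):
    # My choice has no effect on the opponent's: compare payoffs directly.
    return max(["C", "D"], key=lambda mine: payoff[(mine, opponent_move)])

def best_choice_assuming_mirroring():
    # My choice is guaranteed to be copied, so only (C,C) and (D,D) are reachable.
    return max(["C", "D"], key=lambda mine: payoff[(mine, mine)])

print(best_reply_assuming_independence("C"))  # 'D' -- defection dominates
print(best_reply_assuming_independence("D"))  # 'D'
print(best_choice_assuming_mirroring())       # 'C' -- (C,C) beats (D,D)
```

Under independence, defection is the better reply to either move; under guaranteed mirroring, only the diagonal outcomes are reachable, so cooperation wins.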
I thought I'd seen a survey result of when LWers thought the Singularity was plausible-- maybe a 50% over/under date, but I haven't been able to find it again. Does anyone remember such a thing?
When asked to determine a year in which the Singularity might take place, the mean guess was 9,899 AD, but this is only because one person insisted on putting 100,000 AD. The median might be a better measure in this case; it was mid-2067.
The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
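To see why the median is the better summary here, consider a toy sketch with invented guesses that echo the one 100,000 AD response:

```python
import statistics

# Hypothetical guesses: most cluster in mid-century, one respondent
# insists on 100,000 AD, as happened in the 2011 survey.
guesses = [2050, 2060, 2067, 2070, 2080, 2150, 100000]

print(statistics.mean(guesses))    # ~16068 -- dragged far upward by one outlier
print(statistics.median(guesses))  # 2070 -- insensitive to the outlier's magnitude
```

A single extreme answer can move the mean by millennia while leaving the median untouched, which is why the quoted reports lean on the median and quartiles.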
Comment author:gwern
01 January 2013 06:27:46PM
5 points
[-]
The 2012 survey also had a "date of the Singularity" question, but Yvain didn't report on the results of that question, so you'll have to look at the raw data for that.
Comment author:[deleted]
01 January 2013 09:22:04PM
2 points
[-]
Note that the last survey made it explicitly clear that the question was “what is the year such that P(Singularity before year|Singularity ever) = P(Singularity after year|Singularity ever) = 0.5”, whereas in the previous surveys it was ambiguous between that and “P(Singularity before year) = P(Singularity after year) + P(no Singularity ever) = 0.5”.
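The difference between the two readings can be illustrated with a toy distribution (all numbers invented):

```python
# Invented toy distribution: probability the Singularity happens by a given
# year, plus some probability mass on "never".
p_by_year = {2050: 0.2, 2080: 0.2, 2150: 0.2}  # sums to 0.6
p_never = 0.4

def median_year(probs, extra_mass=0.0):
    """Smallest year by which cumulative probability reaches half the total."""
    total = sum(probs.values()) + extra_mass
    cum = 0.0
    for year in sorted(probs):
        cum += probs[year]
        if cum / total >= 0.5:
            return year
    return None  # the halfway mark falls inside the "never" mass

# Conditional on the Singularity happening at all: renormalize over years only.
print(median_year(p_by_year))                      # 2080
# Unconditional: "never" counts as probability mass after every year.
print(median_year(p_by_year, extra_mass=p_never))  # 2150
```

With these numbers the two definitions give different answers (2080 vs 2150), so the ambiguity in the earlier surveys genuinely matters for comparing results across years.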
Comment author:Kaj_Sotala
01 January 2013 08:45:09AM
*
3 points
[-]
Robert Kurzban clarifies the concept of the EEA (mostly by quoting various excerpts from Tooby & Cosmides). I think this is an important post for people to check out, given how often the concept of EEA is referenced on this site.
In 1990, Tooby and Cosmides wrote (p. 387):
The concept of the EEA has been criticized under the misapprehension that it refers to a place, or to a typologically characterized habitat, and hence fails to reflect the variability of conditions organisms may have encountered.
From this it can be seen that even in 1990, they were taking pains to defend against the possibility that careless readers might take them to be saying that the EEA is to be thought of as a time and a place. Instead, they characterize it this way (pp. 386-387):
The “environment of evolutionary adaptedness” (EEA) is not a place or a habitat, or even a time period. Rather, it is a statistical composite of the adaptation-relevant properties of the ancestral environments encountered by members of ancestral populations, weighted by their frequency and fitness-consequences.
I find the matter unclarified. Given the large variability of the Pleistocene climate and habitat (that Kurzban mentions), what does the quoted definition of the EEA mean? "A statistical composite...weighted by frequency and fitness-consequences" looks pretty much like a time and a place -- just an average one instead of one asserted to be the actual environment, habitat, and social structure over the whole Pleistocene. Both concepts ignore the variation.
The flourish of HBD books and talk in the years around 2000 was, to switch metaphors, early growth from seeds too soon planted. Had the shoots been nourished by a healthy stream of scientific results, they might have grown strong enough to crack and split the asphalt of intellectual orthodoxy. But as things turned out, the maintenance crew has had no difficulty smothering the growth.
Even the few small triumphs of HBD—triumphs, I mean, of general acceptance by cognitive elites—have had an ambiguous quality about them.
For example, Freudian psychoanalysis (defined by Nabokov as people's belief "that all mental woes can be cured by a daily application of old Greek myths to their private parts"), which was radically nurturist in its "explanations" of human personality development, is now defunct, thanks to developments in pharmacology.
But, while this anti-nurturist victory has diminished the quantity of nonsense in the world, like one of Robert E. Lee's battles it has not been followed by any significant occupation of enemy territory. In the applied human sciences pure "blank slate" nurturism is still entrenched. Educationists, for example, insist that given the right environment, any child can do anything. In criminology, even the boldest of conservative writers tell us that illegitimacy and fatherlessness are the root causes, as if those factors themselves were uncaused.
Comment author:[deleted]
10 January 2013 07:30:10PM
*
10 points
[-]
My very first post on this site was about the mistreatment of Stephanie Grace related to the new chilling and shrinking of acceptable discourse in the late 2000s after the 90s thaw mentioned in the article.
I was impressed by the reasonableness of the discussion. And I continued to be impressed at how well LessWrong handled matters like these for almost two years. However, making the same post today as a new member wouldn't be as well accepted as it was back then. Had that been the case when I arrived, I would have taken the claim that this community is one "dedicated to refining the art of human rationality" with a larger grain of salt, and I'm unsure whether I would have lingered, since I had read most of the sequences at that point but was unsure about whether to participate.
So since I'm unsure whether I would be appreciated in the community had I arrived today, why do I remain? Well, in the meantime I've grown to greatly respect the sanity of many excellent commenters, and several people who generate good articles do post here, some of whom arrived after I started participating. And it is the most civil and intellectually honest internet forum I've ever seen. But despite this I'm unsure whether remaining is rational of me.
Speaking to some other people from here, who make comments like "more people follow your writing than mine, can you please comment on my post?", or people using me as a go-to example for some matters, apparently I've become a sort of Schelling point for a subculture within the rationalist subculture. I feel kind of sad about this. I preferred it back when Vladimir_M filled this role; he was far worthier than me.
I think we are at the start of a long winter in the West; only technological progress can keep us afloat, if it doesn't falter. And even if it doesn't, uFAI is the overwhelmingly likely outcome. I think I need a strong drink.
Comment author:Multiheaded
10 January 2013 10:14:21PM
12 points
[-]
From watching you for a while, I think you're driven to off-handedly forecast doom and gloom because it suits your identity as someone strongly dissatisfied with their current world, signaling contrarianism and wallowing in dignified pessimism. And of course elitism and despair look cooler to you, and form a coherent narrative.
And I'm not going to judge this as something negative, or implore you to fix some "problem" with your personal feelings, I just suggest that you keep a skeptical perspective on your self-narrative somewhere in the back of your mind. As you surely already do.
Comment author:[deleted]
12 January 2013 12:42:31PM
2 points
[-]
I've looked at this argument so many times from so many different angles that I would be very surprised if I hadn't in previous correspondence with you talked about it in very similar terms. I think I've given it its proper weight, but I guess readers may not be aware of it so you pointing it out isn't problematic.
Whiteboard animation of a talk by Dan Ariely about dishonesty, rationalization, the "what the hell" effect, and bankers. The visual component made it really easy for me to watch.
Comment author:gwern
04 January 2013 06:10:01PM
2 points
[-]
BEST, a Bayesian replacement for frequentist t-tests I've been using in my self-experiments, now has an online JavaScript implementation: http://www.sumsar.net/best_online/
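For flavor, here is a drastically simplified sketch of the Bayesian idea behind a test like BEST, not the actual model (BEST fits t-distributions via MCMC; this sketch assumes roughly normal data, flat priors, and plugs in sample standard deviations):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def prob_first_mean_larger(group_a, group_b):
    """P(mu_a > mu_b) under flat priors and a normal approximation:
    the posterior for each group mean is Normal(sample mean, s^2/n)."""
    se = sqrt(stdev(group_a) ** 2 / len(group_a)
              + stdev(group_b) ** 2 / len(group_b))
    diff = mean(group_a) - mean(group_b)
    # Posterior of (mu_a - mu_b) is Normal(diff, se^2); integrate above zero.
    return 1 - NormalDist(diff, se).cdf(0)

a = [5.1, 4.9, 5.3, 5.0, 5.2]  # made-up measurements
b = [4.6, 4.8, 4.5, 4.7, 4.9]
print(prob_first_mean_larger(a, b))  # very close to 1
```

The output is a direct posterior probability that one mean exceeds the other, rather than a p-value - that reframing, plus robustness to outliers via t-distributions, is the selling point of the full BEST model.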
Comment author:OrphanWilde
04 January 2013 03:24:36PM
2 points
[-]
Hey -
Bit of an unusual request: Does anybody know of any good science books for physics? Specifically, books with not only the facts about physics, but the specific reasons and experiments for which those facts are believed?
I have an associate who is interested in the subject, and completely uninterested in reading something that presents current beliefs as facts. When explaining particle spin, it took me something like four hours to find the relevant experiments performed to establish the existence of particle spin (and I have to confess the information I was able to find on such a fundamental element of modern physics left me a bit underwhelmed).
What kind of people do you all have in your heads? Do you find that having lots of people in your head (e.g. the way MoR!Harry has lots of people in his head) is helpful for making sense of the world around you and solving problems and so forth? How might I go about populating my head with more people, and what kind of people would it be useful to populate my head with?
Comment author:knb
10 January 2013 08:40:10PM
1 point
[-]
When I'm trying to understand something, I imagine myself explaining it to my younger sister. I started doing this when I was a kid, but it is so useful to me, that I never stopped.
Ten years or so ago, I used to have more distinct personas in my head than I do now.
Back when I did, they roughly speaking exemplified distinct emotional stances.
One was more compassionate, one more ruthless, one more frightened, one more loving, and so forth.
This wasn't quite the way Eliezer writes Harry, but shares some key elements.
My model of what's going on, based on no reliable data, is that there's a transition period between when a particular stance is altogether unacceptable to the ruling coalition in my head (aka "me"), and when that stance has more-or-less seamlessly joined that coalition (aka "I've changed"), during which it is acceptable but not fully internalized and I therefore tag it as "someone else".
As I say, I don't do this nearly so much anymore. That's not to say I'm consistent; I'm not, especially. In particular, I often observe that the way I think and feel is modified by priming effects. I think about problems differently after spending a while reading LW, for example.
What's changed is that there's no sense of a separate identity along with that. To put it in MoR terms: my experience is not of having a Slytherin in my head distinct from me that sometimes thinks things, but rather of sometimes thinking things in a more Slytheriny sort of way.
That suggests to me that maybe the difference is in how rigidly I define the boundaries of "the sorts of things I think".
Comment author:TimS
09 January 2013 05:59:58PM
0 points
[-]
I sometimes find it helpful to label a particular perspective: cynical-Tim, optimistic-Tim, etc. They are helpful for clarifying my thoughts by formalizing a certain type of self-reflection. But they don't know more than I do, so they're generally useless at brainstorming - which is how MoR!Harry seems to use them. I've taken those discussions as literary conceit and exposition for the readers, not models of how to be more effective.
Comment author:Qiaochu_Yuan
09 January 2013 08:23:44PM
*
1 point
[-]
But they don't know more than I do, so they're generally useless at brainstorming
Brainstorming has at least two components: knowing things, and recognizing that a thing you know is relevant to a situation. People inside your head might not be helpful at the former but they might be helpful at the latter, thanks to the brain's useful ability to mimic other brains.
I think Eliezer might have been inspired by internal family systems, which means this might be more useful at being effective than it sounds.
I am looking for defenders of Hanson's Meat is Moral. On the surface, this seems like a very compelling argument to me. (I am a vegetarian, primarily for ethical reasons, and have been for two years. At this point the thought of eating meat is quite repulsive to me, and I'm not sure I could be convinced to go back even if I were convinced it were moral.)
It struck me, however, that nothing in this argument is specific to animals, and that anyone who truly believes it should also support growing people for cannibalism, as long as those lives are just barely worth living. (I tend to believe in relative depression, so I'd argue that probably any life that isn't extremely torturous is worth living.) This goes so strongly against moral intuition, though, that I can't imagine anyone supporting it.
Comment author:leplen
08 January 2013 11:39:59PM
*
3 points
[-]
Sorry, can't defend it. It's not a horrible argument, but it's also not totally well grounded in facts.
For starters, it takes far more land and resources to produce 1 lb of beef than 1 lb of grain, since you have to grow all the grain to feed the cow, and cows don't turn all of that energy into meat, so if you believe that undeveloped land or other forms of resource conservation have some intrinsic worth, then vegetarianism is preferable.
Secondly, I think the metaphor comparing a factory farm to a cubicle farm is disingenuous. It's emotionally loaded, since I work in a cubicle and I don't wish I were dead, and it's not terribly accurate. I think you could make a different comparison, that is arguably more accurate and compare a factory farm to a concentration camp. In both instances the inhabitants are crowded together with minimal resources as they await their slaughter. (Obviously my example is also emotionally loaded). I think if one were to ask the question should we do things that will encourage the birth of children who will grow up in concentration camps, it's a little more difficult to come down with the same definitive yes.
Additionally, the article wanders into conjecture in several places. It's hard to see the statement "most farm animals prefer living to dying" as anything more than a specious claim. No one has any way of knowing a cow's preference vis-a-vis life or death, probably including the cow. Suicide is a particularly egregious red herring. By what means does a cow in a pen commit suicide? Starving to death? Surely that's not comparable to wishing it had never been born.
As for your Soylent Green example, it has even worse problems with trophic losses, because if your farm-raised humans were not strictly vegetarian, you're losing an even higher percentage of the original energy. If the food babies were raised on an all-meat diet you might be getting less than 1% of the energy you would have gotten from just eating the plants you started the process with. Humans also have a ridiculously long gestation time, etc., which makes them inefficient as a food item, although the modest proposal you mention has certainly been suggested before.
Finally, the argument makes me nervous because I think that, in general, the morality of causing things to be born isn't well settled. We regard saving the life of a child as definitely a moral good. It isn't clear that giving birth to a child is also a moral good, or a comparable one. If I had to pick between saving one child and having two babies, I would think that saving the kid's life was the higher moral calling, even though it would result in fewer children overall.
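The "less than 1%" figure above follows from the classroom ten-percent rule of trophic efficiency (real efficiencies vary a lot; these are the rough textbook values):

```python
# Roughly 10% of the energy at one trophic level reaches the next
# (the textbook "ten percent law"; actual efficiencies vary widely).
efficiency = 0.10

plants_to_herbivore_meat = efficiency        # one step: plants -> livestock
plants_to_meat_fed_humans = efficiency ** 2  # two steps: plants -> feed animals -> humans

print(plants_to_herbivore_meat)   # 0.1, i.e. ~10% of the original plant energy
print(plants_to_meat_fed_humans)  # ~0.01, i.e. ~1% -- "less than 1%" with realistic losses
```

Each added trophic step multiplies in another ~10% factor, which is why meat-fed meat is so wasteful compared to eating the plants directly.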
Comment author:[deleted]
08 January 2013 11:02:27AM
*
1 point
[-]
I had to stop (though I may resume later) at "People who buy less meat don't really spend less money on food overall, they mainly just spend more money on other non-meat food" -- it made me go "are you fucking kidding me" and wonder whether he has ever been to a supermarket. See also this -- differences in retail prices aren't quite that extreme, but that's because governments subsidize meat production, so even though not all of the money comes out of meat eaters' pockets, it still comes out of somewhere.
EDIT: I finished reading it, and... if I didn't know who Hanson was and he had posted somewhere that allowed readers to comment, I would definitely conclude he was trolling. Along with things that others have already pointed out, “per land area, farms are more efficient at producing "higher" animals like pigs and cows” -- where the hell did he take that from? Pretty much everyone I've ever read about this topic agrees that growing food for N people on a mostly vegetarian diet requires way less land, energy, and water than growing food for N people on a largely meat-based diet, and there's a thermodynamic argument that makes that pretty much obvious.
(I do agree that “meat eaters kill animals” isn't a terribly good argument because if it wasn't for meat eaters those animals wouldn't have lived in the first place (but that doesn't apply to hunting and fishing); but that's nowhere near one of the main reasons why I limit my consumption of meat.)
Along with things that others have already pointed out, “per land area, farms are more efficient at producing "higher" animals like pigs and cows” -- where the hell did he take that from? Pretty much everyone I've ever read about this topic agrees that growing food for N people on a mostly vegetarian diet requires way less land, energy, and water than growing food for N people on a largely meat-based diet, and there's a thermodynamic argument that makes that pretty much obvious.
The full sentence is
And if you do manage to induce less farmland and more wild land, you'll have to realize that, per land area, farms are more efficient at producing "higher" animals like pigs and cows. So there is a tradeoff between producing more farm animals with worse lives, or fewer wild animals with better lives, if in fact wild animals live better lives.
or
per land area, farms are more efficient [than wilderness is] at producing "higher" animals like pigs and cows.
Comment author:[deleted]
09 January 2013 05:44:50PM
0 points
[-]
Thanks. I did think “more efficient than what?”, but none of the possibilities I came up with other than “than they are at producing other foodstuffs” seemed relevant in context. (I don't even remember what they were.)
Comment author:[deleted]
10 January 2013 07:01:56PM
4 points
[-]
"People who buy less meat don't really spend less money on food overall, they mainly just spend more money on other non-meat food" -- it made me go "are you fucking kidding me" and wonder whether he has ever been to a supermarket.
Not only that, it makes me wonder if he realizes that most people in the world don't live on six figures. I remember once living on nothing but cereal, milk, eggs and kimchi for about eight months because, when rent and bills were totalled, there simply wasn't any money for more food than that.
Comment author:[deleted]
09 January 2013 08:41:11PM
*
0 points
[-]
Interesting...
Just one quibble: “other than pure aesthetics (“I just like it”) ... which are idiosyncratic (i.e. not true for most people)” sounds like an overwhelming exception to me. Given that I've never met anyone trying to convince other people to become vegetarians (though I've read a couple such people), I guess that's by far the most common reason. (I've eaten meat in front of at least a dozen different vegetarians from at least four different countries, and none of them seemed to be bothered by that.)
Depending on how ostentatiously (Which I know isn't the right word, but I think conveys what I'm trying to evoke?) you were eating the meat, it would bother me. The type of meat would also make a difference to me. I know vegetarians who are bothered if you eat any meat near them. They are obviously polite about it, (I certainly never say anything) but it might bother people more than you realize.
Comment author:[deleted]
11 January 2013 07:42:13PM
0 points
[-]
how ostentatiously
Not at all -- not that I tried to hide the fact that I was eating meat, but I tried to be as nonchalant as I would be if I didn't know they were vegetarians. OTOH I'm not terribly good at hiding emotions, so probably some of them could tell I was feeling a little embarrassed.
The type of meat would also make a difference to me.
What kind of difference? Pork vs beef vs chicken? Steaks vs minced meat? Free-range vs factory farmed vs hunted (but how would you tell)?
This reminds me of something I've wondered about. It seems plausible that it's cheaper to be a vegetarian, but the last I checked, meat substitutes seem to cost about as much as meat.
Is it just that no one's been exploring how many people would like good cheap meat substitutes, or is there some reason meat substitutes are so expensive? Or are there cheap ones I haven't noticed?
Comment author:Alicorn
11 January 2013 04:22:00AM
6 points
[-]
Fancy meat substitutes like quorn are expensive. TVP and tofu are dirt cheap. Going with vegetable sources of protein that make no attempt to directly replace meat, like rice and beans or peanut butter, is also cheap.
Comment author:[deleted]
14 January 2013 05:45:10PM
1 point
[-]
Basically what Alicorn said. People aren't necessarily satisfied with the cheap ones that are available - mimicking the exact mouthfeel and flavor of meat is difficult, and because many of the original meat substitutes are from Asia, they weren't common here until fairly recently. Mock duck, aka seitan (made from wheat gluten), is cheap and very popular in Asia, but it seems to be a perennial also-ran in the US. Back during my veggie days I tried using it, only to find out I have a minor glutease deficiency (not full-on coeliac, but enough that seitan causes problems). It was by far the closest I've found to mimicking the texture and mouthfeel of non-specific cuts of meat (as opposed to mimicking burgers or hot dogs or chicken nuggets or something); when prepared right it can be close to indistinguishable from meat.
Making good, cheap meat substitutes is a lot of work; Western would-be consumers often have high standards for them and aren't satisfied with the more-established forms, such as tofu, while new forms have substantial outlays for R&D (Quorn) and sometimes face regulatory hurdles or other barriers to acceptance (Quorn's initial attempt at a US release went very poorly). In the US, where meat production is directly subsidized, it's hard to compete anyway because there's lots of cheaper meat.
Comment author:drethelin
10 January 2013 06:57:03PM
0 points
[-]
One of the confounding factors is that a lot of meat is raised on land that's not suitable for growing human food crops, e.g. free-range cattle grazing in Australia.
Essentially all domesticated animals are alive because of demand for products made from them (eggs, milk, meat, etc). If everyone kept kosher, there would be far fewer pig-experience-moments than the current world, including much less pig-experience-suffering. Is that good or bad for someone who values pig utility?
Anyway, I've always taken this kind of reasoning as a reason not to adopt that perspective on these types of questions. But I think that means I'm not a consequentialist - which puts me slightly out of consensus in this community.
Comment author:[deleted]
08 January 2013 07:07:20PM
1 point
[-]
If everyone kept kosher, there would be far fewer pig-experience-moments than the current world, including much less pig-experience-suffering. Is that good or bad for someone who values pig utility?
I value pig-utility. I'd much rather see a smaller number of comparatively well-kept, well-treated farm pigs and a healthy population of wild boars than the status quo. I'd also rather not see that arrived at by a mass slaughter of all other pigs, though, and pragmatically I'm not going to get that either way, so "a largeish-but-not-contemporary number of reasonably well-treated pigs farmed for food production" would be a much more feasible goal. Temple Grandin does a lot of work in this area, actually.
Comment author:[deleted]
10 January 2013 06:54:05PM
0 points
[-]
Not in the sense I was using it above, namely, "We kill them all at once to remove their population." What's happening at present is more like "we kill them in batches to meet production demands, and bring in more." Aggregated over the very long term a whole lot more pigs can suffer and die in the second case; I'm simply saying I don't find "One sudden, nearly-complete mass slaughter" to be a preferable alternative.
My point is that the lifetime of a pig (EDIT: being farmed for meat) isn't very long (about 6 months from what I can find on the internet). Thus all we would have to do is stop breeding them for a while and we very quickly wouldn't have many pigs.
Comment author:Desrtopa
08 January 2013 01:08:10AM
1 point
[-]
I think that would be true, assuming you have no additional reasons for opposing cannibalism.
Personally, I have no moral opposition to the idea of eating babies, but I suspect that baby farming would cause much more distress to the general population than the food it would produce would justify.
I don't agree with Hanson's position in that essay though. To take an excerpt:
We might well agree that wild pigs have lives more worth living, per day at least, just as humans may be happier in the wild instead of fighting traffic to work in a cubical all day. But even these human lives are worth living, and it is my judgment that most farm animal's lives are worth living too. Most farm animals prefer living to dying; they do not want to commit suicide.
How does he claim to know that? It's not as if he can extrapolate from the fact that they don't kill themselves. Factory farmed animals are in no position to commit suicide, regardless of whether they want to or not. And even if a farm animal's life is pure misery, it probably doesn't have the abstract reasoning abilities to realize that ending its own life, thereby ending the suffering, is a possible thing.
He compares the life of a farmed animal to a worker who has to fight traffic to spend their time working in a cubicle, but an office worker has leisure time, probably a family to spend time with, and enough money to make them willing to work at the job in the first place. I think the abused child in Omelas is a better basis for comparison.
Comment author:[deleted]
08 January 2013 07:03:36PM
4 points
[-]
He compares the life of a farmed animal to a worker who has to fight traffic to spend their time working in a cubicle, but an office worker has leisure time, probably a family to spend time with, and enough money to make them willing to work at the job in the first place.
Also: very few office workers get mutilated to prevent them from mutilating their coworkers out of stress, or locked into their cubicles full-time and forced to wallow in their own faeces (periodically being hosed down from outside), or are so over-bred for meat production purposes that even in their cramped conditions the strain of their under-used, oversized muscles strains their skeletons and joints to the breaking point.
Oh, and instead of a salary designed to seem big but actually undervalue your performance, you get paid in being killed (not infrequently a painful and lingering experience) and having any children you bore taken away for no obvious reason.
Comment author:[deleted]
08 January 2013 07:41:16PM
*
2 points
[-]
Yes. “If you have doubts on this point, I suggest you visit a farm” is a massive Appeal to Generalization from One Example. I'm pretty sure some farms are a helluva much worse than others, and I strongly suspect that the farms a random person is most likely to visit will be closer to the good end of the scale.
Comment author:lsparrish
02 January 2013 06:13:27AM
2 points
[-]
I've recently become interested in holding some competent opinions on FAI. Trying these on for size:
FAI is like a thermostat. The thermostat does not set individual particles in motion, but measures and responds to particles moving in a particular average range. Similarly, FAI measures whether the world is a Nice Place to Live and makes corrections as needed to keep it that way.
Before we can have mature FAI, there is the initial dynamic or immature FAI. This is a program with a very well thought out, tested, reliable architecture that not only contains a representation of Friendliness, but is designed to keep that as part of its fundamental search patterns. As it searches for self-modifications, it passes each potential modification through a filter which rejects any change that fails to provably preserve the Friendliness goal.
Since provability is tricky, many optimizations that would in fact preserve Friendliness could be rejected for lack of a strategy to prove that they do. This seemingly implies that a reliable system with non-trivial things needing to be proved will be slower to self-improve than a kludgey system with simpler goals like maximizing computronium.
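The proof-filter idea can be caricatured in a few lines. This is my own sketch, not SI's actual design: `try_prove` stands in for a real proof search, and the "proof certificate" field is purely hypothetical.

```python
# Toy sketch of a Friendliness-preserving modification filter.
# "try_prove" is a stand-in for a real proof search; the
# "proof_certificate" field is a hypothetical placeholder.

def try_prove(modification):
    """Stand-in proof search: succeeds only when the modification
    carries an explicit (hypothetical) proof certificate."""
    return modification.get("proof_certificate") is not None

def filter_modifications(candidates):
    accepted, rejected = [], []
    for mod in candidates:
        # Reject anything we cannot prove safe -- even a change that
        # would in fact preserve Friendliness (provability is
        # conservative, which is the slowness cost noted above).
        (accepted if try_prove(mod) else rejected).append(mod)
    return accepted, rejected

candidates = [
    {"name": "faster search", "proof_certificate": "cert-1"},
    {"name": "clever but unproven optimization", "proof_certificate": None},
]
accepted, rejected = filter_modifications(candidates)
```

The unproven optimization lands in `rejected` even if it would have been safe, which is exactly the self-improvement slowdown the comment describes.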
Comment author:[deleted]
01 January 2013 11:46:03AM
2 points
[-]
Can we have a way to save comments?
I often need to retrieve something I've read on Lesswrong but search isn't always helpful. Saving everything I read would limit the scope significantly.
Comment author:Ritalin
19 January 2013 05:47:59PM
*
1 point
[-]
Spec Ops: The Line; a Rationalist twist?
I've played through Spec Ops: The Line. Interesting though that game is, there's one aspect that I found very lacking: the intelligence and rationality of the protagonists, both instrumental and cognitive. It's not just their poor decision-making or their delusions, but also their complete lack of defenses against the horrors of war, both those they commit and those committed by others. They act from the gut, and they mismanage the feelings of guilt, obligation, and fear.
The game has a theme of helplessness in the face of chaos; it doesn't matter whether you try to do the right thing, because the world does not bend to your will, and you'll find yourself forced to do unsavoury things, or having things you do turn out to have horrible unforeseen consequences.
I was wondering whether it was possible to hammer this message home in spite of having intelligent, rational characters. The game, as it is, says "Good intentions and outrageous badassery aren't enough to prevent failure or protect you from moral bankruptcy". I'd like to amend that to "Good intentions, a rational and intelligent approach, and outrageous badassery aren't enough to prevent failure or protect you from moral bankruptcy or insanity".
If in Newcomb's problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb's problem is just simple map and territory: do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the question of how it is done, and focus only on the testable probability of whether Randi/Omega is consistently accurate, we can draw conclusions about whether we live in a universe where one-boxing is profitable or not. Eventually, we may even discover the how, and also the source of all the money that Omega/Randi is handing out, and win. Until then, as with all other natural laws that we know but don't yet understand, we can still make accurate predictions.
Comment author:TimS
09 January 2013 06:05:12PM
2 points
[-]
No. I think that is fighting the hypothetical.
More generally, the discipline of decision theory is not about figuring out the right solution to a particular problem - it's about describing the properties of decision methods that reach the right solutions to problems generally.
Newcomb's is an example of a situation where some decision methods (eg CDT) don't make what appears to be the right choice. Either CDT is failing to make the right choice, or we are not correctly understanding what the right choice is. That dilemma motivates decision-theorists, not particular solutions to particular problems.
That's possible, but I am not sure how I am fighting it in this case. Leave Omega in place: why do we assume equal probability of Omega guessing incorrectly or correctly, when the hypothetical states he has guessed correctly each previous time? If we are not assuming that, why does CDT treat each option as equal, and then proceed to open two boxes?
I realize that decision theory is about a general approach to solving problems- my question is, why are we not including the probability based on past performance in our general approach to solving problems, or if we are, why are we not doing so in this case?
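The commenter's suggestion can be made concrete with a small expected-value sketch. This is my own illustration, using the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and Laplace's rule of succession as one simple way to turn the past record into an accuracy estimate.

```python
# Estimate Omega's accuracy q from its past record, then compare
# the expected values of one-boxing and two-boxing.

def estimated_accuracy(correct, total):
    # Laplace's rule of succession: (successes + 1) / (trials + 2)
    return (correct + 1) / (total + 2)

def expected_values(q, big=1_000_000, small=1_000):
    one_box = q * big                # big box filled iff Omega predicted one-boxing
    two_box = small + (1 - q) * big  # big box filled only if Omega erred
    return one_box, two_box

q = estimated_accuracy(100, 100)  # Omega correct in all 100 observed games
one_box, two_box = expected_values(q)
# With q close to 1, one-boxing dominates; two-boxing only wins when
# q < (big + small) / (2 * big), i.e. roughly 0.5005.
```

On this framing the "equal probability" assumption the commenter objects to corresponds to setting q = 0.5, which is precisely the point at which two-boxing starts to look attractive.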
Comment author:BlackNoise
05 February 2013 11:49:52PM
*
0 points
[-]
Here's an anthropic question/exercise inspired by this fanfic (end of 2nd chapter specifically). I don't have the time to properly think about it, but it seems like an interesting test for current anthropic reasoning theories under esoteric/unusual conditions. The premise is as follows:
There exists a temporal beacon, acting as an anchor in time. An agent (or agents) may send their memories back to the anchored time, but as time goes on they may also die or be otherwise prevented from sending memories back. Every new iteration, the agent-copy at the time immediately after the beacon's creation gets blasted with memories from 'past' iterations: either from only the immediately preceding one (which recursively includes all previous iterations as further back in subjective time), or from every past iteration at once, with or without a convenient way to differentiate between overlapping memories (another malleable aspect of the premise), or, for real head-screws, from all iterations that lived.
The interesting question would be: how would an agent estimate its probability of dying in the current iteration, based on the information it was blasted with immediately post-anchor time?
A very simple toy model would be something like:
assuming all agent copies send back memories after T years if they haven't died, with the probability of dying (or being otherwise unable to send back memories) in each iteration being p, what should an agent that finds itself with memories from N iterations estimate as its probability of dying in this iteration?
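One naive answer to the toy model can be sketched directly. This is my own framing, and it deliberately ignores the anthropic selection effect that dead iterations send nothing back: if p is known, the process is memoryless and this iteration's risk is just p; if p is unknown, the N remembered survivals update a prior on p.

```python
# Naive Bayesian sketch of the toy model: each iteration survives to
# send memories back with probability 1 - p. Waking with memories from
# N iterations means N consecutive observed survivals. With a uniform
# prior on p, the posterior is Beta(1, N + 1), whose mean is 1/(N + 2).
# (This ignores the selection effect: iterations that died are never
# observed at all, which a full anthropic treatment must handle.)

from fractions import Fraction

def posterior_death_prob(n_survivals):
    """Posterior mean of the per-iteration death probability p after
    n consecutive survivals, starting from a uniform prior on p."""
    return Fraction(1, n_survivals + 2)
```

For example, an agent with memories from 8 iterations would naively estimate a 1/10 chance of dying this time; whether SIA- or SSA-style corrections change this is exactly the kind of question the exercise is meant to probe.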
There should probably be more unsafe time-travel based questions to test anthropic decision making, maybe also to shape intuition regarding many-worlds/multiverse views.
I'm thinking about writing a more comprehensive guide than Skatche's Rationalist's Guide to Psychoactive Drugs. In addition to the substances described in Skatche's guide I would discuss the risks, benefits and possible fields of applications of e.g. benzodiazepines, GHB, opioids and various research chemicals.
Is anyone interested in this kind of stuff? You don't have to comment, upvoting suffices (saves time and gives me precious karma).
And I'm a bit worried that this kind of post falls under the new censorship laws. What do those in power on LessWrong think about that?
The "Lesswrong censorship laws" speak of illegal violence. Possession of drugs might be illegal but isn't violence.
My analysis:
Do your posts look like solicitation to possess illegal drugs with intent to distribute? (Hint: for anything short of "Please tell me where to buy drugs," the answer is probably no).
Could a malicious prosecutor convince a grand jury to indict Eliezer (or others) as co-conspirators based on what you have written? (Hint: probably not).
In short, you are probably fine. But I am not a "power" on LW.
Just to be clear, I doubt this is Eliezer's thought process. But I suspect it is a fairly accurate heuristic for what is and isn't acceptable.
I agree with your analysis. However, the fact that some people are expressing concern that their comments might violate the new censorship policy suggests that others might abstain, or have already abstained, from posting valuable material to this forum, which in turn increases my credence that the censorship policy does more harm than good.
"Avoid compartmentalisation, but don't talk about your results from doing so too loudly."
In context, this 2010 post (capture) is interesting: current version is about deaths of tobacco company employees, but it was changed after comments from the original, which was about slowing the computer industry to slow AI progress.
Interesting. As far as I can see, though, the screencap shows the revised version about deaths of tobacco company employees, not the original version.
The few times I raised this question in the past, my comments were met with either indifference or hostility. I will try to raise it one more time in this open thread. If you think the question deserves a downvote, could you please, in addition to downvoting me, leave a brief comment explaining your rationale for doing so? I promise to upvote all comments providing such explanations.
So, here's the question: What is the reason for defining the class of beings whose volitions are to be coherently extrapolated as the class of present human beings? Why present and not also future (or past!)? Why human and not, say, mammals, males, or friends of Eliezer Yudkowsky?
Note that the question is not: Why should we value only present people? This way of framing the problem already assumes that "we" (i.e., present human beings) are the subjects whose preferences are to be accorded relevance in the process of coherent extrapolation, and that the interests of any other being (present or future, human or nonhuman) should matter only to the extent that "we" value them. What I am asking for, rather, is a justification of the assumption that only "our" preferences matter.
Luke lists "Why extrapolate the values of humans alone? What counts as a human? Do values converge if extrapolated?" as an open question in So You Want to Save the World.
Thanks!
Of course, the premise that "humans are the only beings who can reason about their own preferences" could only justify the conclusion that some human beings are special, since there are members of the human species who lack that ability. Similar objections could be raised against any other proposed candidate property. This has long been recognized by moral philosophers.
I see no reason to restrict our preference extrapolation to presently-existing humans. CEV should extrapolate from all preferences, which includes the preferences of all sentient beings, present and future. Any attempt to place boundaries on this requires justification.
Edit: You might say, "Why not also include rocks in our consideration?" Simple: rocks don't have preferences. Sentient beings (including many non-human animals) have preferences.
I'm not sure that there is community consensus that "human beings currently living" is the right reference class. Eliezer suggests that he thinks the right reference class is all of humanity ever in this post.
If one assumes some kind of moral progress constraint and unpredictable future values, it seems likely that our future descendants would hate CEV(living humans). Certainly, modern Westerners would probably hate CEV(Europeans-alive-in-1300). But I'm a moral anti-realist, so I don't believe there are constraints that cause moral progress - and I don't expect CEV(all-humans-ever) to output a morality.
Some people would disagree.
Gwern collects some evidence against the proposition. The fact that people disagree and think morality is timeless in some sense is not particularly strong evidence when compared to results of competent historical analysis.
Of course, which historical analysis is considered credible is fairly controversial.
Part of the point of CEV is to make the extrapolation process good enough that future beings X won't hate the extrapolation of arbitrary past group Y. The extrapolation should be effective and broad enough that extrapolating from humans in different parts of history would not appreciably change the outcome. My guess would be that the extrapolation process itself would provide most of the content, the starting reference class being a minor variable.
It would be convenient if such a process could be proven to exist and rigorously described.
Resolving that issue would do a lot to address the OPs concerns. Separately, it would be a strong reason for me to reject moral anti-realism.
What evidence do we have that such convenient extrapolation is actually possible?
Resolving that issue is part of the overall goal of the SI, and a huge project. I'm also a moral anti-realist, by the way. CEV should be starter-insensitive w/ respect to humans from different time periods. My reasons for why I think that this is achievable in principle would be a whole post.
I would also like to see this discussion. It isn't terribly clear to me why the extinction of the human race and its replacement with some non-human AI is an inherently bad outcome. Why keep around and devote resources to human beings, who at best can be seen as sort of a prototype of true intelligence, since that's not really what they're designed for?
While imagining our extinction at the hands of our robot overlords seems unpleasant, if you imagine a gradual cyborg evolution to a post-human world, that seems scary, but not morally objectionable. Besides the Ship of Theseus, what's the difference?
No one else seems to be giving what is IMO the correct answer; I want the values of a created FAI to match my own, extrapolated. ie moral selfishness.
I would actually prefer that the extrapolation seed be drawn only from SI supporters (or ideally just me, but that's unlikely to fly), because I'm uneasy about what happens if some of my values turn out to be memetic, and they get swamped/outvoted by a coherent extrapolated deathist or hedonist memplex. Or if you include, for example, uplifted sharks in the process.
I too would prefer super AI to look to my values when deciding what to implement.
But, given the existence of moral disagreement, I don't see why that deserves to be labeled Friendly. And the whole point of CEV or similar process is to figure out what is awesome for humanity. Implementing something other than what is awesome for all of humanity is not Friendly.
If deathism really is what is awesome for all humanity, I expect a FAI to implement deathism. But there's no particular reason to believe that deathism is what is awesome for humanity.
Tim, your comment highlights the potential conflict between CEV and FAI that I also mentioned previously. FAI is by definition not hostile to human beings, whereas CEV might permit, or even require, the extinction of all humanity. This may happen, for instance, if the process of coherent extrapolation shows that humans value certain superior beings more than they value themselves, and if the coexistence of humans and these beings is impossible.
When I pointed out this problem, both Kaj Sotala and Michael Anissimov replied that CEV can never condone hostile actions towards humanity because FAI is "defined as 'human-benefiting, non-human harming'". However, this reply just proves my point, namely that there is a potential internal inconsistency between CEV and FAI.
Don't look at me to resolve that conflict. I think moral extrapolation is unlikely to output anything coherent if the reference class is sufficiently large to avoid the objections I raised above. And I can't think of any other plausible candidate to produce Friendly instructions for an AI.
Slight sidetrack: By the time AI seems plausible, I think it's likely that the human race will have done enough self-modification (computer augmentation, biological engineering) that the question of what's human is going to be more difficult than it is now.
Just wanted to point out that many contributors to the site are afflicted by what I call "theoritis", a propensity to advance a theory despite being green amateurs in the subject matter, and then have the temerity to argue about it with the (clearly-non-stupid) experts in the field. The field in question can be psychology, neuroscience, physics, math, computer science, you name it.
It is rare that people consider the reverse situation first: what would I think of an amateur who argues with me in my area of competence? For example, if you are an auto mechanic, would you take seriously someone who tells you how to diagnose and fix car issues without ever having done any repairs themselves? If not, why would you argue about quantum mechanics with a physicist, about utility functions with a decision theorist, or about first-order logic with a mathematician, unless that's your area of expertise? Of course, looking back at what I post about, I am no exception.
OK, I cannot bring myself to add philosophy to the list of "don't argue with the experts, learn from them" topics, but maybe it's because I don't know anything about philosophy.
I take non-programmers seriously about programming all of the time. That's pretty much in the job description.
Just because I'm not stupid doesn't mean I'm not wrong. Indeed, it takes some serious intelligence to be wrong in the worst kind of ways.
About implementation, or about what to implement?
In practice the two are, in my line of work, very difficult to separate. The what is almost always the how. But both, out of practical necessity. When the client insists on a particular implementation, that's the implementation you go with.
I would assume that's high-level -- "use Oracle, not MySQL"
That's part of it, but no, that's not what I'm referring to. Client necessities are client necessities.
"Encryption and file delivery need to be in separate process flows" would be closer. (This sounds high-level, but in the scripting language I do most of my work in, both of these are atomic operations.)
A relevant distinction that you are not making is between the questions that are well-understood in the expert's area and the questions that are merely associated with the expert's area (or are expert's own inventions), where we have no particular reason to expect that the expert's position on the topic is determined by its truth and not by some accident of epistemic misfortune. The expert will probably know the content of their position very well, but won't necessarily correctly understand the motivation for that position. (On the other hand, someone sufficiently unfamiliar with the area might be unable to say anything meaningful about the question.)
Good point. Also, even when questions are well-understood by domain experts it still can be very effective to argue about them, since this usually leads to the clearest arguments and explanations. This is especially true since the social norms on this site highly value truth-seeking, epistemic hygiene (including basic intellectual honesty) and scholarship: in many other venues (including some blogs), anti-expertise attitudes do lead to bad outcomes, but this does not seem to apply much on LW.
Good post. It's EY's fault, imo. He set the norms.
Not exactly a green amateur, so how could he have set that norm? EDIT: Retracted, you answered in another comment.
Want to give some examples? I don't seem to recall seeing a lot of this myself.
Come on, Luke has a series of posts taking a shit on the entire discipline of philosophy. Luke is not an expert on philosophy. EY says he isn't happy with do(.) based causality while getting basic terminology in the field wrong, etc. EY is not an expert on causal inference. If you disagree with Larry Wasserman on a subject in stats, chances are it is you who is confused. etc. etc. Communication and scholarship norms here are just awful.
If you want to see how academic disagreements ought to play out, stroll on over to Scott's blog.
edit: To respond to the grandparent: I think the answer is adopting mainstream academic norms.
shminux explicitly excluded philosophy, and I wasn't aware of the other two examples you gave. Can you link to them so I can take a look? (ETA: Never mind, I think I found them. ETA2: Actually I'm not sure. Re Wasserman, are you referring to this?)
I couldn't agree more. Mainstream academia is a set of rationality skills, and a very case-hardened one. Adding something extra, like cognitive science, might be good, but LW omits a lot of the academic virtues -- not blowing off about things you don't know, making an attempt to answer objections, modesty, etc.
PS: Tenure is a great rationality-promoting institution because...left as an exercise to the reader.
I think philosophy does belong on the list if you are arguing some matters of philosophy but not others. There is a common field to all mathematics-heavy disciplines, namely mathematics, with huge overlaps, and there's no reason why, for example, a physicist couldn't correctly critique the bad mathematics of a philosopher. But most non-philosophers and amateur philosophers really should learn rather than argue, since a philosopher is a bit of an expert in mathematics.
I find that an odd statement. Why can't you assume by default that arguing with an expert in X is bad for all X?
For some reason, theoritis is much worse with regard to philosophy than just about anything else. Amateurs hardly ever argue with brain surgeons or particle physicists. I think part of the reason for that is that brain surgeons and particle physicists have manifest practical skills that others don't have. The "skill" of philosophy consists of stating opinions and defending them, which everyone can do to some extent. The amateurs are like people who think you can write (well, at a professional level) because you can type.
The ability to go easily from standing to sitting and from sitting to standing has a good correlation with all-cause mortality.
As might be predicted, I'm putting in a little work on improving my ability at the test -- I have no idea whether this is an example of Goodhart's Law.
A couple of quick points about "reflective equilibrium":
I just recently noticed that when philosophers (and at least some LWers including Yvain) talk about "reflective equilibrium", they're (usually?) talking about a temporary state of coherence among one's considered judgements or intuitions ("There need be no assurance the reflective equilibrium is stable—we may modify it as new elements arise in our thinking"), whereas many other LWers (such as Eliezer) use it to refer to an eventual and stable state of coherence, for example after one has considered all possible moral arguments. I've personally always been assuming the latter meaning, and as a result have misinterpreted a number of posts and comments that meant to refer to the former. This seems worth pointing out in case anyone else has been similarly confused without realizing it.
I often wonder and ask others what non-trivial properties we can state about moral reasoning (i.e., besides that theoretically it must be some sort of an algorithm). One thing that I don't think we know yet is that for any given human, their moral judgments/intuitions are guaranteed to converge to some stable and coherent set as time goes to infinity. It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments, or none if for example their conclusions keep wandering chaotically among several basins of attraction as they review previously considered arguments. So I think the singular term "reflective equilibrium" is currently unjustified when talking about someone's eventual conclusions, and we should instead use "the possibly null set of eventual reflective equilibria". (Unless someone can come up with a pithier term that has similar connotations and denotations.)
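The order-dependence worry can be illustrated with a deliberately crude toy. This is my own construction, not a model of actual moral reasoning: greedily accept each judgment unless it conflicts with one already accepted, and watch different presentation orders settle into different stable sets.

```python
# Toy demonstration that a greedy coherence-seeking process can reach
# different "equilibria" depending on the order arguments arrive in.
# "A" and "B" are two hypothetical judgments that cannot coherently
# be held together; "C" is compatible with either.

CONFLICTS = {("A", "B"), ("B", "A")}

def settle(order):
    """Accept each judgment in turn unless it conflicts with one
    already accepted; the result is a stable (maximal) set."""
    accepted = []
    for j in order:
        if all((j, k) not in CONFLICTS for k in accepted):
            accepted.append(j)
    return accepted

first = settle(["A", "B", "C"])   # -> ["A", "C"]
second = settle(["B", "A", "C"])  # -> ["B", "C"]
```

Both outcomes are stable under further review, yet they disagree, which is exactly why the singular "reflective equilibrium" may be unjustified for a process like this.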
Another way to get several equilibria would be moral judgements whose "correctness" depends on whether other people share them. I find it likely that there would be some like that, since you get those in social norms and laws (like, on which side of the road you drive, or whether you should address strangers by their first or last name), and there's a bit of a fuzzy continuum between laws, social norms, and morality.
Lead and crime: arguments that lead has a lot to do with crime levels, and discussion of why this has gotten so little attention.
Just to indulge in a little evolutionary psychology..... Punishing people and helping people are both strong drives, but spending a lot of money on lead abatement (the lead from gasoline is still in the soil, and it keeps coming back-- lead paint is still a problem, too) is pretty boring.
ETA: And worse, progress with lead abatement is literally invisible (you don't have a dam or a highway to make it look like you're doing something) and the good effects take some 15 or 20 years to become obvious.
The basic point is reasonable, but there are so many things that bother me about that article.
Drum's credulity varies a lot in this article. His lowest level is about where I stand. I have to wonder if that actually reflects his beliefs, and whether the rest is him forcing enthusiasm on himself to reflect value rather than truth; that is, he is doing an expected value calculation. Certainly, he should be applauded for scope sensitivity.
Perhaps the biggest thing that bothers me is that Drum tries to have it both ways: small amounts of lead matter and big amounts of lead matter. It seems rather unlikely that this is true. Maybe 10μg/dL has a huge effect, but if so, I doubt that 20 has double that effect, and this ruins all the analysis of the first half of the article. This is important because there is a logical trade-off between saying that past lead reduction was useful and saying future lead reduction will be useful. In particular, Drum says that Kleiman says that if the US were to eliminate lead, it would reduce crime by 10%. Did he just make up this number, or does it come out of a model? I'd like to see the model because even if he pulled the model out of thin air, it forces him to deal with the logical trade-off.
In Kleiman's book, he says that eliminating lead paint would reduce crime by 5% and attributes it to Nevin 2000. On the same page, he misquotes Nevin in a way that makes me not trust Kleiman with models. But that's OK because he has a citation, not a model. I cannot find the claim in Nevin's paper. There is a model on p19 that says that 6 points of IQ, applied to the lowest 30% of the population, could explain the past decline. And that's at a rate of 2 points of IQ for 10μg/dL, a small enough rate that I'm willing to extrapolate linearly. If you assume crime is linear in lead, the 5% number is reasonable, except for the assumption that lead explains all of the past decline. (I'm not sure Nevin actually makes this assumption, because I don't think he makes a prediction about eliminating lead; in this section, I think he's just doing a reality check that the known IQ effect of lead plus the known correlation of IQ and crime is big enough to explain the whole drop in crime.)
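As a sanity check on the linear extrapolation, here is the arithmetic. This is my own stitching of the quoted numbers, not Nevin's actual model: at 2 IQ points per 10 μg/dL, the quoted 6-point deficit corresponds to about 30 μg/dL of blood lead.

```python
# Toy linearization of the figures quoted above (my own arithmetic,
# not Nevin's model): 2 IQ points lost per 10 ug/dL of blood lead.

def iq_points_lost(blood_lead_ug_dl, points_per_10_ug=2.0):
    """Linear extrapolation of the lead-to-IQ dose-response."""
    return points_per_10_ug * blood_lead_ug_dl / 10.0

# The 6-point deficit on p19 corresponds, at this rate, to roughly
# 30 ug/dL of exposure in the affected (lowest 30%) population:
six_points = iq_points_lost(30.0)  # 6.0
```

Whether a rate this small really stays linear across the whole exposure range is exactly the "small amounts matter vs. big amounts matter" trade-off complained about above.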
So I am bothered by Drum's language about the effects of low levels of lead, even though the suggestion of a 10% drop in crime maybe survives the trade-off between past and future. (And how does Kleiman's 5% turn into "Kleinman's" 10%? windows vs windows+soil?)
From the first half of the article:
Econometrics gives people enough rope to publish themselves. Plus they implement these algorithms in spreadsheets, to hide the bugs from themselves.
If lead explains everything, this should not always have been true. In fact, I think it was not true in 1960. The graph Drum cites starts in 1975, after most of the increase in national murder rates had already happened, but there is very little dependence on city size until later. The graph seems to me evidence against the claim that lead explains this detail. Anyhow, such bucketed graphs are a bad way to test this hypothesis. In particular, there are only 9 "big cities" and NYC has 1/3 of this population. The convergence today is probably driven just by NYC now having a lower murder rate than small cities.
Drum says that Newark's crime rate dropped 75%. That is true, but it is also true that Newark's murder rate has rebounded to its peak. I don't know how to resolve this. I usually prefer murder rates because they are harder to fake, but there are only about 80 murders in the worst years, making the data quite noisy.
That the graphs of leaded gasoline and crime match perfectly, up until the year that Nevin's first paper was published, screams publication bias.
Crack:
Trying to explain the crack epidemic in terms of childhood seems like a serious error to me. It seems very clear to me that it was contagious. How it spread and why it burnt itself out, I do not know. Regardless, one can disprove Nevin's model's claim to explain the crack epidemic, like Levitt's spreadsheet fraud before it, because it assumes that the age of criminals is constant in time. In fact, the crack epidemic involved young murderers, born after lead levels had started to decline. I think Nevin worries about this in later papers, but I don't know what he does.
Here is a suggestion for a better model for testing Nevin's hypothesis than he used in 2000: instead of lagging on some constant, create a new time series of murder by age of birth. This also corrects for the demographic problems such as the baby boom. The disadvantage is that this loses exogenous effects, such as the crack epidemic, which hit multiple ages simultaneously. Yet another time series, to avoid the problem of missing data, uses the age of the victim rather than of the perp.
So Nevin fails to explain the crack epidemic, but if he just explains the big rise and the big fall, that's a big deal. Unfortunately, the presence of the crack epidemic masks the big fall. In the absence of crack, when would crime have started falling? Perhaps it would have started falling earlier, but was elevated by crack. Or perhaps all those dead or jailed young teens would have become 25 year old criminals and so the effect of crack was to speed things up, including the falling crime rate.
There's a lot you can do to remediate lead and the bioavailable forms of it, fortunately (been working on a garden in an urban area, and bioremediation is a chief concern) -- it doesn't just have to involve removing it. Unfortunately, it's still likely to be rather expensive and unglamorous, so it'll be a tough sell as a point of policy.
The sexy project would be to figure out how to undo the effects of lead on people years after they'd been exposed as children. I think succeeding at this would wonderful, but I wouldn't put off cleaning up lead in the environment in the meanwhile.
That'd be beyond "sexy"; the effects of lead poisoning on the central nervous system are generally considered irreversible. I daresay anything that could repair that sort of brain damage would have a whole host of other applications...
LW has been loading slowly lately-- sometimes it times out. Has anyone else been having this problem?
Yeah, I've been experiencing this as well. It mostly happens when I'm trying to use karma or when I first open up LW.
As good a reason as any if some of our comments aren't sufficiently upvoted!
Random idea inspired by the politics thread: Could we make a list of high quality expressions of various positions?
People who wished to better understand other views could then refer to this list for well expressed sources.
It seems like there might be some argument about who "really" understood a given point of view best, but we could resolve debates by having eg pastafarianism-mstevens for the article on pastafarianism I like best, and pastafarianism-openthreadguy for the one openthreadguy prefers.
TVTropes has an -amazing- political and philosophical library. They have the single-best description of Objectivism I've ever seen, in particular.
You're right, the tvtropes article on Objectivism is actually really good. I knew they had a lot of good non-trope content.
Wow, that's amazingly good. It reminds me of how baffled I was, after reading Atlas Shrugged as a teenager, at the degree to which everyone hated Ayn Rand; I now realize the reason is that everyone thought she was arguing against things she wasn't arguing against.
I wonder whether not being a formally respectable source is actually good for tvtropes.
By not being formally respectable, TVtropes gets an otherwise skeptical audience (western nerds) to seriously consider certain philosophical positions that they are otherwise quite hostile to.
If LW concepts (eg mindkiller, raising the sanity line, paying rent in anticipated experience) were as popular as similarly philosophical TVtropes concepts, I think SI and CFAR leadership would be thrilled.
I was thinking about it from a different angle-- that sometimes lack of respectability leaves more room for conscientiousness.
It doesn't always work that way-- but so far tvtropes is a home for people who genuinely want to get the details of popular culture right. It seems odd, but it doesn't seem to have the problems with fraud and sloppiness that science does. Is this because people care more about popular culture than science? Or is it just that if tvtropes becomes respectable, the rewards for cheating will go up?
I hadn't thought of it that way - it's very plausible.
But some of the fraud in science is just lost purpose. If you need a certain number of publications to advance in your job, submitting fraudulent studies seems much more rewarding. And TVtropes doesn't have a similar issue - in part because of the lack of respectability you noted.
Is rubber part of the Great Filter? This thought occurred to me while reading Charles Mann's "1493", about the biological exchange post-Columbus.
Rubber was a major part of the industrial revolution (allowing insulation of electric lines, and preventing leaks in many industrial applications). Natural rubber arose on only a single continent, in a small set of species. While synthetic rubber exists, for many purposes it isn't of as high quality as natural rubber. Moreover, having the industrial infrastructure to make synthetic rubber would be extremely difficult without natural rubber. Thus, a civilization just like ours but without rubber might not have been able to go through the industrial revolution. This situation may also be relevant to Great Filter issues in our future: if civilization collapses and rubber is wiped out in the collapse, is this another potential barrier to returning to a functional civilization, especially if there's less available coal and oil to make synthetic rubber easily?
Rubber doesn't sound that important to me. The Wikipedia article includes all sorts of useful bits: it only went into European use in the late 1700s, at earliest, well after most datings of the Scientific and Industrial Revolutions; most rubber is now synthesized from petroleum; many uses of insulation like transoceanic telegraphs used gutta-percha which is similar but not the same as rubber (and was superior to rubber for a long time); and much use is for motor-vehicle tires, which while a key part of modern civilization, does not seem necessary for cheap long-distance transportation of either goods or humans (consider railroads).
So rubber doesn't look like a defeater. If it didn't exist, we'd have more expensive goods, we'd have considerably different transportation systems, but we'd still have modern science, we'd still have modern industry, we'd still have cheap consumer goods and international trade, and so on and so forth.
That's a pretty convincing analysis that rubber isn't an aspect of the Filter.
Happy New Year, LWers, I'm on a 5 month vacation from uni, and don't have a job. Also, my computer was stolen in October, cutting short my progress in self-education.
Given all this free time I have now, which of these 2 options is better?
or
I don't have anything specific to offer, but (in theory) hard choices matter less. And if you literally can't decide between them, you can try flipping a coin to make the decision and as it is in the air, see which way you hope it will end up, and that should be your choice.
I concur with dbaupp's suggestion.
Additionally, you can try the reframing technique. Anna describes it here:
The example she gives isn't quite isomorphic to the choice you're making, but I think the technique still may be worth trying. Imagine you're currently living out one option but given the chance to take the other - how would you feel about it? And vice versa.
dbaupp, ParagonProtege, thank you both for the links and suggestions. I'm going with the laptop. Anything else I could do (naturally, there's a lot I want to do) will be kickstarted by the modest, but easy(ish) money I'll get by doing ~$100 websites, as I upgrade my code-fu for Other Stuff. ;)
I also haven't cycled actively for years & I'm afraid my unfit body might conk out on me, making me unable to Do The Job once I commit. Cliff scaling is much harder than hill climbing.
From Alicorn's post, I can easily tell that after I get the laptop, the correct thing to have would be a bike, since I can ease myself back into cycling regularly. It's also weird how I saw the Other Option (buy bike, work, afford laptop, buy laptop, cut down on bike work as I increase study & laptop work hours) as just as good, even though I know I will feel like a flake if I stop riding after it gets tougher and more tiring, which is more likely than giving up on WordPress. WordPress isn't even the only option for devastatingly easy Internet work.
Can anyone recommend a good therapist in San Francisco (or nearby) who's rationalism-friendly? I have some real problems with depression and anxiety, but the last time I tried to get help the guy told me I was paying too much attention to evidence and should think more spiritually and less rationally. Uh...huh. If you don't want to post publicly here, PM or email is fine.
I'll second drethelin; CBT is both evidence-based as a treatment method- there's evidence it works- and evidence-based in practice, meaning you don't have to believe in it or anything, you just follow the prescribed behaviors and observe the results. Really, it's highly rationalism-friendly, being mainly about noticing and combatting "cognitive distortions" (e.g. generalizing from one example, inability to disconfirm, emotional reasoning, etc.). A therapist who specializes in CBT can be pretty well assumed to not be in the habit of dragging "spirituality" into their work.
I agree that CBT is well-supported by the evidence, and in general should be rationalism-friendly but that isn't always so. The therapist I mentioned in my OP was, in fact, calling himself a CBT practitioner. So I was hoping someone knew a CBT guy (or other equally well-supported method, honestly) he personally liked.
There are a handful of CBT books that are about as effective in general as having a therapist. You might be interested in Feeling Good, The Depression Workbook, or The Anxiety Workbook. I recommend that you keep looking for social support as well.
Oh. Well, that's surprising.
Sorry, I'm not in the area.
CBT-style therapy is pretty well founded on science.
You might want to look at Rational-emotive behavior therapy (REBT), and the affiliated organizations' websites. There are usually a few REBT therapists in any major city.
Can someone who's familiar with Mencius Moldbug's writing briefly summarize his opinions? I've tried reading Unqualified Reservations but I find his writing long-winded. He also refers to a lot of background knowledge I just don't have, e.g. I don't know what I'm supposed to take away from him calling something Calvinist.
This is a tall order. Nearly everyone I talk to, while getting the same basic models, seems to emphasise wildly different things about them. Their updates on the matter also vary considerably, everything from utterly changing their politics to just mentally noting that you can make smart arguments for positions very divergent from the modern political consensus. Lots of people dislike his verbose style.
That is certainly the reason I haven't read all of his material so far.
I think the best way to get a summary is to discuss him with people here who have read him. They will likely learn things too. When it's too political, continue the discussion either in the politics thread or in private correspondence.
I'm interested and willing to engage in such discussion. If you are too, I'd ask you to perhaps make a list of the posts you have read so far. For now I'm assuming you began with one of the recommended essays like Idealism Is Not Great, Divine-right monarchy for the modern secular intellectual, or the Formalist Manifesto. Perhaps the introductory Open Letter to Open Minded Progressives or the Gentle Introduction sequence.
To this I would add the comment history of fellow LWer Vladimir_M, which is littered with high-quality Moldbug-like arguments on various issues. Who knows, a few new responses might coax him out of inactivity!
I recall some old sort of interesting discussion of Moldbuggian positions in which I participated as well:
Thanks, that's a helpful summary.
Slightly related question, why are his views seemingly being suddenly discussed a lot and taken semi-seriously on LessWrong?
It isn't a sudden change. As far as I know, Moldbug's ideas are a recurring minor theme at LW.
Yes, I think this is about right. An example is this discussion of Peter Thiel's support of seasteading.
As NancyLebovitz said, it isn't really a new thing; there was a recent discussion on why talk of Moldbug's ideas is noticeable here.
By the way: I was pondering Les Miserables not long ago in anticipation of the movie, and realized that both the musical and the original novel are an exact artistic/literary expression of what Moldbug calls Universalism (down to details like the family lineage from Christianity (the bishop at the beginning) to revolutionary politics). And the character of Javert summarizes perfectly Moldbuggian philosophy, e.g. "I am the law and the law is not mocked!" Would you agree?
If we take the Javert = Moldbug metaphor seriously, how should we interpret Javert's later conclusion that his earlier philosophy contains a hopeless conflict between authority-for-its-own-sake and helping people live happier lives?
Well, the story is set up to favor Universalism. If Moldbug had written it, probably it would have ended with Valjean concluding that his earlier philosophy contained a hopeless conflict between rejecting authority and helping people live happier lives.
I'm smirking at the idea of a Moldbuggian story of the uprising of 1832. Revolutionists Get What They Deserve or some-such. :)
But I don't think that story has room for the complex characters of Hugo's story, narratively speaking. There's no room at all for Valjean, and Javert becomes simply the protagonist to the evil antagonist Enjolras.
Ultimately, you asked if canon!Javert embodies Moldbug. As I suggested above, I think the answer is no. He's a tragic figure - even Hugo would admit that > 75% of the time, the king's law points toward a just outcome. But Javert was blind to the fact that the king's law contained deep flaws.
I don't know if the passage survives the standard abridgements, but Javert writes a note to his superiors listing several minor injustices in the local prison system, immediately before killing himself. Even after conversion, Javert fails to realize that he was the only person who both (1) knew about the issues, and (2) cared about the injustice. That episode, and Javert as a character, are deeply tragic in my opinion.
And I can't imagine Moldbug caring about those issues at all. Obviously, Moldbug's choices would be different - but I don't get the impression Moldbug would think the minor injustices were even worth his attention if he were in Javert's situation.
It's a lesson about what happens when you combine the virtuous with a pernicious system of virtue. The liberal backlash against strong authoritarianism/belief in the rule of law is one way of reacting to such a world. "The laws are evil, therefore their enforcers are evil." The other side of this is people who believe the laws are good and anyone who enforces them is good. Both views are lacking nuance. Javert is someone who has spent his life believing that he is good because he enforces the laws, which are good. He can't live with the idea that he has been "bad" all along.
I summarized very briefly my understanding of his political philosophy in this comment a few weeks ago.
If you've got a few hours, I found the Gentle Introduction to be sufficiently gentle, but it does have nine parts and is written in his regular style. I think the first part is strongly worth slogging through, in part because his definition of "church" is a great one. I may write a short summary of it at some point, but that's a nontrivial writing project.
Moldbug has a variety of opinions that he expresses in his articles. Summarizing all of them is therefore hard. I will try to list a few.
Moldbug rejects the progressive project. That means that he's opposed to most political ideas of Woodrow Wilson and the presidents after Wilson.
Moldbug rejects modern democracy. He thinks that the US military should orchestrate a coup d'état. After the coup d'état the US should split up, and every state should have its own laws.
In the ideal case, Moldbug wants the states to be run like a stock company. If that isn't possible, Moldbug prefers the way Singapore and Qatar are governed to the way the US is governed. According to him, competition between a lot of states that are governed like Singapore is better than a huge federal government.
Your timeline starts too late. Moldbug rejects the Glorious Revolution.
I suspect that Moldbug thinks a military coup is only a means to an end. He wants government rule on a for profit basis, with essentially no tolerance of social disorder - other than vote with your feet (i.e. leaving). This is the concept he calls "Patches."
Moldbug does reject it; however, I'm not sure that he rejects all pre-20th-century political events. He seems to like corporations, and corporations have gotten many more legal rights than they had before the Glorious Revolution.
Could you please clarify: are you unsure what he means when he calls a position Calvinist (presumably Crypto-Calvinist or something like that), or are you just unsure what Calvinism is?
The short and sufficient answer to the second is that this is a designation for a bunch of Protestant Christians who historically took themselves very seriously and have a reputation for being dour. Take special note of the Five Points of Calvinism.
The short and insufficient answer to the first is people who have ethical, political and philosophical ideas that can't be justified by their declared systems of ethics, but can be perfectly well explained if you note that the memeplexes in their heads are descended from the highbrow American Protestantism of previous centuries. He goes into several things he considers indications of this, and points out that they dislike this explanation very much and want to believe their positions are the result of pure reason or Whiggish notions of history inching towards a universal "true human morality".
The former, but thanks for your clarification on both (I imagine your clarification on the latter is a relevant connotation Moldbug wanted and that I was largely ignorant of).
Watson, the IBM AI, was fed urban dictionary to increase its vocabulary / help it understand slang. It started swearing at researchers, and they were unable to teach it good manners, so they deleted the offending vocabulary from its memory and added a swear filter. IBTimes.
Aaron Swartz (aaronsw on LW) has killed himself. tech.mit.edu; news.ycombinator.com.
Link to the LW Discussion post.
It seems to be common knowledge that exposure to blue light lowers melatonin and reduces sleepiness, and that we can thus sleep better if we wear orange glasses or use programs like Redshift that reduce the amount of blue light emanating from the strange glowing rectangles that follow us around everywhere.
So an idea I had is that maybe wearing blue glasses might increase alertness. I've been weirdly fatigued during the day lately, even though I've been using melatonin and Redshift. But does the /absolute/ magnitude of the blue light matter, or the amount of blue relative to other colours? Blue glasses would mostly have no effect on the absolute amount, but would increase the relative amount. Orange glasses decrease both, so considering them isn't much help.
I tried looking for studies but I have no experience doing that and I only came up with one that actually compares bright ambient light to dim blue light; it found that dim (1 lux) blue light was better for alertness than 2-lux ambient white light.
Thoughts? Anyone better-informed about these things have comments?
Edit: For a sense of scale: lux measures illuminance (luminous flux per unit area); 50 lux is living-room lighting; a candle at 20cm is 10-15 lux; a full moon on a clear night is 0.3 to 1.0 lux. "White light" is actually only about 11% blue light (source), so the 2 lux of white light in the study is about 0.2 lux of blue, which is bad because it means that the linked study's result could be explained either by more absolute or more relative blue light.
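That arithmetic can be sketched in a few lines (the 11% blue fraction is the figure cited above; treat the exact numbers as rough assumptions):

```python
# Rough check of the arithmetic above, assuming ~11% of "white"
# light's flux falls in the blue band (the figure cited above).
BLUE_FRACTION = 0.11

white_lux = 2.0  # ambient white-light condition in the linked study
blue_lux = 1.0   # dim blue-light condition in the linked study

# Blue component of the white-light condition.
blue_in_white = white_lux * BLUE_FRACTION
print(f"Blue in {white_lux} lux white light: {blue_in_white:.2f} lux")

# The blue condition beats the white one on BOTH measures (more
# absolute blue and a higher blue fraction), so the study can't
# separate the two explanations.
print(f"Absolute blue ratio: {blue_lux / blue_in_white:.1f}x")
```

Under these assumptions the dim blue condition has several times more absolute blue light than the brighter white condition, which is exactly why the study is uninformative here.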
Unless the mechanism which causes our pupils to constrict is itself sensitive exclusively to blue light, those blue glasses will increase the absolute amount of blue light that makes it into your eyes.
There is light therapy for people who get depressed in the winter. If I'm not mistaken, they are nowadays using "full spectrum" (=white) light, not blue light. That might have something to do with what you are talking about, and in that case it is evidence that it is not just the proportion of blue light that matters.
Do the current moderation policies allow editors to add "next in sequence" and "previous in sequence" links to posts that don't already have such links, and are there any editors willing to do this? If not, can we change the policy to allow this? And I'd like to volunteer to add such links at least to the posts that I come across (I'm already a moderator but not an editor).
The hard problem of consciousness is starting to seem slightly less impossible to me than it used to.
Specifically, I remember reading someone's dismissal of the possibility of a reductionist explanation of consciousness, something along the lines of, "What? You think someone's going to come up with an explanation of consciousness, and everyone else will slap their forehead and say, 'Of course, that's it'"?
But that kind of argument from incredulity fails because it conflates explanation (writing down or speaking an argument that other humans will hopefully understand) with understanding (whatever-it-is human brains do to model reality).
For example, there are lots of people who mistakenly think a reductionist explanation of free will is impossible, who will not magically be cured by handing them a well-written explanation of compatibilism, because in order for that to work, they would have to read and understand the argument, and whatever process the human brain uses to read and understand stuff could be flawed in such a way that most people just won't get it. Or more mundanely, it takes years to learn a technical discipline like math or chemistry. A mathematician can't just tell an arbitrary person about their ideas; one would need to study for years to understand what the words mean.
In general, none of us really know what other humans are thinking; we're just making inferences from observing their behavior. I trust the global mathematical community enough such that I believe it when I hear news that the Poincare conjecture has been proven, even though I haven't built up the skills to understand the proof. But suppose some neuroscientist somewhere has come up with an adequate explanation of consciousness, but wasn't able to convince their colleagues, because the explanation requires unusual skills for which there is no standard vocabulary and which are very hard to teach ... how would I be able to tell whether or not this has already happened?
Maybe all of this was obvious to some of you (in which case I apologize for being a slow learner), and maybe some of you have no idea what I'm trying to talk about (in which case I apologize for being a poor explainer).
The header backgrounds of Main and Discussion are similar but different. This irks me slightly.
My selfish strategy is to point it out so it irks more people and the minimal effort of changing it becomes worthwhile. Given the autism scores from the survey, I am confident that among the people reading this comment, a good part will be irked. However, I am not familiar with how changes to the design have been made in the past. I am taking this opportunity to make my first prediction on predictionbook.com
So I'm fairly new to LessWrong and have been going through some of the older posts, and I had some questions. Since commenting on 4-year-old posts was probably unlikely to answer those questions or to generate any new discussion, I thought posting here might be more appropriate. If this is not proper community etiquette, I'm happy to be corrected.
Specifically, I'm trying to evaluate how I understand and feel about this post: The Level Above Mine
I have some very mixed feelings on this post, and the subject in general. (You might say I've noticed that I'm confused.) Sure. It's hard to evaluate reliably just how intelligent someone who is more intelligent than you is, just like a test that every student in a class aces doesn't allow you to identify which student knows the information the best, but doesn't the idea of a persistent ranking system, and the concern with it imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'etre of LW relies on the assumption that it is possible to improve your intelligence. I would further argue that LW relies on the assumption that it is possible to recursively improve your intelligence, (i.e. learning things that help you learn better).
Is it possible that the fundamental attribution error is at work here? I mean, if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence? I'm not sure what to make of a post that discusses assessing how many standard deviations above average intelligence someone is, if I really believe that "Any given aspect of someone's disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability."
Indeed, if we make the fundamental attribution error when assessing someone because "we don't see their past history trailing behind them in the air", then can we not say the same for experiences that result in greater situational intelligence? Perhaps I'm straining the bounds of metaphor slightly, since problem-solving intelligence tends to be more enduring than vending-machine-kicking anger, but is it so fixed that my SAT scores from the 7th grade are meaningful or worth discussing? Is it possible that what we perceive as greater intelligence, as "the level above mine", is just someone who has spent more time working on something, or working on something similar to it? What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?
The entire ranking debate, to me, sounds suspiciously like human social hierarchies, and since that's a type of irrationality humans are especially prone to, it makes me very suspicious. I know from personal experience that being considered of "above average intelligence" is a very useful social tool which I can use to create a place for myself in social hierarchies, and often that place is not only secure, but also grants me reasonably high social status. I have, at various times in my life, evaluated others and granted social status accordingly, on the basis of their SAT scores and other similar measures. Is that what is going on here?
Fundamentally, I believe this question boils down to a handful of related questions:
Sub-questions to #1
I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.
"Intelligence" seems to consist of multiple different systems, but there are many tasks which recruit several of those systems simultaneously. That said, this doesn't exclude the possibility of a hierarchy - in some people all of those systems could be working well, in some people all of them could be working badly, and most folks would be somewhere in between. (Which would seem to match the genetic load theory of intelligence.) But of course, this is a partially ordered set rather than a pure hierarchy - different people can have the same overall score, but have different capabilities in various subtasks.
IQ in childhood is predictive of IQ scores in adulthood, but not completely reliably; adult scores are more stable. There have been many interventions which aimed to increase IQ, but so far none of them has worked out.
IQ is one of the strongest general predictors of life outcomes and work performance... but that "general" means that you can still predict performance on some specific task better via some other variable. Also, IQ is one of the best such predictors together with conscientiousness, which implies that hard work also matters a lot in life. We also know that e.g. personality type and skills matter when it comes to rationality.
I would suppose that the kinds of people referred to as "the level above mine" would be some of those rare types who've had the luck of getting a high score on all the important variables - a high IQ, high conscientiousness, a naturally curious personality type, high reserves of mental energy, and so on. To what extent these various things are trainable is an open question.
Why is it ridiculous to believe in mutants born with high anger levels?
Following the line of reasoning in Correspondence Bias, because it's probably much more likely that someone who seems to you to "be an angry person" has just had a bad day.
According to our current understanding, significant mood altering mutations are much less common than many other more probable causes of anger. This is one of the reasons gene therapy is not typically suggested as part of treating anger management issues.
Wouldn't it be interesting if everyone had exactly equal hormonal tendencies toward various emotions?
"This particular episode of angry behavior is not as strong of evidence that this person has angry tendencies as my brain wants to treat it" is not the same as "Angry tendencies do not exist at all."
I will start with: +1 for caring about the community etiquette
Intelligence (IQ) is more or less static. If you have a scientifically proven method of increasing IQ, please post it here, and I am sure many people will try it. But at this moment, LW is not about increasing human intelligence. It is about increasing human rationality -- learning a better way to use the intelligence (brain) we already have -- and about machine intelligence. A hypothetical intelligent machine could increase its intelligence by changing its code or adding new hardware. For humans, a similar change would require surgery or implants beyond our current knowledge.
How high is unnaturally high? Intelligence is on a bell curve. One in two persons has an IQ above 100. Roughly one in six has an IQ above 115. One in fifty has an IQ above 130; one in a hundred, above 135; one in a thousand, above 146; one in ten thousand, above 156... this is all within the bell curve. It is possible to search for people with this level of intelligence. (Someone with an IQ of 300, now that would be unnatural.)
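Those tail fractions can be checked against the normal distribution, assuming the conventional IQ scaling of mean 100 and standard deviation 15 (a minimal sketch, not tied to any particular test):

```python
from math import erf, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    """Upper-tail probability P(IQ > iq) for a normal distribution."""
    z = (iq - mean) / sd
    # Normal CDF via the error function; tail is 1 - CDF.
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

for iq in (100, 115, 130, 135, 146, 156):
    p = fraction_above(iq)
    print(f"IQ > {iq}: {p:.5f} (about 1 in {1 / p:,.0f})")
```

With these assumptions the figures come out close to the ones above: about 1 in 44 above 130, 1 in 100 above 135, roughly 1 in 900 above 146, and on the order of 1 in 10,000 above 156.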
The question is, how much real-world effect do these levels of intelligence have. Clearly, intelligence is not enough to make people smart -- a person with a high IQ can still believe and do stupid things. (This is why we usually don't obsess about IQ, and discuss rationality instead.) On the other hand, some IQ may be necessary for some outcome, or at least could make the same person get the same outcome significantly faster. (This is easier to understand by imagining people with very low IQs. Even the best rationality training is not going to make them new Einsteins.) Being faster does not seem like a critical difference, but for sufficiently complex tasks the difference between years and decades, or maybe decades and centuries, can determine whether a human is able or unable to ever complete the task.
In the article, Eliezer considers the alternative explanations. (Maybe Conway had more opportunities to show his mastery. Maybe he specializes in doing something different. Maybe Conway used the time of his youth better.) But maybe... it is the difference in general intelligence. All these explanations deserve to be considered.
Depends on circumstances. Did it happen once, or does it happen all the time? Does it happen consistently in a field where both persons spent a lot of time learning? Does it happen in different fields? The prior probability of someone having higher intelligence is not so small that evidence like this couldn't change the result.
Just because we have a bias for X, it does not automatically mean non-X must be true. People do love hierarchies. People are bad at estimating their skills, or skills of others. That does not mean different people can't really have different traits.
Is it solid that IQ tests can distinguish between the intelligence we already have, and our ability to use that intelligence?
I'd just like to point out that a growth mindset is fully compatible with fixed intelligence. Fixed intelligence doesn't mean that growth is impossible, only that some people can grow faster than others.
There actually are mutants with high anger levels (read about Brunner's syndrome). Less Wrong is not about improving human intelligence but rather human rationality. The two are obviously distinct.
If you are asking these basic questions about intelligence, (i.e. proposing that it can easily be changed) you simply need to read more about this topic.
What exactly is the function of the Rationality Quotes threads? They seem like nothing more than a litmus test for local orthodoxy.
They are repositories for quotes that resonate with and/or amuse us. It might be a little too easy to get karma that way, admittedly, but I think they are nice to have around.
Sources of karma don't bother me. It just seems like the standards for voting in that thread - both comments and replies - are really different than the rest of the site. Not looser, but different.
It seems like I'm always surprised by the vote totals there - both upvotes and downvotes - when I think I have a feel for what folks like in the rest of the site.
One of their functions is to act as a kind of litmus test for local orthodoxy.
I don't think it's a test for orthodoxy. Take the quote "To see is to forget the name of the thing one sees" (Paul Valéry), which has 13 upvotes as I write this.
The position that gets articulated in that quote isn't orthodox on LessWrong. There are a bunch of quotes that are interesting instead of just making an orthodox point.
I have a query - exactly how interested are people here in improving the efficiency of their daily lives? To wit, would a discussion about efficient toilet habits be welcome or unwelcome? (No, I'm not joking, nor am I working up to a toilet joke; I'm entirely serious.)
It is far more important what you are doing than how efficiently you do it. Discussions of specific low-level habits have low value of information.
Further, LW is mostly about the meta questions: how to think, how to strategise, etc.
Imagine all the attention such an article would get on RationalWiki! They would rewrite the LW page from scratch... :D
Unwelcome.
Unless it involves meta-analyses, regressions, value of information calculations, or preferably all 3!
How do you stop suicide, for individuals and or populations? I looked up antidepressants. They don't look so promising. Brief summary follows. Feel free to skip it.
All pharmacological antidepressants have scary side effects, and all of them, alone or in combination, put you at risk for serotonin toxicity. Most increase the risk of suicide relative to no treatment.
Tricyclic antidepressants: old, scary drugs; rarely prescribed.
MAOIs: kind of scary. Moclobemide is one of the newer, safer MAOIs, but has weird dietary interactions and is still not as safe as SSRIs.
NDRIs, including Wellbutrin: commonly prescribed. Adverse effects include seizures and cardiovascular events; less safe than SSRIs. Don't know enough.
SSRIs: the most commonly prescribed; they include Zoloft, Paxil, Prozac, and Celexa. Efficacy comparable to placebo. Adverse effects include sexual dysfunction, nausea, high blood pressure, and lots more.
SNRIs: newer than SSRIs, with comparable efficacy; they include Effexor and Cymbalta. Effexor has an especially high suicide risk.
Discontinuing SSRIs or SNRIs abruptly might have adverse effects: sadness, irritability, agitation, dizziness, etc.
What else can be done? Are hotlines effective?
I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision was already changed at some point during the presentation. In regards to my original change of mind, I may also have been affected by the friend who gave me the idea.
All of this worries me, and I've started making a list of everything I know as far as pros/cons go of each choice. I want to weigh the options objectively and make a decision. I fear that, already favoring one of the two choices, I won't be objective.
How do I decrease my bias and get myself as close as possible to that awesome point at the start of a discussion where you can list pros and cons and describe the options without having yet gotten attached to any position?
Harder Choices Matter Less. Unless you expect that there is a way of improving your understanding of the problem at a reasonable cost (such as discussing the actual object level problem), the choice is now less important, specifically because of the difficulty in choosing.
From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.
As for the meat of the post, I don't think it applies as much due to the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors; for which purpose, I think, some agonizing and bias removal is worth the pain.
Hopefully I can get somewhere with the bias removal step, as opposed to getting stuck on it. (And, considering that I just learned something, I guess this can be labeled "progress"! Thanks :))
Infographic of logical and rhetorical fallacies: a list organized into categories, with an icon for each fallacy.
Quick question: I want to read Godel Escher Bach, but are there any math or knowledge prerequisites to understanding it?
Not really.
Not in the slightest. Hofstadter does a good job of providing you with the things that he later asks you to use.
If you can appreciate, in any depth at all, why it is complicated to decide whether "This sentence is a lie" is true, then you will get interesting insights from GEB.
There is a mindset prerequisite. Some people get forever lost/bored the first time the book talks about valid mathematical statements as well-formed finite strings of symbols.
Nope. I mean, I'd suggest knowing who Gödel, Escher, and Bach are... possibly listen to some of the music/look at some artwork, but it's not necessary.
http://www.science20.com/hammock_physicist/rational_suckers-99998 Slightly intrigued by this article about Braess' paradox. I understand the paradox well enough, but am confused by how he uses it to criticize superrationality. But mostly I was amused that in the same comment where he says, 'Hofstadter's "super-rationality" concept is inconsistent and illogical, and no single respectable game theorist takes it seriously.' he links to EY's The True Prisoner's Dilemma post.
Also, do people know if that claim about game theorists is true? Would most game theorists say that they would defect against copies of themselves in a one-shot PD?
It depends on what "against copies of themselves" means. If it means "I know the other person behaves like a game theorist, and the payoff matrix is denominated in utility," then yes. If it means "I know the other person behaves like a game theorist, but the payoff matrix is not denominated in utility because of my altruism towards a copy of myself," then no. If it means "I expect my choices to be mirrored, and the payoff matrix is denominated in utility," then no.
I've stumbled upon this:
http://blogs.discovermagazine.com/badastronomy/2008/09/25/a-lunar-mountains-eternally-sunny-disposition/#.UOKtr-RX0Yg
A place on the Moon where the Sun is always visible, never sets. Well, except for an eclipse, of course.
I thought I'd seen a survey result of when LWers thought the Singularity was plausible-- maybe a 50% over/under date, but I haven't been able to find it again. Does anyone remember such a thing?
2009 survey results
2011 survey results
The 2012 survey also had a "date of the Singularity" question, but Yvain didn't report on the results of that question, so you'll have to look at the raw data for that.
Had to filter because of idiots putting in values like 2147483647 or 30 or 1800.
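For concreteness, that kind of outlier filtering might look like the sketch below. The raw values and the plausibility bounds are my own illustrative guesses, not the actual survey data or methodology:

```python
import statistics

# Hypothetical raw answers to the "date of the Singularity" question,
# including junk values like the ones mentioned above.
raw = [2060, 2100, 2147483647, 30, 1800, 2080, 2045]

# Keep only years that are at least plausible; these bounds are
# illustrative guesses, not the cutoffs actually used.
plausible = [year for year in raw if 2012 < year < 3000]

print(sorted(plausible))             # [2045, 2060, 2080, 2100]
print(statistics.median(plausible))  # 2070.0
```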
Note that the last survey made it explicitly clear that the question was “what is the year such that P(Singularity before year|Singularity ever) = P(Singularity after year|Singularity ever) = 0.5”, whereas in the previous surveys it was ambiguous between that and “P(Singularity before year) = P(Singularity after year) + P(no Singularity ever) = 0.5”.
Thank you.
Robert Kurzban clarifies the concept of the EEA (mostly by quoting various excerpts from Tooby & Cosmides). I think this is an important post for people to check out, given how often the concept of EEA is referenced on this site.
I find the matter unclarified. Given the large variability of the Pleistocene climate and habitat (that Kurzban mentions), what does the quoted definition of the EEA mean? "A statistical composite...weighted by frequency and fitness-consequences" looks pretty much like a time and a place -- just an average one instead of one asserted to be the actual environment, habitat, and social structure over the whole Pleistocene. Both concepts ignore the variation.
May your plans come to fruition!
John Derbyshire Wonders: Is HBD Over?
My very first post on this site was about the mistreatment of Stephanie Grace related to the new chilling and shrinking of acceptable discourse in the late 2000s after the 90s thaw mentioned in the article.
I was impressed by the reasonableness of the discussion, and I continued to be impressed at how well LessWrong handled matters like these for almost two years. However, making the same post today on this site as a new member wouldn't be as well accepted as it was back then. Had that been the case at the time, I would have taken the claim that this community is one "dedicated to refining the art of human rationality" with a larger grain of salt, and I'm unsure whether I would have lingered, since I had read most of the sequences at that point but was undecided about whether to participate.
So, since I'm unsure whether my arrival would be appreciated by the community today, why do I remain? Well, in the meantime I've grown to greatly respect the sanity of many excellent commenters and of several people who still post good articles here, some of whom arrived after I started participating. And it is the most civil and intellectually honest internet forum I've ever seen. But despite this, I'm unsure whether it is rational of me to stay.
Speaking to some other people from here, who make comments like "more people follow your writing than mine, can you please comment on my post?" or who use me as a go-to example for some matters, apparently I've become a sort of Schelling point for a subculture within the rationalist subculture. I feel kind of sad about this. I preferred it back when Vladimir_M filled this role; he was far worthier than me.
I think we are at the start of a long winter in the West; only technological progress can keep us afloat, if it doesn't falter. And even if it doesn't, uFAI is the overwhelmingly likely outcome. I think I need a strong drink.
From watching you for a while, I think you're driven to off-handedly forecast doom and gloom because it suits your identity as someone strongly dissatisfied with their current world, signaling contrarianism and wallowing in dignified pessimism. And of course elitism and despair look cooler to you, and form a coherent narrative.
And I'm not going to judge this as something negative, or implore you to fix some "problem" with your personal feelings, I just suggest that you keep a skeptical perspective on your self-narrative somewhere in the back of your mind. As you surely already do.
I've looked at this argument so many times from so many different angles that I would be very surprised if I hadn't in previous correspondence with you talked about it in very similar terms. I think I've given it its proper weight, but I guess readers may not be aware of it so you pointing it out isn't problematic.
Pretty easy to test.
There seems to be a reasonable attempt to get to Mars within a decade. See the Mars One website for details.
They intend to have people on Mars by 2023 (four of them), and it seems that a self-sustaining colony will be the eventual goal.
OK, I give up. We're living in a simulation. Science can't possibly work under these conditions.
Did they take it down?
The link works for me, if that's what you're asking about.
http://www.popsci.com/science/article/2013-01/scientists-hilariously-vent-methodology-overlyhonestmethod
Hmm, still doesn't work for me. That's odd.
Before I start posting some of the choicest tweets about real-world science, here's the twitter feed.
Huffington Post
Neatorama
io9
In other words, I probably didn't need to post about this..... everyone would have seen it anyway.
Possibly of interest
Link: http://www.youtube.com/watch?v=XBmJay_qdNc
Whiteboard animation of a talk by Dan Ariely about dishonesty, rationalization, the "what the hell" effect, and bankers. The visual component made it really easy for me to watch.
BEST, a Bayesian replacement for frequentist t-tests I've been using in my self-experiments, now has an online JavaScript implementation: http://www.sumsar.net/best_online/
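For readers curious what's going on under the hood, here is a much-simplified sketch of the Bayesian approach: a grid approximation of the posterior over the difference of two group means, with a flat prior and a normal likelihood, on made-up data. The real BEST model is richer (t-distributed likelihoods and priors over the standard deviations and a normality parameter), so treat this purely as intuition, not as the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
group1 = rng.normal(101, 1, 30)  # made-up self-experiment data
group2 = rng.normal(100, 1, 30)

# Grid over candidate values of (mean1 - mean2); flat prior.
grid = np.linspace(-3, 3, 601)
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
se = pooled_sd * np.sqrt(1 / len(group1) + 1 / len(group2))
obs_diff = group1.mean() - group2.mean()

# Normal log-likelihood of the observed difference at each grid point,
# normalized into a discrete posterior.
log_post = -0.5 * ((grid - obs_diff) / se) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior probability that group1's mean really is higher.
p_positive = post[grid > 0].sum()
```

Unlike a t-test's p-value, the output is a direct probability statement about the quantity of interest, which is the main selling point of the BEST framing.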
Hey -
Bit of an unusual request: Does anybody know of any good science books for physics? Specifically, books with not only the facts about physics, but the specific reasons and experiments for which those facts are believed?
I have an associate who is interested in the subject, and completely uninterested in reading something that presents current beliefs as facts. When it came to explaining particle spin, it took me something like four hours to find the relevant experiments that established the existence of particle spin (and I have to confess the information I was able to find on such a fundamental element of modern physics left me a bit underwhelmed).
What kind of people do you all have in your heads? Do you find that having lots of people in your head (e.g. the way MoR!Harry has lots of people in his head) is helpful for making sense of the world around you and solving problems and so forth? How might I go about populating my head with more people, and what kind of people would it be useful to populate my head with?
When I'm trying to understand something, I imagine myself explaining it to my younger sister. I started doing this when I was a kid, but it is so useful to me, that I never stopped.
Kind of weird now that she's an adult though.
Ten years or so ago, I used to have more distinct personas in my head than I do now.
Back when I did, they roughly speaking exemplified distinct emotional stances.
One was more compassionate, one more ruthless, one more frightened, one more loving, and so forth.
This wasn't quite the way Eliezer writes Harry, but shares some key elements.
My model of what's going on, based on no reliable data, is that there's a transition period between when a particular stance is altogether unacceptable to the ruling coalition in my head (aka "me"), and when that stance has more-or-less seamlessly joined that coalition (aka "I've changed"), during which it is acceptable but not fully internalized and I therefore tag it as "someone else".
As I say, I don't do this nearly so much anymore. That's not to say I'm consistent; I'm not, especially. In particular, I often observe that the way I think and feel is modified by priming effects. I think about problems differently after spending a while reading LW, for example.
What's changed is that there's no sense of a separate identity along with that. To put it in MoR terms: my experience is not of having a Slytherin in my head distinct from me that sometimes thinks things, but rather of sometimes thinking things in a more Slytheriny sort of way.
That suggests to me that maybe the difference is in how rigidly I define the boundaries of "the sorts of things I think".
I sometimes find it helpful to label a particular perspective: cynical-Tim, optimistic-Tim, etc. They are helpful for clarifying my thoughts by formalizing a certain type of self-reflection. But they don't know more than I do, so they are generally useless at brainstorming - which is how MoR!Harry seems to use them. I've taken those discussions as literary conceit and exposition for the readers, not models of how to be more effective.
Brainstorming has at least two components: knowing things, and recognizing that a thing you know is relevant to a situation. People inside your head might not be helpful at the former but they might be helpful at the latter, thanks to the brain's useful ability to mimic other brains.
I think Eliezer might have been inspired by internal family systems, which means this might be more useful at being effective than it sounds.
Kolmogorov complexity via xkcd
I am looking for defenders of Hanson's Meat is Moral. On the surface, this seems like a very compelling argument to me. (I am a vegetarian, primarily for ethical reasons, and have been for two years. At this point the thought of eating meat is quite repulsive to me, and I'm not sure I could be convinced to go back even if I were convinced it were moral.)
It struck me, however, that nothing in this argument is specific to animals, and that anyone who truly believes it should also support growing people for cannibalism, as long as those lives are just barely worth living. (I tend to believe in relative depression, so I'd argue that probably any life that isn't extremely torturous is worth living.) This goes so strongly against moral intuition, though, that I can't imagine anyone supporting it.
Sorry, can't defend it. It's not a horrible argument, but it's also not totally well grounded in facts.
For starters, it takes far more land and resources to produce 1 lb of beef than 1 lb of grain, since you have to grow all the grain to feed the cow, and cows don't turn all of that energy into meat, so if you believe that undeveloped land or other forms of resource conservation have some intrinsic worth, then vegetarianism is preferable.
Secondly, I think the metaphor comparing a factory farm to a cubicle farm is disingenuous. It's emotionally loaded, since I work in a cubicle and I don't wish I were dead, and it's not terribly accurate. You could make a different comparison that is arguably more accurate and compare a factory farm to a concentration camp: in both instances the inhabitants are crowded together with minimal resources as they await their slaughter. (Obviously my example is also emotionally loaded.) I think if one were to ask whether we should do things that encourage the birth of children who will grow up in concentration camps, it's a little more difficult to come down with the same definitive yes.
Additionally, the article wanders into conjecture in several places. It's hard to see the statement "most farm animals prefer living to dying" as anything more than a specious claim. No one has any way of knowing a cow's preference vis-à-vis life or death, probably including the cow. Suicide is a particularly egregious red herring. By what means does a cow in a pen commit suicide? Starving to death? Surely that's not comparable to wishing it had never been born...
As for your Soylent Green example, it has even worse problems with trophic losses, because if your farm-raised humans were not strictly vegetarian, you're losing an even higher percentage of your original energy. If the food babies were raised on an all-meat diet, you may be getting less than 1% of the energy you would have gotten out of just eating the plants you started the process with. Humans also have a ridiculously long gestation time, etc., to function as an efficient food item, although the modest proposal you mention has certainly been suggested before.
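The arithmetic behind that "less than 1%" figure is easy to make explicit. The textbook rule of thumb is roughly 10% energy transfer per trophic level (the real figure varies a lot by species and feed), so two levels of conversion compound:

```python
# Rough rule of thumb: ~10% of energy survives each trophic transfer.
TROPHIC_EFFICIENCY = 0.10

plant_energy = 1000.0                            # arbitrary units of plant calories
livestock = plant_energy * TROPHIC_EFFICIENCY    # plants -> livestock
human_via_meat = livestock * TROPHIC_EFFICIENCY  # livestock -> meat-fed humans

fraction = human_via_meat / plant_energy
print(fraction)  # ~0.01, i.e. about 1% of the original plant energy
```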
Finally, the argument makes me nervous because I think that in general the morality of causing things to be born isn't well settled. We regard saving the life of an child as definitely a moral good. It isn't clear that giving birth to a child is also a moral good, or also a comparable moral good. If I had to pick between saving one child and having two babies, I would think that saving the kid's life was the higher moral calling, even though it will result in less children over all.
I think you got these flipped around.
Fixed. Thank you.
I had to stop (though I may resume later) at "People who buy less meat don't really spend less money on food overall, they mainly just spend more money on other non-meat food" -- it made me go "are you fucking kidding me" and wonder whether he has ever been to a supermarket. See also this -- differences in retail prices aren't quite that extreme, but that's because governments subsidize meat production, so even though not all of the money comes out of meat eaters' pockets, it still comes out of somewhere.
EDIT: I finished reading it, and... if I didn't know who Hanson was and he had posted somewhere that allowed readers to comment, I would definitely conclude he was trolling. Along with things that others have already pointed out, “per land area, farms are more efficient at producing "higher" animals like pigs and cows” -- where the hell did he get that from? Pretty much everyone I've ever read about this topic agrees that growing food for N people on a mostly vegetarian diet requires way less land, energy, and water than growing food for N people on a largely meat-based diet, and there's a thermodynamic argument that makes that pretty much obvious.
(I do agree that “meat eaters kill animals” isn't a terribly good argument because if it wasn't for meat eaters those animals wouldn't have lived in the first place (but that doesn't apply to hunting and fishing); but that's nowhere near one of the main reasons why I limit my consumption of meat.)
The full sentence is
or
Thanks. I did think “more efficient than what?”, but none of the possibilities I came up with other than “than they are at producing other foodstuffs” seemed relevant in context. (I don't even remember what they were.)
Not only that, it makes me wonder if he realizes that most people in the world don't live on six figures. I remember once living on nothing but cereal, milk, eggs and kimchi for about eight months because, when rent and bills were totalled, there simply wasn't any money for more food than that.
Richard Carrier comes to mind as making counterintuitive claims about the efficiency of meat vs plant food: http://freethoughtblogs.com/carrier/archives/87/
Interesting...
Just one quibble: “other than pure aesthetics (“I just like it”) ... which are idiosyncratic (i.e. not true for most people)” sounds like an overwhelming exception to me. Given that I've never met anyone trying to convince other people to become vegetarians (though I've read a couple such people), I guess that's by far the most common reason. (I've eaten meat in front of at least a dozen different vegetarians from at least four different countries, and none of them seemed to be bothered by that.)
Depending on how ostentatiously (which I know isn't quite the right word, but I think it conveys what I'm trying to evoke) you were eating the meat, it would bother me. The type of meat would also make a difference to me. I know vegetarians who are bothered if you eat any meat near them. They are obviously polite about it (and I certainly never say anything), but it might bother people more than you realize.
Not at all -- not that I tried to hide the fact that I was eating meat, but I tried to be as nonchalant as I would be if I didn't know they were vegetarians. OTOH I'm not terribly good at hiding emotions, so probably some of them could tell I was feeling a little embarrassed.
What kind of difference? Pork vs beef vs chicken? Steaks vs minced meat? Free-range vs factory farmed vs hunted (but how would you tell)?
My opposition to meat varies linearly with the intelligence of the animal. I'm much more OK with fish than I am with pigs.
This reminds me of something I've wondered about. It seems plausible that it's cheaper to be a vegetarian, but the last I checked, meat substitutes seem to cost about as much as meat.
Is it just that no one's been exploring how many people would like good cheap meat substitutes, or is there some reason meat substitutes are so expensive? Or are there cheap ones I haven't noticed?
Price of quorn
Fancy meat substitutes like quorn are expensive. TVP and tofu are dirt cheap. Going with vegetable sources of protein that make no attempt to directly replace meat, like rice and beans or peanut butter, is also cheap.
Basically what Alicorn said. People aren't necessarily satisfied with the cheap ones that are available - mimicking the exact mouthfeel and flavor of meat is difficult, and because many of the original meat substitutes are from Asia, they weren't common here until fairly recently. Mock duck, aka seitan (made from wheat gluten), is cheap and very popular in Asia, but it seems to be a perennial also-ran in the US. Back during my veggie days I tried using it, only to find out I have a minor gluten intolerance (not full-on coeliac, but enough that seitan causes problems). It was by far the closest thing I've found to mimicking the texture and mouthfeel of non-specific cuts of meat (as opposed to mimicking burgers or hot dogs or chicken nuggets or something); when prepared right it can be close to indistinguishable from meat.
Making good, cheap meat substitutes is a lot of work; Western would-be consumers often have high standards for them and aren't satisfied with the more-established forms, such as tofu, while new forms have substantial outlays for R&D (Quorn) and sometimes face regulatory hurdles or other barriers to acceptance (Quorn's initial attempt at a US release went very poorly). In the US, where meat production is directly subsidized, it's hard to compete anyway because there's lots of cheaper meat.
One of the confounding factors is that a lot of meat is raised on land that's not suitable for farming human food - e.g., free-range cattle grazing in Australia.
See also.
Isn't this just a re-statement of the Repugnant Conclusion?
Essentially all domesticated animals are alive because of demand for products made from them (eggs, milk, meat, etc). If everyone kept kosher, there would be far fewer pig-experience-moments than the current world, including much less pig-experience-suffering. Is that good or bad for someone who values pig utility?
Anyway, I've always taken this kind of reasoning as a reason not to adopt that perspective on these types of questions. But I think that means I'm not a consequentialist - which puts me slightly out of consensus in this community.
I value pig-utility. I'd much rather see a smaller number of comparatively well-kept, well-treated farm pigs and a healthy population of wild boars than the status quo. I'd also rather not see that arrived at by a mass slaughter of all other pigs, though, and pragmatically I'm not going to get that either way, so "a largeish-but-not-contemporary number of reasonably well-treated pigs farmed for food production" would be a much more feasible goal. Temple Grandin does a lot of work in this area, actually.
Isn't this what's happening all the time anyway?
Not in the sense I was using it above, namely, "We kill them all at once to remove their population." What's happening at present is more like "we kill them in batches to meet production demands, and bring in more." Aggregated over the very long term a whole lot more pigs can suffer and die in the second case; I'm simply saying I don't find "One sudden, nearly-complete mass slaughter" to be a preferable alternative.
My point is that the lifetime of a pig (EDIT: being farmed for meat) isn't very long (about 6 months from what I can find on the internet). Thus all we would have to do is stop breeding them for a while and we very quickly wouldn't have many pigs.
That's totally true, but it feels a bit tangential to what I was saying.
I think that would be true, assuming you have no additional reasons for opposing cannibalism.
Personally, I have no moral opposition to the idea of eating babies, but I suspect that baby farming would cause much more distress to the general population than the food it would produce would justify.
I don't agree with Hanson's position in that essay though. To take an excerpt:
How does he claim to know that? It's not as if he can extrapolate from the fact that they don't kill themselves. Factory farmed animals are in no position to commit suicide, regardless of whether they want to or not. And even if a farm animal's life is pure misery, it probably doesn't have the abstract reasoning abilities to realize that ending its own life, thereby ending the suffering, is a possible thing.
He compares the life of a farmed animal to a worker who has to fight traffic to spend their time working in a cubicle, but an office worker has leisure time, probably a family to spend time with, and enough money to make them willing to work at the job in the first place. I think the abused child in Omelas is a better basis for comparison.
Also: very few office workers get mutilated to prevent them from mutilating their coworkers out of stress, or locked into their cubicles full-time and forced to wallow in their own faeces (periodically being hosed down from outside), or are bred so aggressively for meat production that, even in their cramped conditions, their under-used, oversized muscles strain their skeletons and joints to the breaking point.
Oh, and instead of a salary designed to seem big but actually undervalue your performance, you get paid in being killed (not infrequently a painful and lingering experience) and having any children you bore taken away for no obvious reason.
Yes. “If you have doubts on this point, I suggest you visit a farm” is a massive Appeal to Generalization from One Example. I'm pretty sure some farms are a helluva much worse than others, and I strongly suspect that the farms a random person is most likely to visit will be closer to the good end of the scale.
I vote we breed animals to be happy under these conditions. Or is that baby-eating?
Hmmm.
I've recently become interested in holding some competent opinions on FAI. Trying these on for size:
FAI is like a thermostat. The thermostat does not set individual particles in motion, but measures and responds to particles moving in a particular average range. Similarly, FAI measures whether the world is a Nice Place to Live and makes corrections as needed to keep it that way.
Before we can have mature FAI, there is the initial dynamic or immature FAI. This is a program with a very well thought out, tested, reliable architecture that not only contains a representation of Friendliness, but is designed to keep that as part of its fundamental search patterns. As it searches for self-modifications, it passes each potential modification through a filter which rejects any change that fails to provably preserve the Friendliness goal.
Since provability is tricky, many optimizations that would in fact preserve Friendliness could be rejected for lack of a strategy to prove them. This seemingly implies that a reliable system with non-trivial things needing to be proved will be slower to self-improve than a kludgey system with simpler goals, like maximizing computronium.
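That filter can be caricatured in a few lines. Everything here is a toy stand-in (the "system" is just a number, and the "proof search" is a predicate), meant only to make the conservatism concrete - modifications without a found proof are skipped even when they would in fact have been safe:

```python
def safe_self_improve(system, candidate_mods, prove_preserves_goal):
    """Apply only those modifications that come with a found proof
    of goal preservation; reject everything else."""
    for mod in candidate_mods:
        if prove_preserves_goal(system, mod):
            system = mod(system)
        # No proof found: the modification is rejected, even if it
        # would in fact have preserved the goal.
    return system

# Toy instance: the "goal" is keeping the value positive.
mods = [lambda s: s * 2, lambda s: s - 100, lambda s: s + 1]
prove = lambda s, m: m(s) > 0   # stand-in for a real proof search
print(safe_self_improve(1, mods, prove))  # 3: the s - 100 mod is rejected
```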
Can we have a way to save comments?
I often need to retrieve something I've read on Lesswrong but search isn't always helpful. Saving everything I read would limit the scope significantly.
Use something like http://www.ibiblio.org/weidai/lesswrong_user.php?u=gwern and then save the generated page?
You could click the permalink and bookmark it, or copypaste interesting comments to a text file you can grep
noooooooooooooooooooo! The Singularity Institute, and FHI, jump a shark! :(
I seem to remember that, or something similar, popping up on the internet months ago.
Oh? That recently?
Spec. Ops: The Line; a Rationalist twist?
I've played through Spec Ops: The Line. Interesting though that game is, there's one aspect that I found very lacking: the intelligence and rationality of the protagonists, both instrumental and cognitive. It's not just their poor decision-making, or their delusions, but also their complete lack of defenses against the horrors of war, both those they commit and those committed by others. They act from the gut; they mismanage feelings of guilt, obligation, and fear.
The game has a theme of helplessness in the face of chaos; it doesn't matter whether you try to do the right thing, because the world does not bend to your will, and you'll find yourself forced to do unsavoury things, or having things you do turn out to have horrible unforeseen consequences.
I was wondering whether it was possible to hammer this message home in spite of having intelligent, rational characters. The game, as it is, says "Good intentions and outrageous badassery aren't enough to prevent failure or protect you from moral bankruptcy". I'd like to amend that to "Good intentions, a rational and intelligent approach, and outrageous badassery, aren't enough to prevent failure or protect you from moral bankruptcy or insanity".
Any suggestions on how to tackle such a problem?
Little or no artificial light after dark for better sleep and mood
If in Newcomb's problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb's problem is just simple map and territory: do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the how for now, and focus only on the testable probability of whether Randi/Omega is consistently accurate, we can draw conclusions about whether we live in a universe where one-boxing is profitable or not. Eventually, we may even discover the how, and also the source of all the money that Omega/Randi is handing out, and win. Until then, like all other natural laws that we know but don't yet understand, we can still make accurate predictions.
No. I think that is fighting the hypothetical.
More generally, the discipline of decision theory is not about figuring out the right solution to a particular problem - it's about describing the properties of decision methods that reach the right solutions to problems generally.
Newcomb's is an example of a situation where some decision methods (e.g., CDT) don't make what appears to be the right choice. Either CDT is failing to make the right choice, or we are not correctly understanding what the right choice is. That dilemma is what motivates decision theorists, not particular solutions to particular problems.
That's possible, but I'm not sure how I'm fighting it in this case. Leave Omega in place: why do we assume equal probability of Omega guessing incorrectly or correctly, when the hypothetical states he has guessed correctly every previous time? And if we are not assuming that, why does CDT treat each option as equally likely, and then proceed to open two boxes?
I realize that decision theory is about a general approach to solving problems. My question is: why are we not including the probability based on past performance in our general approach to solving problems, or if we are, why are we not doing so in this case?
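For concreteness, here's a minimal sketch of what "including past performance" might look like: treat Omega's track record as an estimate of his accuracy p and compare expected payoffs. The function name is mine; the $1,000,000 and $1,000 amounts are the standard ones from the problem statement.

```python
def expected_values(p, big=1_000_000, small=1_000):
    """Expected payoff of each strategy, given estimated predictor accuracy p."""
    # One-boxing: with probability p the predictor foresaw it and filled box B.
    ev_one_box = p * big
    # Two-boxing: you always get the small box; with probability (1 - p)
    # the predictor wrongly filled box B as well.
    ev_two_box = (1 - p) * big + small
    return ev_one_box, ev_two_box

# With a track record suggesting p = 0.99, one-boxing dominates
# (roughly 990,000 vs 11,000):
one, two = expected_values(0.99)
```

On this naive expected-value reading, two-boxing only wins once the estimated accuracy drops below about 50.05%, which is exactly why treating the track record as evidence seems to favor one-boxing.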
Here's an anthropic question/exercise inspired by this fanfic (end of the 2nd chapter, specifically). I don't have the time to think about it properly, but it seems like an interesting test for current theories of anthropic reasoning under esoteric/unusual conditions. The premise is as follows:
There exists a temporal beacon, acting as an anchor in time. An agent (or agents) may send their memories back to the anchored time, but as time goes on they may also die or otherwise be prevented from sending memories back. Every new iteration, the agent-copy at the moment immediately after the beacon's creation gets blasted with memories from 'past' iterations: either from only the immediately preceding one (which recursively includes all earlier iterations further back in subjective time), or from every past iteration at once, with or without a convenient way to differentiate between overlapping memories (another malleable aspect of the premise), or, for a real head-screw, from all iterations that lived.
The interesting question is how an agent would estimate their probability of dying in the current iteration, based on the information they were blasted with immediately after the anchor time.
A very simple toy model: assume all agent-copies send back memories after T years if they haven't died, and that the probability of dying (or being otherwise unable to send back memories) in each iteration is p. What should an agent that finds itself with memories from N iterations estimate as its probability of dying in this iteration?
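A minimal Monte Carlo sketch of this toy model, under my own framing of it (the function name and parameters are mine): simulate many memory chains, condition on the copies that wake up with exactly N remembered loops, and count how often that iteration ends in death.

```python
import random

def estimate_death_prob(p, N, trials=100_000, seed=0):
    """Among iterations that begin with exactly N remembered loops,
    estimate the fraction that end in death this time around."""
    rng = random.Random(seed)
    deaths = 0
    observed = 0
    for _ in range(trials):
        n = 0  # memories accumulated so far in this chain
        while True:
            died = rng.random() < p
            if n == N:
                # This copy woke with N memories; record its fate.
                observed += 1
                deaths += died
                break
            if died:
                break  # chain ended before accumulating N memories
            n += 1
    return deaths / observed
```

Because each iteration's death roll is independent of history in this setup, the simulated estimate should hover near p regardless of N; the interesting anthropic question is whether conditioning on which memories get delivered (e.g., only from copies that lived) should move the estimate away from p.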
There should probably be more unsafe-time-travel questions for testing anthropic decision making, and maybe also for shaping intuition about many-worlds/multiverse views.