Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open Thread, May 5 - 11, 2014

2 Post author: Tenoke 05 May 2014 10:35AM

Previous Open Thread

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

Comments (284)

Comment author: jackk 08 May 2014 04:25:54AM 23 points [-]

As per issue #389, I've just pushed a change to meetups. All future meetup posts will be created in /r/meetups to un-clutter /r/discussion a little bit.

Comment author: Tenoke 09 May 2014 03:21:16PM *  1 point [-]

Hmm, I just noticed that the 'Nearest Meetup' feature is mostly removed (you can still see the field when you refresh before everything has loaded), so you can't see any notification anywhere for local meetups happening soon unless you are specifically checking /meetups or /r/meetups.

I understand why Luke and co. wanted this change asap (people have been complaining about the clutter), but I suspect that this change will have a big overall impact on LW meetup turnout. I'm fairly certain that a lot of non-regulars decide to go to a specific meetup because they are randomly reminded of it in the sidebar or in discussion, and not because they actively check.

Anyway, is there any chance you know why the 'Nearest meetup' area was removed (no mention of the removal in the issues)? I am not sure what the benefit is of having Upcoming Meetups over Nearest Meetups, but the latter at least provides a reminder for people of posted local meetups. Alternatively, is there anything else planned to serve as a reminder?

PS: I would've published this as a comment on the issue itself, but that didn't look very appropriate.

Comment author: philh 09 May 2014 10:33:36PM 1 point [-]

I currently see 'nearest meetups'.

I've noticed that when I'm at work (but still logged in), it shows me 'upcoming meetups' instead. My first guess, that I've made no attempt to confirm or disconfirm, is that it tries to determine your location from your IP address. If it succeeds it shows you 'nearest meetups', and if it fails it shows you 'upcoming meetups'.

I feel like there should definitely be a link to 'meetups' next to 'main' and 'discussion'. It's so easy to miss things in the sidebar.

I, too, expect this to reduce meetup turnout.

Comment author: Kaj_Sotala 06 May 2014 05:28:48AM 18 points [-]

Here's a comment that I posted in a discussion on Eliezer's FB wall a few days back but didn't receive much of a response there, maybe it'll prompt more discussion here:

--

So this reminds me, I've been thinking for a while that VNM utility might be a hopelessly flawed framework for thinking about human value, but I've had difficulties putting this intuition in words. I'm also pretty unfamiliar with the existing literature around VNM utility, so maybe there is already a standard answer to the problem that I've been thinking about. If so, I'd appreciate a pointer to it. But the theory described in the linked paper seems (based on a quick skim) like it's roughly in the same direction as my thoughts, so maybe there's something to them.

Here's my stab at trying to describe what I've been thinking: VNM utility implicitly assumes an agent with "self-contained" preferences, and which is trying to maximize the satisfaction of those preferences. By self-contained, I mean that they are not a function of the environment, though they can and do take inputs from the environment. So an agent could certainly have a preference that made him e.g. want to acquire more money if he had less than $5000, and which made him indifferent to money if he had more than that. But this preference would be conceptualized as something internal to the agent, and essentially unchanging.

That doesn't seem to be how human preferences actually work. For example, suppose that John Doe is currently indifferent between whether to study in college A or college B, so he flips a coin to choose. Unbeknownst to him, if he goes to college A he'll end up doing things together with guy A until they fall in love and get monogamously married; if he goes to college B he'll end up doing things with gal B until they fall in love and get monogamously married. It doesn't seem sensible to ask which choice better satisfies his romantic preferences as they are at the time of the coin flip. Rather, the preference for either person develops as a result of their shared life-histories, and both are equally good in terms of intrinsic preference towards someone (though of course one of them could be better or worse at helping John achieve some other set of preferences).

More generally, rather than having stable goal-oriented preferences, it feels like we acquire different goals as a result of being in different environments: these goals may persist for an extended time, or be entirely transient and vanish as soon as we've left the environment.

As another example, my preference for "what do I want to do with my life" feels like it has changed at least three times today alone: I started the morning with a fiction-writing inspiration that had carried over from the previous day, so I wished that I could spend my life being a fiction writer; then I read some e-mails on a mailing list devoted to educational games and was reminded of how neat such a career might be; and now this post made me think of how interesting and valuable all the FAI philosophy stuff is, and right now I feel like I'd want to just do that. I don't think that I have any stable preference with regard to this question: rather, I could be happy in any career path as long as there were enough influences in my environment that continued to push me towards that career.

It's as Brian Tomasik wrote at http://reducing-suffering.blogspot.fi/2010/04/salience-and-motivation.html :

There are a few basic life activities (eating, sleeping, etc.) that cannot be ignored and have to be maintained to some degree in order to function. Beyond these, however, it's remarkable how much variation is possible in what people care about and spend their time thinking about. Merely reflecting upon my own life, I can see how vastly the kinds of things I find interesting and important have changed. Some topics that used to matter so much to me are now essentially irrelevant except as whimsical amusements, while others that I had never even considered are now my top priorities.

The scary thing is just how easily and imperceptibly these sorts of shifts can happen. I've been amazed to observe how much small, seemingly trivial cues build up to have an enormous impact on the direction of one's concerns. The types of conversations I overhear, blog entries and papers and emails I read, people I interact with, and visual cues I see in my environment tend basically to determine what I think about during the day and, over the long run, what I spend my time and efforts doing. One can maintain a stated claim that "X is what I find overridingly important," but as a practical matter, it's nearly impossible to avoid the subtle influences of minor day-to-day cues that can distract from such ideals.

If this is the case, then it feels like trying to maximize preference satisfaction is an incoherent idea in the first place. If I'm put in environment A, I will have one set of goals; if I'm put in environment B, I will have another set of goals. There might not be any way of constructing a coherent utility function so that we could compare the utility that we obtain from being put in environment A versus environment B, since our goals and preferences can be completely path- and environment-dependent. Extrapolated meta-preferences don't seem to solve this either, because there seems to be no reason to assume that they'd be any more stable or self-contained.

I don't know what we could use in place of VNM utility, though. At least it feels like the alternative formalism should include the agent's environment/life history in determining its preferences.
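The path-dependence worry above can be made concrete with a toy sketch (all the career names, environment names, and numbers below are hypothetical, chosen only to illustrate the structure): the same agent, placed in different environments, ends up ranking the same two careers in opposite orders, so no single fixed utility function over careers reproduces both orderings.

```python
def acquired_utility(career, environment):
    """Preference strength the agent ends up with, as a function of
    the environment it was placed in (illustrative values only)."""
    reinforcement = {
        ("writer", "writing-community"): 0.9,
        ("researcher", "writing-community"): 0.4,
        ("writer", "research-lab"): 0.3,
        ("researcher", "research-lab"): 0.8,
    }
    return reinforcement[(career, environment)]

def preferred(environment):
    # Which career the agent ends up preferring, given its history
    return max(["writer", "researcher"],
               key=lambda c: acquired_utility(c, environment))

print(preferred("writing-community"))  # -> writer
print(preferred("research-lab"))       # -> researcher
```

A fixed VNM utility function over careers would have to rank "writer" above "researcher" or vice versa once and for all; here neither ranking is consistent with both environments.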

Comment author: Qiaochu_Yuan 07 May 2014 08:53:05AM *  8 points [-]

I also have lots of objections to using VNM utility to model human preferences. (A comment on your example: if you conceive of an agent as accruing value and making decisions over time, to meaningfully apply the VNM framework you need to think of their preferences as being over world-histories, not over world-states, and of their actions as being plans for the rest of time rather than point actions.) I might write a post about this if there's enough interest.

Comment author: jimmy 07 May 2014 02:48:35PM 6 points [-]

I've always thought of it as preferences over world-histories and I don't see any problem with that. I'd be interested in the post if it covers a problem with that formulation.

Comment author: Kaj_Sotala 07 May 2014 09:16:34AM 1 point [-]

I would be very interested in that.

Comment author: Metus 06 May 2014 10:18:46AM 8 points [-]

Robin Hanson writes about rank linear utility. This formalism asserts that we value options by their rank in a list of options available at any one time, making it impossible to construct a coherent classical utility function.
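The menu-dependence of rank-based valuation can be sketched in a few lines (the option names and quality scores are illustrative, not from Hanson's post): because an option's value is its rank among the currently available options, adding a third option changes the relative value of the first two, which is exactly what a classical utility function forbids.

```python
def rank_values(options, quality):
    """Assign value by rank: worst available option gets 1, best gets n."""
    ranked = sorted(options, key=lambda o: quality[o])
    return {o: i + 1 for i, o in enumerate(ranked)}

# Hypothetical underlying quality scores
quality = {"A": 3.0, "B": 2.0, "C": 2.5}

small_menu = rank_values(["A", "B"], quality)       # C unavailable
big_menu = rank_values(["A", "B", "C"], quality)    # C available

# The value gap between A and B depends on whether C is on the menu:
print(small_menu["A"] - small_menu["B"])  # -> 1
print(big_menu["A"] - big_menu["B"])      # -> 2
```

Under a classical utility function the difference in value between A and B would be fixed regardless of what else is available, so no such function can reproduce rank-linear behavior.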

Comment author: Kaj_Sotala 06 May 2014 03:28:29PM *  1 point [-]

Yeah, that was my first link in the comment. :-) Still good that you summarized it, though, since not everyone's going to click on the link.

Comment author: Metus 06 May 2014 06:21:13PM 0 points [-]

Oops, I frankly did not see the link. The one time I thought I could contribute ...

Comment author: Kaj_Sotala 07 May 2014 09:15:48AM 1 point [-]

Well, like I said, it was probably a good thing to post and briefly summarize anyway. If you missed the link, others probably did too.

Comment author: jimmy 07 May 2014 02:46:03PM 1 point [-]

I don't think of things like "what I want to do with my life" as terminal preferences - just instrumental preferences that depend on the niche you find yourself in. Terminal stuff is more likely to be simple/human-universal stuff (think Maslow's hierarchy of needs).

I think you'll probably find Kevin Simler's essays on personality interesting, and he does a good job explaining and exploring this idea.

http://www.meltingasphalt.com/personality-the-body-in-society/ http://www.meltingasphalt.com/personality-an-ecosystems-perspective/ http://www.meltingasphalt.com/personality-beyond-social-and-beyond-human/

Comment author: Kaj_Sotala 13 May 2014 03:17:07PM 1 point [-]

Thanks, those are good essays. :-)

Comment author: Squark 09 May 2014 06:28:38PM 0 points [-]

What I think is happening is that we're allowed to think of humans as having VNM utility functions (see also my discussion with Stuart Armstrong), but the utility function is not constant over time (since we're not introspective, recursively self-modifying AIs that can keep their utility functions stable).

Comment author: RichardKennaway 09 May 2014 06:38:14PM 11 points [-]

I recently saw an advertisement which was such a concentrated piece of antirationality I had to share it here. Imagine a poster showing a man's head and shoulders gazing inspiredly past the viewer into the distance, rendered in posterised red, white, and black with a sort of socialist realism flavour. The words: "No Odds Too Long. No Dream Too Great. The Believer."

If that was all, it would just be a piece of inspirational nonsense. But what was it advertising?

Ladbrokes. A UK chain of betting shops.

Comment author: pragmatist 16 May 2014 02:27:23PM *  3 points [-]

That is a hilariously apposite name for a chain of betting shops.

Comment author: gwern 16 May 2014 07:09:03PM 2 points [-]

Isn't it? The first time I read about the British betting industry and Betfair & Ladbrokes, I had to look the latter up on WP to verify it was randomly named after a building and wasn't a mockery of their customers.

Comment author: lukeprog 08 May 2014 01:08:03AM 10 points [-]

I can't figure out who runs the Less Wrong Twitter. Does anyone know?

Comment author: Brillyant 07 May 2014 02:21:50PM *  26 points [-]

Is Less Wrong dying?

Some observations...

  • The top level posts are generally well below the quality of early material, including the sequences, in my estimation.
  • 'Main' posts are rarely even vaguely interesting to me anymore.
  • 'Top Contributors' karma values seem very low compared to what I remember them being ~9-12 months ago.
  • 'Discussion' posts are littered with Meetup reminders.

About all I look at on LW anymore is the Open Discussion Thread, Rationality Quotes and the link to Slate Star Codex. I noticed CFAR and MIRI's websites gave me the impression they were getting more traction and perhaps making some money.

Has LW run its course?

Comment author: NancyLebovitz 07 May 2014 03:28:14PM *  10 points [-]

I think it's a little early to predict the end, but there's less I'm interested in here, and I'm having trouble thinking of things to write about, though I can still find worthwhile links for open threads.

Is LW being hit by some sort of social problem, or have we simply run out of things to say?

Comment author: blacktrance 07 May 2014 03:40:37PM *  9 points [-]

I'd add "Metacontrarianism is on the rise" to your list. Many of the top posts now are contrary to at least the spirit of the sequences, if not the letter, or so it feels to me.

Comment author: Viliam_Bur 07 May 2014 06:13:15PM 6 points [-]

Maybe it's because the important things have started, and moved to real life, outside of the LW website. There are people writing and publishing papers on Friendly AI, there are people researching and teaching rationality exercises; there are meetups in many countries. -- Although, if this is true, I would expect more reports here about what happens in real life. (Remember the fundamental rule of bureaucracy: If it ain't documented, it didn't happen.)

Anyway, this is only a guess; it would be interesting to really know what's happening...

Comment author: Kawoomba 07 May 2014 04:36:11PM *  6 points [-]

I blame Facebook. Many of the discussions happening there are of the type that used to invigorate these here boards.

Comment author: Brillyant 07 May 2014 06:42:49PM 1 point [-]

Hm. I think you have a much higher level of sophistication in your FB friend group. I get a lot of Tea Party quotes and pictures of peoples' dinner.

Comment author: Kawoomba 07 May 2014 06:52:00PM 11 points [-]

It's mostly that Eliezer has taken to disseminating his current work via open Facebook discussions. I can see how that choice makes sense, from his position, but it's still sad for the identity-paranoid and the nostalgic remnants still roaming these forgotten halls. Did I have a purpose once? It's been so long.

Comment author: NancyLebovitz 08 May 2014 12:07:37AM 11 points [-]

Also, it's much harder (impossible?) to find older discussions on FB.

Comment author: mare-of-night 08 May 2014 01:50:29PM 3 points [-]

And perhaps harder to grow, at least through the usual means - the Facebook discussions wouldn't show up on Google searches (or at least not highly ranked, I think), and it's a less convenient format to link someone to for an explanation of a concept.

Comment author: NancyLebovitz 15 May 2014 04:12:48PM 0 points [-]

It turns out that while there may be no good way to use Facebook to find old discussions on Facebook, I used google and found an old Facebook post.

Comment author: shminux 07 May 2014 05:10:56PM 6 points [-]

Has LW run its course?

It seems to be a common sentiment, actually. I mentioned this a few times on #lesswrong and the regulars there appear to agree. Whether this is some sort of confirmation bias, I am not sure. Fortunately, there is a way to measure it:

Count interesting articles from each period and compare the numbers.

Comment author: moridinamael 07 May 2014 04:28:43PM 6 points [-]

I would say LW is evolving.

The Sequences are and always were the finger that points at the objective, not the objective unto itself. The project of LW is "refining the art of human rationality." But we don't have the definition of human rationality written on stone tablets, needing only diligence in application to obtain good results. The project of LW is thus a dynamic process of discovery, experimentation, incorporating new data, sometimes backtracking when we update on evidence that isn't as solid as we had thought.

You correctly observe that the style of participation has changed over time. This is probably mostly the result of certain specific high volume contributors moving on to other things. It could also be the result of an aggregated shift in understanding as to what kinds of results can actually be produced by discussing rationality in a vacuum, which may perhaps be why these contributors have moved on. Or maybe they just said all they felt they needed to say, I don't know. I have a 101.1 F fever right now.

Comment author: Kaj_Sotala 13 May 2014 03:22:20PM 2 points [-]

I remember people saying things like "Less Wrong is dying" for a long time, from 2010 at least. Which doesn't invalidate the claim that LW's much more quiet than it used to be, of course, but it does challenge the claim that this would be a recent development.

Comment author: Brillyant 16 May 2014 06:32:39PM 0 points [-]

I personally believe it's basically dead—at least for me. The sequences are great... But I wouldn't recommend LW to anyone at this point in terms of its recent content, and that is a big change for me.

It's been a good run.

Comment author: DanielLC 05 May 2014 07:47:43PM 8 points [-]

According to the principle of enlightened self-interest, you should help other people because this will help you in the long run. I've seen it argued that this is the reason why people have an instinct to help others. I don't think that this would mean helping people the way an Effective Altruist would. It would mean giving the way people instinctually do. You give gifts to friends, give to your community, give to children's hospitals, that sort of thing.

This makes me wonder about what I'm calling enlightened altruism. If you get power from helping people in that way, then you can use the power to help people effectively.

Comment author: Manfred 05 May 2014 09:56:28PM *  2 points [-]

Well, we can use the outside view here. If we look at people who are particularly successful, did they get that way by helping others? What's the proportion relative to poor people?

I don't think this backs up the idea of enlightened self-interest very well. Sure, you have to "play by the rules" to be successful, but going above and beyond doesn't seem to lead to additional success.

Another question we might ask is "where do peoples' instincts for giving come from?" If you believe Dawkins et al., it's the selfishness of genes, which does not have to causally pay off for the organism (instead, the payoff is acausal). This is not the sort of thing where giving according to our instincts will lead to us getting more money.

Comment author: DanielLC 06 May 2014 01:30:24AM 0 points [-]

Sure, you have to "play by the rules" to be successful, but going above and beyond doesn't seem to lead to additional success.

My point is more about giving the standard amount to the standard charities, rather than earmarking it all for the most efficient one.

which does not have to causally pay off for the organism (instead, the payoff is acausal)

I'm not sure what you mean here. Can you give an example?

Comment author: Manfred 07 May 2014 01:25:08AM 0 points [-]

which does not have to causally pay off for the organism (instead, the payoff is acausal)

I'm not sure what you mean here. Can you give an example?

Suppose I have a gene that makes me cooperate in a prisoner's dilemma with my relatives. This gene benefits me, because now I can cooperate with my cousins and get the better payoff (assuming my cousins also have this gene!). But you know what would be even better? If my cousins cooperated with me but I defected. So from a causal decision theory standpoint, my best route is to ignore my instincts and defect.

But if I had a gene that said "defect with my cousins," that would mean my cousins defect back, and so we all lose. So our instincts can be beneficial even when the individual best strategy doesn't line up with them (because our instincts can be correlated with other humans').
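The gene example can be worked through with standard prisoner's dilemma payoffs (the specific numbers below are the usual textbook ones, chosen here for illustration): holding the cousin's move fixed, defection looks best, but since the shared gene correlates the cousins' strategies, the cooperate-gene population ends up better off than the defect-gene population.

```python
# (my move, cousin's move) -> my payoff; C = cooperate, D = defect
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Causal reasoning, treating the cousin's move as fixed at "C":
print(PAYOFF[("D", "C")])  # -> 5, so defection looks strictly better

# But the gene is shared: whatever strategy the gene encodes, the
# cousins play it too, so only the diagonal outcomes are reachable.
print(PAYOFF[("C", "C")])  # -> 3, a cooperate-gene lineage
print(PAYOFF[("D", "D")])  # -> 1, a defect-gene lineage
```

The benefit to the cooperate-gene (3 vs 1) never flows through any causal channel from my move to my cousin's move; it comes entirely from the correlation, which is what "the payoff is acausal" means here.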

Comment author: Lumifer 07 May 2014 02:04:22AM 1 point [-]

But you know what would be even better? If my cousins cooperated with me but I defected. So from a causal decision theory standpoint, my best route is to ignore my instincts and defect.

This reasoning assumes that you are special and significantly different from your cousins. If you're not, your cousins follow the same strategy and you all defect, gene or no gene.

Comment author: DanielLC 07 May 2014 04:46:50PM 2 points [-]

That's what acausal benefit means.

Comment author: NancyLebovitz 12 May 2014 01:46:58AM *  7 points [-]

Five biotypes of depression

The five defined depression biotypes are:

“It’s not serotonin deficiency, but an inability to keep serotonin in the synapse long enough. Most of these patients report excellent response to SSRI antidepressants, although they may experience nasty side effects,” Walsh said.

Pyrrole Depression: This type was found in 17 percent of the patients studied, and most of these patients also said that SSRI antidepressants helped them. These patients exhibited a combination of impaired serotonin production and extreme oxidative stress.

Copper Overload: Accounting for 15 percent of cases in the study, these patients cannot properly metabolize metals. Most of these people say that SSRIs do not have much of an effect—positive or negative—on them, but they report benefits from normalizing their copper levels through nutrient therapy. Most of these patients are women who are also estrogen intolerant.

“For them, it’s not a serotonin issue, but extreme blood and brain levels of copper that result in dopamine deficiency and norepinephrine overload,” Walsh explained. “This may be the primary cause of postpartum depression.”

Low-Folate Depression: These patients account for 20 percent of the cases studied, and many of them say that SSRIs worsened their symptoms, while folic acid and vitamin B12 supplements helped. Benzodiazepine medications may also help people with low-folate depression.

Walsh said that a study of 50 school shootings over the past five decades showed that most shooters probably had this type of depression, as SSRIs can cause suicidal or homicidal ideation in these patients.

Toxic Depression: This type of depression is caused by toxic-metal overload—usually lead poisoning. Over the years, this type accounted for 5 percent of depressed patients, but removing lead from gasoline and paint has lowered the frequency of these cases.

Those people ranting about anti-depressants and school shootings may have been partially on to something.

Comment author: NancyLebovitz 13 May 2014 03:26:24PM 1 point [-]

Unfortunately, the source for this information about biotypes and depression is looking very sketchy

Comment author: TylerJay 16 May 2014 11:57:05PM 0 points [-]

That's unfortunate. Knowledge like this would be incredibly useful for treatment. Rather than just throwing drugs at a problem and seeing what sticks, doctors and psychiatrists could actually try treating each type in turn, or even better, test for markers of each condition.

Comment author: NancyLebovitz 17 May 2014 12:06:55AM *  1 point [-]

I'm hoping the information will pan out. Sketchy source doesn't = guaranteed false.

Comment author: iarwain1 05 May 2014 03:54:40PM 7 points [-]

Recently I've been trying to catch up in math, with a goal of trying to get to calculus as soon as possible. (I want to study Data Science, and calculus / linear algebra seems to be necessary for that kind of study.) I found someone on LW who agreed to provide me with some deadlines, minor incentives, and help if I need it (similar to this proposal), although I'm not sure how well such a setup will end up working.

Originally the plan was that I'd study the Art of Problem Solving Intermediate Algebra book, but I found that many of the concepts were a little advanced for me, so I switched to the middle of the Introduction to Algebra book instead.

The Art of Problem Solving books deliberately make you think a lot, and a lot of the problems are quite difficult. That's great, but I've found that after 2-3 hours of heavy thinking my brain often feels completely shot and that ruins my studying for the rest of the day. It also doesn't help that my available study time usually runs from about 10am-2pm, but I often only start to really wake up around noon. (Yes, I get enough sleep usually. I also use a light box. But I still often only wake up around noon.)

One solution I've been thinking of would be to take the studying slower: I'd study math only from 12-2, and before that I'd study something else, like programming. The only problem with that is that cutting my study time in half means it'll take twice as long to get through the material. At that rate I estimate it'll take approximately a year, perhaps a bit more, before I can even start Calculus. Maybe that's what's needed, but I was hoping to get on with studying data science sooner than that.

Another possible solution would be to try an easier course of study than the AoPS books. I've had some good experiences with MOOCs, so perhaps that might be a good route to take. To that end I've tentatively signed up to this math refresher course, although I don't really know anything about it. Or perhaps I could just CliffNotes my way through Algebra II and Precalculus, and then take a Calculus MOOC. I wouldn't get the material nearly as well, of course, but at least I'd be able to get to Calculus and move on with my data science studies from there. I could even do one of these alternatives while also doing the AoPS books at a slower pace. That way I could get to data science studying as soon as possible, and I'd also eventually get a more thorough familiarity with the material through the AoPS books.

What would you suggest?

Comment author: passive_fist 05 May 2014 10:40:01PM 5 points [-]

Be very very careful of studying beyond the level you think is comfortable. My experience has been that you cannot push yourself to learn difficult things, especially math, faster than a certain pace. Sure, your limit may be 20% higher than what you think it is, but it's not 200% higher. Spending more time on a task when you just don't feel up to it is useless, because instead of thinking you'll just be spending more time staring at the page and having your mind drift off.

I've found that the various methods of 'productivity boosting' (pomodoros, etc) are largely useless and do one of two things: Either decrease your productivity, or momentarily increase it at the expense of a huge decrease later on (anything from 'feeling fuzzy for a couple of days' to 'total burnout for 3 weeks'). Unless you have a mental illness, your brain is already a finely-tuned machine for learning and doing. Don't fool yourself into thinking you can improve it just by some clever schedule rearrangement.

The point to all of this is that you should refrain from 'planning ahead' when it comes to learning. Sure, you should have some general overall sketch of what you want to learn, but at each particular moment in time, the best strategy is to simply pick some topic and try to learn it as best you can, until you get tired. Then rest until you feel you can go at it again. And avoid internet distractions that use up your mental energy but don't cause you to learn anything.

Comment author: raisin 06 May 2014 05:00:04PM *  2 points [-]

your brain is already a finely-tuned machine for learning and doing.

Does this by extension imply that the type of instrumental rationality training advocated by LW is useless? Why, why not?

Comment author: lmm 06 May 2014 08:51:31PM 4 points [-]

Largely, but not entirely. There are cases where evolution optimises for something different from what you want. And there are cases where the environment has changed faster than evolution can track.

Comment author: Risto_Saarelma 06 May 2014 05:46:33PM 4 points [-]

The general rule of thumb for raw intelligence probably applies, you can damage it with unwise actions (like eating lead paint or taking up boxing), but there aren't really any good ways to boost it beyond its natural unimpeded baseline. Good instrumental rationality can help you look out for and avoid self-sabotaging behavior, like overworking your way into burnout.

Comment author: passive_fist 06 May 2014 10:11:21PM 2 points [-]

Decreasing work-load when you feel tired - the thing you naturally want to do - is also a reliable way to avoid burnout.

Comment author: passive_fist 06 May 2014 10:09:05PM *  1 point [-]

If some particular method of learning can be shown, through evidence, to be an improvement long-term, then by all means go for it. But until then, your prior belief has to be that it isn't.

Comment author: zedzed 05 May 2014 04:24:02PM *  3 points [-]

One of my professors once mentioned that there's an upper limit to how much learning you can do in a sleep cycle [citation needed]. This is congruent with my experience, both before and after he mentioned that, so I tend to believe it. Personally, I tend to max out around 3-4 hours, so the times you're talking about seem reasonable. If you can restructure your work times, napping is a good strategy; I've talked to a few people who report getting through grad school by napping once they'd saturated their brain's capacity to learn new stuff.

Interleaved practice is good. This study had subjects practice finding the volume of unconventional geometric solids. One group clustered their practice; they found the volumes of a bunch of wedges, then a bunch of spheroids, etc. The other group had their practice problems mixed. On a final test, the former group got 20% right, and the latter group got 63% right. citation.

What this suggests is you should perhaps study programming and algebra at the same time, switching between the two fairly frequently. It feels like you're going slower, but, as the authors of the book emphasize, you're trading the illusion of learning for more durable learning.

The AoPS textbooks are really, really good. In fact, I'm pretty sure they're the only good algebra textbooks you're going to find, unless you count abstract or linear algebra; most textbooks at that level are mediocre. As luke_prog has mentioned, good textbooks are usually the quickest and best way to learn new material. Quality learning takes time, and you're doing yourself no favors by spending that time looking for faster alternatives.

Comment author: Omid 05 May 2014 12:08:27PM *  7 points [-]

The United States green card lottery is one of the best lotteries in the world. The payoff is huge (green cards would probably sell for six figures if they were on the market), the cost of entry is minimal ($0 and 30 minutes) and the odds of winning are low, but not astronomically low. If you meet the eligibility criterion and are even a little interested in moving to America, you should enter the lottery this October.
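A back-of-the-envelope expected value makes the claim above concrete. The win probability and the dollar figures below are assumptions for illustration (the comment only says "six figures" and "low, but not astronomically low"), not official DV lottery statistics:

```python
win_probability = 0.005      # assumed: very roughly 1 in 200
green_card_value = 100_000   # assumed: low end of "six figures"
time_cost_hours = 0.5        # from the comment: 30 minutes
value_of_time_per_hour = 50  # assumed opportunity cost of time

expected_gain = win_probability * green_card_value
entry_cost = 0 + time_cost_hours * value_of_time_per_hour  # $0 fee + time

print(expected_gain)  # -> 500.0
print(entry_cost)     # -> 25.0
```

Under these assumptions the expected gain exceeds the cost of entry by a factor of twenty, and the conclusion is not very sensitive to the exact numbers: even at a 1-in-2000 chance the entry would still be positive expected value for anyone who actually wants to move.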

Comment author: Kaj_Sotala 05 May 2014 02:55:08PM 9 points [-]
Comment author: Vulture 06 May 2014 04:53:40PM 0 points [-]

Since this cost and the payoff of the original lottery are in like units, could someone compute whether it's still worth it to enter?

Comment author: Douglas_Knight 06 May 2014 07:06:09PM 0 points [-]

The cost is a completely qualitative claim, so, no, no one can do this computation.

Comment author: Vulture 06 May 2014 07:08:58PM 0 points [-]

Oh, whoops, misread as "immigrant visa" rather than "nonimmigrant visa". Disregard.

Comment author: Douglas_Knight 06 May 2014 08:11:45PM 0 points [-]

Well, it's true that they aren't quite the same units, but I was ignoring that. The cost is that the State Department pays attention and applies a penalty to the highly nontransparent visa process. These are qualitative claims. In principle they could be measured by outside observers. In fact, my best measurement is zero: they neither pay attention to nor penalize nonimmigrant visas.

Comment author: niceguyanon 05 May 2014 04:08:03PM 7 points [-]

The payoff is huge ...,the cost of entry is minimal

This reminds me of another pretty decent lottery that some U.S. residents can take advantage of. Many major cities, including NYC, have affordable housing programs in brand new buildings. The cost to apply is $0, and the payoff is paying 20%-25% of the market rate for housing in that area. No, it's not for poor people, there are other programs for that; the income requirements vary, but in general they're set to qualify the city's working residents (maybe 50k - 95k).

Some of the most desirable and stunning locations in the city, where rents are 4k for 600 sq/f, can go for $700. Just Google the city you live in to see the specific requirements.

Comment author: ChristianKl 05 May 2014 12:36:33PM 0 points [-]

I don't think you can resell green cards so their open market price should be irrelevant.

Comment author: knb 06 May 2014 02:01:37AM 2 points [-]

I think that was just a way of saying how coveted the prize of this lottery is.

Comment author: lukeprog 11 May 2014 01:48:09AM *  19 points [-]

Below is an edited version of an email I prepared for someone about what CS researchers can do to improve our AGI outcomes in expectation. It was substantive enough I figured I might as well paste it somewhere online, too.

I'm currently building a list of what will eventually be short proposals for several hundred PhD theses / long papers that I think would help clarify our situation with respect to getting good outcomes from AGI, if I could persuade good researchers to research and write them. A couple dozen of these are in computer science broadly: the others are in economics, history, etc. I'll write out a few of the proposals as 3-5 page project summaries, and the rest I'll just leave as two-sentence descriptions until somebody promising contacts me and tells me they want to do it and want more detail. I think of these as "superintelligence strategy" research projects, similar to the kind of work FHI typically does on AGI. Most of these projects wouldn't only be interesting to people interested in superintelligence, e.g. a study building on these results on technological forecasting would be interesting to lots of people, not just those who want to use the results to gain a bit of insight into superintelligence.

Then there's also the question of "How do we design a high assurance AGI which would pass a rigorous certification process ala the one used for autopilot software and other safety-critical software systems?"

There, too, MIRI has lots of ideas for plausibly useful work that could be done today, but of course it's hard to predict this far in advance which particular lines of research will pay off. But then, this is almost always the case for long-time-horizon theoretical research, and e.g. applying HoTT to program verification sure seems more likely to help our chances of positive AGI outcomes than, say, research on genetic algorithms for machine vision.

I'll be fairly inclusive in listing these open problems. Many of the problems below aren't necessarily typical CS work, but they could plausibly be published in some normal CS venues, e.g. surveys of CS people are sometimes published in CS journals or conferences, even if they aren't really "CS research" in the usual sense.

First up are 'superintelligence strategy' aka 'clarify our situation w.r.t. getting good AGI outcomes eventually' projects:

  • More and larger expert surveys on AGI timelines, takeoff speed, and likely social impacts, besides the one reported in the first chapter of Superintelligence (which isn't yet published).

  • Delphi study of those questions including AI/ML people, AGI people, and AI safety+security people.

  • How big is the field of AI currently? How many quality-adjusted researcher years, funding, and available computing resources per year? How many during each previous decade of AI? More here.

  • What is the current state of AI safety engineering? What can and can't we do? Summary and comparison of approaches in formal verification in AI, hybrid systems control, etc. Right now there are a bunch of different communities doing AI safety and they barely talk to each other, so it's hard for any one person to figure out what's going on in general. Also would be nice to know which techniques are being used where, especially in proprietary and military systems for which there aren't any papers.

  • Surveys of AI subfield experts on “What percentage of the way to human-level performance in your subfield have we come in the last n years”? More here.
  • Improved analysis of concept of general intelligence beyond “efficient cross-domain optimization.” Maybe just more specific: canonical environments, etc. Also see work on formal measures of general intelligence by Legg, by Hernandez-Orallo, etc.
  • Continue Katja’s project on past algorithmic improvement. Filter not for ease of data collection but for real-world importance of the algorithm. Interesting to computer scientists in general, but also potentially relevant to arguments about AI takeoff dynamics.
  • What software projects does the government tend to monitor? Do they ever “take over” (nationalize) software projects? What kinds of software projects do they invade and destroy?
  • Are there examples of narrow AI “takeoff”? Eurisko is maybe the closest thing I can think of, but the details aren't clear because Lenat's descriptions were ambiguous and we don't have the source code.

  • Cryptographic boxes for untrusted AI programs.

  • Some AI approaches are more and less transparent to human understanding/inspection. How well does each AI approach's transparency to human inspection scale? More here.
  • Can computational complexity theory place any bounds on AI takeoff? Daniel Dewey is looking into this; it currently doesn't look promising but maybe somebody else would find something a bit informative.
  • To get an AGI to respect the values of multiple humans & groups, we may need significant progress in computational social choice, e.g. fair division theory and voting theory. More here.

Next, high assurance AGI projects that might be publishable in some CS conferences/journals. One way to categorize this stuff is into "bottom-up research" and "top-down research."

Bottom-up research aimed at high assurance AGI simply builds on current AI safety/security approaches, pushing them along to be more powerful, more broadly applicable, more computationally tractable, easier to use, etc. This work isn't necessarily focused on AGI specifically but is plausibly pushing in a more safe-AGI-helpful direction than most AI research is. Examples:

To be continued...

Comment author: lukeprog 11 May 2014 01:48:35AM *  12 points [-]

Continued...

Top-down research aimed at high assurance AGI tries to envision what we'll need a high assurance AGI to do, and starts playing with toy models to see if they can help us build up insights into the general problem, even if we don't know what an actual AGI implementation will look like. Past examples of top-down research of this sort in computer science more generally include:

  • Lampson's original paper on the confinement problem (covert channels), which used abstract models to describe a problem that wasn't detected in the wild for ~2 decades after he wrote the paper. Nevertheless this gave computer security researchers a head start on the problem, and the covert channel communication field is now pretty big and active. Details here.
  • Shor's quantum algorithm for integer factorization (1994) showed, several decades before we're likely to get a large-scale quantum computer, that (e.g.) the NSA could be capturing and storing strongly encrypted communications and could later break them with a QC. So if you want to guarantee your current communications will remain private in the future, you'll want to work on post-quantum cryptography and use it.
  • Hutter's AIXI is the first fully-specified model of "universal" intelligence. It's incomputable, but there are computable variants, and indeed tractable variants that can play arcade games successfully. The nice thing about AIXI is that you can use it to concretely illustrate certain AGI safety problems we don't yet know how to solve even with infinite computing power, which means we must be very confused indeed. Not all AGI safety problems will be solved by first finding an incomputable solution, but that is one common way to make progress. I say more about this in a forthcoming paper with Bill Hibbard to be published in CACM.

But now, here are some top-down research problems MIRI thinks might pay off later for AGI safety outcomes, some of which are within or on the borders of computer science:

  • Naturalized induction: "Build an algorithm for producing accurate generalizations and predictions from data sets, that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about." (Agents built with the agent-environment framework are effectively Cartesian dualists, which has safety implications.)
  • Better AI cooperation: How can we get powerful agents to cooperate with each other where feasible? One line of research on this is called "program equilibrium": in a setup where agents can read each other's source code, they can recognize each other for cooperation more often than would be the case in a standard Prisoner's Dilemma. However, these approaches were brittle, and agents couldn't recognize each other for cooperation if e.g. a variable name was different between them. We got around that problem via provability logic.
  • Tiling agents: Like Bolander and others, we study self-reflection in computational agents, though for us it's because we're thinking ahead to the point when we've got AGIs who want to improve their own abilities and we want to make sure they retain their original purposes as they rewrite their own code. We've built some toy models for this, and they run into nicely crisp Gödelian difficulties and then we throw a bunch of math at those difficulties and in some cases they kind of go away, and we hope this'll lead to insight into the general challenge of self-reflective agents that don't change their goals on self-modification round #412. See also the procrastination paradox and Fallenstein's monster.
  • Ontological crises in AI value systems.
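The brittleness mentioned under "better AI cooperation" is easy to see in a toy model (this is just an illustration, not our actual formalism): a bot that cooperates only with exact copies of its own source code defects against a semantically identical agent that merely renames a variable.

```python
# Source of a naive "cooperate with my clones" bot, held as a string.
CLIQUE_BOT_SOURCE = 'def bot(opp): return "C" if opp == SELF else "D"'

def clique_bot(opponent_source):
    """Cooperate iff the opponent's source matches ours byte-for-byte."""
    return "C" if opponent_source == CLIQUE_BOT_SOURCE else "D"

# A semantically identical program with one variable renamed:
renamed = CLIQUE_BOT_SOURCE.replace("opp", "other")

print(clique_bot(CLIQUE_BOT_SOURCE))  # C -- recognizes an exact copy
print(clique_bot(renamed))            # D -- fails to recognize an equivalent agent
```

The provability-logic approach mentioned above gets around this by reasoning about what the opponent's code provably does rather than about what it textually is.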

These are just a few examples: there are lots more. We aren't happy yet with our descriptions of any of these problems, and we're working with various people to explain ourselves better, and make it easier for people to understand what we're talking about and why we're working on these problems and not others. But nevertheless some people seem to grok what we're doing, e.g. I pointed Nik Weaver to the tiling agents paper and despite not having past familiarity with MIRI he just ran with it.

Comment author: gjm 09 May 2014 10:31:41AM 6 points [-]

Elsewhere in comments here it's suggested that one reason why LW (allegedly) has less interesting posts and discussions than it used to is that "Eliezer has taken to disseminating his current work via open Facebook discussions". I am curious about how the rest of the LW community feels about this.

Poll! The fact that Eliezer now tends to talk about his current work on Facebook rather than LW is ...

(For the avoidance of doubt, I am not suggesting that Eliezer has any obligation to do what anyone votes for here. Among many reasons there's this: If he's posting things on FB rather than LW because there are lots of people who want to read his stuff but for whatever reason will never read anything on LW then this poll can't possibly detect that other than weakly and indirectly.)


Comment author: wedrifid 09 May 2014 05:42:06PM 7 points [-]

The main problem is that facebook encourages drastically different quality of thought and expression than lesswrong does. The quality of thought in Eliezer's comments on facebook is sloppy. I chose to unfollow him on facebook because seeing Eliezer at his worst makes it rather a lot more difficult to appreciate Eliezer at his best (contempt is the mind killer). I assumed that any particularly interesting work he did (that is safe to share with the public) would end up finding its way into a less transient medium than facebook eventually...

...Have I been missing anything exciting?

Comment author: Viliam_Bur 12 May 2014 08:23:41AM 4 points [-]

facebook encourages drastically different quality of thought and expressions

Not sure if this applies to Eliezer's debate threads, but not having downvotes is a horrible setup for a debate. Every stupid comment is either ignored, which seems like "silence is consent", or starts a flamewar. There is simply no way to reduce noise.

Comment author: wedrifid 12 May 2014 04:03:04PM 1 point [-]

Not sure if this applies to Eliezer's debate threads, but not having downvotes is a horrible setup for a debate. Every stupid comment is either ignored, which seems like "silence is consent", or starts a flamewar. There is simply no way to reduce noise.

There is no way to reduce noise for everyone else. For myself I've adopted a strategy of using the 'block user' feature whenever I encounter a comment that I especially wish to downvote. These days I consider 'block' to be a far more critical feature than downvoting is (despite remaining a big fan of downvoting liberally).

Comment author: witzvo 08 May 2014 05:11:20AM *  6 points [-]

Links: Young blood reverses age-related impairments in cognitive function and synaptic plasticity in mice (press release)(paper)

I think the radial arm water maze experiment's results are particularly interesting; it measures learning and memory (see fig 2c which is visible even with the paywall). There's a day one and day two of training and the old mice (18 months) improve somewhat during the first day and then more or less start over on the second day in terms of the errors they are making. This is also true if the old mice are treated with 8 injections of old blood over the course of 3 weeks (the new curves lie pretty much on top of the old curves in supplemental figure 7d). Young mice (3 months) perform better than the old mice (supplemental figure 5d) they learn faster on the first day and retain it when the second day starts (supp 7d).

However, if you give 8 injections of 100 micro liters of blood from 3 month old mice to 18 month old mice, the treated mice perform dramatically better than the old-blood treated old mice (2c) and much more like young mice (this comparison is less certain; I'm comparing one line from 2c to one line from supp. 7d, but that's how it looks by eye).

One factor in the new blood that plays a role is GDF11. From another paper: "we show that GDF11 alone can improve the cerebral vasculature and enhance neurogenesis"

The New York Times gives an overview and other known effects of young blood such as rejuvenating the musculature / heart / vasculature of old mice with young blood. Young Blood May Hold Key to Reversing Aging, e.g. Restoring Systemic GDF11 Levels Reverses Age-Related Dysfunction in Mouse Skeletal Muscle

Comment author: Raythen 06 May 2014 03:38:29PM *  6 points [-]

I wonder what you think of the question of the origin of consciousness, i.e. "Why do we have internal experiences at all?" and "How can any physical process result in an internal/subjective experience?"

I've read some material on the subject before, and reading the quantum physics and identity sequence got me thinking about this again.

Comment author: E_Ransom 06 May 2014 04:42:09PM 5 points [-]

Douglas Hofstadter is the go-to, mainstream, "hey I recognize that name" authority, though it obviously should be noted that he is a cognitive scientist, not a biologist, neurologist, or neurobiologist. So, you couldn't build a brain from reading Gödel, Escher, Bach. The only other material I intimately know that discusses the origin of consciousness is Carl Sagan's The Dragons of Eden, which, again, is mainstream and pop science. It's fun reading and enjoyable, but you can't build a brain from it. Someone else can probably suggest better sources for more study.

Of course, some components of these questions can be answered by reducing the question to find out more about what you're looking for.
What's the make up of an internal experience? What are its moving parts? How do you build it?
How are subjective experiences not physical processes? If they aren't physical, what are they?
Taboo "internal/subjective experiences." What are you left with to solve? What mechanics remain to be understood?

Since you've read through the quantum physics sequence, I'm sure you've been exposed to these ideas already. I'm not a neuroscientist or a cognitive scientist. I know very little about the brain that wasn't used for blunt symbolism in Neon Genesis or Xenogears. But I'd guess that, whatever mechanism(s) allows for consciousness, it's built using the matter available. No tricks or sleight of hand.

Comment author: Raythen 06 May 2014 08:38:40PM 1 point [-]

Thank you - this is helpful.

Comment author: Alejandro1 07 May 2014 02:36:32AM 4 points [-]

My suggestion would be to start with Dennett's Consciousness Explained. It tackles exactly the questions you are interested in, and it is much more entertaining than the average philosophy/neurology book on the topic.

Comment author: hegemonicon 06 May 2014 01:37:02PM 6 points [-]

Is anyone familiar with any effective-altruist work on pushing humanity towards becoming a spacefaring species? Seems relevant given the likely difference between a civilization that develops it vs. one that doesn't.

Comment author: ChristianKl 07 May 2014 03:46:05PM 5 points [-]

I think it might even have negative return. If you do PR in that regard you are going to encourage misallocation of NASA funds. NASA should spend more resources on tracking near-earth objects and less on PR moves like trying to put a man on Mars. Understanding the climate of our own planet better is also a useful target for NASA spending.

Building human civilisation in Alaska is much easier than doing it on Mars. We don't even get things right in Africa, where there is fertile ground on which plants grow.

Colonizing Mars will need much better biotech and smarter robots than we have at the moment.

Comment author: Izeinwinter 07 May 2014 06:46:44AM *  2 points [-]

.. the obvious E-A answer to this question is "Don't do any pushing". - Increased space presence is a nigh-certain consequence of a more generally prospering and peaceful world, and diverting resources towards pushing this above trend is going to have just awful returns in utility per dollar. Space will happen of its own accord as people find useful things to do there (I figure telescopes will be the main thing, tbh.) but beyond that? People are already mapping the asteroid trajectories, which is the only issue really directly relevant to E-A work. If the world dies, and a remnant lives on in tincans in space, that is.. not actually very helpful.

Comment author: Kaj_Sotala 07 May 2014 09:24:47AM 4 points [-]

If the world dies, and a remnant lives on in tincans in space, that is.. not actually very helpful.

But arguably still vastly better than everybody dying, particularly if that tincan civilization can eventually rebuild and recolonize habitable planets.

Comment author: pan 06 May 2014 11:38:02AM 6 points [-]

I've been wondering a lot about whether or not I'm acting rationally with regards to the fact that I will never again be as young as I am now.

So I've been trying to make a list of things I can only do while I'm young, so that I do not regret missing the opportunity later (or at least rationally decided to skip it). I'm 27 so I've already missed a lot of the cliche advice aimed at high school students about to enter college, and I'm already happily engaged so that cuts out some other things.

Any thoughts on opportunities only available at a certain age?

Comment author: E_Ransom 06 May 2014 12:45:40PM *  5 points [-]

One point, just a nitpick: I would suggest not to aim to act "rationally." Aim to win. I may be assuming overmuch about your intended meaning, but remember, if your goal is to do what is rational rather than to do what is best/right/winning, you'll be confused.

That said, I understand what you mean. There are activities I know can be done now, in youth, that, while maybe not impossible in my 40s, 50s, or 60s, would be more difficult.

First, your health. Work out, eat right, stay clean. Do everything that can maximize your health NOW and do it to the utmost that you can. If you start working on your health now, the long term payoffs will be exponential rather than linear. The longer you wait to maximize your health, the greater your disadvantage, the less your payoff. EDIT: (As I have no citation to back this claim up, it'd be best not to take my word on this. I would still suggest not delaying improving your health because doing so will result in benefits now, regardless of whether health improvements are exponential or linear with age.)

Second, try everything. We have a whole article on this that spells it out better than I can. And I'll be the first to admit I haven't dove into its methods full force so I can't vouch for them. But, basically, expose yourself to the world. Not in any mean or gross sense, but as a human being, gathering experience. Go to art classes, go to yoga classes, go to MIRI classes, take karate, learn to dance, learn to sing, play an instrument, learn maths, learn history, go to LW meetups.

Of course, you will be limited, and should be limited, by circumstances. You aren't a brain with infinite capacity yet, so you can't literally do everything. So, focus on a few things at a time. Set a schedule to try out new activities while continuing old, beneficial ones. For example, you might have three days for working out, two days for programming learning (as a hobby), one for online studying, one for social networking. Replace with whatever activities most interest or most benefit you (and don't be afraid of overlap if you want to double up). I live in a place with very little stimulus, so I double up on audiobooks and exercise and use recreational times (gaming or working out) to listen to audiobooks or expose myself to new music. The point is to jump in with both feet and do whatever you do well.

Ultimately, your youth gives you two real things: health (presumably) and energy. Now, I have seen 60 year old men in better shape and with more pep than me (marathon runners!), but for the average, your health and energy will come easier to you now than later. Use it.

Comment author: Lumifer 06 May 2014 04:30:58PM 3 points [-]

If you start working on your health now, the long term payoffs will be exponential rather than linear.

[Citation needed]. That doesn't look true to me.

Comment author: E_Ransom 06 May 2014 04:44:07PM *  1 point [-]

Hmm, fair enough. I made an assumption given my understanding of the body and the effects of age. Since I'm at work and cannot provide a validation for my claim, I'll strike it for the time being. Thank you.

Comment author: pewpewlasergun 07 May 2014 05:28:43AM 4 points [-]

So I often find that interesting people live near me. Anyone have tips on asking random people to meet up? Ask them for coffee? I suppose a short email is better than a long one, which may come off as creepy? Anyone have friends they met via random emails?

Comment author: Alicorn 07 May 2014 06:29:38AM 3 points [-]

I have a lot of friends who I met through fan mail - people contacting me to tell me they like something about my online footprint. My recommendation is to establish online correspondence for a while, then when they don't send "leave me alone" signals like terse or perfunctory responses you can ask to hang out.

Comment author: pewpewlasergun 07 May 2014 06:43:26AM 0 points [-]

Thanks.

After a bit of random googling it seems there are a lot of results about 'saying no to people who want to get coffee/pick your brain' so it seems like reasonably successful people with an internet presence get a lot of these requests.

Comment author: Alicorn 07 May 2014 07:13:02AM 0 points [-]

I imagine different successful people with internet presences have different intersections of request quantity and request tolerance. I don't get people paying attention to me and wanting to hang out with me as often as I'd like yet so that's probably biasing my recommendations.

Comment author: Anders_H 05 May 2014 04:05:06PM 9 points [-]

There is a lot of interest in prediction markets in the Less Wrong community. However, the prediction markets that we have are currently only available in meatspace, they have very low volume, and the rules are not ideal (You cannot leave positions by selling your shares, and only the column with the final outcome contributes to your score)

I was wondering if there would be interest in a prediction market linked to Less Wrong accounts? The idea is that we use essentially the same structure as Intrade / Ipredict. We use play money - this can either be Karma or a new "currency" where everyone is assigned the same starting value. If we use a currency other than Karma, your balance would be publicly linked to your account, as an indicator of your predictive skills.

Perhaps participants would have to reach a specified level of Karma before they are allowed to participate, to avoid users setting up puppet accounts to transfer points to their actual accounts.

I think such a prediction market would act as a tax on bullshit, it would help aggregate information, it would help us identify the best predictors in the community, and it would be a lot of fun.
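To make the mechanics concrete: one standard way to run such a play-money market is an automated market maker like Hanson's logarithmic market scoring rule, rather than an Intrade-style order book (the liquidity parameter `b` below is arbitrary):

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function: the market maker's cumulative play-money outlay."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    """Current implied probability of YES, given outstanding shares."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares, b=100.0):
    """Play-money cost of buying `shares` YES shares (each pays 1 on YES)."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

print(round(price_yes(0, 0), 2))    # 0.5 -- a fresh market starts at even odds
print(round(buy_yes(0, 0, 50), 2))  # cost of a 50-share YES buy
print(round(price_yes(50, 0), 2))   # 0.62 -- price rises to reflect the trade
```

Unlike the meatspace markets described above, an automated market maker always quotes a price, so participants can exit positions at any time by selling back.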

Comment author: gwern 06 May 2014 02:56:41AM 6 points [-]

Why would LWers use such a prediction market more than PredictionBook?

Comment author: Anders_H 06 May 2014 04:25:08AM 2 points [-]

Good point . I actually didn't know about PredictionBook. Now that it has been pointed out to me, I see that there is already a decent option, so my suggestion would be less valuable. However, I still think it would be useful to have a prediction market that operates with Intrade rules. Whether that is worth writing the code is another matter..

Comment author: Jayson_Virissimo 06 May 2014 03:20:38AM 2 points [-]

Because karma?

Comment author: gwern 06 May 2014 03:33:13AM 11 points [-]

I don't think karma matters as much as people think it does, but if that's the only reason, LW could be programmed to look on PB.com for a matching username and increase karma based on the scores or something, much more easily than an entire prediction market could be written.

Comment author: Eugine_Nier 07 May 2014 03:52:18AM 1 point [-]

That has the problem that people can inflate their scores by repeatedly predicting that the sun will rise tomorrow.
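A quick illustration with a mean Brier score (lower is better; the numbers are made up): a forecaster who only predicts near-certainties scores far "better" than a well-calibrated forecaster tackling hard questions.

```python
def brier(forecasts):
    """Mean Brier score over (probability, outcome) pairs; lower is better."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# 100 trivial predictions: "the sun will rise tomorrow" at 99%, always right.
trivial = [(0.99, 1)] * 100
# 10 hard questions forecast at a well-calibrated 70%, right 7 times out of 10.
hard = [(0.70, 1)] * 7 + [(0.70, 0)] * 3

print(round(brier(trivial), 6))  # 0.0001 -- near-perfect score, zero skill
print(round(brier(hard), 6))     # 0.21   -- worse score, real skill
```

So any karma formula based on a raw mean score rewards padding with easy predictions, unless question difficulty is somehow accounted for.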

Comment author: gwern 07 May 2014 02:42:13PM *  2 points [-]

Karma is even more easily - and invisibly - gameable.

Comment author: Lumifer 05 May 2014 04:08:51PM *  -3 points [-]

However, the prediction markets that we have are currently only available in meatspace, they have very low volume, and the rules are not ideal

The global financial markets are basically prediction markets.

If you have a prediction (a "view") on something important, you can often express that view in financial markets.

tax on bullshit

Not with "play money", it won't.

Comment author: 9eB1 05 May 2014 05:50:26PM 4 points [-]

If you have a prediction (a "view") on something important, you can often express that view in financial markets.

This seems false to me. Suppose there is a probability of Russia using a nuclear weapon on Crimea. You have a view on this probability, and other market participants also have a view on this probability, but you don't know what their views are. In order to determine which way you need to invest in Ukrainian and Russian stocks/currencies/etc. to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices. Since you don't have time to become an expert on every aspect of economics remotely tied to every view you have, you generally will not be able to express your views in the financial markets, unless your views all happen to revolve around the broad movements of asset prices.

The number of assets we have to invest in is really quite limited (realistically, you can invest cheaply only in equities, bonds, currencies, commodity futures, rate futures, and other things tied to these), and in many cases there are "important events" which can have ambiguous impacts on those assets. The US presidential election, which seems to be a favorite of prediction market enthusiasts, has only an ambiguous impact on US equity market prices, and yet many people consider it an important topic. It occurs to me that, in particular, events related to and actions by governments will often be only ambiguously reflected in private markets.

Comment author: gwern 06 May 2014 02:54:39AM 2 points [-]

Suppose there is a probability of Russia using a nuclear weapon on Crimea. You have a view on this probability, and other market participants also have a view on this probability, but you don't know what their views are. In order to determine which way you need to invest in Ukrainian and Russian stocks/currencies/etc. to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices.

Efficient markets. Either you think you have non-public information/superior analyses to current participants or you don't; in the latter case, you should not trade at all. In the former case, the current prices reflect all publicly-available information about the net future prospects of Russian/Ukrainian-related assets, and you don't need to become an expert on anything except the use of nuclear weapons (which you believe the markets are currently ignorant of), since those assets are already correctly priced, with neither excess gains nor losses expected. You can simply buy/sell as your unique insight tells you to.

(Your real problem is whether you can buy enough to make it worthwhile and lack of diversification & volatility means you may be right, buy appropriately, and lose anyway, but that's why you work for a hedge fund.)

Comment author: 9eB1 06 May 2014 04:04:28PM 4 points [-]

The efficient markets criticism works if you have non-public information that clearly points to a greater or lesser risk than what market participants think, but it doesn't work for non-public information that is different from market sentiments by only a degree. If you have private information that the Russian government has a 2% chance of using a nuclear weapon on Crimea (perhaps you know they will roll a 50-sided die and use them on a 1), but you can't tell whether the current market prices imply a 0% to 4% probability, you have no way of using your private information without performing a full analysis of the asset. If there were a prediction market it would be straightforward to do so, however. The same is true of private superior analyses, because they will generally differ in terms of probability by a relatively small amount.

It's really just a question of efficiency. A market for a single asset will be less efficient than the market for that asset and 10 related questions because the information costs are lower for people who have private information that bears on those questions.
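9eB1's ambiguity point can be made concrete with a toy binary-contract framing (an illustrative sketch, not how one would actually trade equities): with a private estimate of 2% and a market-implied probability known only to lie somewhere in 0-4%, the sign of the expected profit, and hence the direction of the trade, is undetermined.

```python
# Toy sketch of the ambiguity above: a binary contract pays 1 if the event
# occurs and is priced at the (unknown) market-implied probability.
# Your private estimate alone does not tell you which side to take.

def expected_profit_long(p_you, p_market):
    """Expected value per contract of buying at price p_market."""
    return p_you - p_market

p_you = 0.02  # you know the die: a 1-in-50 chance
for p_market in [0.00, 0.01, 0.02, 0.03, 0.04]:
    ev = expected_profit_long(p_you, p_market)
    side = "buy" if ev > 0 else ("sell" if ev < 0 else "no trade")
    print(f"market-implied {p_market:.0%}: EV {ev:+.2f} -> {side}")
```

For equities, where the implied probability is not directly observable, the same sign-ambiguity applies but cannot be resolved by reading a price off a contract.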

Comment author: gwern 17 May 2014 07:06:04PM *  3 points [-]

but it doesn't work for non-public information that is different from market sentiments by only a degree. If you have private information that the Russian government has a 2% chance of using a nuclear weapon on Crimea (perhaps you know they will roll a 50-sided die and use them on a 1), but you can't tell whether the current market prices imply a 0% to 4% probability, you have no way of using your private information without performing a full analysis of the asset.

I disagree for several reasons:

  1. your example is extremely unrealistic. When do people interested in geopolitics ever get new information expressed as a precise limiting frequency like your die example? That sort of estimate doesn't exist outside of prediction markets.
  2. you are setting up a strawman when you say one needs to compare a 2% to a 4% estimate: you don't need to know the market's exact estimate, you merely need to know whether yours is lower. Usually, estimating a bound or inequality is a lot easier than estimating an exact value...
  3. and specifically, by the reasoning I gave before about market efficiency, estimation is easy: when you get private information, you only need to know whether it would increase or decrease probabilities on its own. All other information is already priced in but your new information is not, by definition, and hence will shift the market price in the estimated direction, allowing you to profit.

    To make your example more realistic: you learn from an informant that tactical nuclear bombs came up at the latest private discussion of the Russian cabinet; you know that the market prices on Ukrainian/Russian assets implicitly assign some probability to the use of nukes, and hence some smaller probability that the Russian cabinet is discussing their use, but the market does not know that the cabinet actually has discussed the use of nukes; you do. Now, you may have no idea whether the market assigns 0.5 or 50% to the use of nukes, but its assignment is being done in the absence of this information about their discussion. All you have to do is decide: is the Russian cabinet discussing the use of nukes evidence for or against the future use of nukes, in a purely-evidential odds or decibel Bayesian sense, independent of priors or posteriors? If it's 'for', then whatever the market probability is (you may have no idea what it is and no ability to figure it out), it will shift upwards; and since the prices reflect the probability, you have an opportunity to short.
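The direction-only argument above can be checked with the odds form of Bayes' theorem (a minimal sketch with an assumed Bayes factor, not a claim about real probabilities): any evidence whose likelihood ratio exceeds 1 raises the posterior regardless of the prior, which is why knowing the market's prior is unnecessary.

```python
# Minimal check of the argument above: evidence with a likelihood ratio
# (Bayes factor) greater than 1 shifts ANY prior upward, so only the
# direction of the update matters, not the market's actual probability.

def posterior(prior, bayes_factor):
    """Update a prior probability by a likelihood ratio, via the odds form."""
    odds = prior / (1 - prior)
    post_odds = odds * bayes_factor
    return post_odds / (1 + post_odds)

bf = 3.0  # assumed: the cabinet discussion is 3x likelier if nukes will be used
for prior in [0.005, 0.05, 0.5]:
    print(f"prior {prior:.1%} -> posterior {posterior(prior, bf):.1%}")
```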

If there were a prediction market it would be straightforward to do so, however.

You can do the same thing with prediction markets, assuming they're big enough that you can treat them as efficient. Did you learn new information which is positive? Buy. Negative? Short. Knowing your own subjective probability is useful mostly when you suspect markets are inefficient and you can make a profit without learning any new information. (Typically, whenever I'm trading on a prediction market, I don't even try to elicit my own subjective probability, I just anchor on the market probability and look for signs of bias or ignorance.)

It's really just a question of efficiency. A market for a single asset will be less efficient than the market for that asset and 10 related questions because the information costs are lower for people who have private information that bears on those questions.

That's not what the question was. The question was whether you were right that "to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices". Certainly, more targeted assets will make it easier to make money off targeted insights. But there's no possible/impossible distinction where you can make money off your nuke insights on a prediction market but no one can make money off the same nuke insight on equity markets.

Comment author: 9eB1 18 May 2014 10:12:52AM *  1 point [-]

It seems we agree in many areas (you seem to disagree a great deal with my tone and examples, however), so I will focus on what appears to be the core of the disagreement. You are using a framework that assumes strongly efficient markets with respect to private information, and where most private information is of the sort that has a clear impact with respect to the priors implied on the market. I am using a framework of limited market efficiency, where only information that can be profitably exploited, because it e.g. provides a high enough Sharpe ratio, will be reflected in market prices, and where private information can often have an ambiguous relationship to the current odds implied by market prices. Note that my example was based on information of a probabilistic nature, analogous to using a novel statistical model or the like, whereas your example was based on discrete information. Note also that a novel statistical model can still have elements of discrete private information, as when hedge fund analysts set up cameras to monitor the comings and goings of hotel patrons, but where such information still cashes out in terms of a probability.

As part of my framework, the question of whether you can profitably reveal information to the market and the efficiency of said market are intrinsically linked, and this is not a form of dodging the core issue.

I agree that there is not a bright line separating possible and impossible when it comes to whether information can be profitably raised in the market, but there are clearly things that fall on one side or the other of the fuzzy line. My contention is that there are many matters of importance that one can have views and information on that fall on the "not profitable" side of the line. I retract the claim that you necessarily have to become an absolute expert on the asset. Technically, you only need enough expertise to correctly estimate the impact of your information, but realistically you will in most cases need to become an expert on the asset (and likely multiple assets, since you will want to long some and short some in order to extract as much value as possible) in order to create those estimates. (As an aside, if you didn't have to be an expert, there would be books and seminars about how to build investment strategies around more accurately predicting world events, because there are books and seminars on every conceivable investment strategy that doesn't require one to be an expert. And yet you yourself presumably profitably participate in prediction markets instead of just using those same predictions to invest in capital markets.)

Is it important who becomes the next president of the United States? Many would say that it is, and it is a perennial favorite of prediction markets. Could you build a profitable investment strategy if you ONLY knew who was going to become the president a day ahead of time? You better sit down now and do a lot more work (i.e. invest in a tremendous amount of information cost, i.e. become an expert in the relevant assets), because the impact that will have on the markets is by no means unambiguous.

As a point which hasn't been directly addressed, with respect to Lumifer's original statement:

If you have a prediction (a "view") on something important, you can often express that view in financial markets.

I assume that this can be translated to something like, "If you have views on something important, you can often express that view in the financial markets to achieve a higher risk-adjusted return than you could in absence of those views." A possible problem with estimating probabilities of events which only make up a tiny portion of the expected value of a given asset, is that the expected return is totally swamped out by the volatility. You alluded to this earlier in the parenthetical:

(Your real problem is whether you can buy enough to make it worthwhile and lack of diversification & volatility means you may be right, buy appropriately, and lose anyway, but that's why you work for a hedge fund.)

Suppose that you decide a Democratic president will win with higher probability than is expected by the market, due to either your information model or mine, and after analyzing the markets you determine that the best way to take advantage of this is to go short developed country stocks and bonds, and go long the stocks and bonds of the United States. It could easily be the case that despite having good information and the optimal strategy, the residual volatility between your hedge positions makes this a bad strategy on a risk-reward basis, because it overrides the return differential, particularly after costs. This makes it possible but unprofitable to reflect your views in the financial markets, and is a fairly fundamental issue with Lumifer's idea, since in general we expect our views to be only marginally more accurate than the market, but we observe fairly large volatility in the differences of even correlated assets. For example, correlations between US and European stock markets are on the order of .6, which leaves a substantial amount of residual volatility if you are hedging them. Fundamentally, a perfect prediction market would require two assets which are perfectly anti-correlated EXCEPT for the scenario in question, and the further you get from that ideal, the harder it is to create profitable predictions on a risk-reward basis. And the more assets that are required to build your trade, the more assets you require expertise in, and the more you pay in fees. I would contend that this is generally the case for specific predictions that do not bear directly on the movement of major assets.
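The residual-volatility arithmetic can be sketched with assumed numbers (15% annual volatility on each leg, correlation 0.6, and a 1% expected edge, all hypothetical): the spread's volatility dwarfs the edge, giving a very poor Sharpe ratio.

```python
import math

# Sketch of the residual-volatility point with assumed numbers: two equity
# markets, each with 15% annual volatility and correlation 0.6, hedged
# long/short, where your view is worth an assumed 1% of annual return.

vol_a, vol_b, rho = 0.15, 0.15, 0.6
edge = 0.01  # assumed annual expected return from being right about the event

# Volatility of the long-short spread: sqrt(s_a^2 + s_b^2 - 2*rho*s_a*s_b)
residual_vol = math.sqrt(vol_a**2 + vol_b**2 - 2 * rho * vol_a * vol_b)
sharpe = edge / residual_vol

print(f"residual volatility: {residual_vol:.1%}")   # ~13.4%
print(f"Sharpe ratio of the trade: {sharpe:.2f}")   # ~0.07: poor risk-reward
```

Even a perfectly correct view thus produces a trade whose risk-adjusted return is unattractive once the hedge is imperfect.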

Comment author: gwern 05 August 2014 09:56:46PM 0 points [-]

You are using a framework that assumes strongly efficient markets with respect to private information, and where most private information is of the sort that has a clear impact with respect to the priors implied on the market. I am using a framework of limited market efficiency, where only information that can be profitably exploited, because it e.g. provides a high enough Sharpe ratio, will be reflected in market prices, and where private information can often have an ambiguous relationship to the current odds implied by market prices. Note that my example was based on information of a probabilistic nature, analogous to using a novel statistical model or the like, whereas your example was based on discrete information.

OK. And I think you are wrong in thinking that markets are limited in efficiency and that clearly relevant private information nevertheless has ambiguous implications.

I agree that there is not a bright line separating possible and impossible when it comes to whether information can be profitably raised in the market, but there are clearly things that fall on one side or the other of the fuzzy line.

Transaction costs, available capital, risk, market efficiency, and other factors set a fuzzy line for what kind of information and how large a change in probability will be profitable, yes. We're dealing with real world markets and events, after all.

Technically, you only need enough expertise to correctly estimate the impact of your information, but realistically you will in most cases need to become an expert on the asset (and likely multiple assets, since you will want to long some and short some in order to extract as much value as possible) in order to create those estimates

I think most news is fairly easily evaluated. The Russian atomic bomb case may be too clean an example, but it doesn't seem terribly hard to guess whether Steve Jobs unexpectedly going to a hospital is bad or good for AAPL.

Is it important who becomes the next president of the United States? Many would say that it is, and it is a perennial favorite of prediction markets. Could you build a profitable investment strategy if you ONLY knew who was going to become the president a day ahead of time? You better sit down now and do a lot more work (i.e. invest in a tremendous amount of information cost, i.e. become an expert in the relevant assets), because the impact that will have on the markets is by no means unambiguous.

I've already linked to papers which have done the footwork for anyone interested in that question. It'd take maybe an hour to read them. Is that 'a lot more work'? And how hard would a finance professional with access to the relevant databases find it to replicate the analysis, even if they had no a priori beliefs about how a Democratic victory might affect various stocks?

It could easily be the case that despite having good information and the optimal strategy, the residual volatility between your hedge positions makes this a bad strategy on a risk-reward basis, because it overrides the return differential, particularly after costs. This makes it possible but unprofitable to reflect your views in the financial markets, and is a fairly fundamental issue with Lumifer's idea, since in general we expect our views to be only marginally more accurate than the market, but we observe fairly large volatility in the differences of even correlated assets. For example, correlations between US and European stock markets are on the order of .6, which leaves a substantial amount of residual volatility if you are hedging them.

I don't follow this point. Why can't I find an appropriate set of hedges? Doesn't that imply inefficiencies?

Fundamentally, a perfect prediction market would require two assets which are perfectly anti-correlated EXCEPT for the scenario in question, and the further you get from that ideal, the harder it is to create profitable predictions on a risk-reward basis.

I assume that by 'perfect' you mean a prediction market in which even the slightest bit of new evidence can be profitably exploited because there are no transaction costs or other friction? That may be true, but I don't think it meaningfully refutes Lumifer's observation that markets allow for expression of views on "something important".

Comment author: Oscar_Cunningham 05 May 2014 06:02:31PM 3 points [-]

If I wanted to know how likely it was that Republicans would win the next election, how could I go about estimating this from the financial markets?

Comment author: gwern 06 May 2014 02:49:21AM *  8 points [-]
Comment author: Stefan_Schubert 05 May 2014 04:45:42PM *  1 point [-]

I think it's a very good idea. I also like the "tax on bs" metaphor. I like the idea of bullshitters getting punished! :)

I think it should be remembered, though, that with respect to many predictions, luck is at least as important as skill/knowledge. Of course, if you have many questions the luck/noise element is reduced and the signal/skill element is strengthened, but it nevertheless is something to consider.

Comment author: ChristianKl 07 May 2014 04:07:00PM 0 points [-]

I would personally allow free account creation but give people an in-game salary of currency for every day on which they engage in trades.

Developing a good prediction market that people actually want to use is a bigger problem. PredictionBook sort of works, but it could work better than it does at the moment.

PredictionBook already exists and is open source. If you want, you could probably write a plugin that adds prediction-market functionality on top of what already exists in PredictionBook.

Comment author: Raythen 07 May 2014 05:50:05PM 3 points [-]

Hi, I wonder how you would use your rationality skills to solve this problem.

I'm very sensitive to cold and have been for at least 2-3 years. (I'm a 25 year old male). This is manageable with (really) warm clothes, but sometimes very inconvenient.

I've seen multiple doctors about this, and the response I've got was basically "our tests indicate there's nothing wrong with you, so there's nothing I can do". I've left multiple blood samples, and all the things that were tested are within normal range (well, my thrombocyte count is a bit low; I doubt it's related to this).

I'm slightly underweight, and have a history of fatigue and depression.

I'm looking for both practical advice and general rationality advice on how to deal with a confusing health problem.

Comment author: ChristianKl 11 May 2014 03:20:07PM 3 points [-]

There are a bunch of ways temperature is regulated. Blood circulation is one of the main ways the body regulates the temperature of the extremities. Blood moves very fast through the body and therefore has a relatively constant temperature. The blood in your hand is warmer than the rest of the hand.

If there's more blood in the capillaries in your hand, then your hand gets warmer. Low blood pressure in the arterioles means that less blood flows into the capillaries. If muscle tissue is tense, that also usually makes it harder for blood to flow into it.

I personally used to often feel cold five years ago but solved the issue for myself. There are days where something emotional is going on and my thermoregulation is messed up, but that's not my default. I have done a bunch of different things, so I can't give you a single solution.

First, an easy suggestion: drink a lot. Drinking can increase blood pressure. There were weeks where I needed to drink 4-5 liters a day for my body to work at its peak. I would recommend trying to drink 4 liters a day for a week and seeing whether that changes how you feel.

One of the main things I personally did was dance a lot of Salsa. Salsa gave me a new relationship with my body. Part of Salsa is also having body contact, and that allows me to feel which parts of the body of the woman I'm dancing with are warm and relaxed and which aren't.

Good Salsa dancers usually have good circulation. On the other hand, I do know women who danced for years and didn't solve issues like that in their bodies. Knowing dancing patterns doesn't seem to be enough. In the Salsa sphere, body movement classes seem to produce such results, but I don't know whether they are optimal.

I do personally think that there's a case that 5 Rhythms or Contact Improvisation is better for your purpose than Salsa. But to be open, the theory on which I base that recommendation is not academic in origin.

Another thing that I believe, but which does not come from an academic source, is that the problem is likely emotional in origin. I consider it to be a self-defense mechanism of the body. If such mechanisms get removed, I consider it likely that emotions will come up that will have to be dealt with. Based on what you wrote about severe trauma, I would recommend you have professional help.

Comment author: Raythen 22 May 2014 02:39:20PM 1 point [-]

I appreciate you bringing attention to my blood circulation. My hands and feet rarely freeze (though I do wear warm socks and gloves in winter). My ears are very sensitive to cold, however, which could well be a symptom of poor circulation.

Comment author: Raythen 22 May 2014 02:26:56PM 0 points [-]

I personally used to often feel cold five years ago but solved the issue for myself. There are days where something emotional is going on and my thermoregulation is messed up but that's not my default.

Another thing that I believe but which does not come from an academic source is that the problem is likely emotional in origin. I consider it to be a self defense mechanism of the body. If they get removed I consider it likely that emotions will come up and that have to be dealt with. Based on what you wrote about severe trauma, I would recommend you to have professional help.

The link between emotions and blood pressure as well as thermoregulation you describe sounds a lot like vasovagal response

In that case I doubt that is what I'm experiencing, since I haven't noticed ANY correlation between my day-to-day emotional state, and how hot or cold I'm feeling.

So unless there's a possibility of very long-term correlations, on the scale of months/years (which doesn't seem to be what you're describing), I doubt this particular mechanism is causing my cold sensitivity.

I am receiving therapy. Thanks for the suggestion.

Comment author: ChristianKl 28 May 2014 05:08:41AM 1 point [-]

The link between emotions and blood pressure as well as thermoregulation you describe sounds a lot like vasovagal response

I think the fact that vasovagal responses exist illustrates one well-documented instance where there's interplay between those forces.

In that case I doubt that is what I'm experiencing, since I haven't noticed ANY correlation between my day-to-day emotional state, and how hot or cold I'm feeling.

I'm speaking about repressing certain things for longer periods of time, not something where you repress your trauma one day and don't the next. The change can happen in a single day, even in a minute, but that's not what happens most of the time.

Comment author: RichardKennaway 07 May 2014 08:51:51PM 3 points [-]

Long underwear. Even if your legs don't specifically feel cold, adding more insulation there helps the whole body. Your legs are a pair of huge heat exchangers, and there's a limit to how useful it is to pile more layers on your torso if all your body heat can still leak out through your legs.

I've had something like that for the last 35 years or so. I just live with it. I suspect a connection with a serious illness I had back then, but I've never bothered to raise the matter with a doctor, because it doesn't seem like the sort of thing that a doctor is likely to have any remedy for. I am also slightly built (BMI 19 to 20) and have occasional attacks of great fatigue, but not depression.

Thick woolly hats are good too. A lot of heat is lost through the head.

Comment author: CAE_Jones 07 May 2014 06:29:40PM 3 points [-]

I'm in a similar situation, and am leaning toward it being a circulation issue. Would you happen to know what your last blood pressure measurements were?

My previous lead candidate was proto-diabetes, but the most recent tests suggest otherwise. The only comment made about my blood pressure was by the trainee EMT, saying "I wish my blood pressure was that low!". I've been suspicious that the safe range for blood pressure might be shifted a bit too far downward, since most people suffer from conditions related to high blood pressure, but I'll need to re-find the evidence that pushed me in that direction.

Anyway, my current strategy is to try to get more/better exercise, fresh air, and sunlight. Those are good ideas in general, and should have an impact if it's circulation-related. It's still early and I'm struggling to get good exercise, and I didn't think to quantify changes until... just now, so right now this solution is experimental on my end.

Comment author: Raythen 07 May 2014 08:18:22PM *  1 point [-]

Thanks for sharing.
(just posted my blood pressure results in another comment)

Comment author: mare-of-night 08 May 2014 01:47:38PM 2 points [-]

My first tactic with confusing health problems is adjusting my diet, but I seem to be more affected by diet than the typical person, so your mileage may vary. Taking a very complete multivitamin for a few days and seeing if you feel any different is an easy way to check for nutrition deficiencies, if your blood tests didn't check for that (or only checked for a few usual suspects). If you do feel different, then you at least know you were deficient in something. You could also do an elimination diet for the most common food allergies, but that takes a lot of effort, so it might not be worth it if you and your family don't have a history of food issues.

If you're more sensitive to cold at some times than others, try to notice the fluctuation and see if it correlates with anything (especially stress, based on ChristianKl's comment). Maybe try writing down how cold you felt and what you did that day? (I usually don't write this sort of thing down, even though I know I should.)

Comment author: Raythen 09 May 2014 02:36:00PM 0 points [-]

Interesting perspective, thanks.

I am taking vitamins and have been for some time.

My diet has had a random drift over time due to practical concerns, taste changing, etc., and random diet adjustments don't seem to have a noticeable effect. There might be some specific nutritional strategies that would help - I don't have enough information to choose one, though.

More data and more detailed observations seem like a good idea. There might have been some fluctuations, but I'm not noticing any obvious correlations (besides, you know, exposure to cold temperatures).

Comment author: NancyLebovitz 11 May 2014 05:59:38PM 2 points [-]

This is a long shot, but is there a chance you're eating less than you need?

Comment author: Raythen 14 May 2014 05:03:40PM 1 point [-]

It's possible. I don't know. I eat when I'm hungry, which is quite regularly (once per 3-4 hours, maybe 5), so I'm definitely not starving myself. And if I try to eat more, I feel unpleasantly full, and I feel less hungry later - so I don't think it makes a difference.

I'm not sure how to check whether I'm eating enough save for counting calories (which seems complicated and unreliable).

I'm hoping I'll gain some muscle mass by exercise, both for its own sake and because weight gain by other means doesn't seem to be working for me (I suspect I naturally have a slim build).

Comment author: NancyLebovitz 16 May 2014 02:23:43PM 2 points [-]

At this point, I'd say it's unlikely that you're eating so little as to lower your temperature.

If you still want to test the hypothesis without counting calories, you could try a higher fat diet and see what happens.

Does your temperature ever get higher or lower?

Comment author: Lumifer 07 May 2014 06:37:24PM 0 points [-]

First question: what's your blood pressure?

Second question: did you do a thyroid panel and what did it show?

Third question: did you measure your body temperature in controlled settings (e.g. first thing upon waking up before getting out of bed)?

Common causes of sensitivity to cold are low blood pressure and hypothyroidism.

Comment author: Raythen 07 May 2014 08:12:31PM *  0 points [-]

90/60 mmHg according to what a doctor told me during a measurement a month ago (though my journal says 98/60 for some reason). 105/60 in another measurement a week before that.

Thyroid panel:
P-TSH: 1.5 mIE/L (0.3-4.2)
P-T4, free: 15 pmol/L (12-22)
P-T3, free: 5.2 pmol/L (3.1-6.8)
S-Ak (IgG) TPO: 8 kIE/L (<34)

The last one is TPO antibodies. The parentheses are the reference ranges at my lab.

All values are within what is considered the normal range. I've also had the thyroid physically examined (through palpation) and it appears there are no abnormalities (it's not swollen or enlarged).

I have not measured my body temperature.

Comment author: moridinamael 07 May 2014 07:35:34PM 1 point [-]

I'm similar. I have found scarves to be both stylish and practical. The neck area is highly sensitive to cold. I've taken to toting a scarf if I am going to bring a jacket.

Comment author: Raythen 22 May 2014 11:49:46AM *  0 points [-]

Thank you for all the comments and suggestions. :)

At this point I have made an appointment to have my hormone levels checked (as suggested by Lumifer and NancyLebovitz).

I also think my blood pressure and circulation are worth looking into.

I'm still processing a lot of the suggestions and ideas, and might make another thread on this in the future.

Comment author: ChristianKl 07 May 2014 09:47:20PM 0 points [-]

Did something happen 3 years ago? Maybe a major emotional trauma?

Comment author: Raythen 08 May 2014 07:56:43AM 0 points [-]

I've had a really bad childhood and experienced a lot of severe emotional trauma throughout my life since then, including at that time.

Comment author: ChristianKl 07 May 2014 09:33:09PM 0 points [-]

Do you do sports?

Comment author: Raythen 08 May 2014 07:43:20AM 0 points [-]

Not regularly.
I exercise at a gym (upper body strength program, started quite recently).

Comment author: eeuuah 11 May 2014 02:04:59PM 3 points [-]

In my anecdotal experience, being underweight is correlated with being unusually susceptible to cold. Building some mass might help. Consider doing a more general strength program too.

Comment author: Benito 07 May 2014 12:20:58PM 3 points [-]

Dear LW,

I've just this morning been offered funding for a research placement in a British University this summer (I'm 17). I have to contact researchers myself, and it generally has to be in a STEM subject area. I am looking very generally for any recommendations of researchers to contact in areas of Maths, Physics and Computer Science. If you know any people who do research that would be of interest to the average LessWronger, especially in the aforementioned fields, I would appreciate it greatly.

Comment author: Oscar_Cunningham 07 May 2014 03:50:39PM 1 point [-]

Obviously there are hundreds of possibilities, but the Future of Humanity Institute springs to mind.

Comment author: Benito 07 May 2014 04:04:57PM *  1 point [-]

I checked them out actually, and it doesn't seem like they normally do that kind of thing. Still, I've sent them an email, and I'll see what they say :)

Added: They've said they're happy I'm interested, but haven't got anything for me at the minute.

Comment author: palladias 05 May 2014 08:19:43PM *  3 points [-]

Looks like when my current job ends (May 31), I'll have the summer free before my next one starts (Sept). My June is pretty much booked with a big writing project with a looming deadline, but I get to decide how to fill July and August, and I'd appreciate crowdsourced suggestions.

I'm lucky enough to not need to find alternate work to cover living expenses for those two months, so I'm not particularly in the market for short-term work suggestions. I'll be based out of D.C. during this period. Not super interested in travel. I'm considering some self-study but I'm not planning to become a programmer.

Here are some of the things I currently have in mind (not highly optimized; just what occurs to me immediately when I think "What do I want to do this summer?"):

  • ASL class at Gallaudet
  • Ignatian retreat
  • Stepping up my freelance writing work
  • Looking into any shop classes for adults
  • Sewing, embroidering, soldering a fairly involved Halloween costume

What kinds of things am I not thinking of that might be delightful?

Comment author: Emile 05 May 2014 09:05:19PM 3 points [-]

ASL class at Gallaudet

(means "American Sign Language" for the curious like me)

Comment author: iarwain1 07 May 2014 01:54:48PM *  0 points [-]

Maybe try doing nothing. For some people that would drive them crazy, but for others a month of rest and peacefulness can be life-changing. Try turning off the computer for a month. Take walks. Read a book under a tree. Smell the flowers. Meditate.

Also, I'm not sure if this counts as travel, but Shenandoah is only about 1:30 from DC. Getting a small cabin or a room in a bed and breakfast for a month is not so expensive. Immersing yourself in a more natural, less hectic environment can itself be extremely restorative. And you can even sew / embroider while you're doing it.

Comment author: palladias 07 May 2014 02:23:46PM *  0 points [-]

I am definitely in the "would drive them crazy" camp. One of the worst vacations I've taken was to St. John with my family. It's a long way to go just to read on the beach rather than read in a park or a library.

I do have Ignatian retreat on my list, though.

Comment author: Baughn 06 May 2014 01:10:56AM 0 points [-]

I don't know what falls under 'freelance writing', but have you considered writing fiction?

It's a huge time-sink even if you're deliberately trying to improve your speed, but the skills are also surprisingly applicable - modelling your characters in your head isn't dramatically different from modelling other real people, even if you ignore the skills that merely fall under "knowing how to write". I've had a great deal of fun with that, lately.

You don't necessarily need to immediately jump into original fiction, either. Fanfiction is often considered "training wheels", but that doesn't just mean it's easier - well, it is, but it's also much easier to tell if you're getting characterisation right when there's the original work to compare to (and rabid fans to do the comparison), while the usual "benefit" of writing fanfiction (not needing to invent your own setting) can be trivially set aside if you feel like it.

Comment author: palladias 06 May 2014 02:02:09AM 0 points [-]

I've written fanfiction, but I've only enjoyed writing fiction with a writing partner, as I did for those two stories. I get very very bored writing things that aren't dialogue.

I'm currently at a magazine for a journo internship, and have done some freelance book/theatre reviews for pay.

Comment author: Vulture 06 May 2014 06:46:20PM *  2 points [-]

If you're planning to link this account to your real world identity, or already have, you might think twice about linking to those writings. Sorry if this was already obvious and considered.

edit: that said, I'm really enjoying APoF :-)

Comment author: palladias 06 May 2014 08:25:29PM 0 points [-]

Glad to hear it! I'm traceable to those writings, but not through easy googling. The nice thing about being a writer with daily blog updates is security through obscurity. It's hard to trawl through to find whatever would be the worst thing ;)

Comment author: Douglas_Knight 05 May 2014 03:35:51PM 3 points [-]

What is the meaning and use of (total) GDP, adjusted PPP?

I cannot think of a single use for it (unlike nominal total GDP or PPP GDP per capita).

Comment author: Lumifer 05 May 2014 04:06:21PM 2 points [-]

Well, PPP has meaning only in the context of multiple currencies, so presumably you're trying to get a handle on some country's nominal GDP expressed in a different currency. This means you need a foreign exchange rate, a multiplier to convert units to different units.

At this point things start getting murkier. Sometimes there's a market FX rate. Sometimes there is an official FX rate (and a different black market one). Sometimes there is no reasonable FX rate at all.

The PPP rate is just one of the possibilities. Depending on the circumstances it might be more or less appropriate.

The crude meaning of GDP converted at PPP rate is "how much stuff at local prices does this country produce/consume".
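To make the conversion concrete, here is a toy sketch with entirely made-up figures (the GDP value and both rates are hypothetical, chosen only to illustrate the direction of the effect): the same local-currency GDP comes out larger in USD at a PPP rate than at the market rate whenever local prices are lower than US prices.

```python
def convert_gdp(gdp_local, units_per_usd):
    """Convert a GDP figure from local currency units to USD,
    given a rate expressed as local units per US dollar."""
    return gdp_local / units_per_usd

gdp_local = 60e12    # hypothetical GDP in local currency units
market_rate = 6.0    # hypothetical market FX rate: local units per USD
ppp_rate = 3.6       # hypothetical PPP rate: local units per USD

nominal_usd = convert_gdp(gdp_local, market_rate)  # at the market rate
ppp_usd = convert_gdp(gdp_local, ppp_rate)         # at the PPP rate

# ppp_usd > nominal_usd here because the PPP rate is below the
# market rate, i.e. local prices are assumed lower than US prices.
```

Which rate is "appropriate" depends, as above, on whether you care about purchasing power at local prices or about command over internationally traded goods.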

Comment author: Douglas_Knight 05 May 2014 05:10:33PM 0 points [-]

PPP has meaning only in the context of multiple currencies

That's not true. One does not use a uniform conversion factor across the Eurozone.

The crude meaning of GDP converted at PPP rate is "how much stuff at local prices does this country produce/consume".

Why would you ever want this?

Comment author: lmm 07 May 2014 02:35:55AM 0 points [-]

What's the use of nominal total GDP? I would expect the argument for PPP total GDP to be that it's a more accurate measure of the same thing, but I'm not actually seeing what the use is.

Comment author: Douglas_Knight 07 May 2014 04:02:38AM 2 points [-]

Yes, total GDP is problematic. For utilitarian purposes PPP is better, but why do it one country at a time? (aside from making utility linear in money)

My comment was triggered by the announcement that China is now "the biggest economy" in PPP terms. One thing "the biggest economy" does is set prices. China can afford to buy more steel than India, so much that it drives the world price of steel. But the fact that there is a world price is closely related to the fact China pays for steel in dollars, not Chungking haircuts or Szechuan real estate. So that's what total nominal GDP is good for.

Comment author: shminux 06 May 2014 10:34:04PM 5 points [-]

Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet Irving Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either raising the profile of the issue is not associated with EY/MIRI, or he is considered too low status to speak of publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.

Comment author: Kaj_Sotala 07 May 2014 09:31:39AM *  16 points [-]

See also this old post where Robin Hanson basically predicted that this would happen.

The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.

But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important to the "powers that be" hoping to establish this new area, this standard person will bring more prestige and resources to this new area.

If the standard guy wins the first few such contests, his advantage can quickly snowball into an overwhelming one. People will prefer to cite his publications as they will be in more prestigious journals, even if they were not quite as creative. Reporters will prefer to quote him, students will prefer to study under him, firms will prefer to hire him as a consultant, and journals will prefer to publish him, as he will be affiliated with more prestigious institutions. And of course the contrarian may have a worse reputation as a "team player."

Comment author: Qiaochu_Yuan 07 May 2014 08:58:44AM 13 points [-]

I think this is fine. Convincing people that this is a Real Thing and then specifically making them aware of Eliezer and MIRI should be done separately anyway. Doing the second thing too soon may make the first thing harder, while doing the second thing late makes the first thing easier (because then AI x-risk can be put in a mental category other than "that weird thing that those weird people care about").

Comment author: Vulture 06 May 2014 04:49:43PM 5 points [-]

Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder? If so, what was it? [either a list of some common ones and an "other" box, or, ideally, a full drop-down of DSM-5 diagnoses. Plus a troll-bait non-disorder and a "prefer not to say", of course]

Comment author: Lumifer 06 May 2014 05:14:23PM 1 point [-]

Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder?

...and a follow-up question: Have you ever self-diagnosed yourself with a mental disorder?

:-)

Comment author: Vulture 06 May 2014 06:38:56PM 0 points [-]

Would that be interesting enough as a question to be worth including? I imagine there's a lot of variability in self-diagnosis.

Comment author: Lumifer 06 May 2014 06:57:20PM 2 points [-]

The first interesting point is the one-bit yes/no answer.

I would not expect a majority of the general population to self-diagnose itself with a mental disease at any point in their life. However for certain specific groups this changes. One group of interest is high-IQ reflexive self-doubting people. Another group is freaks, that is, people who are clearly weird/strange/different from those around them for whatever reason. Yet another group is borderline cases, those whose symptoms are not strong or pronounced enough for a clinical diagnosis and yet they are not entirely "normal" anyway. And another group is a variety of neurodiverse people.

Comment author: mare-of-night 08 May 2014 02:06:20PM 0 points [-]

There are also people who do have a disorder, but have reasons for not seeing a doctor about it. (Lack of funds, not expecting treatment to help, not needing treatment, etc.)

Comment author: Lumifer 08 May 2014 02:50:59PM 0 points [-]

Do you mean "reasons" or do you mean "rational reasons"?

The opinion of someone who does have a mental disorder on whether treatment will help or is needed, that opinion is... suspect.

Comment author: mare-of-night 09 May 2014 05:31:11AM 0 points [-]

In this context, they don't have to be good reasons - my point was that a self diagnosis doesn't necessarily disagree with what a doctor would say if asked.

Comment author: Vulture 06 May 2014 08:18:37PM 0 points [-]

Okay, that makes sense. And although it might take some clever structuring, I think it might be interesting to try to determine how frequently those self-diagnoses were accurate... something about "confirmed by a medical professional", perhaps?

Comment author: Lumifer 06 May 2014 08:49:43PM 0 points [-]

something about "confirmed by a medical professional", perhaps?

This is tricky ground. If you want more follow-up questions, the first probably should be "Have you, of your own will, talked to a mental health professional about an assessment or a diagnosis?". Again, the majority of the general population would answer "no" to this.

Comment author: Nornagest 06 May 2014 09:23:29PM 1 point [-]

I seem to recall something like 30% of the adult American population being in therapy or having been recently. That's not a majority, but it's pretty substantial, and they didn't get there by magic.

Comment author: Lumifer 06 May 2014 09:29:20PM 1 point [-]

seem to recall something like 30% of the adult American population being in therapy or having been recently.

My impression is that mostly involves people going to their doctor and saying "Doctor, I feel horrible!". And the good doctor says "Sure, try these antidepressants!" (yes, I know I'm exaggerating).

That's a different thing from "Doctor, I believe I'm mentally ill".

Comment author: Nornagest 06 May 2014 09:37:00PM *  1 point [-]

Depression is a mental illness. You might not go to the doctor and ask about depression (though I doubt this is anywhere near as uncommon as you're making it out to be), but going to the doctor and saying "Doc, I can't sleep, feel sad all the time, everything I do seems pointless, etc." is as much asking for a consultation on mental illness as "Doc, I've got this nasty bullseye-shaped rash on my leg and I've got a fever and a bad headache" is asking for a consultation on Lyme disease.

The standards of diagnosis might not be as rigorous, but that's a separate issue.

Comment author: fubarobfusco 07 May 2014 04:42:33AM 2 points [-]

Then there's me.

"Doctor, I can't sleep!" "Here, take this Ambien." "Ambien scares the crap out of me; it makes my friend call me up late at night and ramble incoherently at me, and I've heard it makes people have sex and forget it happened." "Eh. Take it anyway, that doesn't happen to most people."

"Doctor, I still can't sleep, I worry all the time, and it's wrecking my motivation at work. And the Ambien works, but it makes me trip out more than I probably should most nights." "You have an anxiety disorder. Here, go to this psychiatrist, Doctor #2. And don't take so much Ambien."

"Doctor #2, I can't sleep, I worry all the time, and it's wrecking my motivation at work. Oh, and Ambien makes me trip out before I fall asleep." "You have anxiety and depression. Here, take these antidepressants, and these benzodiazepines if you need them, plus these folates and vitamin D ... oh, and replace that Ambien with this Lunesta, and come back every week. And let's talk about the work situation, something's messed up there ..."

Anecdotal, sure; and pretty recent. But I didn't start out with the idea "I'm depressed and should seek antidepressants". I thought I had a sleep disorder, but it turns out our reality doesn't issue time machines for those.

Comment author: Lumifer 07 May 2014 01:03:09AM 0 points [-]

Yes, if the question were "How many people go to a doctor to complain of symptoms of mental illnesses" then sure, a large chunk of the general American population (still don't know if a majority) would qualify.

However recall the context. We started with the question "Have you ever self-diagnosed yourself with a mental disorder?" and are talking about the follow-up to it. Here the question about going to the doctor means mostly "Did you take your self-diagnosis seriously enough to talk to a medic about it?" And, still within this context, the question is much more like "I think I'm mentally ill, is that so?" than "I can't sleep and life is pointless, how do I fix that?"

Comment author: Vulture 06 May 2014 09:14:07PM 0 points [-]

Again, the majority of the general population would answer "no" to this

Are you sure about that?

Comment author: Lumifer 06 May 2014 09:19:48PM 0 points [-]

I don't have data, but my prior is fairly strong.

There are a lot of (temporarily) depressed teenagers, but it's rarely clinical and they rarely go for a formal evaluation to a psychiatrist or a psychotherapist.

How many people, do you think, go to a doctor and say "I think I'm mentally ill"?

Comment author: Vulture 06 May 2014 09:43:30PM *  0 points [-]

How many people, do you think, go to a doctor and say "I think I'm mentally ill"?

Ah, when you phrase it like that I realize that my estimate is rather low. Near vs. Far mode, I guess. Since it's relatively unlikely that someone would do that if they weren't actually mentally ill, and some mental illness is mild enough that one wouldn't bother, and a lot of the severe ones could prevent someone from consulting a doctor on their own, a pretty low proportion of the population seems reasonable.

Does that line up with your reasoning?

edit: I think that part of what was muddling me was that your original phrasing ("talked to a mental health professional about an assessment or a diagnosis") was sort of unclear, so I resorted to nearby heuristics rather than trying to parse it properly. We might want to fix that up before putting it on the survey.

Comment author: Lumifer 07 May 2014 01:06:41AM 0 points [-]

Well, I meant this in the context of being a follow-up to the previous question about self-diagnosis. So it mostly means "Did you take your self-diagnosis seriously enough to go to a doctor?"

Such a question outside of this context needs to be more precisely formulated, I think. As we were discussing with Nornagest, going to a doctor and saying "I can't sleep, life sucks, can you help with that?" is sufficiently common.

Comment author: NancyLebovitz 08 May 2014 03:42:44PM 1 point [-]

I think I'd want a second question about the severity of the disorder, including whether the person thinks the disorder has some advantages.

Comment author: Vulture 08 August 2014 02:18:17AM *  0 points [-]

Update: 3 months later, it just popped into my head that this ought to be a checklist, not a drop-down.

Comment author: NancyLebovitz 07 May 2014 03:29:12PM 2 points [-]
Comment author: Gunnar_Zarncke 06 May 2014 09:38:28PM 2 points [-]

Effective parenting advice: Babies' names affect life outcomes:

Names Race and Economists on Baby Name Wizard.

I'd guess that means choosing names that are

  • used in high status circles (sample celebrity babies names)

  • probably matching your ethnicity

  • sufficiently popular; best just starting to climb in popularity (not topping or declining)

  • or alternatively timeless (e.g. old Roman or biblical names)

Also choose multiple names because

  • it allows choosing the best fit later

  • allows for easier compromises with your spouse

  • allows satisfying more relatives

  • a high number of names indicates higher status in itself

Comment author: ChristianKl 07 May 2014 03:27:40PM 2 points [-]

used in high status circles (sample celebrity babies names)

You don't want Hollywood celebrities. Low status people name their kids after Hollywood celebrities. In Germany, giving kids Anglo-Saxon names is a sign of low status. http://www.sueddeutsche.de/leben/studie-kindernamen-und-vorurteile-von-wegen-schall-und-rauch-1.44178

According to that article good names for German children that make teachers think the child is high performing are: "Charlotte, Sophie, Marie, Hannah, Alexander, Maximilian, Simon, Lukas and Jakob". On the other hand bad names are: "Kevin, Chantal, Mandy, Justin and Angelina".

Comment author: Douglas_Knight 07 May 2014 03:51:48PM 1 point [-]

Gunnar said to name children after the children of celebrities, not directly after celebrities. But certainly using foreign celebrities is a very bad idea.

Comment author: ChristianKl 07 May 2014 04:27:31PM 2 points [-]

Gunnar said to name children after the children of celebrities, not directly after celebrities.

The kind of person who follows magazines that tell them about the names of celebrities still isn't high status.

It's been a while since I researched the topic in more detail. Artists don't wear suits to appear high status and they don't give their children high status names. Royals and aristocrats might be a valid choice if you live in a country that has them.

In the US, the way that people who go to Harvard and Yale name their children is what counts as a high status signal.

Comment author: Gunnar_Zarncke 07 May 2014 10:11:32PM 0 points [-]

But certainly using foreign celebrities is a very bad idea.

Yes. Use the children's names of your local high status people.

In the US the way that people who go to Harvard and Yale name their children is what's counts as high status signal.

I second that. Celebrity is misleading. I wanted to give a concrete example; "high status people" is too abstract.

Comment author: Gunnar_Zarncke 28 May 2014 08:43:41PM 0 points [-]
Comment author: djm 06 May 2014 12:22:55AM 2 points [-]

Anyone else doing the course Functional Programming Principles in Scala ? It started last week, but still should be time to join and get the first assignment done.

Comment author: Markas 06 May 2014 06:32:51PM 2 points [-]

This was the first Coursera course I took! Highly recommended, if anyone's still on the fence.

Comment author: Viliam_Bur 06 May 2014 08:59:38PM 1 point [-]

OK, I'll try. Signed in, but will look at it deeper on Thursday.

Comment author: iarwain1 05 May 2014 01:36:25PM 2 points [-]
Comment author: Lumifer 08 May 2014 03:10:17PM -3 points [-]

Soylent to food is what a blow-up doll is to sex.

Comment author: gjm 08 May 2014 03:51:44PM 9 points [-]

Isn't it rather the reverse? What in vitro fertilization is to sex, perhaps. It purports to offer the underlying biological benefits, but you have to give up the pleasurable sensations that normally attach to eating food.

I suppose really it's more complicated than that. You have (1) the biological need, which via evolution gives rise to (2) the pleasant sensations, and then cultural processes produce (3) all sorts of other stuff -- culinary traditions, sexual taboos, etc. And also (4) the usual way of satisfying the need and/or getting the pleasant sensations may be time-consuming or expensive or inconvenient.

Soylent removes 2, 3, 4. Blow-up dolls remove 1, 3, 4. IVF removes 2, 3, 4. (You might want to add "mostly" to some of those.) Make of that what you will.

Comment author: Lumifer 08 May 2014 04:01:47PM 0 points [-]

None of the above removes the biological need. Both Soylent and blow-up dolls satisfy the biological need.

Do note that humans have the biological need to have sex, not to impregnate (or be impregnated). Otherwise birth control would be a non-starter.

Comment author: army1987 17 May 2014 09:00:12AM *  0 points [-]

Do note that humans have the biological need to have sex, not to impregnate (or be impregnated).

IIRC, stereotype has it that some childless women above a certain age do have such a biological need, popularly known as "hearing one's biological clock ticking" or something like that.

Comment author: gjm 08 May 2014 04:27:04PM 0 points [-]

The "biological need" I have in mind is (obviously?) nutrition for eating, and reproduction for sex. Of course it's an individual need for eating, and a species-level (or gene-level) need for sex.

The biological need to have sex (as opposed to, say, "to reproduce") is parallel to the biological need to eat (as opposed to, say, "to be nourished"). Soylent doesn't do anything for that.

Comment author: Lumifer 08 May 2014 05:18:26PM 0 points [-]

The need for nutrition or reproduction exists only in the outside view.

From the point of the inside view, however, there is the need to eat things which will satisfy hunger and produce a feeling of satiation. There is no hardwired instinct for nutrition.

In the same way, from the inside view, there is the need to have sex and the impulse to care for children. Evolutionarily speaking, that's sufficient because birth control is a recent invention.

Comment author: Eugine_Nier 11 May 2014 10:36:54PM 1 point [-]

Evolutionarily speaking, that's sufficient because birth control is a recent invention.

I'm not convinced that's true. I believe something resembling condoms, made of cotton or animal intestine, goes as far back as ancient Egypt.

Comment author: Jayson_Virissimo 12 May 2014 01:50:36AM 3 points [-]

Avicenna's medical encyclopedia (available in Europe starting in the High Middle Ages) lists dozens of birth control methods, many of which probably even "worked".

Comment author: CellBioGuy 12 May 2014 01:52:10AM 1 point [-]

And chemical methods via plants going even further back, by analogy to modern and recent hunter gatherers.

Comment author: gjm 08 May 2014 05:47:40PM 0 points [-]

There is, indeed, no hardwired instinct for nutrition (which Soylent provides) but there is a hardwired instinct for eating tasty food (which Soylent doesn't provide).

How does the parallel with blow-up dolls go? There is no hardwired instinct for reproduction (which blow-up dolls don't provide), but there is a hardwired instinct for having orgasms (which blow-up dolls do provide, or at least may help some people with).

Seems almost exactly opposite to me.

Comment author: Lumifer 08 May 2014 06:28:45PM *  1 point [-]

there is a hardwired instinct for eating tasty food

There is a strong hardwired instinct for eating food, tasty or not. As I said, the criterion is that it stops you being hungry and makes you feel satiated. Soylent satisfies this instinct.

Whether Soylent provides adequate nutrition remains to be seen.

Comment author: DanielLC 12 May 2014 08:25:07PM 0 points [-]

Soylent provides ideal amounts of every known nutrient. It's possible that there's some obscure nutrient that people who live solely on Soylent haven't gone without long enough to have noticeable effects. Many people guard against this by having a regular meal once a day.

Comment author: Lumifer 12 May 2014 08:32:18PM 2 points [-]

Soylent provides ideal amounts of every known nutrient.

coughbullshitcough

Soylent provides the currently available estimates of the needed amounts of known essential nutrients for an average person of an average metabolism with no metabolic quirks.

Comment author: army1987 09 May 2014 08:17:57AM 0 points [-]

There is a strong hardwired instinct for eating food, tasty or not. As I said, the criterion is that it stops you being hungry and makes you feel satiated. Soylent satisfies this instinct.

Do you only ever buy the cheapest available food that stops you being hungry and makes you feel satiated? Why or why not?

Granted, some preferences may not be “hardwired”, but it doesn't make them any less real.

Comment author: wedrifid 09 May 2014 05:44:59PM 0 points [-]

Do you only ever buy the cheapest available food that stops you being hungry and makes you feel satiated? Why or why not?

Primarily because it is inconvenient.

Comment author: TylerJay 17 May 2014 12:16:39AM 2 points [-]

Your analogy actually seems more plausible to me than gjm's. With a blowup doll and with Soylent, you get a less pleasurable version of the action (sex and eating), while also fulfilling your needs/impulses. People feel more of a "need" to have sex than they feel a "need" to procreate.

Even if gjm's was better, I've never seen someone receive 5 downvotes for an analogy that they think isn't quite as good as another, so it seems more likely that you've been downvoted because people who like Soylent didn't like you talking mean about it.

Comment author: army1987 17 May 2014 10:58:41PM *  1 point [-]

Even if gjm's was better, I've never seen someone receive 5 downvotes for an analogy that they think isn't quite as good as another, so it seems more likely that you've been downvoted because people who like Soylent didn't like you talking mean about it.

Are you implying that's a bad thing? I didn't downvote Lumifer's comment, but if someone thought that a comment amounting to little more that ‘boo $thing!’ doesn't belong on LW, even in the Open Thread, even if it happens to be denotationally correct (e.g. “The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve”), I could see where they're coming from.

Comment author: TylerJay 18 May 2014 08:15:05PM *  1 point [-]

Are you implying that's a bad thing?

No, not necessarily; that's a good point. I just thought it was interesting that all of the subsequent discussion centered around the merits of the analogies, but it seems that what most people really cared about was the pro- or anti-Soylent positions.

Comment author: eli_sennesh 18 May 2014 01:41:26PM 1 point [-]

Short question, is Newcomb's Problem still considered an open issue, or has the community settled on a definite decision theory that will yield the right answers yet?

Comment author: army1987 18 May 2014 06:38:35PM 1 point [-]

From the 2013 survey results:

  • Don't understand/prefer not to answer: 92, 5.6%
  • Not sure: 103, 6.3%
  • One box: 1036, 63.3%
  • Two box: 119, 7.3%
  • Did not answer: 287, 17.5%
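(The percentages follow from the raw counts; a quick sanity check, assuming these five categories are exhaustive for the 1637 respondents who saw the question:)

```python
# Recompute the reported percentages from the raw counts above.
counts = {
    "Don't understand/prefer not to answer": 92,
    "Not sure": 103,
    "One box": 1036,
    "Two box": 119,
    "Did not answer": 287,
}
total = sum(counts.values())  # 1637 respondents
percentages = {k: round(100 * n / total, 1) for k, n in counts.items()}
# Reproduces the survey's figures, e.g. "One box" comes out to 63.3%.
```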
Comment author: eli_sennesh 18 May 2014 07:39:07PM 1 point [-]

Well that's nice, but I had meant: have we come to a consensus on what sort of decision theory will auto-generate the right result, rather than merely writing down the result of the decision theory preinstalled in our brains and calling it correct? Has the "Paradox" part been formally resolved?

Because, you know, I don't want to post about it and then get told my thoughts were already thought five years ago and didn't actually help solve the problem.
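For concreteness, here is the toy expected-value calculation that makes one-boxing look attractive, using the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and a hypothetical predictor accuracy p. To be clear, this is just the evidential-expected-value sketch; it doesn't by itself resolve the causal-vs-evidential dispute that the "paradox" is about.

```python
def ev_one_box(p):
    # Predictor correct with probability p: opaque box contains $1M.
    return p * 1_000_000

def ev_two_box(p):
    # Predicted two-box (prob p): only the $1,000.
    # Mispredicted (prob 1 - p): $1,000 plus the $1,000,000.
    return p * 1_000 + (1 - p) * 1_001_000

# One-boxing has higher evidential expected value whenever
# p * 1e6 > p * 1e3 + (1 - p) * 1.001e6, i.e. p > 0.5005.
break_even = 1_001_000 / 2_000_000  # = 0.5005
```

Even a barely-better-than-chance predictor tips the evidential calculation toward one-boxing, which is part of why the problem is so contentious.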

Comment author: Douglas_Knight 19 May 2014 11:20:58PM 0 points [-]

Incarnations of UDT sufficient for this problem have been made completely formal.

Comment author: eli_sennesh 20 May 2014 07:02:44PM 1 point [-]

Ummm.... link please?

Comment author: army1987 19 May 2014 07:39:09AM 0 points [-]

Not for any mathematically rigorous value of “what sort”, as far as I can tell.

Comment author: eli_sennesh 19 May 2014 10:30:30AM 1 point [-]

Great, I'm drafting a post.

Comment author: ChristianKl 27 May 2014 12:22:29PM 0 points [-]

A good decision theory performs well across many problems, not just one. Having a decision theory that solves Newcomb's Problem but performs poorly on other problems isn't helpful.

There isn't yet the ultimate decision theory that solves everything so I don't see how individual problems can be declared solved.

Comment author: JoshuaFox 06 May 2014 04:49:48PM *  -3 points [-]