
Comment author: adam_strandberg 25 September 2014 04:50:07PM 3 points

Yes, thank you for writing this; I've been meaning to write something like it for a while and now I don't need to! I initially brushed Newcomb's Paradox off as an edge case, and it took me much longer than I would have liked to realize how universal it is. A discussion of this type should be included with every introduction to the problem, to prevent people from treating it as just a pointless philosophical thought experiment.

Comment author: adam_strandberg 05 August 2014 04:53:55AM 1 point

You may find this tool useful for making nicer drawings of graphs:

http://sandbox.kidstrythisathome.com/erdos/

Comment author: NancyLebovitz 04 August 2014 11:11:23AM 0 points

A couple of minutes in, the podcast mentions the somewhat dubious idea that obesity spreads through social networks. Does this cast much doubt on the rest of the piece?

Comment author: adam_strandberg 04 August 2014 07:28:15PM * 0 points

As far as I can tell from the evidence given in the talk, contagious spreading of obesity is a plausible but not directly proven idea. Its plausibility comes from the more direct tests that he gives later in the talk, namely the observed spread of cooperation or defection in iterated games.

However, I agree that it's probably important not to talk too quickly about contagious obesity, because (a) they haven't done the more direct interventional studies that would show whether it's true, and (b) speculating in public about contentious social issues before you have a solid understanding of what's going on leads to bad things. He could have been more explicit that we're not sure which causal effects produce the correlations we see; I caught it, but I suspect people paying less attention would come away thinking the causal model had been proved.

Comment author: ArisKatsaris 01 August 2014 08:42:25PM 0 points

Other Media Thread

Comment author: adam_strandberg 03 August 2014 06:33:15AM 1 point

The Moire Eel - move your cursor around and see all the beautiful, beautiful moiré patterns.

Comment author: ArisKatsaris 01 August 2014 08:42:30PM 0 points

Podcasts Thread

Comment author: adam_strandberg 03 August 2014 05:19:43AM 0 points

Social Networks and Evolution: a great Oxford neuroscience talk. I will also shamelessly push this blog post that I wrote about the connection between the work in the lecture and Jared Diamond's thesis that agriculture was the worst mistake in human history.

Comment author: RichardKennaway 22 July 2014 11:00:28AM 6 points

Some putatively Knightian uncertainty and ambiguity aversion can be explained as maximising expected utility when playing against an adversary.

For the Ellsberg paradox, the person offering the first bet can minimise his payout by putting no black balls in the urn. If I expect him to do that (and he can do so completely honestly, since he volunteers no information about the method used to fill the urn) then I should bet on red, for a 1/3 chance of winning, and not black, for a zero chance.

The person offering the second bet can minimise his payout by putting no yellow balls in the urn. Then black-or-yellow has a 2/3 chance and red-or-yellow a 1/3 chance and I should bet on black-or-yellow.
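This adversarial reading can be sketched numerically. A minimal illustration (the function names are my own, assuming the standard 90-ball urn: 30 red balls, with the remaining 60 split between black and yellow at the offerer's discretion):

```python
# Ellsberg urn: 90 balls, 30 red, and 60 split between black and
# yellow however the person offering the bet likes.

def win_prob(bet, n_black):
    """Chance of winning a bet on the given colours when the urn
    contains n_black black balls (0 <= n_black <= 60)."""
    counts = {"red": 30, "black": n_black, "yellow": 60 - n_black}
    return sum(counts[c] for c in bet) / 90

def worst_case(bet):
    """Win probability against an offerer who fills the urn adversarially."""
    return min(win_prob(bet, b) for b in range(61))

# First bet: red vs. black.  An adversary puts in no black balls,
# so red guarantees 1/3 while black can be driven to 0.
print(worst_case(("red",)))              # 1/3 regardless of the fill
print(worst_case(("black",)))            # 0.0, at n_black = 0

# Second bet: red-or-yellow vs. black-or-yellow.  An adversary puts
# in no yellow balls, so red-or-yellow drops to 1/3, while
# black-or-yellow is 2/3 no matter how the urn is filled.
print(worst_case(("red", "yellow")))     # 1/3
print(worst_case(("black", "yellow")))   # 2/3
```

So the "ambiguity-averse" choices (red, then black-or-yellow) are exactly the maximin choices against an adversarial urn-filler.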

The lesson here is, don't take strange bets from strangers. I'd quote again the lines from Guys And Dolls about this, but the Google box isn't helping me find when it was last in a quotes thread. (Is there some way the side bar could be excluded from Google's search spiders? It's highly volatile content and shouldn't be indexed.)

In the tennis example, someone betting on the mysterious game or the unbalanced game is in the position of someone betting on horse races who knows nothing about horses. He should decline to bet, because while it is possible to beat the bookies, it's a full-time job to maintain the necessary knowledge of horse-racing.

Comment author: adam_strandberg 01 August 2014 04:28:45AM 2 points

This is exactly what I was thinking the whole time. Is there any example of supposed "ambiguity aversion" that isn't explained by this effect?

Comment author: Yvain 22 July 2014 04:21:27AM * 49 points

"Hard mode" sounds too metal. The proper response to "X is hard mode" is "Bring it on!"

Therefore I object to "politics is hard mode" for the same reason I object to "driving a car with your eyes closed is hard mode". Both statements are true, but phrased to produce maximum damage.

There's also a way that "politics is hard mode" is worse than playing a video game on hard mode, or driving a car on hard mode. If you play the video game and fail, you know and you can switch back to an easier setting. If you drive a car in "hard mode" and crash into a tree, you know you should keep your eyes open the next time.

If you discuss politics in "hard mode", you can go your entire life being totally mind-killed (yes! I said it!) and just think everyone else is wrong, doing more and more damage each time you open your mouth and destroying every community you come in contact with.

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it - and ironically, those are probably the same handful who have a tiny chance of navigating the minefield. Everyone else is just going to say "No, I'm high-enough level, YOU'RE the one who needs to bow out!"

Both "hard mode" and "mind-killer" are intended to convey a sense of danger, but the first conveys a fun, exciting danger that cool people should engage with as much as possible in order to prove their worth, and the latter conveys an extreme danger that can ruin everything and which not only clouds your faculties but clouds the faculty to realize that your faculties are clouded. As such, I think "mind-killer" is the better phrase.

EDIT: More succinctly: both phrases mean the same thing, but with different connotations. "Hard mode" sounds like we should accord more status to politics, "mind-killer" sounds like we should accord less. I feel like incentivizing more politics is a bad idea and will justify this if anyone disagrees.

Comment author: adam_strandberg 31 July 2014 11:31:11PM 0 points

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it.

This is precisely why people should be encouraged to do it more. I've found that the more you admit to a lack of ability where you don't have the ability, the more people are willing to listen to you where you do.

I also see interesting parallels to the relationship between skeptics and pseudoscience, where we replace skeptics -> rationalists, pseudoscience -> religion. Namely, "things that look like politics are the mindkiller" works as "things that look like pseudoscience are obviously dumb". It provides an opportunity to view yourself as smarter than other people without thinking too hard about the issue.

Comment author: adam_strandberg 31 July 2014 09:50:04PM 4 points

1) This is fantastic; I keep meaning to read more on how to actually apply Highly Advanced Epistemology to real data, and now I'm learning about it. Thanks!

2) This should be on Main.

3) Does there exist an alternative in the literature to the notation Pr(A = a)? I hadn't realized until now how little sense the equal sign makes there. In standard usage, the equal sign refers either to literal equivalence (or isomorphism), as in functional programming, or to variable assignment, as in imperative programming. This operation is obviously not literal equivalence (the random variable A is not equal to the value a), and it's only sort of like variable assignment. We do not erase our previous information about A: we want it to be around when we talk about observing other values of A.

In analogy with Pearl's "do" notation, I propose an "observe" notation, where Pr(A = a) would be written as Pr(obs_A (a)) and read as "the probability that value a is observed for variable A", so that we don't overload our precious equal sign. (The overloading with equivalence vs. variable assignment is already stressful enough for the poor piece of notation.)

I'm not proposing that you change your notation for this sequence, but I feel like this notation might serve for clearer pedagogy in general.

Comment author: John_Maxwell_IV 08 July 2014 06:18:45AM 0 points

One idea: figure out why specifically you want to learn neuroscience (for some project? thing you want to write? question you want to answer?) and then let your learning facilitate the thing you are doing as a test of whether you're learning well or not. (E.g. post an essay about your neuroscience-based conclusions on an online community for neuroscientists and see what they think.) Neuroscience is a bit of a bad fit for this kind of learning by doing though.

Comment author: adam_strandberg 09 July 2014 05:41:14AM 0 points

That is the general approach I've been taking on the issue so far: basically, I'm interested in learning about consciousness, and I've been going about it by reading papers on the subject.

However, part of the issue is that I don't know what I don't know. I can look up unfamiliar terms that show up in papers, but the literature presumably makes unspoken inferences based on "obvious" background information.

Furthermore, since I have a bias toward novelty and flashiness, I may miss things that blatantly contradict results any well-trained neuroscientist or cognitive scientist would know, and end up believing something that couldn't be true.

Do you have recommendations for places where non-experts can ask more knowledgeable people about neuro/cog sci? There is a cognitive sciences Stack Exchange, but it appears to be poorly trafficked: it averages about one posting per week.

Comment author: adam_strandberg 09 July 2014 04:14:07AM * 6 points

(How many different DAGs are possible if you have 600 nodes? Apparently, >2^600.)

Naively, I would expect it to be far larger: closer to 600^600 (the scale of n^n) than to 2^600. For comparison, the number of all directed graphs on 600 labeled nodes is 2^(600*599), since each ordered pair of distinct nodes independently either has an edge or doesn't.

And in fact, it is some complicated thing that seems to scale much more like n^n than like 2^n: http://en.wikipedia.org/wiki/Directed_acyclic_graph#Combinatorial_enumeration
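For what it's worth, the exact count is easy to compute from the inclusion-exclusion recurrence given on that page (a sketch; count_dags is my own helper name):

```python
from math import comb

def count_dags(n):
    """Number of labeled DAGs on n nodes (OEIS A003024), via the
    recurrence a(n) = sum_{k=1..n} (-1)^(k-1) C(n,k) 2^(k(n-k)) a(n-k),
    which conditions on the k nodes with no incoming edges: each such
    node may point to any subset of the other n-k nodes."""
    a = [1]  # a(0) = 1: the empty graph
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k - 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

print([count_dags(n) for n in range(5)])  # [1, 1, 3, 25, 543]

# Even at n = 20 the count already dwarfs both 2^n and n^n:
assert count_dags(20) > 20 ** 20 > 2 ** 20
```

So ">2^600" is a wild understatement: the count grows even faster than n^n.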
