All of ellx's Comments + Replies

could someone please explain this one?

Milosz is obviously talking about Communism and the philosophy it was based on. (If you haven't read The Captive Mind, it's pretty good albeit obviously dated).

The lesson is that philosophy can be Serious Business and you ignore bad philosophy at your own peril. To paraphrase the famous Trotsky paraphrase: You may not be interested in diseased Philosophy, but diseased Philosophy is interested in you.

It seems to me that on LessWrong there is an overemphasis on status as a human motivator. For example, I think it's possible for a scientist to want to make an important discovery not to gain status in the scientific community, but for the beauty of knowledge.

It seems similar to the 'if you're a hammer, every problem looks like a nail' situation: 'doing it for status' is such a readily available explanation that it gets overapplied.

thoughts?

2Desrtopa
It's certainly possible, but that doesn't mean that status isn't a powerful motivator, and one which we're far more likely to underestimate. The "hammer that makes you see all problems as nails" bit is a description I've used myself, though, particularly with regard to Robin Hanson's treatment of status and signalling. On Overcoming Bias, far more than here, I get the impression that a lot of the essays develop out of posing a question and asking "is there a way I can use status and signalling to explain this?"

"Possible" is not a refutation of a general statement, only of an absolute one.

Rather, I suspect the emphasis is there to compensate: nerds of various sorts (who make up most of the LessWrong audience) place far less emphasis on status than most people do, and thus fail to appreciate the overwhelming power of tribal politics in almost every human interaction.

Remember: we grew this great big brain just to do tribal politics. We grew general intelligence as a better way to do tribal politics. We discovered quantum mechanics and built a hug... (read more)

4JoshuaZ
I'm unsure of whether it is overused. However, I'm not convinced that the heavy emphasis on status pays much of its rent. What do the overarching status hypotheses predict that we would expect not to see if they weren't the case?

It seems to me that on LessWrong there is an overemphasis on status as a human motivator. For example, I think it's possible for a scientist to want to make an important discovery not to gain status in the scientific community, but for the beauty of knowledge.

I think you miss the point of how status is related to motivation. Relatively few people actually think "I want status and so I will do X". Instead, they simply want to do X because that is what they feel like doing. However, when we wish to model or predict how humans will behave th... (read more)

Yeah, I was thinking that this could accurately be titled 'the article that tries to convince you to stop reading it'.

Is your post missing some of what you intended it to say?

If you wanted someone on LessWrong to know about and be able to confirm that this game has rubberband AI, then it's obviously very off-topic here.

-3Carinthium
I wanted somebody to tell me if it did or not, or to determine whether, from what I already knew, the conclusion was already obvious; as I said, I'm aware of my own bias.

I'm curious what people's opinions are of Jeff Hawkins' book On Intelligence, and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further evidence for this or something. Cheers.

2nhamann
With regard to further reading, you can look at Hawkins' most recent (that I'm aware of) paper, "Towards a Mathematical Theory of Cortical Micro-Circuits". It's fairly technical, however, so I hope your math/neuroscience background is strong (I'm not knowledgeable enough to get much out of it). You can also take a look at Hawkins' company Numenta, particularly the Technology Overview. Hierarchical Temporal Memory is the name of Hawkins' model of the neocortex, which IIRC he believes is responsible for some of the core prediction mechanisms in the human brain. Edit: I almost forgot, this video of a talk he presented earlier this year may be the best introduction to HTM.
1gwern
Intelligence-as-prediction/compression is a pretty familiar idea to LWers; there are a number of posts on it which you can find by searching, or you can try looking into the bibliographies and links in:

* http://en.wikipedia.org/wiki/Marcus_Hutter
* http://en.wikipedia.org/wiki/Hutter_Prize
* http://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber

(I have no comments anent On Intelligence specifically. I remember it as being pretty vague as to specifics, and not very dense at all - unobjectionable.)
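As a toy illustration of the prediction-compression link behind those references (not from this thread; the names and setup below are my own), a compressor assigns short codes to data it can predict. A string with learnable structure compresses far better than noise, and compressed sizes can even serve as a crude similarity measure (normalized compression distance):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed UTF-8 encoding of s."""
    return len(zlib.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: near 0 for strings the compressor
    can predict from one another, approaching 1 for unrelated strings."""
    ca, cb = compressed_size(a), compressed_size(b)
    cab = compressed_size(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# A string with predictable structure compresses far better than noise
# of the same length:
predictable = "ab" * 100
random.seed(0)
noise = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(200))

print(compressed_size(predictable), compressed_size(noise))
print(ncd(predictable, predictable), ncd(predictable, noise))
```

This is of course only a sketch of the intuition; the Hutter-style formalization uses ideal (incomputable) compressors, not zlib.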

I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.

For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimal frequency. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it's time for a new one, but for the sake of the example let's assume that Eliezer was wrong and that the... (read more)

3MartinB
Being right on group effects is difficult. Is there a consistent path for what LW wants to be?

a) a rationalist site filled up with meta topics and examples
b) a) plus detailed treatments of some important topics
c) open to everything as long as reason is used

and so on. I personally like and profit from the discussion of akrasia methods, but it might be detrimental to the main target of the site. Also, I would very much like to see a canon develop for knowledge that LWers generally agree upon, including, but not limited to, the topics I currently care about myself.

Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful, and if you are filled up with weird ideas then you get kicked out, which might be bad for other goals you have. It would be great to have a place for any idea to be examined for right and wrong.

Also, don't forget to consider that the cat is conscious and might not like getting hit by pennies. :)

I never really got into playing StarCraft because of the primitive interface; I could never really enjoy playing it. But I am into watching Korean matches with English commentary on YouTube.

I think that the primitive interface makes the game less enjoyable for me, but doesn't add 'fake difficulty'. I like that it's a very difficult game to play well in terms of micro and macro, and on top of that StarCraft is also rich in strategy and 'tradition' (for some reason I like that StarCraft is a very old game).