
Comment author: Mitchell_Porter 06 August 2012 11:56:42AM 8 points [-]

When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God, and for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed, one should not think that Will is just employing a traditional language in order to express a very new concept; you need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about - that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this - I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood - but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.

Comment author: SusanBrennan 06 August 2012 12:29:06PM 1 point [-]

Thank you for the clarification, and my apologies to Will. I do have some questions, but writing a full post from the smartphone I am currently using would be tedious. I'll wait until I get to a proper computer.

In response to comment by [deleted] on The curse of identity
Comment author: Will_Newsome 06 August 2012 10:46:10AM *  -2 points [-]

I mean that, and an infinite number of questions more and less like that, categorically, in series and in parallel. (I don't know how to interpret "<gd&rVF!>", but I do know to interpret it that it was part of your point that it is difficult to interpret, or analogous to something that is difficult to interpret, perhaps self-similarly, or in a class of things that is analogous to something or a class of things that is difficult to interpret, perhaps self-similarly; also perhaps it has an infinite number of intended or normatively suggested interpretations more or less like those.)

(This comment also helps elucidate my previous comment, in case you had trouble understanding that one. If you can't understand either of these comments then maybe you should read more of the Bible, or something; otherwise you stand a decent chance of ending up in hell. This applies to all readers of this comment, not just army1987. You of course have a decent chance of ending up in hell anyway, but I'm talking about marginals here, naturally.)

Comment author: SusanBrennan 06 August 2012 11:10:08AM *  0 points [-]

otherwise you stand a decent chance of ending up in hell.

Comments like this are better at creating atheists than at converting them.

Comment author: Shanya 30 July 2012 01:05:47PM 0 points [-]

A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

What does framster mean?

Comment author: SusanBrennan 30 July 2012 01:25:55PM 4 points [-]

That's the point.

Comment author: jacoblyles 18 July 2012 08:59:02PM 1 point [-]

I follow the virtue-ethics approach: I do actions that make me more like the person I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren't altruistic, then I wouldn't be making myself into the person I want to be.

It's a very different framework from util maximization, but I find it's much more satisfying and useful.

Comment author: SusanBrennan 18 July 2012 09:48:14PM *  1 point [-]

It's a very different framework from util maximization, but I find it's much more satisfying and useful

And if it weren't more satisfying and useful, would you still follow it?

Comment author: pjeby 15 July 2012 07:28:09PM 12 points [-]

Somebody (possibly an LWer?) proposed showing up to the car dealership without any cash or credit cards, just a check made out for the agreed-upon amount; the dealer now has no choice but to either take the money or forget about the whole deal.

While I don't remember this specific example anywhere on LessWrong, I actually did this last February. I vaguely recall some of the inspiration being discussions of strategy on LW, specifically the one about removing your car's steering wheel in order to win at the game of "Chicken".

(The rest of the inspiration was that I didn't trust the dealer not to screw with something once I got there, and a strong lack of desire to get into any sort of argument about it.)

Comment author: SusanBrennan 15 July 2012 09:45:00PM *  3 points [-]

Is this the post you were thinking of?

EDIT: Never mind. I'm pretty sure Gwern got the right one.

Comment author: johnlawrenceaspden 15 July 2012 03:35:11PM 2 points [-]

Well, I hate to say this for obvious reasons, but if the magic sugar water cured my hayfever just once, I'd try it again, and if it worked again, I'd try it again. And once it had worked a few times, I'd probably keep trying it even if it occasionally failed.

If it worked consistently, I'd start looking for better explanations. If no-one could offer one, I'd probably start believing in magic.

I guess not believing in magic has something to do with not expecting this sort of thing to happen.

Comment author: SusanBrennan 15 July 2012 04:13:09PM 2 points [-]

The placebo effect strikes me as a decent enough explanation.

Comment author: Multiheaded 12 May 2012 07:55:32AM *  -1 points [-]

I don't care all that much about political democracy; what I meant is that Japan, India, or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.

edit: derp

Comment author: SusanBrennan 12 May 2012 09:36:13PM 4 points [-]

even Turkey did NOT require some particular ruthlessness to modernize.

Could you explain the meaning of this sentence, please? I'm not sure I have grasped it correctly. To me it sounds like you are saying that there was no ruthlessness involved in Atatürk's modernizing reforms. I assume that's not the case, right?

In response to comment by TimS on A sense of logic
Comment author: [deleted] 12 May 2012 06:10:26PM 6 points [-]

Tell that person that feathers are light; what is light cannot be dark; therefore feathers cannot be dark.

In response to comment by [deleted] on A sense of logic
Comment author: SusanBrennan 12 May 2012 09:29:21PM 0 points [-]

This is my favorite response so far.

Comment author: Viliam_Bur 04 May 2012 08:52:14AM *  2 points [-]

The theory of "rational addiction" seems like an example of how, for any (consistent) behavior, you can find a utility function that the behavior maximizes. But that does not mean it is really a human utility function.

it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational

For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.

Comment author: SusanBrennan 04 May 2012 09:50:47AM 2 points [-]

For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.

True. I guess it's just that such actions can often lead to a large amount of negative utility according to my own utility function, which I like to think of as more universalist than egoist. But people who are selfish, rational and intelligent can, of course, cause severe problems (according to the utility functions of others, at least). This, I gather, is fairly well understood. That's probably why those characteristics describe the greater proportion of Hollywood villains.

In response to comment by [deleted] on Rationality Quotes May 2012
Comment author: nykos 03 May 2012 11:48:47AM *  5 points [-]

Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world are largely correct. Any possible political solution must begin from Moldbug's diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.

One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also to helping the people who are less fortunate in the genetic lottery). The followers of a religion that holds the Equality of Man as a primary tenet will suppress any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don't have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.

Comment author: SusanBrennan 03 May 2012 12:58:19PM 7 points [-]

Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people.

That seems a little bit simplistic. How many problems have been caused by smart people attempting to implement plans that seem theoretically sound but fail catastrophically in practice? The not-so-smart people are not inclined to come up with such plans in the first place. In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.

While people may not agree with me on this, I find that the theory of "rational addiction" within contemporary economics carries many of the hallmarks of this way of thinking. It is mathematically justified using impressively complex models and selective post-hoc definitions of terms, and it makes a number of empirically unfalsifiable claims. You would have to be fairly intelligent to be persuaded by the mathematical models in the first place, but that doesn't make the theory right.

Basically, my point is: it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational. The problems caused by the former are smaller in scale.
