Comment author: Kevin 19 April 2010 01:37:47AM *  2 points [-]

I think Less Wrong has an abnormally high percentage of lurkers: if participating at any web site is intimidating, participating at Less Wrong is especially so, because of the high level of discourse and English linguistic proficiency.

For the strictest definition of lurker, if you have registered for an account you are no longer a lurker, but the definition is really not important.

Comment author: gregconen 19 April 2010 07:40:20AM 1 point [-]

Also, the karma system adds another barrier, at least in my mind. Knowing that your comment is going to be explicitly judged and your score added to a "permanent record" can be intimidating.

Comment author: gregconen 07 April 2010 09:08:50PM 1 point [-]

If you haven't already, do check out Eby's Instant Irresistible Motivation video for learning how to create positive motivation.

Interesting. In fact, it seems to mesh with the process I've successfully used to do things like cleaning my desk.

Unfortunately, many of the tasks I have to do don't lend themselves to the visualization in step 1. How does one visualize having studied for an exam, or completed an exercise routine?

Comment author: [deleted] 07 April 2010 12:10:00AM 10 points [-]

It's very hard to compare one example of genocide to another, particularly when you are comparing events that occurred in different eras. As the genocides of the 20th century proved, technology changes the game by making it easier to commit systematic mass murder. Therefore, comparing body counts or even the frequency of mass slaughter doesn't truly compare two ideologies.

In response to comment by [deleted] on Single Point of Moral Failure
Comment author: gregconen 07 April 2010 02:23:59AM *  4 points [-]

technology changes the game by making it easier to commit systematic mass murder.

Not to mention the simple expedient of having more people around.

Comment author: Emile 06 April 2010 09:55:42AM 4 points [-]

I would say that Gall's Law is about the design capacities of human beings (like Dunbar's Number), or is something like "there's a threshold to how much new complexity you can design and expect to work", with the amount of complexity being different for humans, superintelligent aliens, chimps, or Mother Nature.

(the limit is particularly low for Mother Nature - she makes smaller steps, but gets to make many more of them)

Comment author: gregconen 06 April 2010 06:38:42PM *  1 point [-]

That's not my point. My point is that Gall's Law is unfalsifiable by anything short of Omega converting its entire light cone into computronium/utilium in a single, Planck-time step.

Edit: Not to say that Gall's Law can't be useful to keep in mind during engineering design.

Comment author: gregconen 04 April 2010 05:07:43PM *  2 points [-]

Deleted as a repeat.

Comment author: gregconen 04 April 2010 04:52:19PM 12 points [-]

Do not imagine that mathematics is hard and crabbed, and repulsive to common sense. It is merely the etherealization of common sense.

William Thomson, Lord Kelvin

Comment author: Tyrrell_McAllister 02 April 2010 11:57:53PM 2 points [-]

I'm not sure.

Isn't the first rocket or airplane also built on simple technologies?

I'm not saying that the first rocket and first airplane falsified Gall's Law. I'm saying that, had the space shuttle, in the form in which it was actually built, been the first rocket or the first airplane, it would have falsified Gall's Law.

Comment author: gregconen 03 April 2010 12:22:50AM *  5 points [-]

Suppose a hyperintelligent alien race did build a space shuttle equivalent as their first space-capable craft, and then went on to build interplanetary and interstellar craft.

Alien 1: The [interstellar craft, driven by multiple methods of propulsion and myriad components] disproves Gall's Law.

Alien 2: Not at all. [Craft] is a simple extension of well-developed principles like the space shuttle and the light sail.

You can simply define a "working simple system" as whatever you can make work, making that a pure tautology.

Comment author: thomblake 19 March 2010 12:12:52AM 0 points [-]

I wonder if anyone else is reading this...

Comment author: gregconen 19 March 2010 12:55:31AM 0 points [-]

You should probably make an explicit karma balance post for this.

In response to comment by Roko on Let There Be Light
Comment author: Alicorn 18 March 2010 08:27:10PM *  2 points [-]

Surely having accurate positive self-beliefs is a win over having inaccurate positive self-beliefs, even if having inaccurate positive self-beliefs is a loss compared to having accurate negative self-beliefs. I don't suggest that you should become luminous enough to say, "Wow, I suck in the following ways!" and then quit.

In response to comment by Alicorn on Let There Be Light
Comment author: gregconen 18 March 2010 08:30:58PM 0 points [-]

I think the idea is to have both accurate and inaccurate positive self-beliefs, and no negative self-beliefs, accurate or otherwise.

On whether this is desirable or even possible, I take no stance.

Comment author: byrnema 18 March 2010 07:35:44PM *  6 points [-]

I am generally confused by the metaethics sequence, which is why I didn't correct Pengvado.

at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.

Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?

So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, 'My preferences aren't logical. They evolved.'

If there's a difference in two positions in the moral landscape, we needn't justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don't consider logical consistency to be the most important moral principle.

Comment author: gregconen 18 March 2010 08:03:39PM 3 points [-]

But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?

Yes.

I have a strong preference for a simple set of moral preferences, with minimal inconsistency.

I admit that the idea of holding "killing babies is wrong" as a separate principle from "killing humans is wrong", or holding that "babies are human" as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.
