Comment author: Unnamed 02 April 2010 02:05:58AM *  38 points [-]

Vote this comment up if you're opposed to moving discussions from open threads to the sub-Reddit.

Karma balance is here

Comment author: Unnamed 02 April 2010 02:05:13AM *  20 points [-]

Let's have a poll. Vote this comment up if you're in favor of using the sub-Reddit for assorted discussions instead of using open threads.

Other voting option is here, Karma balance is here

Comment author: alyssavance 02 April 2010 02:02:32AM 3 points [-]

I'm proposing that we set up a new discussion community such that Less Wrongers have a place to talk about off-topic stuff other than Open Thread (which is hugely overcrowded). If either LW or the subreddit crashes, it should have no effect on the other.

Comment author: JGWeissman 02 April 2010 02:02:27AM 7 points [-]

I would prefer if, before I click on the link, the comment tells me something more than someone disagrees with Cyan on the internet.

Good information to include would be the nature of the disagreement (what competing claim is made) and a summary of the reasoning that backs up that competing claim.

I further note that your link points to a list of articles, none of which have the name you cited. This is not helpful.

Comment author: ata 02 April 2010 02:02:10AM 9 points [-]

I think having the discussions fragmented across multiple sites will just lead to confusion. Personally, I wish the LW software supported subreddits. (Haven't gotten around to familiarizing myself with the codebase yet, but if any of you have, do you know if they actually removed subreddit support when they forked it, or does it currently just lack an interface to them?)

Comment author: alyssavance 02 April 2010 02:00:35AM *  2 points [-]

"I agree that there ought to be a better system for managing side conversations that aren't centered around the top-level posts, but I don't think this is the way."

Well, what would you recommend? I'm certainly open to suggestions. Or, if you want to build a better system yourself, go right ahead, we'll all be better off for it. But "the alternative you're proposing has flaws" does not justify the status quo if the status quo has even more flaws.

"The programmer in me says that this is a hack which would have negative side effects if used,"

Such as?

"and the evolved tribal instincts in me say that this is a power grab outside the bounds of your status."

Do you even know what my status is? It's not all that high, but it's more likely than not that you don't know, and you shouldn't presume that people other than you have low status.

Comment author: neq1 02 April 2010 01:59:15AM 5 points [-]

""Not evil, but longing for that which is better, more often directs the steps of the erring"

Theodore Dreiser, Sister Carrie

Comment author: RobinZ 02 April 2010 01:58:50AM 6 points [-]

Wait, are you proposing that a part of Less Wrong be hosted off-site? I'm not sure that's a good idea. Mirrors are one thing, but multiple single-points-of-failure are another entirely.

Comment author: RobinZ 02 April 2010 01:58:12AM 2 points [-]

The argument from (b)* is one of the stronger ones I've heard against FAI.

* Not to be confused with the argument from /b/.

In response to comment by Cyan on The I-Less Eye
Comment author: utilitymonster 02 April 2010 01:45:01AM 0 points [-]

I think this guy disagrees: Weatherson, Brian. Should We Respond to Evil with Indifference? Philosophy and Phenomenological Research 70 (2005): 613-35. Link: http://brian.weatherson.org/papers.shtml

Comment author: jimrandomh 02 April 2010 01:44:10AM *  4 points [-]

I agree that there ought to be a better system for managing side conversations that aren't centered around the top-level posts, but I don't think this is the way. The programmer in me says that this is a hack which would have negative side effects if used, and the evolved tribal instincts in me say that this is a power grab outside the bounds of your status.

Comment author: JGWeissman 02 April 2010 01:17:42AM 0 points [-]

No, I am not a dualist.

"If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?"

To be precise, I value positive conscious experience more than cheesecake, and negative conscious experience less than cheesecake.

I assign value to things according to how they are experienced, and consciousness is required for this experience. This has to do with the abstract properties of conscious experience, and not with how it is implemented, whether by mathematical structure of physical arrangements, or by ontologically basic consciousness.

Comment author: Kevin 02 April 2010 01:16:51AM *  2 points [-]

I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.

Comment author: jimrandomh 02 April 2010 01:16:13AM *  8 points [-]

What can be asserted without evidence can be dismissed without evidence.

-- Christopher Hitchens

Accuracy was sacrificed for a pleasant parallel construction. Anything can be so asserted.

And, without supporting evidence, such assertions demonstrate nothing.

The mere fact that an assertion has been made is, in fact, evidence. For example, I will now flip a coin five times, and assert that the outcome was THHTT. I will not provide any evidence other than that assertion, but that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5. Most assertions don't come with evidence provided unless you go looking for it. If nothing else, most assertions have to be unsupported because they're evidence for other things and the process has to bottom out somewhere.

Now, as a matter of policy we should encourage people to provide more evidence for their assertions wherever possible, but that is entirely separate from the questions of what is evidence, what evidence is needed, and what is demonstrated by an assertion having been made.
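The update described above can be made concrete with a small sketch. The numbers here are hypothetical: it assumes the speaker reports the true sequence with some probability `honesty` and otherwise asserts a uniformly random wrong sequence. Under any `honesty` above pure chance, the posterior probability that the sequence really was THHTT exceeds the 1/2^5 base rate.

```python
PRIOR = 1 / 2**5  # probability of any particular 5-flip sequence, e.g. THHTT

def posterior(honesty):
    """P(sequence was THHTT | speaker asserted THHTT).

    Assumes the speaker asserts the true sequence with probability
    `honesty`, and otherwise asserts one of the 2^5 - 1 wrong
    sequences uniformly at random (an illustrative model, not the
    commenter's).
    """
    p_assert_if_true = honesty
    p_assert_if_false = (1 - honesty) / (2**5 - 1)
    # Total probability of hearing the assertion "THHTT" at all:
    p_assert = PRIOR * p_assert_if_true + (1 - PRIOR) * p_assert_if_false
    # Bayes' rule:
    return PRIOR * p_assert_if_true / p_assert

print(posterior(0.5))     # 0.5, far above the base rate of 1/32
print(posterior(1 / 32))  # 0.03125: at chance honesty, no update at all
```

With `honesty = 1/32` the assertion carries no information and the posterior collapses back to the prior, which is exactly the boundary case the argument turns on: an assertion is evidence precisely to the extent that assertions correlate with truth.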

Comment author: Peter_de_Blanc 02 April 2010 01:13:01AM 21 points [-]

Of course, to really see what someone values you'd have to see their budget profile across a wide range of wealth levels.

Comment author: pjeby 02 April 2010 01:05:38AM 1 point [-]

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

It looks like a biology-inspired, predictive approach somewhat along the lines of Hawkins' HTMs, except that I've not seen her implementation details spelled out as thoroughly as Hawkins'.

Her analysis seems sound to me (in the sense that her proposed model quite closely matches how humans actually get through the day), except that she seems to elevate certain practical conclusions to a philosophical level that's not really warranted (IMO).

(Of course, I think there would likely be practical problems with AN-based systems being used in general applications -- humans tend not to like it when machines guess, especially if they guess wrong. We routinely prefer our tools to be stupid-but-predictable over smart-but-surprising.)

Comment author: RobinZ 02 April 2010 01:03:28AM 1 point [-]

Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly.

In the latter case, if nothing else, there is [individual]-Friendliness to consider.

Comment author: wheninrome15 02 April 2010 12:53:16AM 1 point [-]

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.

It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are the best and worst case scenarios conditioning on Friendly AI being IMpossible?

Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.

Comment author: cousin_it 02 April 2010 12:48:25AM 2 points [-]

You mean, like every Bayesian believes their prior is correct?

Comment author: Yvain 02 April 2010 12:46:16AM 23 points [-]

"Everyone thinks they've won the Magical Belief Lottery. Everyone thinks they more or less have a handle on things, that they, as opposed to the billions who disagree with them, have somehow lucked into the one true belief system."

-- R Scott Bakker, Neuropath
