Vote this comment up if you're opposed to moving discussions from open threads to the sub-Reddit.
Let's have a poll. Vote this comment up if you're in favor of using the sub-Reddit for assorted discussions instead of using open threads.
I would prefer that, before I click on the link, the comment tell me something more than that someone disagrees with Cyan on the internet.
Good information to include would be the nature of the disagreement (what competing claim is made) and a summary of the reasoning that backs up that competing claim.
I further note that your link points to a list of articles, none of which has the title you cited. This is not helpful.
I think having the discussions fragmented across multiple sites will just lead to confusion. Personally, I wish the LW software supported subreddits. (Haven't gotten around to familiarizing myself with the codebase yet, but if any of you have, do you know if they actually removed subreddit support when they forked it, or does it currently just lack an interface to them?)
"I agree that there ought to be a better system for managing side conversations that aren't centered around the top-level posts, but I don't think this is the way."
Well, what would you recommend? I'm certainly open to suggestions. Or, if you want to build a better system yourself, go right ahead, we'll all be better off for it. But "the alternative you're proposing has flaws" does not justify the status quo if the status quo has even more flaws.
"The programmer in me says that this is a hack which would have negative side effects if used,"
Such as?
"and the evolved tribal instincts in me say that this is a power grab outside the bounds of your status."
Do you even know what my status is? It's not all that high, but more likely than not you don't actually know it, and you shouldn't presume that people other than you have low status.
I think this guy disagrees: Weatherson, Brian. "Should We Respond to Evil with Indifference?" Philosophy and Phenomenological Research 70 (2005): 613-35. http://brian.weatherson.org/papers.shtml
I agree that there ought to be a better system for managing side conversations that aren't centered around the top-level posts, but I don't think this is the way. The programmer in me says that this is a hack which would have negative side effects if used, and the evolved tribal instincts in me say that this is a power grab outside the bounds of your status.
No, I am not a dualist.
If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?
To be precise, I value positive conscious experience more than cheesecake, and negative conscious experience less than cheesecake.
I assign value to things according to how they are experienced, and consciousness is required for this experience. This has to do with the abstract properties of conscious experience, not with how it is implemented, whether by the mathematical structure of physical arrangements or by ontologically basic consciousness.
What can be asserted without evidence can be dismissed without evidence.
-- Christopher Hitchens
Accuracy was sacrificed for a pleasant parallel construction. Anything can be so asserted.
And, without supporting evidence, such assertions demonstrate nothing.
The mere fact that an assertion has been made is, in fact, evidence. For example, I will now flip a coin five times, and assert that the outcome was THHTT. I will not provide any evidence other than that assertion, but that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5. Most assertions don't come with evidence provided unless you go looking for it. If nothing else, most assertions have to be unsupported because they're evidence for other things and the process has to bottom out somewhere.
Now, as a matter of policy we should encourage people to provide more evidence for their assertions wherever possible, but that is entirely separate from the questions of what is evidence, what evidence is needed, and what is demonstrated by an assertion having been made.
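To make the coin-flip point concrete, here is a minimal sketch of the Bayesian update. The honesty parameter `p_honest` and the "otherwise assert a uniformly random sequence" model are my own illustrative assumptions, not anything the comment specifies.

```python
# Sketch: how much an unsupported assertion of "THHTT" should move your
# estimate that THHTT really was the outcome of five coin flips.
# Assumption (mine, for illustration): the speaker reports the true outcome
# with probability p_honest, and otherwise asserts one of the 2**5 = 32
# possible sequences uniformly at random.

def posterior_true_given_asserted(p_honest, n_flips=5):
    """P(true sequence is THHTT | 'THHTT' was asserted) under the model above."""
    n_seqs = 2 ** n_flips
    prior = 1 / n_seqs                                   # P(true sequence is THHTT)
    p_assert_if_true = p_honest + (1 - p_honest) / n_seqs
    p_assert_if_false = (1 - p_honest) / n_seqs
    evidence = p_assert_if_true * prior + p_assert_if_false * (1 - prior)
    return p_assert_if_true * prior / evidence

print(posterior_true_given_asserted(0.0))   # 0.03125: a pure-noise speaker moves nothing
print(posterior_true_given_asserted(0.9))   # ~0.903: the bare assertion carries most of the weight
```

The numbers illustrate the claim in the comment: unless the speaker is pure noise, the bare assertion pushes the probability above 1/2^5, even with no other evidence attached.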
It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.
It looks like a biology-inspired, predictive approach somewhat along the lines of Hawkins' HTMs, except that I've not seen her implementation details spelled out as thoroughly as Hawkins'.
Her analysis seems sound to me (in the sense that her proposed model quite closely matches how humans actually get through the day), except that she seems to elevate certain practical conclusions to a philosophical level that's not really warranted (IMO).
(Of course, I think there would likely be practical problems with AN-based systems being used in general applications -- humans tend to not like it when machines guess, especially if they guess wrong. We routinely prefer our tools to be stupid-but-predictable over smart-but-surprising.)
Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.
It seems to me like the worst case would be that Friendly AI is in fact possible... but that we aren't the first to discover it, in which case the AI would happily perpetuate itself. But what are the best and worst case scenarios, conditional on Friendly AI being IMpossible?
Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.