Premise: There exists a community whose topmost goal is to maximally and fairly fulfill the goals of all of its members. Its members are approximately as rational as the 50th percentile of this community. They politely invite you to join. You are in no imminent danger. Do you: * Join the...
Ohhhhh. WOW! Damn. Now I feel bad. I have been acting like a bull in a china shop, have been an extremely ungracious guest, and have taken longer than I would like to realize these things. My deepest apologies. My only defenses or mitigating circumstances: 1. I really didn't get it 2....
In the spirit of Asimov’s 3 Laws of Robotics: 1. You should not be selfish. 2. You should not be short-sighted or over-optimize. 3. You should maximize the progress towards and fulfillment of all conscious and willed goals, equally in both number and diversity, both yours and those...
I'd like to draw a distinction that I intend to use quite heavily in the future. The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range...
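For reference, Legg and Hutter also give a formal counterpart to that informal statement; the sketch below follows my recollection of their "Universal Intelligence" paper (treat the notation as theirs, not mine), weighting an agent's expected performance in each computable environment by that environment's simplicity:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is agent $\pi$'s expected cumulative reward in $\mu$. The distinction I want to draw rests only on the informal wording above, not on the details of this formalization.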
"This premise is VERY flawed" (found here) is the sole author-supplied content of a comment. There are no supporting links or additional content, only a one-sentence quote of the "offending" premise. Yet, it has four upvotes. This is a statement that can be made about any premise. It is backed...
Someone take a look at my score and my history and explain my zero karma. My understanding was that karma never dropped below zero. Apparently, it never *displays* below zero, but if it has been deep-sixed, it might be a long, long time coming back.