I wrote about this exact concept back in 2007 and am basing a large part of my current thinking on the subsequent development of the idea. The original core posts are at:
Relativistic irrationality -> http://www.jame5.com/?p=15
Absolute irrationality -> http://www.jame5.com/?p=45
Respect as basis for interaction with other agents -> http://rationalmorality.info/?p=8
Compassion as rationally moral consequence -> http://rationalmorality.info/?p=10
Obligation for maintaining diplomatic relations -> http://rationalmorality.info/?p=11
A more rece...
Really? I thought it consisted mostly of elites responding with straw men and ignoring any strong arguments from those lower in status until such time as they died or retired. The lower-status engage in sound arguments while biding their time until it is their turn to do the ignoring, and in so doing carry the level of ignorance one generation forward.
You will find that this is pretty much what Kuhn says.
Brilliant post Wei.
A historical examination of scientific progress looks much less like a gradual ascent toward better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more like an irrational insistence on a set of assumptions as unquestionable dogma until the dam finally bursts under the enormous pressure that keeps building (Thomas Kuhn's Structure of Scientific Revolutions).
2) You cannot write a book that will be published under EY's name.
It's called ghost writing :-) but then again the true value-add lies in the work and not in the identity of the author (setting aside marketing value in the case of celebrities).
You're reading into connotation a bit too much.
I do not think so - I am just being German :-) about it: very precise and thorough.
In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.
This statement is based on three assumptions: 1) what you are doing instead is in fact more worthy of your attention than your contribution here, 2) I could not do what you are doing at least as well as you, and 3) I do not have other things to do that are at least as worthy of my time.
I am not personally willing to grant any of those three at this point. But surely that is not the case for all the others around here.
Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.
Interesting analogy - it would be correct if aligning ourselves with evolutionary forces were what counted as achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is expending energy constantly while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much energy you bring along, eventually the boosters will run out of fuel and the whole thing comes crashing down.
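To make the hovering-vs-escape intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (exhaust velocity, the neglect of drag and gravity losses) are my own illustrative assumptions, not anything from the original exchange; the point is only that hovering burns propellant for as long as you hold altitude, while an escape burn is a one-off, finite expenditure.

```python
import math

# Illustrative assumptions, not figures from the comment above.
g = 9.81             # m/s^2, surface gravity
v_e = 4400.0         # m/s, exhaust velocity of a typical chemical engine (assumed)
v_escape = 11_200.0  # m/s, Earth escape velocity (drag and gravity losses ignored)

def propellant_fraction_hover(duration_s: float) -> float:
    """Fraction of initial mass burned to hover at constant altitude for a given time.
    Hovering thrust must equal weight, so dm/dt = -m*g/v_e and m(t) = m0*exp(-g*t/v_e)."""
    return 1.0 - math.exp(-g * duration_s / v_e)

def propellant_fraction_escape() -> float:
    """Fraction of initial mass burned in a single ideal burn to escape velocity
    (Tsiolkovsky rocket equation, no losses)."""
    return 1.0 - math.exp(-v_escape / v_e)

for minutes in (5, 15, 30, 60):
    print(f"hover {minutes:>3} min: {propellant_fraction_hover(minutes * 60):.1%} of mass burned")
print(f"escape burn   : {propellant_fraction_escape():.1%} of mass burned (once)")
```

With these assumed numbers the one-off escape burn costs roughly as much propellant as hovering for about twenty minutes, and hovering indefinitely is simply impossible - which is the shape of the argument above.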
More recent criticism comes from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies, in his article "Fearing the Wrong Monsters" => http://ieet.org/index.php/IEET/more/treder20091031/
Very constructive proposal Kaj. But...
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done?
If Eliezer does not find it a worthwhile investment of his time - why should we?
There is no such thing as an "unobjectionable set of values".
And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes that existence is preferable to non-existence, you can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you do not want to exist nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior being so ...
A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created.
Good one - but it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)
...Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'l
"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.
Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed - again: detailed critique upcoming.
...No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility and will recognize any utility function that is not 'compassionate' as...
What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?
The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27)
Please realize that I spent 2 years writing my book 'Jame5' before I reached that initial insight that eventually led to 'compassion is a condition for our existence and u...
If I understand your assertions correctly, I believe that I have developed many of them independently
That would not surprise me
Nothing compels us to change our utility function save self-contradiction.
Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?
No, it evolved once, as part of mammalian biology.
Sorry Crono, with a sample size of exactly one in regard to human-level rationality you are setting the bar a little bit too high for me. However, considering how disconnected Zoroaster, Buddha, Lao Zi and Jesus were geographically and culturally, I guess the evidence is as good as it gets for now.
Also, why should we give a damn about what "evolution" wants, when we can, in principle anyway, form a singleton and end evolution?
The typical Bostromian reply again. There are plenty of other scholar...
Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.
Very honorable of you - I respect you for that.
First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.
I totally agree with that. However, the mind of a purposefully crafted AI is drawn from only a very small subset of all possible minds and has certain assumed characteristics. These are, at a minimum: a utility function and the capacity for self ...
Excellent, excellent point Jack.
There is a separate question about what beliefs about morality people (or more generally, agents) actually hold, and another question about what values they will hold if/when their beliefs converge as they engulf the universe.
This is poetry! Hope you don't mind me pasting something here I wrote in another thread:
"With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically sel...
With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
Full discussion with Kaj at her LiveJournal (http://xuenay.livejournal.com/325292.html?view=1229740) with further clarifications by me.
Tim: "If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct."
Actually, compassion evolved many different times as a central doctrine of all major spiritual traditions - see the Charter for Compassion. This is in line with a prediction I made independently, being unaware of this fact until I started looking for it back in late 2007 and eventually found the link in late 2008 via Karen Armstrong's book The Great Transformation.
Tim: "Why is it a universal moral attra...
The longer I stay around here, the more I get the feeling that people vote comments down purely because they don't understand them, not because they found a logical or factual error. I expect more from a site dedicated to rationality. This site is called 'less wrong', not 'less understood', 'less believed' or 'less conform'.
Tell me: in what way do you feel that Adelene's comment invalidated my claim?
"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."
But the reasons to change one's view are provided on the site, yet rejected without consideration. How about this: you read the paper linked under B, and should that convince you, maybe you will have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links on the about page of my blog. Deal?
From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".
I feel the other way around at the moment. Namely "some people, observing the statement, may evaluate it as false without performing any complex analysis"
Perfectly reasonable. But the argument - the evidence, if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.
Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)
"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."
Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.
I realize that one will have to drill deep into my arguments to understand and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is m...
Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search my blog for a detailed analysis and a summary of the core similarities and differences in our premises and conclusions.
I realize that I am being voted down here, but I am not sure why, actually. This site is dedicated to rationality and to the core concern of avoiding a human extinction scenario. So far Rand and Less Wrong seem a pretty close match. Don't you think it would be nice to know exactly where Rand took a wrong turn, so that it can be explicitly avoided in this project? Rand making some random remarks on musical taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.
So where did she take a wrong turn exactly and how is this wrong turn avoided here? Nobody interested in finding out?
Hmm - interesting. I thought this could be of interest, considering the large overlap between this site's desire to be rational and its concern with combating the existential risks a rogue AI poses. Reason and existence are central to Objectivism too, after all:
“it is only the concept of ‘Life’ that makes the concept of ‘Value’ possible,” and, “the fact that a living entity is, determines what it ought to do.” She writes: “there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to livin...
Fun investment fact: the two trades that over 40 years turned 1'000 USD into >1'000'000 USD
Starting position: 1'000 USD in gold in Jan 1970 at 34.94 USD/oz (USD 1'000.00)
1st trade: sell gold in Jan 1980 at 675.30 USD/oz (USD 19'327.41); buy the Dow on April 18, 1980 at 763.40 (USD 19'327.41)
2nd trade: sell the Dow on Jan 14, 2000 at 11'722.98 (USD 296'797.14); buy gold on Nov 11, 2000 at 264.10 USD/oz (USD 296'797.14)
Portfolio value today: ~1'187'188.57 USD
:-)
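For anyone who wants to check the arithmetic, here is a small Python sketch that simply replays the trades above. The prices and dates are the ones quoted in the comment; the present-day gold price (~1'056 USD/oz) is merely inferred from the stated portfolio value and is an assumption of mine, not a figure given in the original.

```python
# Re-running the two-trade arithmetic quoted above.
start_usd  = 1_000.00
gold_1970  = 34.94       # USD/oz, Jan 1970
gold_1980  = 675.30      # USD/oz, Jan 1980
dow_1980   = 763.40      # index level, April 18, 1980
dow_2000   = 11_722.98   # index level, Jan 14, 2000
gold_2000  = 264.10      # USD/oz, Nov 11, 2000
gold_today = 1_056.40    # USD/oz, assumed so the result matches the quoted total

oz_gold   = start_usd / gold_1970    # buy gold in 1970
usd_1980  = oz_gold * gold_1980      # 1st trade: sell gold -> ~19'327 USD
dow_units = usd_1980 / dow_1980      # ...and buy the Dow
usd_2000  = dow_units * dow_2000     # 2nd trade: sell the Dow -> ~296'797 USD
oz_gold_2 = usd_2000 / gold_2000     # ...and buy gold again
today     = oz_gold_2 * gold_today   # ~1'187'000 USD at the assumed price

print(f"after 1st trade: {usd_1980:,.2f} USD")
print(f"after 2nd trade: {usd_2000:,.2f} USD")
print(f"portfolio today: {today:,.2f} USD")
```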
Why am I being downvoted?
Sorry for the double post.