All of StefanPernar's Comments + Replies

I wrote about this exact concept back in 2007 and am basing a large part of my current thinking on the subsequent development of the idea. The original core posts are at:

Relativistic irrationality -> http://www.jame5.com/?p=15

Absolute irrationality -> http://www.jame5.com/?p=45

Respect as basis for interaction with other agents -> http://rationalmorality.info/?p=8

Compassion as rationally moral consequence -> http://rationalmorality.info/?p=10

Obligation for maintaining diplomatic relations -> http://rationalmorality.info/?p=11

A more rece... (read more)

1StefanPernar
Why am I being downvoted? Sorry for the double post.

Really? I thought it consisted mostly of elites retorting with straw men and ignoring any strong arguments of those lower in status until such time as they died or retired. Those lower in status engage in sound arguments while biding their time until it is their chance to do the ignoring, and in so doing iterate the level of ignorance one generation forward.

You will find that this is pretty much what Kuhn says.

Brilliant post Wei.

Historical examination of scientific progress shows much less of a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma until the dam finally bursts under the enormous pressure that keeps building (Thomas Kuhn's Structure of Scientific Revolutions).

2wedrifid
Really? I thought it consisted mostly of elites retorting with straw men and ignoring any strong arguments of those lower in status until such time as they died or retired. Those lower in status engage in sound arguments while biding their time until it is their chance to do the ignoring, and in so doing iterate the level of ignorance one generation forward.

Thanks for that Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them online and pointing me to where I can access them?

2) You cannot write a book that will be published under EY's name.

It's called ghost writing :-) but then again the true value added lies in the work and not in the identity of the author (setting aside marketing value in the case of celebrities).

You're reading into connotation a bit too much.

I do not think so - I am just being German :-) about it: very precise and thorough.

In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.

This statement is based on three assumptions: 1) what you are doing instead is in fact more worthy of your attention than your contribution here, 2) I could not do what you are doing at least as well as you, and 3) I do not have other things to do that are at least as worthy of my time.

I am not personally willing to grant any of those three at this point. But surely that is not the case for all the others around here.

2kurige
1) You can summarize arguments voiced by EY. 2) You cannot write a book that will be published under EY's name. 3) Writing a book takes a great deal of time and effort. You're reading into connotation a bit too much.

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

Interesting analogy - it would be correct if we called our alignment with evolutionary forces achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much energy you bring along, eventually the boosters will run out of fuel and the whole thing comes crashing down.
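To put rough numbers on the hovering analogy, here is a minimal sketch (Python; the ~450 s specific impulse and the mass figures are illustrative assumptions, not actual Shuttle data). Because hovering burns propellant in proportion to the remaining mass, hover time grows only logarithmically with the fuel you bring along:

```python
import math

def hover_time_s(wet_mass_kg, dry_mass_kg, isp_s=450.0):
    """Seconds a rocket can hover (thrust == weight) before its propellant is gone.

    Hovering needs thrust T = m*g, and propellant burns at T/(Isp*g), so
    dm/dt = -m/Isp and m(t) = m0*exp(-t/Isp).  Hitting the dry mass gives
    t = Isp * ln(m0/m_dry): hover time is only logarithmic in the fuel load.
    """
    return isp_s * math.log(wet_mass_kg / dry_mass_kg)

# Illustrative figures only, not actual Shuttle data.
print(hover_time_s(2_000_000, 200_000))   # ~1036 s, about 17 minutes
print(hover_time_s(20_000_000, 200_000))  # ten times the mass buys only ~2072 s
```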

0wedrifid
I could almost agree with this so long as 'obliterate any competitive threat then do whatever the hell we want including, as desired, removing all need for death, reproduction and competition over resources' is included in the scope of 'alignment with evolutionary forces'.

My apologies for failing to see that - I did not mean to be antagonistic - just trying to be honest and forthright about my state of mind :-)

0wedrifid
I can empathise. I have often found myself in situations in which I am attempting discourse with someone who appears to me at least to be incapable or unwilling to understand what I am saying. It is particularly frustrating when the other is supporting the position more favoured by the tribe in question and they can gain support while needing far less rigour and coherency.

More recent criticism comes from Mike Treder - managing director of the Institute for Ethics and Emerging Technologies - in his article "Fearing the Wrong Monsters" => http://ieet.org/index.php/IEET/more/treder20091031/

Very constructive proposal Kaj. But...

Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done?

If Eliezer does not find it a worthwhile investment of his time - why should we?

0Kaj_Sotala
I think his arguments are worthwhile and important enough to be heard by an audience that is as wide as possible, regardless of whether or not he feels like writing up the arguments in an easily digestible form.
7Eliezer Yudkowsky
In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible. (As of this comment being typed, I'm working on a rationality book. This is not something that anyone else can do for me.)

There is no such thing as an "unobjectionable set of values".

And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence to be preferable to non-existence, you can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you neither want to exist nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior being so ... (read more)

2timtyler
Alas, the first link seems almost too silly to bother with to me, but briefly: Unobjectionable - to whom? An agent objecting to another agent's values is a simple and trivial occurrence. All an agent has to do is to state that - according to its values - it wants to use the atoms of the agent with the supposedly unobjectionable utility function for something else. "Ensure continued co-existence" is vague and wishy-washy. Perhaps publicly work through some "trolley problems" using it - so people have some idea of what you think it means. You claim there can be no rational objection to your preferred utility function. In fact, an agent with a different utility function can (obviously) object to its existence - on grounds of instrumental rationality. I am not clear on why you don't seem to recognise this.
3wedrifid
A side note: these two are not the only reasons to not be persuaded by arguments, although naturally they are the easiest to point out.
1wedrifid
My 'last word' was here. It is an amicable hat tip and expansion on a reasonable perspective that you provide: how much FAI thinking sounds like a "Rapture of the Nerds". It also acknowledges our difference in perspective. While we both imagine evolutionary selection pressures as a 'force', you see it as one to be embraced and defined by, while I see it as one that must be mastered or else. We're not going to come closer to agreement than that because we have a fundamentally different moral philosophy which gives us different perspectives on the whole field.
0Furcas
If your second sentence means that an agent who believes in moral realism and has figured out what the true morality is will necessarily want everybody else to share its moral views, well, I'll grant you that this is a common goal amongst humans who are moral realists, but it's not a logical necessity that must apply to all agents. It's obvious that it's possible to be certain that your beliefs are true and not give a crap if other people hold beliefs that are false. That Bob knows that the Earth is ellipsoidal doesn't mean that Bob cares if Jenny believes that the Earth is flat. Likewise, if Bob is a moral realist, he could 'know' that compassion is good and not give a crap if Jenny believes otherwise. If you sense strange paradoxes looming under the above paragraph, it's because you're starting to understand why (axiomatic) morality cannot be objective.
0RobinZ
Why would a paperclip maximizer aim to do something objectively good?

A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created.

Good one - but it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)

Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'l

... (read more)
0wedrifid
Don't forget the Y2K doomsday folks! ;) Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.

Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed - again: detailed critique upcoming.

No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in

... (read more)
6wedrifid
A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created. After that it will not matter whether I conform my behaviour to evolutionary dynamics as best I can or not. I will not be able to compete with a superintelligence no matter what I do. I'm just a glorified monkey. I can hold about 7 items in working memory, my processor is limited to the speed of neurons and my source code is not maintainable. My only plausible chance of survival is if someone manages to completely thwart evolutionary dynamics by creating a system that utterly dominates all competition and allows my survival because it happens to be programmed to do so. Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'll either use that to ensure a desirable future or we will die. I usually wouldn't, I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronoDAS and I am quite comfortable with it as it stands.

Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.

Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility and will recognize any utility function that is not 'compassionate' as... (read more)

5RobinZ
...I'm sorry, that doesn't even sound plausible to me. I think you need a lot of assumptions to derive this result - just pointing out the two I see in your admittedly abbreviated summary:

* that any being will prefer its existence to its nonexistence.
* that any being will want its maxims to be universal.

I don't see any reason to believe either. The former is false right off the bat - a paperclip maximizer would prefer that its components be used to make paperclips - and the latter no less so - an effective paperclip maximizer will just steamroller over disagreement without qualm, however arbitrary its goal.

What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?

The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details, please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27).

Please realize that I spent 2 years writing my book 'Jame5' before I reached that initial insight that eventually led to 'compassion is a condition for our existence and u... (read more)

2RobinZ
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion. I'm not asking for your proof - I am assuming for the nonce that it is valid. What I am asking is the assumptions you had to invoke to make the proof. Did you assume that the AI is not powerful enough to achieve its highest desired utility without the cooperation of other beings, for example? Edit: And the reason I am asking for these is that I believe some of these assumptions may be violated in plausible AI scenarios. I want to see these assumptions so that I may evaluate the scope of the theorem.

If I understand your assertions correctly, I believe that I have developed many of them independently

That would not surprise me

Nothing compels us to change our utility function save self-contradiction.

Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?

2RobinZ
What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?

No, it evolved once, as part of mammalian biology.

Sorry Crono, with a sample size of exactly one in regard to human-level rationality you are setting the bar a little bit too high for me. However, considering how disconnected Zoroaster, Buddha, Lao Zi and Jesus were geographically and culturally, I guess the evidence is as good as it gets for now.

Also, why should we give a damn about what "evolution" wants, when we can, in principle anyway, form a singleton and end evolution?

The typical Bostromian reply again. There are plenty of other scholar... (read more)

4wedrifid
"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively. No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development. CronDAS knows that. It's obvious stuff for most in this audience. It just doesn't mean what you think it means.

Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.

Very honorable of you - I respect you for that.

First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.

I totally agree with that. However, the mind of a purposefully crafted AI is only a very small subset of all possible minds and has certain assumed characteristics. These are at a minimum: a utility function and the capacity for self ... (read more)

2RobinZ
I don't think I'm actually coming around to your position so much as stumbling upon points of agreement, sadly. If I understand your assertions correctly, I believe that I have developed many of them independently - in particular, the belief that the evolution of social animals is likely to create something much like morality. Where we diverge is at the final inference from this to the deduction of ethics by arbitrary rational minds. That's not how I read Omohundro. As Kaj aptly pointed out, this metaphor is not upheld when we compare our behavior to that promoted by the alien god of evolution that created us. In fact, people like us, observing that our values differ from our creator's, aren't bothered in the slightest by the contradiction: we just say (correctly) that evolution is nasty and brutish, and we aren't interested in playing by its rules, never mind that it was trying to implement them in us. Nothing compels us to change our utility function save self-contradiction.

Excellent, excellent point Jack.

There is a separate question about what beliefs about morality people (or more generally, agents) actually hold and there is another question about what values they will hold if/when their beliefs converge as they engulf the universe.

This is poetry! Hope you don't mind me pasting something here I wrote in another thread:

"With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically sel... (read more)

By unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.

3wedrifid
Not really. You don't need to co-exist with anything if you out-compete them then turn their raw materials into paperclips.
0timtyler
The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue - and maximisers with any utility function can easily be eaten by other, more powerful maximisers. If you look at biology, replicators have survived so far for billions of years with other utility functions. Do you really think biology is "ensuring continued co-existence" - rather than doing the things described in my references? If so, why do you think that? - the view doesn't seem to make any sense.

Robin, your suggestion - that compassion is not a universal rational moral value because, although more rational beings (humans) display such traits, less rational beings (dogs) do not - is so far off the mark that it borders on the random.

0RobinZ
Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent. For purposes of this conversation, I suppose I should reword my comment as:
3Cyan
Kaj is male (or something else).

Tim: "If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct."

Actually compassion evolved many different times as a central doctrine of all major spiritual traditions. See the Charter for Compassion. This is in line with a prediction that I made independently, being unaware of this fact until I started looking for it back in late 2007 and eventually found the link in late 2008 in Karen Armstrong's book The Great Transformation.

Tim: "Why is it a universal moral attra... (read more)

8CronoDAS
No, it evolved once, as part of mammalian biology. Show me a non-mammal intelligence that evolved compassion, and I'll take that argument more seriously. Also, why should we give a damn about what "evolution" wants, when we can, in principle anyway, form a singleton and end evolution? Evolution is mindless. It doesn't have a plan. It doesn't have a purpose. It's just what happens under certain conditions. If all life on Earth was destroyed by runaway self-replicating nanobots, then the nanobots would clearly be "fitter" than what they replaced, but I don't see what that has to do with goodness.
1RobinZ
The problem with pointing to the development of compassion in multiple human traditions is that all these are developed within human societies. Humans are humans the world over - that they should think similar ideas is not a stunning revelation. Much more interesting is the independent evolution of similar norms in other taxonomic orders, such as canines. (No, I have no coherent point, why do you ask?)

The longer I stay around here the more I get the feeling that people vote comments down purely because they don't understand them, not because they found a logical or factual error. I expect more from a site dedicated to rationality. This site is called 'less wrong', not 'less understood', 'less believed' or 'less conform'.

Tell me: in what way do you feel that Adelene's comment invalidated my claim?

0timtyler
Voting reflects whether people want to see your comments at the top of their pages. It is certainly not just to do with whether what you say is right or not!
5Zack_M_Davis
I can see why it would seem this way to you, but from our perspective, it just looks like people around here tend to have background knowledge that you don't. More specifically: most people here are moral anti-realists, and by rationality we only mean general methods for acquiring accurate world-models and achieving goals. When people with that kind of background are quick to reject claims like "Compassion is a universal moral value," it might superficially seem like they're being arbitrarily dismissive of unfamiliar claims, but we actually think we have strong reasons to rule out such claims. That is: the universe at its most basic level is described by physics, which makes no mention of morality, and it seems like our own moral sensibilities can be entirely explained by contingent evolutionary and cultural forces; therefore, claims about a universal morality are almost certainly false. There might be some sort of game-theoretic reason for agents to pursue the same strategy under some specific conditions---but that's really not the same thing as a universal moral value.
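A toy illustration of that game-theoretic caveat - a minimal sketch using the standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0), not anything from the original comment: reciprocity pays off under repeated interaction, while one-shot encounters reward defection.

```python
def play(strategy_a, strategy_b, rounds):
    """Iterated prisoner's dilemma with the usual payoffs T=5, R=3, P=1, S=0."""
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees only the opponent's past moves
        move_b = strategy_b(history_a)
        gain_a, gain_b = payoff[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

print(play(tit_for_tat, tit_for_tat, 100))  # (300, 300): reciprocity pays in repeated play
print(play(tit_for_tat, always_defect, 1))  # (0, 5): in a one-shot game defection wins
```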
1RobinZ
In the context of a hard-takeoff scenario (a perfectly plausible outcome, from our view), there will be no community of AIs within which any one AI will have to act. Therefore, the pressure to develop a compassionate utility function is absent, and an AI which does not already have such a function will not need to produce it. In the context of a soft-takeoff, a community of AIs may come to dominate major world events in the same sense that humans do now, and that community may develop the various sorts of altruistic behavior selected for in such a community (reciprocal being the obvious one). However, if these AIs are never severely impeded in their actions by competition with human beings, they will never need to develop any compassion for human beings. Reiterating your argument does not affect either of these problems for assumption A, and without assumption A, AdeleneDawner's objection is fatal to your conclusion.
6wedrifid
Were it within my power to do so I would create a machine that was really, really good at doing things I like. It is that simple. This machine is (by definition) 'Friendly' to me. I don't know where the 'deluded' bit comes from but yes, I would end up being a self serving optimizer. Fortunately for everyone else my utility function places quite a lot of value on the whims of other people. My self serving interests are beneficial to others too because I am actually quite a compassionate and altruistic guy. PS: Instead of using quotation marks you can put a '>' at the start of a quoted line. This convention makes quotations far easier to follow. And looks prettier.
3timtyler
There is no such thing as an "unobjectionable set of values". Imagine the values of an agent that wants all the atoms in the universe for its own ends. It will object to any other agent's values - since it objects to the very existence of other agents - since those agents use up its precious atoms - and put them into "wrong" configurations. Whatever values you have, they seem bound to piss off somebody.

"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."

But the reasons to change one's view are provided on the site, yet rejected without consideration. How about this: you read the paper linked under B, and should that convince you, maybe you will have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links on the about page of my blog. Deal?

5wedrifid
I have read B. It isn't bad. The main problem I have with it is that the language used blurs the line between "AIs will inevitably tend to" and "it is important that the AI you create will". This leaves plenty of scope for confusion. I've read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute 'Rational' morality. This passage was a good illustration: You see, I plan to eat my cake but don't expect to be able to keep it. My set of values are utterly whimsical (in the sense that they are arbitrary and not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.

From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".

I feel the other way around at the moment. Namely: "some people, observing the statement, may evaluate it as false without performing any complex analysis".

6gwern
ASPD is only unfit in our current context. Would Stone Age psychiatrists have recognized it as an issue? Or as a positive trait good for warring against other tribes and climbing the totem pole? In other situations, compassion is merely an extra expense. (As Thrasymachus asked thousands of years ago: how can a just man do better than an unjust man, when the unjust man can act justly when it is optimal and unjustly when that is optimal?) Why would a recursively-improving AI which is single-mindedly pursuing an optimization goal permit other AIs to exist & threaten it? There is nothing they can offer it that it couldn't do itself. This is true in both slow and fast takeoffs; cooperation only makes sense if there is a low ceiling for AI capability so that there are utility-maximizing projects beyond an AI's ability to do alone then or in the future. And 'sufficiently rational' is dangerous to throw around. It's a fully general argument: 'any sufficiently rational mind will recognize that Islam is the one true religion; that not every human is Muslim says more about their rationality than about the claims of Islam. That's why our Muslim psychiatrists call it UD - Unbeliever Disorder; it is an aberration, not helpful, not 'fit'. Surely the fact that some humans are born kafir doesn't invalidate the fact that Muslim people have a tremendous advantage over the kafir in the afterlife? 'There is one God and Muhammed is his prophet' is certainly less obvious than seeing being superior to blindness, though.'
-4StefanPernar
The longer I stay around here the more I get the feeling that people vote comments down purely because they don't understand them, not because they found a logical or factual error. I expect more from a site dedicated to rationality. This site is called 'less wrong', not 'less understood', 'less believed' or 'less conform'. Tell me: in what way do you feel that Adelene's comment invalidated my claim?

Perfectly reasonable. But the argument - the evidence if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.

-1RobinZ
I think I'd probably agree with Kaj Sotala's remarks if I had read the passages she^H^H^H^H xe had, and judging by your response in the linked comment, I think I would still come to the same conclusion as she^H^H^H^H xe. I don't think your argument actually cuts with the grain of reality, and I am sure it's not sufficient to eliminate concern about UFAI. Edit: I hasten to add that I would agree with assumption A in a sufficiently slow-takeoff scenario (such as, say, the evolution of human beings, or even wolves). I don't find that sufficiently reassuring when it comes to actually making AI, though. Edit 2: Correcting gender of pronouns.

Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)

4wedrifid
The claim "[Compassion is a universal value] = true. (as we have every reason to believe)" was rejected, both implicitly and explicitly by various commenters. This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief. To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.

"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."

Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.

I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is m... (read more)

0timtyler
This isn't my favourite topic - while you have a whole blog about it - so you are probably quite prepared to discuss things for far longer than I am likely to be interested. Anyway, it seems that I do have some things to say - and we are rather off topic here. So, for my response, see: http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search my blog for a detailed analysis and a summary of core similarities and differences in our premises and conclusions.

6AdeleneDawner
Assuming I have the correct blog, these two are the only entries that mention Eliezer by name. Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.

"Given this, I conclude that Objectivism isn't the stuff that makes you win, so it's not rationality."

Do you think it is worthwhile to find out where exactly their rationality broke down to avoid a similar outcome here? How would you characterize 'winning' exactly?

3MichaelVassar
Winning = FAI before UFAI, though there are lots of sub-goals to that. It's definitely worth understanding where other people's rationality breaks down, but I think I understand it reasonably well, both in terms of general principles and the specific history of Objectivism, which has been pretty well documented. We do have a huge amount of written material on rationality breaking down and I think I know rather more than we have published. Major points include Rand's disinterest in science, especially science that felt mystical to her like modern physics or hypnosis, and her failure to notice her foundational confusions and respond with due skepticism to long inferential chains built on them. That said, I'd be happy to discuss the topic with Nathaniel Branden some time if he's interested in doing so. I'm sure that his life experience would contribute usefully to my understanding and that it isn't all found in existing bodies of literature either.

Every human being in history so far has died and yet humans are not extinct. Not sure what you mean.

Me - whether I qualify as an academic expert is another matter entirely, of course.

2ChrisHibbert
Do you disagree with Eliezer substantively? If so, can you summarize how much of his arguments you've analyzed, and where you reach different conclusions?

I realize that I am being voted down here, but I am not sure why, actually. This site is dedicated to rationality and the core concern of avoiding a human extinction scenario. So far Rand and Less Wrong seem a pretty close match. Don't you think it would be nice to know exactly where Rand took a wrong turn so that it can be explicitly avoided in this project? Rand making some random remarks on music taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.

So where did she take a wrong turn exactly and how is this wrong turn avoided here? Nobody interested in finding out?

4eirenicon
That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
4Zack_M_Davis
I've downvoted your comments in this thread because I don't think serious discussion of the relevance of Objectivism to existential risk reduction meets Less Wrong's quality standard; Ayn Rand just doesn't have anything useful to teach us. Nothing personal, just a matter of "I would like to see fewer comments like this one." (I do hope to see comments from you in the future.) Ayn Rand would hardly be alone in assenting to the propositions that "Rationality is good" and "The end of the world would be bad." A more relevant question would be whether Rand's teachings make a significant contribution to this community's understanding of how to systematically achieve more accurate beliefs and a lower probability of doom. As dearly as I loved Atlas Shrugged, I'm still going to have to answer no.
2Jack
Well, to begin with, I don't really think Rand was concerned about human extinction, though I haven't read much so maybe you can enlighten me. She also used the word reason a lot. But it doesn't really follow that she was actually employing the concept that we call reason. If she wasn't, then that's where she went wrong. Her writing is seriously chock-full of obfuscation, conflation and close to every logical fallacy. Even the quotes you gave above are either inane trivialities or unsupported assertions. There is never an attempt to empirically justify her claims about human nature. If you tried to program an AI using Objectivism it would be a disaster. I don't think you could ever get the thing running because all the terms are so poorly defined. So it just seems like a waste of time to listen to Eliezer talk about this. Edit: I think I only voted down the initial suggestion though. Not the ensuing discussion.

Hmm - interesting. I thought this could be of interest, considering the large overlap between the desire on this site to be rational and the goal of combating the existential risks a rogue AI poses. Reason and existence are central to Objectivism too, after all:

“it is only the concept of ‘Life’ that makes the concept of ‘Value’ possible,” and, “the fact that a living entity is, determines what it ought to do.” She writes: “there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to livin... (read more)

9Zack_M_Davis
Objectivism claims to be grounded in rational thought, but that doesn't mean it is. Ayn Rand said a lot of things that I've personally found interesting or inspiring, but taking Objectivism seriously as a theory of how the world really works is just silly. The rationality I know is grounded in an empiricism which Rand just utterly fails at. She makes all these sorts of fascinating pronouncements on the nature of "man" (sic) and economic organization seemingly without even considering the drop-dead basic sorts of questions. Well, what if I'm wrong? What would we expect to see and not see if my theory is right?
3MichaelVassar
In practice, most people inspired by Objectivism have not been able to achieve the sort of things that Rand and her heroes achieved. As far as I can tell, other than Rand herself, no dogmatic Objectivists have done so. Most strikingly, the most influential Objectivist came to head the Federal Reserve Bank. Given this, I conclude that Objectivism isn't the stuff that makes you win, so it's not rationality. That said, I'm very interested in discussing rationality with reflective people who ARE trying to win.
3Tyrrell_McAllister
Relevant Eliezer post: Guardians of Ayn Rand

Fun investment fact: the two trades that over 40 years turned 1'000 USD into >1'000'000 USD

1'000 USD in Gold on Jan 1970 for 34.94 USD / oz (USD 1'000.00)

1st Trade: Sell Gold in Jan 1980 at 675.30 USD / oz (USD 19'327.41); Buy Dow on April 18, 1980 at 763.40 (USD 19'327.41)

2nd Trade: Sell Dow on Jan 14, 2000 at 11'722.98 (USD 296'797.14); Buy Gold on Nov 11, 2000 at 264.10 USD / oz (USD 296'797.14)

Portfolio value today: ~1'187'188.57 USD

:-)
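For anyone who wants to check the compounding, a minimal sketch of the arithmetic (Python; the prices are the ones quoted above, and the late-2009 gold price of roughly 1'056 USD / oz is an assumption back-solved from the quoted portfolio value):

```python
# Replaying the quoted trades; all prices are the ones given in the comment above.
usd = 1000.00

gold_oz = usd / 34.94        # Jan 1970: buy gold at 34.94 USD/oz  -> ~28.62 oz
usd = gold_oz * 675.30       # Jan 1980: sell gold at 675.30 USD/oz -> ~19,327 USD

dow_units = usd / 763.40     # Apr 18, 1980: buy the Dow at 763.40
usd = dow_units * 11_722.98  # Jan 14, 2000: sell the Dow at 11,722.98 -> ~296,797 USD

gold_oz = usd / 264.10       # Nov 11, 2000: buy gold at 264.10 USD/oz -> ~1,124 oz

# Assumed late-2009 gold price (~1'056 USD/oz), back-solved from the quoted total.
print(round(gold_oz, 2), "oz of gold")
print(round(gold_oz * 1056.40, 2), "USD")  # ~1.19 million USD
```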

0[anonymous]
Technically, you started and ended not with 1'000 USD and 1'000'000 USD, but 28.62 ounces of gold and 1124 ounces of gold, which is not quite as impressive-sounding. Still, four trades could have gotten precisely what you said.