All of Stefan_Pernar's Comments + Replies

@Recovering irrationalist, good points, thank you - I just wanted to save time and space by linking to relevant stuff on my blog without repeating myself over and over. My apologies for overdoing it. I guess I feel like I am talking to a wall or being deliberately ignored due to the lack of feedback. I shall curb my enthusiasm and let things take their course. You know where to find me.

@TGGP: This forum really is not the right place to get into details. It would not be fair to Eliezer, and that I posted anything at all is an embarrassing revelation in regard to my intellectual vanity. Mea culpa.

@Tiiba, trust me - I am quite certain that I do, but this is not the right forum - PM me if you want to continue off this blog.

logicnazi, he is making progress ;-)

Humans certainly aren't perfect at imagining. In fact, if you ask most people to imagine a heavy object and a much heavier object falling, they will predict that the much heavier object hits first, and I can give a host of other examples of the same thing.
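(For what it's worth, the physics behind that example can be made concrete. The sketch below is only an editorial illustration, assuming idealized free fall with no air resistance; the function name fall_time and the choice of Python are my own, not anything from the original comment.)

```python
import math

# Minimal sketch (illustration only, not from the original comment): under the
# idealized assumption of no air resistance, the time to fall from height h is
# t = sqrt(2h / g), which contains no mass term at all - so the heavier object
# does not hit first, contrary to the common intuition described above.

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Seconds to fall height_m metres in idealized free fall (no drag)."""
    return math.sqrt(2.0 * height_m / g)

if __name__ == "__main__":
    for mass_kg in (1, 10, 100):  # mass never enters the calculation
        print(f"{mass_kg:>3} kg object dropped from 10 m: {fall_time(10.0):.2f} s")
```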

When you ask someone to imagine something he is controlling his imagination, which is equivalent to conscious thought. What one can think of, however, is controlled by one's beliefs - what is skewed in humans is their beliefs, not their imagination. Once beliefs are being controlled...

Caledonian, sorry - do you mean that humanity needs to be superseded?

Gray Area, did you read my paper on friendly AI yet? I must be sounding like a broken record by now ;-)

I justify my statement 'what is good is what increases fitness' with the axiomatic belief that 'to exist is preferable to not existing'.

The phenomena created by evolution that seem like horrors to us (parasitic wasps) must be that particular wasp's pinnacle of joy. It is a matter of perspective. I am not saying: eat crap - millions of flies can't be wrong! I am taking the human perspective -...

Caledonian, yes - I agree 100% - the tricky part is getting to post humanity - avoiding a non-friendly AI. That would be a future where we have a place in the sense that we will have evolved further.

gutzperson, today you are gutzperson - tomorrow you are post-gutzperson yesterday - ensuring your continued existence in that sense will lead to your eventual transcendence. Same for everyone else - just don't extinguish that strand.

Aaron Luchko, I argue that morality can be universally defined. You can find my thoughts in my paper on friendly AI theory. Would love to hear your comments.

Somehow the links in my earlier comment got messed up.

For the link behind 'cognitive evolution' see: http://www.jame5.com/?p=23
For the link behind 'make sure we will have a place' see: http://www.jame5.com/?p=17

gutzperson: good points - it is all about increasing fitness and social control. You will find reading the following paper quite interesting: Selection of Organization at the Social level: obstacles and facilitators of metasystem transitions. Particularly chapter four: Social Control Mechanisms.

Evolution does not stop on the genetic level but continues on the cognitive level (http://www.jame5.com/?p=23), allowing for far higher complexity and speed. As a result group selection becomes intuitively obvious, although on the cognitive level members of weaker groups of course in principle have the chance to change their minds, aka evolve their beliefs, before physical annihilation.

"If we can't see clearly the result of a single monotone optimization criterion"

We can project where ever increasing fitness lead...

The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not. If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.

What a religious person realizes with such a fear is that truth matters – just not in a sense one would assume intuitively.

Phi...

Great to see more thoughts on evolution from you Eliezer - good stuff.

Nick, truly fascinating read. Thank you. Although I had not read Bostrom's paper prior to today, I am glad to find that we come to largely identical conclusions. My core claim 'What is good is what increases fitness' does not mean that I argue for the replacement of humanity with non-eudaemonic fitness-maximizing agents, as Bostrom calls them.

There are two paths to maximizing an individual's fitness:

A) Change an individual's genetic/memetic makeup to increase its fitness in a given environment
B) Change an individual's environment to increase its genetic/memetic fitness

In my AI friendliness theory I argue for option B), using a friendly AGI which in essence represents Bostrom's singleton.

Eliezer: It is pure Judeo-Christian-Islamic exceptionalism, I regret to inform you, to think that failing to believe in the Bible God signifies anything more than failing to believe in the Flying Spaghetti Monster.

This is plainly wrong - the former belief increases fitness while the latter does not. Look at religion in the light of rational choice, aka game theory, instead of as plainly true or false. Big difference.

Benoit: Stefan Pernar, you are right, Christianity is fitter than atheism in an evolutionary kind of way. Its members reproduce, spread, divide and c...

rela
Stefan: It seems to me that you are saying:

P1) Large, stable groups are good (presumably because they minimize total violence?)
P2) A large stable group can be formed if the members share internalized restraints.
P3) One method of creating internalized restraints is religion.
C) Therefore, religion must be good.

So, consider that this chain also allows for substitutions, which would not have the same conclusion:

P1) Small, stable groups are good (maybe because they tend to be formed along familial structures, and thus maximize commitment between group members?)
P2) A large stable group can be formed if the members share explicit restraints, and
P3) Government based on a social contract enables the members to share explicit restraints.
P3) One method of creating internalized restraints is a shared belief in the value of the scientific method.

All of the conclusions have many effects, and not all of these effects are positive. Religion can easily devolve into fundamentalism; small groups tend to fight between themselves; governments can oppress people; a belief in the scientific method can prevent the imagination of non-physical concepts; etc. It could be argued that these negative side effects are not all equally negative, and that the argument which leads to the least-negative side effect should be the one that is accepted.

But to summarize, whenever we argue for some condition on the basis of evolutionary fitness, we need to consider two things:

1) Most evolutionary fitness arguments do not exclusively mandate the condition which is being argued.
2) A condition is not necessarily desirable simply because it increases evolutionary fitness. The contexts in which that condition tends to occur must also be considered.

Best, rela