Comment author: nykos 23 February 2013 01:10:21PM -2 points [-]

If the Basiliskgate Affair is any indication, I would argue that hardcore LessWrongers in general are far too concerned about the potential negative consequences of science and philosophy (to put it mildly). I think this community needs more exposition of the benefits that risk-taking can bring to society as a whole.

Comment author: nykos 11 February 2013 10:15:13PM *  0 points [-]

Why not donate to people promoting neocolonialism, if you are really concerned about efficient malaria eradication and the well-being of Black people? I for one refuse to donate any amount of money to treat symptoms rather than causes, at least in the case of strangers; it is an inefficient allocation of resources.

In response to The value of Now.
Comment author: nykos 07 February 2013 07:49:25PM -1 points [-]

If I were a scientist, I would ask for evidence of the existence of Omega-level beings before considering the questions further. We can of course debate how many Omega-level beings there are on the tip of a pin, but I believe our limited time in this Universe is better spent asking different kinds of questions.

Comment author: nykos 02 February 2013 11:20:00PM *  1 point [-]

Maybe the forces of human nature make the future in some sense inevitable, conspiring to keep the long-term probability of eutopia very low?

If you took a freezing, dirty European peasant in winter ca. 1000 AD, and transported him to 0 AD Rome and its public thermae, he would also be heading towards eutopia - only in the 'wrong' direction of time. The worship of many gods in particular would probably strike him as horrifying.

If you transported Thomas Carlyle through time to the present, he would be horrified and disgusted, probably also frightened. But he would most definitely not be surprised. He would say: "I told you so". I'm sure there were at least a few Romans who, when transported to Dark Ages Europe, would have said the same.

Comment author: [deleted] 05 January 2013 01:01:00PM *  5 points [-]

Say Not Universalism, a criticism of Moldbug's position on Progressivism's ties to Christianity.

I disagree with it mildly, since I think there are features of Progressivism that are more or less uniquely attributable to its Christian heritage, but I do think Progressive-like memes would have developed in a non-Christian-descended implementation of what is often called The Cathedral (the political belief pump associated with demotist forms of government).

It is a reminder to Reactionary readers that while the explicit justifications of modern political and social thinking obviously look weak at best and utterly mad at worst, we should take small-c conservative arguments in their favour very seriously. Abolishing many things because their "explicit justifications were crazy" turned out to be dreadful mistakes.

These are the grounds on which I provisionally support social democracy, while strongly encouraging exploration of alternatives.

In response to comment by [deleted] on Politics Discussion Thread January 2013
Comment author: nykos 05 January 2013 06:41:06PM *  3 points [-]

I do think Progressive-like memes would have developed in a non-Christian-descended implementation of what is often called The Cathedral

I think this is quite likely to be the case, since Progressivism (which one might think of as "altruism gone rampant") might actually emerge in time from the mating patterns and the resulting genetic structure of a population.

Comment author: nykos 24 December 2012 01:26:38PM 2 points [-]

What are the experimental predictions of the various string theories?

Have any of those been experimentally verified so far?

Is belief in string theory paying any rent?

Comment author: nykos 31 October 2012 05:55:12PM *  0 points [-]

What about individual IQ? It's not at all clear that learning methods yield uniform results across the bell curve. What might work for a 130+ IQ individual may not work for a 110 IQ individual - and vice-versa.

Comment author: nykos 04 October 2012 11:50:20AM *  3 points [-]

Intelligent people are more likely to think about the consequences when deciding whether to have a child. But there is a prisoner's-dilemma type of situation here:

One reason smart people forego reproduction is that they expect children to make them less happy overall, at least for the first few years (not an unreasonable assumption), or simply because they are not religious (smart religious people do still have lots of children). As a consequence, in 20 years the average IQ of that society will fall (barring some policy reversal encouraging eugenic breeding, or advances in genetic engineering), as only the less intelligent breed. Since, all other things being equal, smarter people perform better at their jobs, the average quality of services provided in that society (both public and private) goes down. So in the end everyone becomes more unhappy (even though the unhappiness a childless smart person suffers from societal dysgenics may not outweigh the temporary unhappiness of having a child).
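The dynamic asserted above can be sketched as a toy selection model. To be clear, every parameter here is an illustrative assumption (the fertility gradient, the heritability, the environmental noise), not an empirical estimate; the sketch only shows that *if* fertility declines with a heritable trait, the population mean of that trait drifts downward over generations:

```python
import random

def next_generation(pop, heritability=0.5):
    # Assumed fertility gradient: each +1 SD on the trait means ~20%
    # fewer expected children (purely illustrative numbers).
    weights = [max(0.1, 1.0 - 0.2 * x) for x in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    pop_mean = sum(pop) / len(pop)
    # Offspring regress toward the population mean according to the
    # assumed heritability, plus environmental noise.
    return [pop_mean + heritability * (p - pop_mean) + random.gauss(0, 0.7)
            for p in parents]

random.seed(0)
pop = [random.gauss(0, 1) for _ in range(10_000)]  # trait in SD units
means = [sum(pop) / len(pop)]
for _ in range(4):
    pop = next_generation(pop)
    means.append(sum(pop) / len(pop))

print([round(m, 2) for m in means])  # mean trait drifts downward
```

Note this says nothing about whether the premises hold in reality; it is only the breeder's-equation logic (response ≈ heritability × selection differential) made concrete.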

Comment author: Randaly 14 August 2012 08:34:20PM 0 points [-]

The problem with FAI is that it is nearly impossible for human minds of even high intellect to get good results solely through philosophy - without experimental feedback

I do not understand how this has anything to do with FAI

Because of our limited intellects, our best bet is to simply take the one intelligent system that we know of - the human brain - and simply replicate it in an artificial manner.

This is not in fact "simple" to do. It's not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.

Are you sure you understand what FAI actually refers to? In particular, with p~~1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.

With regards to your claims regarding HBD, eugenics, etc: Evolution is a lot weaker than you think it is, and we know a lot less about genetic influence on intelligence than you seem to think. (See eg here or here.) Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Comment author: nykos 22 August 2012 04:08:51PM *  0 points [-]

I do not understand how this has anything to do with FAI

It does, because FAI is currently a branch of pure philosophy. Without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.

This is not in fact "simple" to do. It's not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

Are there any other current proposals to build AGI that don't start from the brain? From what I can tell, people don't even know where to begin with those.

Are you sure you understand what FAI actually refers to? In particular, with p~~1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.

At some point you have to settle for "good enough" and "friendly enough". Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may carry a serious cost in terms of human lives lost to inaction.

(like ensuring that value systems remain unchanged during self-modification)

But what if the AI is programmed with a faulty value system by its human creators?

Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Fair enough; I gave it as an example because it is possible to implement now - at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and with fewer controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence, who are a net drag on society.

Comment author: DaFranker 16 August 2012 03:20:27PM 0 points [-]

Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.

This statement is obviously false and obviously falsifiable.

Insert example of vegetative-state life-support cripple "raising a child" (AKA not actually doing anything and having an effective/apparent IQ of ~0, perhaps even dying as soon as the child touches something they weren't supposed to).

At this point, a rock would be just as good at raising a child. At least the child can use the rock to kill a small animal and eat it.

Comment author: nykos 22 August 2012 03:55:58PM 0 points [-]

Is a "vegetative-state life-support cripple" a person at all?
