Russia had a head start - and has oil wealth. For comparison, Saudi Arabia's GDP per capita is on the order of $17,000. 45% of its entire GDP is its nationalized oil industry; private industry is only 40%.
It's true. However, I think people get a little caught up in the "China is growing" story. Russia is a dying country in a lot of ways. That said, both are heavily controlled by corrupt leaders.
EDIT: was -> ways
Russia seems grossly incompetent compared to China. I don't know if the conquered would be better off.
This is really strange then:
http://www.wolframalpha.com/input/?i=per+capita+gdp+of+russia%2Fper+capita+gdp+of+china
No, I wouldn't consider this to be on-topic, sorry. I'd like to stay focussed on technical feasibility and the arguments raised in the article.
I think he raised a very valid concern. Also, cost is a very important dimension in terms of technological development. If money were not an issue, I have little doubt that we would have seen manned missions to Mars and several asteroids. However, money is very much an issue.
Why will so much go into recovering brains when new ones are so damn cheap?
You have only weak surface similarities, which break down if you look deeper.
In the Christian concept, people need to be saved from the moral punishment for a sin committed before they were born, and this salvation is available only by accepting the religion, and it is absolutely morally right that those who do not accept the religion are not saved, on the authority of a supremely powerful being. The salvation consists of infinite boredom rather than infinite pain after you die.
On the other hand, the concept of an FAI saving the world involves saving people from the harsh reality of an impersonal universe that does not care about us, or anything else. The salvation is for anyone it is in the FAI's power to save; the requirement of cryonics exists only because even a superintelligence would likely not be able to have enough information about a person to give them new life after their brain had decayed. If it turns out that the FAI can in fact simulate physics backwards well enough to retrieve such people, that would be a good thing. People who happen to be alive when the FAI goes FOOM will not be excluded because they aren't signed up for cryonics. The salvation consists of as much fun as we can get out of the universe, instead of non-existence after a short life.
To all this I would mainly argue what Jaron Lanier does here
Lanier's argument, within the time you linked to, seemed to consist mostly of misusing the word ideology. Throughout the diavlog, he kept accusing AI researchers and Singularians of having a religion, but he never actually backed that up or even explained what he meant. Meanwhile, he himself seemed to be worshiping mystery, particularly with regard to consciousness, and was evasive when Eliezer questioned him on it.
Consider me incredibly underwhelmed to hear a recitation of Eliezer's views.
It is humorous that you simply assert that Lanier misuses the word ideology. What I find compelling is his advice to simply do the work and see what can be done.
Eliezer is a storyteller. You like his stories and apparently find them worth retelling. Far out. I expect that is what you will always get from him. Look for results elsewhere.
If there is a specific point you'd like to discuss, I'd be happy to do that.
You started this thread with a vague claim. If you want to talk about specifics, you should quote something that Eliezer has said and explain what Christian overtones you think it has. Pointing to the word "saved" without any context is not enough.
I thought people would have seen the videos, and thus would have the context for what I was talking about. Oh well, here are the quotes:
http://www.youtube.com/watch?v=vecaDF7pnoQ#t=2m26s
That's how the world gets saved.
http://www.youtube.com/watch?v=arsI1JcRjfs#t=2m30
The thing that will kill them when they don't sign up for cryonics.
http://www.youtube.com/watch?v=lbzV5Oxkx1E#t=4m00s
But for now it can help the rest of us save the world.
(Probably some paraphrasing but the quotes are in the videos).
Some other quotes were in the Vimeo video, but these mainly concern the argument that the singularity is obviously the number one priority. Also troubling to me is the idea that the world is irredeemably flawed before the emergence of FAI. Christianity very much rests on the notion that the world is hopeless without redemption from god.
So the similarity mainly lies in the notion that we need a savior, and look, we have one! The "you will die without cryonics" bit is sort of icing on the cake.
To all this I would mainly argue what Jaron Lanier does here:
http://bloggingheads.tv/diavlogs/15555?in=00:46:48&out=00:51:08
Meanwhile, Eliezer asserts that he will cure AIDS.
There is a lot to like about this world and a lot of problems to work on. However, it is ridiculous to assert you know the number one priority for Earth when you have no evidence that your project will be nearly as successful as you think it will be.
I'm not so sure that this post is something I need to see. I was pointing out parallels in Eliezer's language to something you would hear from an evangelist.
If there is a specific point you'd like to discuss, I'd be happy to do that.
I think that there are very Christian religious overtones in what Eliezer talks about. In his recently posted answers to questions, the term "saved" was used more than once, and many times there was reference to how the world is incredibly screwed without the singularity.
I think 'Christian' is overly specific. But you wouldn't be the first to compare the Singularity to 'Rapture' or some such, or to compare some of these folks to a cult. I think it would be worth everyone's time to make sure there isn't a "god-shaped hole" type effect going on here.
ETA: But remember, if there are biased reasons to say the sun is shining, that doesn't make it dark out.
Thanks for the reply...the downvoting without it is sort of a bummer.
Notice I did not bring up the rapture... Eliezer does not really use similar language in that regard. Use of the word "save", though, strikes me as more Christian.
Fuckin' A on the god-shaped hole stuff. I don't have much patience for people who put forward arguments like that.
By the zero-one-infinity rule, I also think it likely that there are infinite spatial dimensions. Just a few extra spatial dimensions should give you plenty of computing power to run a lower-dimensional universe.
Wow, I really am curious why you think this would apply to spatial dimensions.
20: What is the probability that this is the ultimate base layer of reality?
Eliezer gave the joke answer to this question, because this is something that seems impossible to know.
However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability to AI by 2070, and the probability starts approaching .99 well before 2070. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run ancestor simulations, but I would want to run them, so that my merged transhuman mind could assimilate the knowledge from running a human consciousness of myself through interesting points in human history.
The zero one infinity rule also makes it seem more unlikely this is the base level of reality. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html
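For anyone who doesn't follow the link: the zero-one-infinity rule is a software-design heuristic saying a design should permit either none of a thing, exactly one, or arbitrarily many, never some arbitrary fixed cap. A minimal sketch of the rule in its original programming context (hypothetical function names, purely illustrative):

```python
# Zero-one-infinity rule: the only sensible limits on how many of
# something a design allows are 0, 1, or "as many as resources permit".

# Violates the rule: a hard-coded cap of 4 is an arbitrary limit.
def add_tag_capped(tags, tag, cap=4):
    if len(tags) >= cap:
        raise ValueError("too many tags")  # arbitrary ceiling
    tags.append(tag)

# Follows the rule: zero, one, or arbitrarily many tags are all fine.
def add_tag(tags, tag):
    tags.append(tag)
```

The commenter is extending this heuristic from program design to cosmology: if you reject "exactly 3" as an arbitrary count, the remaining options are zero, one, or infinity.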
It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.
I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?
You are suggesting a world with much more energy than the one we know. It seems you should assign a lower probability to there being a much higher-energy universe.
Lots of different words and phrases "devalue" different technical terms, since they exist outside of their technical definition. From what I can see in the OED, "groupthink" has been used as a term since 1923, and similar phrases like "group mind" were used in the late 19th century. Just because someone coins a definition in a field does not strip the original word or phrase of its meaning. If that were the case, I'm sure lawyers would have a field day with all of us, and I could pick out quite a few misuses of "onto" on this site.
The technical meaning you point to is interesting. However, it does not even apply to this site, as no one here is making decisions by committee; this is a forum/group blog. I'm surprised you pointed to it, as the culture here has some of the pathologies mentioned, such as:
These two aren't really as much your fault; it's a bummer we can't access/share academic articles that pertain to many of the subjects discussed here.
A little more troubling for this site, though, are the solutions:
Obvious trouble.
One of Eliezer's biggest flaws is his high opinion of himself and of those who agree with him:
http://bloggingheads.tv/diavlogs/25848?in=49:06&out=49:12
Along with it goes the wholesale writing-off of others he knows very little about. From Eliezer's point of view there are very few, if any, experts of any worth to consult on many issues at all. This sort of thinking is toxic for collaboration with others and can be profoundly isolating.
The history of this site is not very long, and it should not surprise anyone that most around here toe the Eliezer line. He created this site after becoming a smallish blog celebrity on Overcoming Bias. That those who came to populate this site largely agree with him should not at all be surprising. That you think no one should be able to charge groupthink comes off as incredibly defensive and really silly, given that it would be a little shocking if it weren't here.
Now to try to be constructive, here are some things that could facilitate interesting discussions around here:
The time when FAI was banned as a topic seemed to have a much more diverse and interesting discourse. Why retread the same old topics when you could explore new things that no one here yet knows what 'Rationality' should say about?
Let's see some algorithms/programs or something besides overly long posts. I bet some cool stuff could come out of it. Why argue about which of two things is better when you could test it and see? Go empiricism.