Hypercomputation doesn't exist. There's no evidence for it, and there never will be. It's an irrelevance that few care about. Solomonoff induction is right about this.
Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focussing on the "killing everybody" case. That is because - according to them - that is a really, really bad outcome.
The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.
Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years - with their "UnFriendly" peers and their "UnFriendly" institutions. Evidently, "Friendliness" is not necessary for human flourishing.
"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"
Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.
Failure is a necessary part of mapping out the area where success is possible.
Uploads first? It just seems silly to me.
The movie features a Luddite group assassinating machine learning researchers - not a great meme to spread around, IMHO :-(
Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.
Overall, I think I would have preferred Robopocalypse.
One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.
Not experts on the topic of diet. I associated with members of the Calorie Restriction Society some time ago. Many of them were experts on diet. IIRC, Taubes was generally treated as a low-grade crackpot by those folk: barely better than Atkins.
To learn more about this, see "Scientific Induction in Probabilistic Mathematics", written up by Jeremy Hahn.
This line:
Choose a random sentence from S, with the probability that O is chosen proportional to u(O) - 2^-length(O).
...looks like a subtraction operation to the reader. Perhaps use "i.e." in place of the hyphen.
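Read with the hyphen as "i.e.", the rule samples each sentence O with probability proportional to 2^-length(O). Over a finite set that reading can be sketched as follows (illustrative code, not from the paper; the paper's S may well be infinite):

```python
def length_prior(sentences):
    # Normalized weights proportional to 2^-length(O) - the reading
    # that "i.e." would make explicit (shorter sentences are likelier).
    weights = [2.0 ** -len(s) for s in sentences]
    total = sum(weights)
    return [w / total for w in weights]
```

For `["a", "ab", "abc"]` this gives roughly 0.571, 0.286, 0.143 - monotonically decreasing in length, with nothing being subtracted anywhere.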
The paper appears to be arguing against the applicability of the universal prior to mathematics.
However, why not just accept the universal prior - and then update on learning the laws of mathematics?
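The "accept the prior, then update" move can be sketched with a toy length-based prior. The hypothesis names and codeword lengths below are invented for illustration; nothing here comes from the paper:

```python
from fractions import Fraction

# Toy stand-in for the universal prior: weight 2^-length over a tiny,
# hand-picked hypothesis set (hypothesis -> codeword, both illustrative).
codes = {"x=1": "1", "x=2": "10", "x=3": "11"}
prior = {h: Fraction(1, 2 ** len(c)) for h, c in codes.items()}

def update(dist, consistent):
    # Bayesian conditioning: discard ruled-out hypotheses, renormalize.
    kept = {h: p for h, p in dist.items() if h in consistent}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# "Learning a law of mathematics" that rules out x=1:
posterior = update(prior, {"x=2", "x=3"})
```

The prior starts at 1/2, 1/4, 1/4; after the update the surviving hypotheses renormalize to 1/2 each. Updating on a learned constraint is just ordinary conditioning.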
why did you bring up the 'society' topic in the first place?
A society leads to a structure with advantages of power and intelligence over individuals. It means that we'll always be able to restrain agents in test harnesses, for instance. It means that the designers will be smarter than the designed - via collective intelligence. If the designers are smarter than the designed, maybe they'll be able to stop them from wireheading themselves.
...If wireheading is plausible, then it's equally plausible given an alien-fearing government, since wireheading
We can model induction in a monistic fashion pretty well - although at the moment the models are somewhat lacking in advanced inductive capacity/compression abilities. The models are good enough to be built and actually work.
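To make "good enough to be built and actually work" concrete, here's a deliberately tiny inductive model - an order-1 frequency predictor. It falls far short of advanced compression abilities, and it isn't a reference to any particular model in the literature, but it illustrates the point:

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    # A minimal model of induction: learn which symbol tends to
    # follow which, and predict the most frequent successor.
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, sequence):
        # Record each observed (previous symbol, next symbol) pair.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, symbol):
        # Most frequently observed successor, or None if never seen.
        seen = self.counts[symbol]
        return seen.most_common(1)[0][0] if seen else None
```

Feed it "abababab" and it induces the alternation - a crude but working instance of induction modelled as a single physical process.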
Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date - e.g. by liberally sprinkling aversive sensors around the creature's brain. The argument that such approaches do not scale up is probably wrong - designers will al...
Naturalized induction is an open problem in Friendly Artificial Intelligence. The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.
I checked. These models of induction apparently allow reasoners to treat their own computations as modifiable processes:
Deutsch is interesting. He seems very close to the LW camp, and I think he's someone LWers should at least be familiar with.
Deutsch seems pretty clueless in the section quoted below. I don't see why students should be interested in what he has to say on this topic.
...It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techn
There never was a bloggingheads - AFAIK. There is: Yudkowsky vs Hanson on the Intelligence Explosion - Jane Street Debate. However, I'd be surprised if Yudkowsky makes the same silly mistake as Deutsch. Yudkowsky knows some things about machine intelligence.
But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences.
My estimate is 80% prediction, with the rest evaluation and tree pruning.
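That three-way breakdown maps naturally onto game-tree search. A generic alpha-beta sketch shows where each ingredient lives (illustrative only - not a claim about how brains apportion the work):

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    # prediction -> successors(state) generates what could happen next
    # evaluation -> evaluate(state) scores positions at the horizon
    # pruning    -> the alpha >= beta cutoff skips irrelevant branches
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # pruning: the opponent will never allow this line
        return value
    value = float("inf")
    for child in children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, successors, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # pruning: we already have a better option elsewhere
    return value
```

On a toy two-ply tree this returns the minimax value while skipping branches that can't matter - most of the compute goes into generating (predicting) positions, with evaluation and pruning as the smaller remainder.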
He also says confusing things about induction being inadequate for creativity, which I'm guessing he couldn't support well in this short essay (perhaps he explains it better in his books).
He does - but it isn't pretty.
Here is my review of The Beginning of Infinity: Explanations That Transform the World.
I remember Eliezer making the same point in a bloggingheads video with Robin Hanson.
A Hanson/Yudkowsky bloggingheads?!? Methinks you are mistaken.
So:
To give an example of a survivalist, here's an individual who proposes that we should be highly prioritizing species-level survival:
As you say, this is not a typical human being - since Nick says he is highly concerned about others.
There are many other survivalists out there, many of whom are much more concerned with personal survival.
If you're dealing with creatures good enough at modeling the world to predict the future and transfer skills, then you're dealing with memetic factors as well as genetic. That's rather beyond the scope of natural selection as typically defined.
What?!? Natural selection applies to both genes and memes.
I suppose there are theoretical situations where that argument wouldn't apply.
I don't think you presented a supporting argument. You referenced "typical" definitions of natural selection. I don't know of any definitions that exclude culture. H...
It isn't a testable hypothesis. Why would anyone attempt to assign probabilities to it?