Q: Quantum Bayesianism isn't the LessWrong official preferred interpretation of QM because...?
There doesn't need to be a special mechanism for power to corrupt; normal reinforcement learning should work perfectly well. When you're corrupt, you take actions to benefit yourself instead of those you're supposed to be benefiting. And if those actions do indeed benefit yourself, well, then, that's obviously the kind of thing that reinforcement learning is designed to teach you to do. You take the bribe, or set up a harem, or whatever, because being corrupt means that you are doing things that feel good to you (and are therefore reinforcing) instead of things that benefit the rest of the group.
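The point that ordinary reinforcement learning suffices for corruption can be sketched with a toy value-learning model. Everything here (the action names, the reward numbers, the function `train`) is hypothetical and for illustration only: the agent's updates are driven by its *personal* reward signal, so the self-benefiting action is the one that gets reinforced.

```python
import random

# Hypothetical toy model: a ruler repeatedly chooses between acting for the
# group and taking a bribe. The value estimates are updated from the reward
# *as felt by the ruler*, not the benefit to the group.
ACTIONS = ["serve_group", "take_bribe"]
PERSONAL_REWARD = {"serve_group": 0.1, "take_bribe": 1.0}  # assumed numbers

def train(steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(steps):
        # epsilon-greedy: mostly exploit the best-looking action, occasionally explore
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=q.get)
        # standard incremental update toward the received (personal) reward
        q[action] += alpha * (PERSONAL_REWARD[action] - q[action])
    return q

q = train()
print(q)  # the corrupt action ends up with the higher estimated value
```

No special "corruption mechanism" appears anywhere in the code; the drift toward bribe-taking falls out of the mismatch between what is rewarded and who was supposed to benefit.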
It's easy to say you won't give in to temptation when you've never been tempted before, but it's a lot harder to say that and also be right.
There's another mechanism which is a bit more like paperclipping: rulers come up with random ideas, which they think are good because their yes-men say so. (Example.) So you have two mechanisms: one which can go anywhere, and one which converges onto a narrow set of features, such as having multiple sexual partners. In view of the second mechanism, it becomes clear what a great piece of social technology the idea of an Official Opposition is.
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human asks it to pursue a certain goal, the human would not want that goal pursued in a way that leads to the destruction of the world.
The entity providing the goals for the AI wouldn't have to be a human; it might instead be a corporation. A reasonable goal for such an AI might be to 'maximize shareholder value'. The shareholders are not humans either, and what they value is only money.
Encouragingly, corporations seem to have an impetus to keep blue-sky thinking and direct execution somewhat separate.
One perhaps useful analogy for super-intelligence going wrong is corporations.
We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways corporations can behave very destructively. One example is the way they pursue profit at the cost of, in some cases, ruining people's lives, damaging the environment, and corrupting the political process.
By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.
It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.
But are corporations existential threats?
I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.
The problem with this assumption is that mind-design space is large (<--understatement), and the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)
In fact, autistic people are an example of non-human-standard ability clusters, and even that deviation is tiny on the scale of mind-design space.
As for an elevator pitch of this concept, try something like: "just because evolution happened to design our brains to be really good at modeling human goal systems doesn't mean all intelligences are good at it, regardless of how good they might be at destroying the planet".
the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal.
What is this process of random design? Actual AI design is done by humans trying to emulate human abilities.
I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.
If you mean this sort of thing http://www.kurzweilai.net/slate-this-is-your-brain-on-neural-implants, then he is barely arguing the point at all... this is miles below philosophy-grade thinking... he doesn't even set out a theory of selfhood, just appeals to intuitions. Absent Qualia is much better, although still not anything that should be called a proof.
You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.
Mentioning something is not a prerequisite for having it.
If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience
That reads like a non sequitur to me. We don't know what the relationship between algorithms and experience is.
Mentioning something is not a prerequisite for having it.
It's possible for a description that doesn't explicitly mention X to nonetheless add up to X, but only possible... you seem to be treating it as a necessity.
I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.
I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.
You only get your guarantee if experiences are the only thing that can cause thoughts about experiences. However, you don't get that by noting that in humans thoughts are usually caused by experiences. Moreover, in a WBE or AI, there is always a causal account of thoughts that doesn't mention experiences, namely the account in terms of information processing.
If you define Bayes as "something something information, something something update", it goes upwards, forwards, and sideways too.
Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?
A major way of avoiding tax is to keep money offshore. ... so what can you usefully do with money while it is resting in an account in the Cayman Islands?