Well, hmmm. I wonder if this qualifies as "stupid".
Could someone help me summarize the evidence for MWI in the quantum physics sequence? I tried once, and only came up with 1) the fact that collapse postulates are "not nice" (i.e., nonlinear, nonlocal, and so on) and 2) the fact of decoherence. However, consider the following quote from Many Worlds, One Best Guess (emphasis added):
The debate should already be over. It should have been over fifty years ago. The state of evidence is too lopsided to justify further argument. There is no balance in this issue. There is no rational controversy to teach. The laws of probability theory are laws, not suggestions; there is no flexibility in the best guess given this evidence. Our children will look back at the fact that we were STILL ARGUING about this in the early 21st-century, and correctly deduce that we were nuts.
Is there other evidence as well, then? 1) seems depressingly weak, and as for 2)...
As was mentioned in Decoherence is Falsifiable and Testable, and brought up in the comments, the existence of so-called "microscopic decoherence" (which we have evidence for) is independent of so-called "macr...
If the SIAI engineers figure out how to construct friendly super-AI, why would they care about making it respect the values of anyone but themselves? What incentive do they have to program an AI that is friendly to humanity, and not just to themselves? What's stopping LukeProg from appointing himself king of the universe?
Not an answer, but a solution:
You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. -- Julie from Crystal Nights by Greg Egan
:-p
What's stopping LukeProg from appointing himself king of the universe?
Personal abhorrence at the thought, and lack of AI programming abilities. :)
(But, your question deserves a more serious answer than this.)
Too late - Eliezer and Will Newsome are already dual kings of the universe. They balance each other's reigns in a Yin/Yang kind of way.
I understand CEV. What I don't understand is why the programmers would ask the AI for humanity's CEV, rather than just their own CEV.
The only (sane) reason is signalling - it's hard to create an AI friendly just to yourself without someone else stopping you. Given the choice, however, your own CEV is strictly superior. If you actually do want humanity's CEV, then your own CEV will be equivalent to it. But if you just think you want humanity's CEV and it turns out that, for example, humanity's CEV gets dominated by jerks in a way you didn't expect, then your own CEV will end up better... even from a purely altruistic perspective.
I think it would be significantly easier to make FAI than LukeFriendly AI
Massively backwards! Creating an FAI (presumably 'friendly to humanity') requires an AI that can somehow harvest and aggregate preferences over humans in general, but a LukeFriendly AI just needs to scan one brain.
Before I ask these questions, I'd like to say that my computer knowledge is limited to "if it's not working, turn it off and turn it on again" and the math I intuitively grasp is at roughly a middle-school level, except for statistics, which I'm pretty talented at. So, uh... don't assume I know anything, okay? :)
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won't take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
I've seen some mentions of an AI "bootstrapping" itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of "if people evolved from apes then why are there still apes?")
Finally, why is SIAI the best place for artificial intelligence research? What exactly is it doing differently from other places trying to develop AI? Certainly the emphasis on Friendliness is important, but is that the only unique thing they're doing?
Consciousness isn't the point. A machine need not be conscious, or "alive", or "sentient," or have "real understanding" to destroy the world. The point is efficient cross-domain optimization. It seems bizarre to think that meat is the only substrate capable of efficient cross-domain optimization. Computers already surpass our abilities in many narrow domains; why not technology design or general reasoning, too?
Neurons work differently than computers only at certain levels of organization, which is true for every two systems you might compare. You can write a computer program that functionally reproduces what happens when neurons fire, as long as you include enough of the details of what neurons do when they fire. But I doubt that replicating neural computation is the easiest way to build a machine with a human-level capacity for efficient cross-domain optimization.
How does it know what bits to change to make itself more intelligent?
There is an entire field called "metaheuristics" devoted to this, though nothing in it comes close to improving general ability at efficient cross-domain optimization. I won't say more about this at the moment because I'm writi...
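For a concrete flavor of what that field covers, here is a toy sketch of one of the simplest metaheuristics, a random hill climber on a made-up bit-string problem. Everything in it (the problem, the function names) is invented for illustration and has nothing to do with self-improving AI:

```python
import random

def hill_climb(score, candidate, neighbor, steps=1000):
    """Generic hill climbing: try a random tweak, keep it if it scores better."""
    best = candidate
    for _ in range(steps):
        new = neighbor(best)
        if score(new) > score(best):
            best = new
    return best

# Toy problem: find a bit string with as many 1s as possible.
def score(bits):
    return sum(bits)

def neighbor(bits):
    tweaked = list(bits)
    i = random.randrange(len(tweaked))
    tweaked[i] ^= 1  # flip one randomly chosen bit
    return tweaked

print(hill_climb(score, [0] * 20, neighbor))  # usually a list of all (or nearly all) 1s
```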
Given that utility functions are only defined up to positive affine transformations, what do total utilitarians and average utilitarians actually mean when they're talking about the sum or the average of several utility functions? I mean, taking what they say literally, if Alice's utility function were twice what it actually is, she would behave the exact same way but she would be twice as 'important'; that cannot possibly be what they mean. What am I missing?
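To make the worry concrete (this is just the standard observation spelled out, with made-up numbers): rescaling one person's utility function leaves her preferences untouched but can flip the ranking given by the sum.

$$u_A(x)=1,\; u_A(y)=0, \qquad u_B(x)=0,\; u_B(y)=1.5$$
$$u_A + u_B:\;\; x \mapsto 1,\; y \mapsto 1.5 \;\Rightarrow\; y \text{ wins}; \qquad 2u_A + u_B:\;\; x \mapsto 2,\; y \mapsto 1.5 \;\Rightarrow\; x \text{ wins},$$

even though $u_A$ and $2u_A$ describe exactly the same agent.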
I would like someone who understands Solomonoff Induction/the universal prior/algorithmic probability theory to explain how the conclusions drawn in this post affect those drawn in this one. As I understand it, cousin_it's post shows that the probability assigned by the universal prior is not related to K-complexity; this basically negates the points Eliezer makes in Occam's Razor and in this post. I'm pretty stupid with respect to mathematics, however, so I would like someone to clarify this for me.
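For reference, and so it's clear what is being compared (these are the standard definitions, not anything specific to either post): the universal prior weights every program that outputs a string, while K-complexity looks only at the shortest such program,

$$M(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|}, \qquad K(x) \;=\; \min\{\, |p| : U(p)=x \,\},$$

where $U$ is a fixed universal (prefix) machine and $|p|$ is the length of program $p$ in bits.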
(I super-upvoted this, since asking stupid questions is a major flinch/ugh field)
Ok, my stupid question, asked in a blatantly stupid way, is: where does the decision theory stuff fit in The Plan? I have gotten the notion that it's important for Value-Preserving Self-Modification in a potential AI agent, but I'm confused because it all sounds too much like game theory - there are all these other agents it deals with. If it's not for VPSM, and is in fact some exploration of how an AI would deal with other potential agents, why is this important at all? Let the AI figure that out; it's going to be smarter than us anyway.
If there is some Architecture document I should read to grok this, please point me there.
What exactly is the difference in meaning of "intelligence", "rationality", and "optimization power" as used on this site?
If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, my improved AI would actually be less intelligent? What if I repeat this process a number of times; I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
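Spelling out the arithmetic behind that objection, writing $I = P/R$ for optimisation power $P$ over resources $R$ (the definition quoted below):

$$I' \;=\; \frac{2P}{3R} \;=\; \frac{2}{3}\,\frac{P}{R} \;=\; \frac{2}{3}\,I,$$

so each such upgrade doubles raw capability while multiplying measured "intelligence" by $2/3$; after $n$ upgrades the capability is $2^n P$ but the intelligence score is only $(2/3)^n I$.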
Intelligence is optimization power divided by the resources used.
I checked with: A Collection of Definitions of Intelligence.
Out of 71 definitions, only two mentioned resources:
“Intelligence is the ability to use optimally limited resources – including time – to achieve goals.” R. Kurzweil
“Intelligence is the ability for an information processing system to adapt to its environment with insufficient knowledge and resources.” P. Wang
The paper suggests that the nearest thing to a consensus is that intelligence is about problem-solving ability in a wide range of environments.
Yes, Yudkowsky apparently says otherwise - but: so what?
If I understand it correctly, the FAI problem is basically about making an AI whose goals match those of humanity. But why does the AI need to have goals at all? Couldn't you just program a question-answering machine and then ask it to solve specific problems?
In this interview between Eliezer and Luke, Eliezer says that the "solution" to the exploration-exploitation trade-off is to "figure out how much resources you want to spend on exploring, do a bunch of exploring, use all your remaining resources on exploiting the most valuable thing you’ve discovered, over and over and over again." His point is that humans don't do this, because we have our own, arbitrary value called boredom, while an AI would follow this "pure math."
My potentially stupid question: doesn't this strategy assu...
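The strategy Eliezer describes is sometimes called "explore then commit" in the multi-armed-bandit literature; a toy sketch (all numbers and names here are made up for illustration) looks like this:

```python
import random

def explore_then_exploit(arms, explore_pulls=99, total_pulls=1000):
    """Spend a fixed exploration budget estimating each arm, then pull the best one."""
    pulls_per_arm = explore_pulls // len(arms)
    estimates = []
    for arm in arms:
        rewards = [arm() for _ in range(pulls_per_arm)]
        estimates.append(sum(rewards) / pulls_per_arm)
    best = arms[estimates.index(max(estimates))]  # commit to the apparent best arm
    return sum(best() for _ in range(total_pulls - explore_pulls))

# Three slot machines with (hidden) payout probabilities 0.2, 0.5, 0.8.
arms = [lambda p=p: 1 if random.random() < p else 0 for p in (0.2, 0.5, 0.8)]
print(explore_then_exploit(arms))
```

The exploration budget here is fixed in advance, which is exactly the "figure out how much resources you want to spend on exploring" step from the interview.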
So in Eliezer's meta-ethics he talks about the abstract computation called "right", whereas in e.g. CEV he talks about stuff like reflective endorsement. So in other words in one place he's talking about goodness as a formal cause and in another he's talking about goodness as a final cause. Does he argue anywhere that these should be expected to be the same thing? I realize that postulating their equivalence is not an unreasonable guess but it's definitely not immediately or logically obvious, non? I suspect that Eliezer's just not making a clear...
I keep scratching my head over this comment made by Vladimir Nesov in the discussion following “A Rationalist’s Tale”. I suppose it would be ideal for Vladimir himself to weigh in and clarify his meaning, but because no objections were really raised to the substance of the comment, and because it in fact scored nine upvotes, I wonder if perhaps no one else was confused. If that’s the case, could someone help me comprehend what’s being said?
My understanding is that it’s the LessWrong consensus that gods do not exist, period; but to me the comment seems to ...
"Magical gods" in the conventional supernatural sense generally don't exist in any universes, insofar as a lot of the properties conventionally ascribed to them are logically impossible or ill-defined, but entities we'd recognize as gods of various sorts do in fact exist in a wide variety of mathematically-describable universes. Whether all mathematically-describable universes have the same ontological status as this one is an open question, to the extent that that question makes sense.
(Some would disagree with referring to any such beings as "gods", e.g. Damien Broderick who said "Gods are ontologically distinct from creatures, or they're not worth the paper they're written on", but this is a semantic argument and I'm not sure how important it is. As long as we're clear that it's probably possible to coherently describe a wide variety of godlike beings but that none of them will have properties like omniscience, omnipotence, etc. in the strongest forms theologians have come up with.)
When people talk about designing FAI, they usually say that we need to figure out how to make the FAI's goals remain stable even as the FAI changes itself. But why can't we just make the FAI incapable of changing itself?
Database servers can improve their own performance, to a degree, simply by performing statistical analysis on tables and altering their metadata. Then they just consult this metadata whenever they have to answer a query. But we never hear about a database server clobbering its own purpose (do we?), since they don't actually alter their own ...
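As a small illustration of the database half of that comparison (a sketch using SQLite via Python's standard library; the table and column names are made up): the engine gathers statistics about its own tables into metadata that its query planner later consults, without any change to the engine's code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.execute("CREATE INDEX idx_kind ON events (kind)")
conn.executemany(
    "INSERT INTO events (kind) VALUES (?)",
    [("purchase",) if i % 10 == 0 else ("click",) for i in range(1000)],
)

conn.execute("ANALYZE")  # gather table/index statistics into internal metadata

# The statistics the planner will consult when answering future queries:
for row in conn.execute("SELECT * FROM sqlite_stat1"):
    print(row)
```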
But why can't we just make the FAI incapable of changing itself?
Because it would be weak as piss and incapable of doing most things that we want it to do.
The majority of Friendly AI's ability to do good comes from its ability to modify its own code. Recursive self improvement is key to gaining intelligence and ability swiftly. An AI that is about as powerful as a human is only about as useful as a human.
Are there any intermediate steps toward the CEV, such as individual EV, and if so, are they discussed anywhere?
Where should I ask questions like question 2?
I've been here less than thirty days. Why does my total karma sometimes but not always show a different number from my karma from the last 30 days?
Why are flowers beautiful? I can't think of any "just so" story why this should be true, so it's puzzled me. I don't think it's justification for a God or anything, just something I currently cannot explain.
Many flowers are optimized for being easily found by insects, who don't have particularly good eyesight. To stick out from their surroundings, they can use bright unnatural colors (i.e. not green or brown), unusual patterns (concentric circles is a popular one), have a large surface, etc.
Also, flowers are often quite short-lived, and thus mostly undamaged; we find smoothness and symmetry attractive (for evolutionary reasons - they're signs of health in a human).
In addition, humans select flowers that they themselves find pretty to place in gardens and the like, so when you think of "flowers", the pretty varieties are more likely to come to mind than the less attractive ones (like, say, that of the plane tree, or of many species of grass). Many flowers are also prettier if you look at them in the ultraviolet. If you take a walk in the woods, most plants you encounter won't have flowers you'll find that pretty; ugly or unremarkable flowers may not even register in your mind as "flowers".
How do I stop my brain from going: "I believe P and I believe something that implies not P -> principle of explosion -> all statements are true!" and instead going "I believe P and I believe something that implies not P -> one of my beliefs is incorrect"? It doesn't happen too often, but it'd be nice to have an actual formal refutation for when it does.
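One way to write down the formal move you want (standard propositional logic, nothing exotic): the explosion argument needs both $P$ and $\neg P$ as actual premises, but what you really have is a jointly inconsistent set of beliefs, and inconsistency only licenses dropping one of them.

$$P,\; B,\; (B \to \neg P) \;\vdash\; \bot \qquad\text{so, by reductio,}\qquad \vdash\; \neg\big(P \land B \land (B \to \neg P)\big).$$

That is, the contradiction counts against the conjunction of your beliefs, not as a license to derive arbitrary conclusions; at least one of $P$, $B$, or the implication has to go.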
Is there an easy way to read all the top level posts in order starting from the beginning? There doesn't seem to be a 'first post' link anywhere.
There is no clearly defined or motivated problem of "proving Friendliness". We need to understand what goals are, what humane goals are, what process can be used to access their formal definition, and what kinds of things can be done with them, how, and to what end. We need to understand these things well, which (on a psychological level) triggers an association with mathematical proofs, and will probably actually involve some mathematics suitable to the task. Whether the answers take the form of something describable as "provable Friendliness" seems to me an unclear/unmotivated consideration. Unpacking that label might make it possible to provide a more useful response to the question.
I think I may be incredibly confused.
Firstly, if the universe is distributions of complex amplitudes in configuration space, then shouldn't we describe our knowledge of the world as probability distributions of complex amplitude distributions? Is there some incredibly convenient simplification I'm missing?
Secondly, have I understood correctly that the universe, in quantum mechanics, is a distribution of complex values in an infinite-dimensional space, where each dimension corresponds to the particular values some attribute of some particle in the universe t...
There's an argument that I run into occasionally that I have some difficulty with.
Let's say I tell someone that voting is pointless, because one vote is extremely unlikely to alter the outcome of the election. Then someone might tell me that if everyone thought the way I do, democracy would be impossible.
And they may be right, but since everyone doesn't think the way I do, I don't find it to be a persuasive argument.
Other examples would be littering, abusing community resources, overusing antibiotics, et cetera. They may all be harmful, but if only one add...
How would I set up a website with a similar structure to Less Wrong? That is, including user-submitted posts, comments, and an upvote/downvote system.
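Not a full answer, but a minimal sketch of the data model such a site needs, just to show how little structure is involved (the class and field names are invented for illustration; as I understand it, Less Wrong itself was originally built on Reddit's open-source codebase, which is one ready-made starting point):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vote:
    user_id: int
    direction: int  # +1 for an upvote, -1 for a downvote

@dataclass
class Comment:
    author_id: int
    body: str
    votes: List[Vote] = field(default_factory=list)

    @property
    def score(self) -> int:
        return sum(v.direction for v in self.votes)

@dataclass
class Post:
    author_id: int
    title: str
    body: str
    comments: List[Comment] = field(default_factory=list)
    votes: List[Vote] = field(default_factory=list)

    @property
    def score(self) -> int:
        return sum(v.direction for v in self.votes)

# Usage: one post, one comment, one upvote.
post = Post(author_id=1, title="First post", body="Hello")
post.comments.append(Comment(author_id=2, body="Welcome!"))
post.votes.append(Vote(user_id=2, direction=+1))
print(post.score)  # 1
```

The real work is in persistence, authentication, and spam/abuse handling, which is why starting from an existing platform is usually easier than writing all of this from scratch.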
This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.