List of allusions I managed to catch (part 1):
Alderson starlines - Alderson Drive
Giant Science Vessel - GSV - General Systems Vehicle
Lord Programmer - allusion to the archeologist programmers in Vernor Vinge's A Fire Upon the Deep?
Greater Archive - allusion to Orion's Arm's Greater Archives?
Will Wilkinson said at 50:48:
People will shout at you in Germany if you jaywalk, I'm told.
I can't say for sure this doesn't happen anywhere in Germany, but it's definitely not universal in German society. Where I live, jaywalking is pretty common and nobody shouts at people for doing it unless they force a driver to brake or swerve.
I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished - reducing you to looking at the Black Swan possibilities within which the world might just be saved.
If this is indeed the reason for Eliezer considering this specific outcome, that would suggest that deliberately depressing the economy is a valid existential-risk-prevention tactic.
This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character. Similarly, it's notable that the AI seems to use exactly the same interpretation of the word "lie" as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.
Also, at the risk of being redundant: Great story.
To add to Abigail's point: Is there significant evidence that the critically low term in the Drake Equation isn't f_i (i.e. P(intelligence|life))? If natural selection on earth hadn't happened to produce an intelligent species, I would assign a rather low probability of any locally evolved life surviving the local sun going nova. I don't see any reasonable way of even assigning a lower bound to f_i.
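For reference, the standard form of the Drake equation, with the term in question:

$$N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$

Here f_i is the fraction of life-bearing planets on which intelligent life evolves, i.e. roughly the P(intelligence|life) above.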
The of helping someone, ...
Missing word?
Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.
Does this assume that they would be protected from any consequences of messing the Friendliness up and building a UFAI by accident? I don't see a good solution to this. If people are protected from being eaten by their creations, they can slog through the problem using a trial-and-error approach through however many iterations it takes. If they aren't, this is going to be one deadly test.
Up to now there never seemed to be a reason to say this, but now that there is:
Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.
It's easier to say where someone else's argument is wrong, then to get the fact of the matter right;
Did you mean s/then/than/?
You posted your raw email address needlessly. Yum.
Posting it here didn't really change anything.
How can you tell if someone is an idiot not worth refuting, or if they're a genius who's so far ahead of you to sound crazy to you? Could we think an AI had gone mad, and reboot it, when it is really genius.
You can tell by the effect they have on their environment. If it's stupid, but it works, it's not stupid. This can be hard to do p...
Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?
No. That seems obviously wrong, even if I can't figure out where the error lies.
Definitely defect. Cooperation only makes sense in the iterated version of the PD. This isn't the iterated case, and there's no prior communication, hence no chance to negotiate for mutual cooperation (though even if there were, meaningful negotiation may well be impossible depending on specific details of the situation). Superrationality be damned, humanity's choice doesn't have any causal influence on the paperclip maximizer's choice. Defection is the right move.
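A minimal sketch of that dominance argument, with illustrative payoff numbers (the specific stakes of the scenario under discussion aren't reproduced here):

```python
# One-shot Prisoner's Dilemma with illustrative payoffs.
# Entries are (row player, column player); moves are 'C' or 'D'.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def best_response(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max('CD', key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is a dominant strategy: it is the best response whichever move
# the opponent makes, so absent any causal link between the two choices
# there is no way to bargain your way to (C, C).
assert best_response('C') == 'D'
assert best_response('D') == 'D'
```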
Nitpicking your poison category:
What is a poison? ... Carrots, water, and oxygen are "not poison". ... (... You're really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)
If I understand that last definition correctly, it should classify water as a poison.
Doug S.:
What character is ◻?
That's U+25FB ('WHITE MEDIUM SQUARE').
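For anyone who wants to verify the codepoint themselves, a quick check using only the Python standard library:

```python
import unicodedata

ch = "\u25FB"
print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))
# prints: ◻ U+25FB WHITE MEDIUM SQUARE
```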
Eliezer Yudkowsky:
Larry, interpret the smiley face as saying:
PA + (◻C -> C) |-
I'm still struggling to completely understand this. Are you also changing the meaning of ◻ from 'derivable from PA' to 'derivable from PA + (◻C -> C)'? If so, are you additionally changing L to use provability in PA + (◻C -> C) instead of provability in PA?
Quick correction: s/abstract rational reasoning/abstract moral reasoning/
Jadagul:
But my moral code does include such statements as "you have no fundamental obligation to help other people." I help people because I like to.
While I consider myself an altruist in principle (I have serious akrasia problems in practice), I do agree with this statement. Altruists don't have any obligation to help people; it just often makes sense for them to do so. Sometimes it doesn't, and then the proper thing for them is not to do it.
Roko:
In the modern world, people have to make moral choices using their general intelligence, because th...
I think my highest goal in life is to make myself happy. Because I'm not a sociopath making myself happy tends to involve having friends and making them happy. But the ultimate goal is me.
If you had a chance to take a pill which would cause you to stop caring about your friends by permanently maxing out that part of your happiness function regardless of whether you had any friends, would you take it?
After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.
Humans depend on matter to survive, and increase entropy by doing so. Matter can be used for storage and computronium, negentropy for fueling computation. Both are limited and valuable resources (assuming physics doesn't allow for infinite-resource cheats).
I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to t...
Constant [sorry for getting the attribution wrong in my previous reply] wrote:
We do not know very well how the human mind does anything at all. But that the the human mind comes to have preferences that it did not have initially, cannot be doubted.
I do not know whether those changes in opinion indicate changes in terminal values, but it doesn't really matter for the purposes of this discussion, since humans aren't (capital-F) Friendly. You definitely don't want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this ki...
TGGP wrote:
We've been told that a General AI will have power beyond any despot known to history.
Unknown replied:
If that will be then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn't bet on that. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.
A tendency to become corrupt when placed into positions of power is a feature of some minds. Evolutionary psychology explains nicely why humans have evolved this tendency. It also allows you to predict that other inte...
Thank you for this post. Treating "should" as a label for results of the human planning algorithm in backward-chaining mode, the same way that "could" is a label for results of the forward-chaining mode, explains a lot. It's obvious in retrospect (and unfortunately, only in retrospect) to me that the human brain would do both kinds of search in parallel; in big search spaces, the computational advantages are too big not to do it.
I found two minor syntax errors in the post: "Could make sense to ..." - did you mean "Could it make s...
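As an illustration of the parallel-search point above, here is a toy Python sketch contrasting forward enumeration of reachable states ("could") with a goal-directed search for a plan that achieves a specified end state ("should"). The graph and all names are invented for illustration; the "should" search is a simple recursive plan search standing in for true goal regression, not anything from the original post:

```python
# Toy planning graph: state -> {action: next_state}. All names are invented.
GRAPH = {
    "at_home":  {"walk_to_store": "at_store", "stay_put": "at_home"},
    "at_store": {"buy_food": "has_food", "walk_home": "at_home"},
    "has_food": {"eat": "fed"},
}

def coulds(state, depth=3):
    """Forward chaining: enumerate states reachable from `state` ('could')."""
    reachable, frontier = {state}, {state}
    for _ in range(depth):
        frontier = {nxt for s in frontier for nxt in GRAPH.get(s, {}).values()}
        reachable |= frontier
    return reachable

def shoulds(state, goal, visited=None):
    """Goal-directed search: an action sequence from `state` to `goal` ('should')."""
    visited = visited or {state}
    if state == goal:
        return []
    for action, nxt in GRAPH.get(state, {}).items():
        if nxt in visited:
            continue
        rest = shoulds(nxt, goal, visited | {nxt})
        if rest is not None:
            return [action] + rest
    return None

print(coulds("at_home"))          # everything we *could* reach
print(shoulds("at_home", "fed"))  # what we *should* do to end up fed
```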
It's harder to answer Subhan's challenge - to show directionality, rather than a random walk, on the meta-level.
Even if one is ignorant of what humans mean when they talk about morality, or of what aspects of the environment influence it, it should be possible to determine empirically whether morality-development over time follows a random walk: a random walk would, on average, cause more repeated reversals of a given value judgement than a directional process would.
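A rough sketch of how such an empirical test could work, as a toy simulation. The model (a value judgement as the sign of a cumulative score, with drift standing in for directionality) is my own illustrative assumption, not something from the original discussion:

```python
import random

def avg_reversals(drift, steps=5_000, trials=100):
    """Average number of sign flips of a cumulative 'value judgement' score.

    Each step moves the score by +1 or -1; `drift` biases the step upward.
    drift=0.0 gives a pure random walk, drift > 0 a directional process.
    """
    total = 0
    for _ in range(trials):
        score, last_sign, flips = 0, 0, 0
        for _ in range(steps):
            score += 1 if random.random() < 0.5 + drift else -1
            sign = (score > 0) - (score < 0)
            if sign != 0 and last_sign != 0 and sign != last_sign:
                flips += 1
            if sign != 0:
                last_sign = sign
        total += flips
    return total / trials

print("pure random walk:   ", avg_reversals(0.0))   # many reversals
print("directional process:", avg_reversals(0.05))  # few reversals
```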
Regarding the first question,
Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?
I think the meaning of "it is (morally) right" may be easiest to explain through game theory. Humans in the EEA had plenty of chances for positive-sum interactions, but consistently helping other people runs the risk of being exploited by defection-prone agents. Accordingly, humans may have evolved a set of adaptations to exploit non-zero-sumness between cooperating agents, but also avoid coope...
This post reminds me a lot of DialogueOnFriendliness.
There's at least one more trivial mistake in this post:
Is their nothing more to the universe than their conflict?
s/their/there/
Constant wrote:
Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.
If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whoever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?
Hal Finney:
Why doesn't the AI do it verself? Even if it's boxed (and why would it be, if I'm convinced it's an FAI?), at the intelligence it'd need to make the stated prediction with any degree of confidence, I'd expect it to be able to take over my mind quickly. If what it claims is correct, it shouldn't have any qualms about doing that (taking over one human's body for a few minutes is a small price to pay for the utility involved).
If this happened in practice I'd be confused as heck, and the alleged FAI being honest about its intentions would be prett...
Are there no vegetarians on OvBias?
I'm a vegetarian, though not because I particularly care about the suffering of meat animals.
Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born.
Of course people change; that's why I talked about "future selves" - the interesting aspect isn't that they exist in the future, it's that they're not exactly the same person as I am now. However, there's still a lot of similarity between my present self and my one-second-in-the-...
Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
I'm a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that's a result of my physical structure. I don't really know the algorithm I use for assigning utility, but that's because my design is pretty messed up. Nevertheless, there is an algorithm, and it's what I talk about when I use the words "right" and "wrong".
Here's my vision of this, as a short scene from a movie. Off my blog: The Future of AI
To me, the most obvious reading of that conversation is that a significant part of what the AI says is a deliberate lie, and Anna is about to be dumped into a fun-and-educational adventure game at the end. Did you intend that interpretation?
Eliezer:
If you think as though the whole goal is to save on computing power, and that the brain is actually fairly good at this (it has to be), then you won't go far astray.
Ah, thanks! I hadn't considered why you would think about isolated subsystems in practice; knowing about the motivation helps a lot in filling in the implementation details.
I'm trying to see exactly where your assertion that humans actually have choice comes in.
"Choice" is a useful high-level abstraction of certain phenomena. It's a lossy abstraction, and if you had infinite amounts of memory and computing power, you would have no need for it, at least when reasoning about other entities. It exists, in exactly the same way in which books (the concept of a book is also a high-level abstraction) exist.
What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up. If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?
I don't know, and unless you're trying to market it, I don't think it matters. People make silly judgements on many subjects; blindly copying the majority in this society isn't particularly good advice.
Each twin might feel strong regard for the other, but there's no way they wo...
Is the 'you' on mars the same as 'you' on Earth?
There's one of you on Earth, and one on Mars. They start out (by assumption) the same, but will presumably increasingly diverge due to different input from the environment. What else is there to know? What does the word 'same' mean for you?
And what exactly does that mean if the 'you' on earth doesn't get to experience the other one's sensations first hand? Why should I care chat happens to him/me?
That's between your world model and your values. If this happened to me, I'd care because the other instance of ...
But I don't buy the idea of intelligence as a scalar value.
Do you have a better suggestion for specifying how effective a system is at manipulating its environment into specific future states? Unintelligent systems may work much better in specific environments than others, but any really intelligent system should be able to adapt to a wide range of environments. Which important aspect of intelligence do you think can't be expressed in a scalar rating?
They only depend to within a constant factor. That's not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you're proposing is, or is NOT, the shortest possible program to express the law.
I disagree; I think the underspecification is a more serious issue than the uncomputability. There are constant factors that outweigh, by a massive margin, all evidence ever collected by our species. Unless there's a way for us to get our hands on an infinite amount of CPU time, there are constant...
But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
Computer programs in which language? The Kolmogorov complexity of a given string depends on the choice of description language (or programming language, or UTM) used. I'm not familiar with MML, but considering that it's apparently strongly related to Kolmogorov complexity, I'd expect its simplicity ratings to be similarly dependent on parameters for which there...
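For what it's worth, the language-dependence is bounded by the invariance theorem: for any two universal machines U and V there is a constant c_{UV}, depending only on the machines and not on the string x, such that

$$|K_U(x) - K_V(x)| \le c_{UV} \quad \text{for all } x,$$

which is exactly the kind of additive constant (equivalently, a constant factor on the corresponding 2^{-K} weights) that the preceding comment worries can outweigh all evidence ever collected.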
"A short time?" Jeffreyssai said incredulously. "How many minutes in thirty days? Hiriwa?"
"28800, sensei," she answered. "If you assume sixteen-hour waking periods and daily sleep, then 19200 minutes."I would have expected the answers to be 43200 (30d 24h/d 60/h) and 28800 (30d 16h/d 60/h), respectively. Do these people use another system for specifying time? It works out correctly if their hours have 40 minutes each.
Aside from that, this is an extremely insightful and quote-worthy post. I have^W^W My idiotic ...
I hope the following isn't completely off-topic:
... if I'd been born into that time, instead of this one...
What exactly does a hypothetical scenario where "person X was born Y years earlier" even look like? I could see a somewhat plausible interpretation of that description in periods of extremely slow scientific and technological progress, but the twentieth century doesn't qualify. In the 1920s: 1) The concept of a Turing machine hadn't been formulated yet. 2) There were no electronic computers. 3) ARPANET wasn't even an idea yet, and wouldn't ...
Maybe later I'll do a post about why you shouldn't panic about the Big World. You shouldn't be drawing many epistemic implications from it, let alone moral implications. As Greg Egan put it, "It all adds up to normality." Indeed, I sometimes think of this as Egan's Law.
While I'm not currently panicking about it, I'd be very interested in reading that explanation. It currently seems to me that there should be certain implications, e.g. in quantum suicide experiments. If mangled worlds says that the entity performing such an experiment should no...
Good writing, indeed! I also love what you've done with the Eborrian anzrf (spoiler rot13-encoded for the benefit of other readers since it hasn't been mentioned in the previous comments).
The split/remerge attack on entities that base their anticipations of future input directly on how many of their future selves they expect to get specific input is extremely interesting to me. I originally thought that this should be a fairly straightforward problem to solve, but it has turned out a lot harder (or my understanding a lot more lacking) than I expected. I th...
Similarly to "Zombies: The Movie", this was very entertaining, but I don't think I've learned anything new from it.
Z. M. Davis wrote:
Also, even if there are no moral facts, don't you think the fact that no existing person would prefer a universe filled with paperclips ...
Have you performed a comprehensive survey to establish this? Asserting "no existing person" in a civilization of 6.5e9 people amounts to assigning a probability of less than 1.54e-10 that a randomly chosen person would prefer a universe filled with paperclips. This is ...
For a rather silly reason, I wrote something about:
... explaining the lowest known layer of physics ...
Please ignore the "lowest known layer" part. I accidentally committed a mind projection fallacy while writing that comment.
A configuration can store a single complex value - "complex" as in the complex numbers (a + bi).
Any complex number? I.e. you're invoking an uncountable infinity for explaining the lowest known layer of physics? How does that fit in with being an infinite-set atheist - assuming you still hold that position?
To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back.
Ah! Now I see that my earlier claim about sane utility functions not valuing things that couldn't be measured even in principle was obviously bogus. Some commenters poked holes in the idea before, but a number of issues co...
And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, ...
TMs also have the notable ability to not halt for some inputs. And if you wanted to precompute those results, writing NULL values into your GLUT, I'd really like to know where the heck you got your Halting Oracle from. The mathematical str...
Posting here since the other post is now at exactly 50 replies.
Re Michael Vassar:
Sane utility functions pay attention to base rates, not just evidence, so even if it's impossible to measure a difference in principle one can still act according to a probability distribution over differences.
You're right, in principle. But how would you estimate a base rate in the absence of all empirical data? By simply using your priors? I pretty much completely agree with the rest of your paragraph.
Re Nick Tarleton: (1) an entity without E can have identical outward be...
Things that cannot be measured can still be very important, especially in regard to ethics. One may claim for example that it is ok to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else are p-zombies, then I could morally kill and torture people for my own pleasure.
For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p...
Your brain assumes that you have qualia
Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: this doesn't hold for my present-brain.
If qualia-concepts are shown at some point in the future to be useful in understanding the real world, i.e. to specify a compact border around a high-density region of thingspace, my brain will likely become interested i...
Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this).
What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.
I believe there's a theorem which states that the problem of producing a Turing machine which will give output Y for input X is uncomputable in the general case.
What? That's trivial to do; a very simple general method would be to use a lookup table. Maybe you meant the inverse problem?
WHY is a human being conscious?
I don't understand this question. Please rephrase while rationalist-tabooing the word 'conscious'.
I wonder how this relates to tracking down hard-to-find bugs in computer programs.
And that the tremendous high comes from having hit the problem from every angle you can manage, and having bounced; and then having analyzed the problem again, using every idea you can think of, and all the data you can get your hands on - making progress a little at a time - so that when, finally, you crack through the problem, all the dangling pieces and unresolved questions fall into place at once, like solving a dozen locked-room murder mysteries with a single clue.
This s...
It's interesting to note that those oh-so-advanced humans prefer saving children to saving adults, even though there don't seem to be any limits to natural lifespan anymore.
At our current tech-level this kind of thing can make sense because adults have less lifespan left; but without limits on natural lifespan (or neural degradation because of advanced age) older humans have, on average, had more resources invested into their development - and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me t...