So I submit the only useful questions we can ask are not about AGI, "goals", and other such anthropomorphic, infeasible, irrelevant, and/or hopelessly vague ideas. We can only usefully ask computer security questions. For example some researchers I know believe we can achieve virus-safe computing. If we can achieve security against malware as strong as we can achieve for symmetric key cryptography, then it doesn't matter how smart the software is or what goals it has: if one-way functions exist no computational entity, classical or quantum, can crack symmetric key crypto based on said functions. And if NP-hard public key crypto exists, similarly for public key crypto. These and other security issues, and in particular the security of property rights, are the only real issues here and the rest is BS.

-- Nick Szabo

Nick Szabo and I have very similar backgrounds and interests. We both majored in computer science at the University of Washington. We're both very interested in economics and security. We came up with similar ideas about digital money. So why don't I advocate working on security problems while ignoring AGI, goals, and Friendliness?

In fact, I once did think that working on security was the best way to push the future towards a positive Singularity and away from a negative one. I started working on my Crypto++ Library shortly after reading Vernor Vinge's A Fire Upon the Deep. I believe it was the first general purpose open source cryptography library, and it's still one of the most popular. (Studying cryptography led me to become involved in the Cypherpunks community with its emphasis on privacy and freedom from government intrusion, but a major reason for me to become interested in cryptography in the first place was a desire to help increase security against future entities similar to the Blight described in Vinge's novel.)

I've since changed my mind, for two reasons.

1. The economics of security seems very unfavorable to the defense, in every field except cryptography.

Studying cryptography gave me hope that improving security could make a difference. But in every other security field, both physical and virtual, little progress is apparent, certainly not enough that humans might hope to defend their property rights against smarter intelligences. Achieving "security against malware as strong as we can achieve for symmetric key cryptography" seems particularly hopeless. Nick links above to a 2004 technical report titled "Polaris: Virus Safe Computing for Windows XP", which is strange considering that it's now 2012 and malware has little trouble with the latest operating systems and their defenses. Also striking to me is the fact that even dedicated security software like OpenSSH and OpenSSL has had design and coding flaws that introduced security holes into the systems that run it.
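
To make the contrast concrete, here is a minimal sketch (using Python's third-party cryptography package, purely as illustration) of the kind of guarantee symmetric crypto gives the defender and that no general-purpose system enjoys:

```python
# A minimal sketch, assuming the third-party "cryptography" package is installed.
# With authenticated symmetric encryption, the defender's problem reduces to one
# secret key plus the cipher's hardness assumption -- the attacker's intelligence
# doesn't enter into it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # 32 bytes of secret key material
cipher = Fernet(key)

token = cipher.encrypt(b"meet at dawn")
assert cipher.decrypt(token) == b"meet at dawn"

# Without `key`, reading or forging `token` means breaking the underlying
# AES/HMAC primitives -- the kind of asymmetry in the defender's favor that,
# as argued above, every other security field lacks.
```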

One way to think about Friendly AI is that it's an offensive approach to the problem of security (i.e., take over the world), instead of a defensive one.

2. Solving the problem of security at a sufficient level of generality requires understanding goals, and is essentially equivalent to solving Friendliness.

What does it mean to have "secure property rights", anyway? If I build an impregnable fortress around me, but an Unfriendly AI causes me to give up my goals in favor of its own by crafting a philosophical argument that is extremely convincing to me but wrong (or more generally, subverts my motivational system in some way), have I retained my "property rights"? What if it does the same to one of my robot servants, so that it subtly starts serving the UFAI's interests while thinking it's still serving mine? How does one define whether a human or an AI has been "subverted" or is "secure", without reference to its "goals"? It became apparent to me that fully solving security is not very different from solving Friendliness.

I would be very interested to know what Nick (and others taking a similar position) thinks after reading the above, or if they've already had similar thoughts but still came to their current conclusions.


I only have time for a short reply:

(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.

(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

(3) I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

(4) Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathemetical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by for example advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.

The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

As a lawyer, I strongly suspect this statement is false. As you seem to be using the term, law is society's organizational rules about how and when to implement coercive violence. In the abstract, this is powerful, but concretely, this power is implemented by individuals. Some of them (e.g., police officers) care relatively little about the abstract issues - in other words, they aren't careful about the issues that are relevant to AI.

Further, law is filled with backdoors - they are called legislators. In the United States, Congress can make almost any judicially announced rule irrelevant by passing a statute. If you call that process "Law," then you aren't ...

Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium,

That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1 in 10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for "legal occupations" in this chart is a little over 130. Historically populations were much lower, nutrition was worse, legal education or authority was only available to a small minority, and the Flynn Effect had not occurred. Not to mention that law is disproportionately made by politicians who are selected for charisma and other factors in addition to intelligence.

and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.

It's hard to know what to make of this.

Perhaps that the legal system is good at creating incentives that closely align the interests of those it governs with the social good, and that thi...

9private_messaging12y
There's a 0.0001 prior for a 1-in-10,000 intelligence level. With a prior that low, you need a genius detector with an incredibly low false positive rate before most of your 'geniuses' are actually smart. Very well defined problems with a very clear 'solved' condition (such as multiple novel mathematical proofs, or a novel algorithmic solution to a hard problem that others are trying to solve) would maybe suffice, but 'he seems smart' certainly would not. This also goes for IQ tests themselves - while a genius would have a high IQ score, a high-scoring person would most likely be someone somewhat smart slipping through the gap between what an IQ test measures and what intelligence is (case examples: Chris Langan, or Keith Raniere, or other high-IQ 'geniuses' we would never suspect of being particularly smart if not for IQ tests). Weak and/or subjective evidence of intelligence, especially given the lack of statistical independence between pieces of evidence, should not push your estimate of anyone's intelligence very high.
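
A quick illustration of the point about priors (the detector numbers below are assumptions picked for the example, not measurements):

```python
# Hypothetical numbers: even a detector that never misses a genius and flags
# only 1% of non-geniuses is swamped by a 1-in-10,000 prior.
prior = 0.0001            # base rate of 1-in-10,000 intelligence
sensitivity = 1.0         # assume no real genius is ever missed
false_positive_rate = 0.01

p_flagged = prior * sensitivity + (1 - prior) * false_positive_rate
p_genius_given_flag = (prior * sensitivity) / p_flagged
print(round(p_genius_given_flag, 4))  # ~0.0099: about 99% of flagged "geniuses" are false positives
```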
2Wei Dai12y
This is rather tangential, but I'm curious, out of those who score 1 in 10000 on a standard IQ test, what percentage is actually at least, say, 1 in 5000 in actual intelligence? Do you have a citation or personal estimate?
1David_Gerard12y
Depends what you call "actual intelligence" as distinct from what IQ tests measure. private_messaging talks a lot in terms of observable real-world achievements, so presumably is thinking of something along those lines.
0evand12y
The easiest interpretation to measure would be a regression toward the mean effect. Putting a lower bound on the IQ scores in your sample means that you have a relevant fraction of people who tested higher than their average test score. I suspect that at the high end, IQ tests have few enough questions scored incorrectly that noise can let some < 1 in 5000 IQ test takers into your 1 in 10000 cutoff.
2David_Gerard12y
I also didn't note the other problem: 1 in 10,000 is around IQ=155; the ceiling of most standardized (validated and normed) intelligence tests is around 1 in 1000 (IQ~=149). Tests above this tend to be constructed by people who consider themselves in this range, to see who can join their high IQ society and not substantially for any other purpose.
-1private_messaging12y
That would depend on how you evaluate actual intelligence. An IQ test, at the high range, measures reliability at solving simple problems (combined with, maybe, similarity of environmental exposure to the test maker's, when it comes to progressive matrices and other 'continue the sequence' items - the predictions of Solomonoff induction depend on the machine and prior exposure, too). As an extreme example, consider an intelligence test consisting of very many very simple and straightforward logical questions. It will correlate with IQ, but at the high range it will clearly measure something different from intelligence. All the intelligent individuals will score highly on that test, but so will a lot of people who are simply very good at simple questions. A thought experiment: picture a classroom of mind uploads, set half of their procedural skills to read-only, and teach them an algebra class. Same IQ, utterly different outcome. I would expect that if actual intelligence correlates with IQ by a factor of 0.9 (a VERY generous assumption), IQ could easily become non-predictive at as low as the 99th percentile without creating any contradiction with the observed general correlation. edit: that would make about one out of 50 people with an IQ of one in 10,000 (or one in 1,000 or one in 10,000,000 for that matter) intelligent at the level of 1 in 5,000. That seems kind of low, but then, we mostly don't hear of high-IQ people for IQ alone. edit: and high-IQ organizations like Mensa and the like are hopelessly unremarkable, rather than ultra-powerful groups of super-intelligences. In any case, the point is that the higher the percentile, the more confident you must be that there is no common failure mode between parts of your test. edit: and for the record, my IQ is 148 as measured on a (crappy) test in English, which is not my native tongue. I also got very high percentile ratings in a programming contest, and I used to be good at chess. I have no need to rationalize anything here. I feel that a...
5CarlShulman12y
There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests have substantially better performance (on patents, income, tenure at top universities) than those at the 99.9th or 99th percentiles. More here. Mensa is less selective than elite colleges or workplaces for intelligence, and much less selective for other things like conscientiousness, height, social ability, family wealth, etc. Far more very high IQ people are in top academic departments, Wall Street, and Silicon Valley than in high-IQ societies more selective than Mensa. So high-IQ societies are a very unrepresentative sample, selected to be less awesome in non-IQ dimensions.
-2private_messaging12y
It uses tests other than IQ tests, right? I do not dispute that a cognitive test can be made with the reliability required for detecting the 99.99th percentile. IQ tests, however, are full of 'continue a short sequence' items that are quite dubious even in principle. It is fundamentally difficult to measure up into the 99.99th percentile; you need a highly reliable measurement apparatus, carefully constructed in precisely the way IQ tests are not. Extreme rarities like one in 10,000 should not be thrown around lightly. There are other societies. None of them are very selective for intelligence either, though, because they all rely on dubious tests. I would say that this makes those other places an unrepresentative sample of "high IQ" individuals. Even if those individuals who pass highly selective requirements on something else rarely enter Mensa, they are rare (tautologically, being highly selected) and their relative underrepresentation in Mensa doesn't sway Mensa's averages. edit: for example, consider the Nobel Prize winners. They all have high IQs, but there is considerable spread, and IQ doesn't seem to correlate well with the estimate of "how many others worked on this and did not succeed". Note: I am using "IQ" in the narrow sense of "what IQ tests measure", not as shorthand for intelligence. Intelligence has a capacity-to-learn component which IQ tests do not measure but which tests of mathematical aptitude (with hard problems) or verbal aptitude do. note2: I do not believe that the correlation entirely disappears even for IQ tests past the 99th percentile. My argument is that for typical IQ tests it well could. It's just that the further up you get, the smaller the fraction of the excellence that is actually being measured.
1CarlShulman12y
Administering SATs to younger children, to raise the ceiling. Well, Mensa has ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. "Much more" is poor phrasing here; they're not rejecting 90%. If you look at the linked papers you'll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better-than-Mensa IQ distributions.
-1private_messaging12y
Ghmmm. I'm sure this measures plenty of highly useful personal qualities that correlate with income - e.g., rate of learning, or inclination to pursue intellectual work. Well, yes. I think we agree on all substantial points here but disagree on the interpretation of my post. I referred specifically to "IQ tests", not to the SAT, as lacking the rigour required for establishing 1-in-10,000 performance with any confidence, in keeping with my point that, e.g., 'that guy seems smart' shouldn't possibly result in an estimate of 1 in 10,000, and neither could anything that relies on a rather subjective estimate of the difficulty of accomplishments in settings where you can't, e.g., reliably estimate from the number of other people who try and don't succeed.
-1CarlShulman12y
Note that these studies use the same tests (childhood SAT) that Eliezer excelled on (quite a lot higher than the 1 in 10,000 level), and that I was taking into account in my estimation.
0private_messaging12y
Sources? Also, a: while that'd be fairly impressive, keep in mind that if it is quite a lot higher than 1 in 10,000 then my prior for it is quite a lot lower than 0.0001, with only minor updates up for 'seeming clever', and my prior for someone being a psychopath/liar is 0.01, with updates up for talking other people into giving you money. b: not having something else likewise concrete to show off (e.g., contest results of some kind and the like) will at most make me up-estimate him into the bin with someone like Keith Raniere or Chris Langan (those did the SAT well too), which is already the bin he's significantly in. Especially as he had been interested in programming, and programming is an area where you can literally make a LOT of money in just a couple of years while gaining experience and much better cred than childhood SAT scores - but also an area that heavily tasks the general ability to think right and deal with huge amounts of learned information. My impression is that he's a spoiled 'math prodigy' who didn't really study anything beyond fairly elementary math, and my impression is that it's his own impression too, except he thinks he can do advanced math with little effort using some intuition, while I'm pretty damn skeptical of such stuff unless well tested.
5CarlShulman12y
I don't think the childhood SAT gives that much "cred" for real-world efficacy, and I don't conflate intelligence with "everything good a person can be." Obviously, Eliezer is below average in the combination of conscientiousness, conformity, and so forth that causes most smart people to do more schooling. So I would expect lower performance on any given task than from a typical person of his level of intelligence. But it's not that surprising that he would, say, continue popular blogging with significant influence on a sizable audience, rather than stop that (which he values for its effects) to work as a Google engineer to sock away a typical salary, or to do a software startup (which the stats show is pretty uncertain even for those with VC backing and previous successful startups). I agree on not having deep math knowledge, and this being reason to be skeptical of making very unusual progress in AI or FAI. However, while his math scores were high, "math prodigy" isn't quite right, since his verbal scores were even higher. There are real differences in what you expect to happen depending on the "top skill." In the SMPY data such people often take up professions like science (or science fiction) writer (or philosopher) that use the verbal skills too, even when they have higher raw math performance than others who go on to become hard science professors. It's pretty mundane when such a person leans towards being a blogger rather than an engineer, especially when they are doing pretty well as the former. Eliezer has said that if not worried about x-risk he would want to become a science fiction writer, as opposed to a scientist.
5David_Gerard12y
Hey, Raniere was smart enough to get his own cult going.
-3private_messaging12y
Or old enough and disillusioned enough not to fight the cultist's desire to admire someone.
0[anonymous]12y
What salary level is good enough evidence for you to consider someone clever? Notice that your criteria for impressive cleverness exclude practically every graduate student -- the vast majority make next to nothing, have few "concrete" things to show off, etc. Except the interview you quoted says none of that. [...] This is substantially different from EY currently being a math prodigy. In other words, he's no better than random chance, which is vastly different from "[thinking] he can do advanced math with little effort using some intuition." By the same logic, you'd accept P=NP trivially.
5CarlShulman12y
I don't understand. The base rate for Marcello being right is greater than 0.5.
1gwern12y
Maybe EY meant that, on the occasions that Eliezer objected to the final result, he was correct to object half the time. So if Eliezer objected to just 1% of the derivations, on that 1% our confidence in the result of the black box would suddenly drop down to 50% from 99.5% or whatever.
1CarlShulman12y
Yes, but that's not "no better than random chance."
3gwern12y
Sure. I was suggesting a way in which an objection which is itself only 50% correct could be useful, contra Dmytry.
-2[anonymous]12y
Oh, right. The point remains that even a perfect Oracle isn't an efficient source of math proofs.
0[anonymous]12y
You do not understand how basic probability works. I recommend An Intuitive Explanation of Bayes' Theorem. If a device gives a correct diagnosis 999,999 times out of 1,000,000 and is applied to a population that has about 1 in 1,000,000 chance of being positive then a positive diagnosis by the device has approximately 50% chance of being correct. That doesn't make it "no better than random chance". It makes it amazingly good.
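
For reference, the arithmetic behind that ~50% figure is just Bayes' theorem with the stated numbers (an error rate of 1 in 1,000,000 and a base rate of 1 in 1,000,000):

```latex
P(\text{sick} \mid +)
  = \frac{P(+ \mid \text{sick})\,P(\text{sick})}
         {P(+ \mid \text{sick})\,P(\text{sick}) + P(+ \mid \text{healthy})\,P(\text{healthy})}
  = \frac{(1 - 10^{-6}) \cdot 10^{-6}}
         {(1 - 10^{-6}) \cdot 10^{-6} + 10^{-6} \cdot (1 - 10^{-6})}
  = \frac{1}{2}
```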
-3private_messaging12y
These aren't criteria for cleverness; they're criteria for evidence, when the prior is 0.0001 (for 1 in 10,000). One can be clever at a one-in-7-billion level and never have done anything of interest, but I can't detect such a person as clever at a 1-in-10,000 level with any confidence without seriously strong evidence. I meant a childhood math prodigy. If Marcello failed one time out of ten and Eliezer detected it half of the time, that would be better than chance. Without knowing Marcello's failure rate (or knowing how the failures are detected besides being pointed out by EY), one can't say whether it is better than chance or not.
-6nickLW12y

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this. Are you saying something like: if a society was ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean "compete" in the sense of providing the most social good? Or something else?

I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.

I disagree with "hopelessly" "anthropomorphic" and "vague", but "infeasible" I may very well agree with, if you mean something like it's highly unlikely that a human team would succeed in creating a Friendly AGI before it's too late to make a difference and without creating unacceptable risk, which is why I advocate more indirect methods of achieving it.

Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathemetical classes of algorithms, not vague and imaginary concepts.

People are trying to design such algorithms, things like practical approximations ...

It's not something you can ever come close to competing with by a philosophy invented from scratch.

I don't understand what you mean by this.

A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods of exploitation) as they are.

This seems somewhat strange to you, because you believe humans can conceive of AI designs that could reason some things from first principles (given observations of the world that the reasoning needed to be relevant to, plus reasonably anticipatable advantages of computing power over single humans) or incorporate results by reference.

One possible reason he might believe this would be that he believed that, whenever a human reasons about history or evolved institutions, there are something like two distinct levels of a computational complexity hierarchy at work, and that the powers of the greater level...

If anyone's looking for a computer security project that could be relevant to AI risk scenarios, consider looking at the architecture that an AI box would have, and the software at its surface. While boxing is not sufficient, it is useful as part of a defense-in-depth strategy, and it's a relatively cheap and good precaution for groups that don't think their creations will become generally intelligent with much probability. A box would offer some protection against an AI escaping before its birth has been noticed. I'm particularly concerned about terminals (iTerm, xterm, PuTTY, etc.), which would most likely interact with the contents of the box in any boxed-AI setup, and which all seem to be neglected targets written in unsafe languages (i.e., C and C++).

It would also be nice to have hardware and software for fake (alarmed) network interfaces, soundcard antennas, and pretend VM hosts. Rather than close off the tricks that an AI of unrecognized generality might use to escape its box, we should replace them all with traps; and since AI researchers probably won't make these things themselves, we should provide ready-made solutions for them.

3CarlShulman12y
These things all seem positive (which doesn't mean near the frontier of effectiveness) and helpful as far as they go.

I find it odd that Nick refers to "AGI goals" as an "anthropomorphic [and] hopelessly vague" idea. One model for AGI goals, for example, is the utility function, which is neither anthropomorphic (since humans don't have them) nor vague.
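
For concreteness, here is a toy sketch (entirely hypothetical, not any particular AGI design) of what "goals as a utility function" means: an explicit scoring function over predicted world-states, with actions chosen to maximize its expected value.

```python
# Toy illustration of "goals as a utility function" -- purely hypothetical.
# The agent scores the world-states its model predicts and takes the action
# with the highest expected score.

def utility(world_state):
    # An explicit, non-anthropomorphic objective: amount of some resource.
    return world_state["paperclips"]

def world_model(world_state, action):
    # Stand-in for the agent's predictive model: maps (state, action)
    # to predicted successor states with probabilities.
    if action == "build_factory":
        return [(0.5, {"paperclips": world_state["paperclips"] + 100}),
                (0.5, {"paperclips": world_state["paperclips"]})]
    else:  # "build_by_hand"
        return [(1.0, {"paperclips": world_state["paperclips"] + 10})]

def choose(world_state, actions):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in world_model(world_state, action))
    return max(actions, key=expected_utility)

print(choose({"paperclips": 0}, ["build_factory", "build_by_hand"]))  # -> build_factory
```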

9Vaniver12y
It seems somewhat vague to me in the sense that the domain of the function is underspecified. Is it valuing sensory inputs? Is it valuing mental models? Is it valuing external reality? Is that at all related to what humans would recognize as "goals" (say, the goal of visiting London)?
7Wei Dai12y
It seems to me that vagueness is different from having competing definitions (e.g., AIXI's notion of utility function vs UDT's) that may turn out to be wrong. In cryptography there are also competing formal definitions of "secure", and for many of them it turns out they don't coincide with our intuitive ideas of "secure", so that a cryptographic scheme can satisfy some formal definition of security while still allowing attackers to "break" the scheme and steal information through ways not anticipated by the designer. Note that this is after several decades of intensive research by hundreds of cryptologists world-wide. Comparatively the problem of "AGI goals" has just begun to be studied. What is it that makes "hopelessly anthropomorphic and vague" apply to "AGI goals", but not to "cryptographic security" as of, say, 1980?
8Vladimir_Nesov12y
AIXI's utility function is useless, the fact that it can be called "utility function" notwithstanding. UDT's utility function is not defined formally (its meaning depends on "math intuition"). For any real-world application of a utility function, we don't have a formal notion of its domain. These definitions are somewhat vague, even if not hopelessly so. They are hopelessly vague for the purpose of building a FAI.
6Wei Dai12y
Perhaps I shouldn't have implied or given the impression that we have fully non-vague definitions of "utility function". What if I instead said that our notions of utility function are not as vague as Vaniver makes them out to be? That our most promising approach for how to define "utility function" gives at least fairly clear conceptual guidance as to the domain, and that we can see some past ideas (e.g., just over sensory inputs) as definitely wrong?
6Vladimir_Nesov12y
Given that the standard of being "fairly clear" is rather vague, I don't know if I disagree, but at the moment I don't know of any approach to a potentially FAI-grade notion of preference of any clarity. Utility functions seem to be a wrong direction, since they don't work in the context of the idea of control based on resolution of logical uncertainty (structure). (UDT's "utility function" is more of a component of definition of something that is not a utility function.) ADT utility value (which is a UDT-like goal definition) is somewhat formal, but only applies to toy examples, it's not clear what it means even in these toy examples, it doesn't work at all when there is uncertainty or incomplete control over that value on part of the agent, and I have no idea how to treat physical world in its context. (It also doesn't have any domain, which seems like a desirable property for a structuralist goal definition.) This situation seems like the opposite of "clear" to me...
3Wei Dai12y
In my original UDT post, I suggested [...] Of course there are enormous philosophical and technical problems involved with this idea, but given that it has more or less guided all subsequent decision theory work by our community (except possibly work within SI that I've not seen), Vaniver's characterization of how much the domain of the utility function is underspecified ("Is it valuing sensory inputs? Is it valuing mental models? Is it valuing external reality?") is just wrong.
3Vladimir_Nesov12y
Right, preference over possible logical consequences of given situations is a strong unifying principle. We can also take physical world to be a certain collection of mathematical structures, possibly heuristically selected based on observations according with being controllable and morally relevant in a tractable way. The tricky thing is that we are not choosing a structure among some collection of structures (a preferred possible world from a collection of possible worlds), but instead we are choosing which properties a given fixed class of structures will have, or alternatively we are choosing which theories/definitions are consistent or inconsistent, which defined classes of structures exist vs. don't exist. Since the alternatives that are not chosen are therefore made inconsistent, it's not clear how to understand them as meaningful possibilities, they are the mysterious logically impossible possible worlds. And there we have it, the mystery of the domain of preference.
-7private_messaging12y
0[anonymous]12y
It's somewhat vague, not necessarily hopelessly so. The question of the domain of utility functions seems important and poorly understood, not to mention the possible inadequacy of the idea of utility functions over worlds, as opposed to something along the lines of a fixed utility value definition that doesn't explicitly refer to any worlds.
1DanielLC12y
It's valuing external reality. Valuing sensory inputs and mental models would just result in wireheading. It would have a utility function, in which it assigns value to possible futures. It's not really a "goal" per se unless it's a satisficer. Otherwise, it's more of a general idea of what's better or worse. It would want to make as many paperclips as it can, rather than build a billion of them.
2JaneQ12y
Mathematically, any value that an AI can calculate from anything external is a function of its sensory input. 'Vague' presumes a level of precision that is not present here. It is not even vague; it's incoherent.
5Wei Dai12y
Given the same stream of sensory inputs, external reality may be different depending on the AI's outputs, and the AI can prefer one output to another based on their predicted effects on external reality even if they make no difference to its future sensory inputs. Even if you were right that valuing external reality is equivalent to valuing sensory input, how would that make it incoherent? Or are you saying that the idea of "external reality" is inherently incoherent?
1JaneQ12y
The 'predicted effects on external reality' are a function of prior input and internal state. The idea of external reality is not incoherent. The idea of valuing external reality with a mathematical function is. Note, by the way, that valuing a 'wire in the head' is also a type of 'valuing external reality' - not in the sense of the wire being outside the box that runs the AI, but in the sense of the wire being outside the algorithm of the AI. When that point is being discussed here, SI seems to magically acquire an understanding of the distinction between outside an algorithm and inside an algorithm, to argue that wireheading won't happen. The confusion between model and reality appears and disappears at the most convenient moments.
5Wei Dai12y
I think I'm getting a better idea of where our disagreement is coming from. You think of external reality as some particular universe, and since we don't have direct knowledge of what that universe is, we can only apply our utility function to models of it that we build using sensory input, and not to external reality itself. Is this close to what you're thinking? If so, I suggest that "valuing external reality" makes more sense if you instead think of external reality as the collection of all possible universes. I described this idea in more detail in my post introducing UDT.
1private_messaging12y
How would this assign utility to performing an experiment to falsify (drop the probability of) some of the 'possible worlds'? Note that such an action decreases the sum of value over possible worlds by eliminating (decreasing the weight of) some of them. Please note that the "utility function" to which Nick Szabo refers is the notion that is part of the SI marketing pitch, and therein it alludes to the concept of utility from economics - which does actually make the agent value gathering information - and creates the impression that this is a general concept applicable to almost any AI, something likely to be created by an AGI team unaware of the whole 'friendliness' idea, something that would be simple to make for paperclips; the world's best technological genius of the future AGI creators being just a skill of making real the stupid wishes which need to be corrected by SI. Meanwhile, in the non-vague sense that you outline here, it appears much more dubious that anyone who does not believe in the feasibility of friendliness would want to build this; it's not even clear that anyone could. Meanwhile, an AI whose goal is only defined within a model based on physics as we know it, and lacking any sort of tie of that model to anything real - no value placed on keeping the model in sync with the world - is sufficient to build all that we need for mind uploading. Sensing is a very hard problem in AI, much more so for AGI.
3Wei Dai12y
UDT would want to perform experiments so that it can condition its future outputs on the results of those experiments (i.e., give different outputs depending on how the experiments come out). This gives it higher utility without "falsifying" any of the possible worlds. The reason UDT is called "updateless" is that it doesn't eliminate or change weight of any of the possible worlds. You might want to re-read the UDT post to better understand it. The rest of your comment makes some sense, but is your argument that without SI (if it didn't exist), nobody else would try to make an AGI with senses and real-world goals? What about those people (like Ben Goertzel) who are currently trying to build such AGIs? Or is your argument that such people have no chance of actually building such AGIs at least until mind uploading happens first? What about the threat of neuromorphic (brain-inspired) AGIs as as we get closer to achieving uploading?
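
A toy, informal sketch of the "condition rather than update" idea (my own illustrative framing, with made-up world and action names, not Wei Dai's formalism): the agent never discards possible worlds; it selects, once, the policy mapping observations to actions whose utility summed over all worlds is highest, which is exactly why running the experiment is valuable.

```python
# Toy illustration of the updateless idea: instead of updating away possible
# worlds after an experiment, the agent chooses a whole policy
# (observation -> action) that maximizes utility summed across all worlds.
from itertools import product

worlds = ["world_A", "world_B"]           # neither is ever "eliminated"
observations = ["result_a", "result_b"]   # possible experiment outcomes
actions = ["act_for_A", "act_for_B"]

def observe(world):
    # The experiment's outcome depends on which world the agent is in.
    return "result_a" if world == "world_A" else "result_b"

def utility(world, action):
    # Acting in a way matched to the true world pays off.
    return 1.0 if action == ("act_for_A" if world == "world_A" else "act_for_B") else 0.0

# A policy assigns an action to each possible observation.
policies = [dict(zip(observations, acts)) for acts in product(actions, repeat=len(observations))]

def policy_value(policy):
    # Sum over ALL possible worlds -- no world's weight is ever changed.
    return sum(utility(w, policy[observe(w)]) for w in worlds)

best = max(policies, key=policy_value)
print(best)  # {'result_a': 'act_for_A', 'result_b': 'act_for_B'}
```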
-1private_messaging12y
A particular instance of UDT running a particular execution history got to condition on this execution history; you can say that you call 'conditioning' what I call updates. In practice you will want not to run the computations irrelevant to the particular machine, and you will have strictly less computing power in the machine than in the universe it inhabits, including the machine itself. It would be good if you could provide an example of the experiments it might perform, somewhat formally derived. It feels to me that, while it is valuable that you formalized some of the notions, you have largely shifted/renamed all the actual problems. E.g., it is problematic to specify a utility function on reality; it's incoherent. In your case the utility function is specified on all mathematically representable theories, which may well not allow one to actually value a paperclip. Plus the number of potential paperclips within a theory would grow larger than any computable function of the size of the theory, and the actions may well be dominated by relatively small, but absolutely enormous, differences between huge theories. Can you give an actual example of some utility function? It doesn't have to correspond to paperclips - anything, so that UDT with this plugged in would actually do something to our reality rather than to the imaginary BusyBeaver(100) beings with imaginary dust specks in their eyes which might be running a boxed sim of our world. With regard to Ben Goertzel, where does his AGI include anything like this not-so-vague utility function of yours? The marketing spiel in question is, indeed, that Ben Goertzel's AI (or someone else's) would maximize a utility function and kill everyone or something, which leads me to assume that they are not talking about your utility function. With regard to neuromorphic AGIs, I think there's far too much science fiction and far too little understanding of neurology in the rationalization of 'why am I getting paid'. While I do not doubt that the brain does im...
0Wei Dai12y
You seem to think that I'm claiming that UDT's notion of utility function is the only way real-world goals might be implemented in an AGI. I'm instead suggesting that it is one way to do so. It currently seems to be the most promising approach for FAI, but I certainly wouldn't say that only AIs using UDT can be said to have real-world goals. At this point I'm wondering if Nick's complaint of vagueness was about this more general usage of "goals". It's unclear from reading his comment, but in case it is, I can try to offer a definition: an AI can be said to have real-world goals if it tries to (and generally succeeds at) modeling its environment and chooses actions based on their predicted effects on its environment. Goals in this sense seems to be something that AGI researchers actively pursue, presumably because they think it will make their AGIs more useful or powerful or intelligent. If you read Goertzel's papers, he certainly talks about "goals", "perceptions", "actions", "movement commands", etc.
1private_messaging12y
Then your having formalized your utility function has nothing to do with the allegations of vagueness when it comes to defining the utility in the argument for how utility maximizers are dangerous. With regard to it being 'the most promising approach', I think it is a very, very silly idea to have an approach so general that we all may well end up sacrificed in the name of a huge number of imaginary beings that might exist, an AI Pascal-wagering itself on its own. It looks like a dead end, especially for friendliness. This doesn't necessarily work like 'I want the most paperclips to exist, therefore I will talk my way into controlling the world, then kill everyone and make paperclips', though. They also don't try to make goals that couldn't be outsmarted into nihilism. We humans sort of have a goal of reproduction, except we're too clever, and we use birth control. In your UDT, the actual intelligent component is this mathematical intuition that you'd use to process this theory in reasonable time. The rest is optional and highly difficult (if not altogether impossible) icing, even for the most trivial goal such as paperclips, which may well in principle never work. And the technologies employed in the intelligent component are, without any of those goals, and with a much smaller requirement for intelligence (as in computing power and optimality), sufficient for, e.g., designing the machinery for mind uploading. Furthermore, and this is the most ridiculous thing, there is this 'oracle AI' being talked about, where an answering system is modelled as based on real-world goals and real-world utilities, as if those were somehow primal and universally applicable. It seems to me that the goals and utilities are just a useful rhetorical device used to trigger the anthropomorphization fallacy at will (in a selective way), so as to solicit donations.
0Wei Dai12y
They're not explicitly trying to solve this problem because they don't think it's going to be a problem with their current approach of implementing goals. But suppose you're right and they're wrong, and somebody that wants to build a AGI ends up implementing a motivational system that outsmarts itself into nihilism. Well such an AGI isn't very useful so wouldn't they just keep trying until they stumble onto a motivational system that isn't so prone to nihilism? Similarly, if we let evolution of humans continue, wouldn't humans pretty soon have a motivational system for reproduction that we won't want to cleverly work around?
1private_messaging12y
They do not expect foom either. You can still have formally defined goals - satisfy conditions on equations, et cetera - defined internally, without the problematic real-world component. Use this for, e.g., designing reliable cellular machinery ('cure cancer and senescence'). Seems very useful to me. How long would it take you to 'stumble' upon some goal for UDT that translates to something actually real? Evolution destructively tests designs against reality. Humans do have various motivational systems there, such as religion, by the way. I am not sure how you think a motivational system for reproduction could work such that we would not embrace a solution that does not actually result in reproduction (given sufficient intelligence).
1Wei Dai12y
Goertzel does, or at least thinks it's possible. See http://lesswrong.com/lw/aw7/muehlhausergoertzel_dialogue_part_1/ where he says "GOLEM is a design for a strongly self-modifying superintelligent AI system". Also http://novamente.net/AAAI04.pdf where he talks about Novamente potentially being "thoroughly self-modifying and self-improving general intelligence". As I mentioned, there are AGI researchers trying to implement real-world goals right now. If they build an AGI that turns nihilistic, do you think they will just give up and start working on equation solvers instead, or try to "fix" their AGI? I guess probably not very long, if I had a working solution to "math intuition", a sufficiently powerful computer to experiment with, and no concerns for safety...
0timtyler12y
Actions are the product of sensory input and existing state - but the basic idea withstands this, I think.
2TheOtherDave12y
Sure, but the kind of function matters for our purposes. That is, there's a difference between an optimizing system that is designed to optimize for sensory input of a particular type, and a system that is designed to optimize for something that it currently treats sensory input of a particular type as evidence of, and that's a difference I care about if I want that system to maximize the "something" rather than just rewire its own perceptions.
4JaneQ12y
Be specific as to what the input domain of the 'function' in question is. And yes, there is a difference: one is well defined and is what AI research works towards; the other is part of an extensive AI-fear rationalization framework, where it is confused with the notion of generality of intelligence, so as to presume that practical AIs will maximize the "somethings", followed by the notion that pretty much all "somethings" would be dangerous to maximize. Utility is a purely descriptive notion; the AI that decides on actions is a normative system. edit: To clarify, intelligence is defined here as a 'cross-domain optimizer' that would therefore be able to maximize something vague without it having to be coherently defined. It is similar to knights of the round table worrying that an AI would literally search for the Holy Grail, because to said knights the abstract and ill-defined goal of the Holy Grail appears entirely natural; meanwhile, for systems more intelligent than said knights, such a confused goal, due to its incoherence, is impossible to define.
3TheOtherDave12y
(shrug) It seems to me that even if I ignore everything SI has to say about AI and existential risk and so on, ignore all the fear-mongering and etc., the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea. And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X. If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.
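
One way to make TheOtherDave's A/B distinction concrete (a deliberately crude sketch of my own, not a claim about any actual design): system A scores candidate actions through its model of the environment, while system B scores them through its own sensor reading, so B prefers tampering with the sensor.

```python
# Crude sketch of the A/B distinction above. Both systems can take the same
# actions; they differ only in what their objective is a function of.

def environment_after(action, env):
    env = dict(env)
    if action == "make_more_X":
        env["X"] += 1
    elif action == "hack_sensor":
        env["sensor_bias"] += 100
    return env

def sensor_reading(env):
    return env["X"] + env["sensor_bias"]

def system_A_score(action, env):
    # Optimizes the modeled prevalence of X in the environment.
    return environment_after(action, env)["X"]

def system_B_score(action, env):
    # Optimizes its own measurement of X.
    return sensor_reading(environment_after(action, env))

env = {"X": 0, "sensor_bias": 0}
actions = ["make_more_X", "hack_sensor"]
print(max(actions, key=lambda a: system_A_score(a, env)))  # make_more_X
print(max(actions, key=lambda a: system_B_score(a, env)))  # hack_sensor
```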
3JaneQ12y
The prevalence of X is defined how? In A, you confuse your model of the world with the world itself; in your model of the world you have a possible item, 'paperclip', and you can therefore easily imagine maximization of the number of paperclips inside your model of the world, complete with the AI necessarily trying to improve its understanding of the 'world' (your model). With B, you construct a falsely singular alternative of a rather broken AI, and see a crisp distinction between two irrelevant ideas. The practical issue is that the 'prevalence of some X' cannot be specified without the model of the world; you cannot have a function without specifying its input domain, and 'reality' is never the input domain of a mathematical function; the notion is not only incoherent but outright nonsensical. The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to even rationally assert coherence in the first place.
2TheOtherDave12y
OK. Thanks for your time.
0DanielLC12y
It can only be said to be powerful if it will tend to do something significant regardless of how you stop it. If what it does has anything in common, even if it's nothing beyond "significant", it can be said to value that.
3JaneQ12y
Actually, this is an example of something incredibly irritating about this entire singularity topic: verbal sophistry of no consequence. What you call 'powerful' has absolutely zero relation to anything. A powerful drill doesn't tend to do something significant regardless of how you stop it. Neither does a powerful computer. Nor should a powerful intelligence.
1DanielLC12y
In this case, I'm defining a powerful intelligence differently. An AI that is powerful in your sense is not much of a risk. It's basically the kind of AI we have now. It's neither highly dangerous, nor highly useful (in a singularity-inducing sense). Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI, and far more dangerous. That's why it's primarily what SIAI is worried about.
1JaneQ12y
I'm not clear what we mean by singularity here. If we had an algorithm that works on well defined problems, we could solve practical problems. edit: Like improving that algorithm, mind uploading, etc. Effective at what? Would it cure cancer sooner? I doubt it. An "AGI" with a goal it wants to pursue, resisting any control, is a much more narrow AI than an AI that basically solves systems of equations. Who would I rather hire: an impartial math genius who solves the tasks you specify for him, or a brilliant murderous sociopath hell-bent on doing his own thing? The latter's usefulness (to me, that is) is incredibly narrow. Besides being effective at being worse than useless? I'm not quite sure that there's a 'why' and a 'what' in that 'worried'.
0DanielLC12y
If we have an AGI, it will figure out what problems we need solved and solve them. It may not beat a narrow AI (ANI) in the latter, but it will beat you in the former. You can thus save on the massive losses due to not knowing what you want, politics, not knowing how to best optimize something, etc. I doubt we'd be able to do 1% as well without an FAI as with one. That's still a lot, but that means that a 0.1% chance of producing an FAI and a 99.9% chance of producing a UAI is better than a 100% chance of producing a whole lot of ANIs. Only if his own thing isn't also your own thing.
-4JaneQ12y
Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else would, for no particular reason, add this whole 'valuing the real world' thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing something useful. Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary for a team competent enough to build a full AGI not to kill everyone, and therefore you should donate. (Previously, the position was that you should donate so we build FAI before someone builds UFAI, but Luke Muehlhauser has been generalizing to non-FAI solutions.) That notion is rendered highly implausible when you pin down the meaning of AGI, as we did in this discourse. For the UFAI to happen and kill everyone, a team potentially vastly more competent and intelligent than SI has to fail spectacularly. That will require a simulation of me or a brain implant that effectively makes it an extension of me. I do not want the former, and the latter is IA.
0timtyler12y
Thinking they are valuing "external reality" probably doesn't really protect agents from wireheading. The agents just wind up with delusional ideas about what "external reality" consists of - built of the patchwork of underspecification left by the original programmers of this concept.
2DanielLC12y
I know that it's possible for an agent that's created with a completely underspecified idea of reality to nonetheless value external reality and avoid wireheading. I know this because I am such an agent. Everything humans can do, an AI could do. There's little reason to believe humans are remotely optimum, so an AI could likely do it better.
0timtyler12y
The "everything humans can do, an AI could do better" argument cuts both ways. Humans can wirehead - machines may be able to wirehead better. That argument is pretty symmetric with the "wirehead avoidance" argument. So: I don't think either argument is worth very much. There may be good arguments that illuminate the future frequency of wireheading, but these don't qualify. It seems quite possible that our entire civilization could wirehead itself - along the lines suggested by David Pearce.
0DanielLC12y
Everything a human can do, a human cannot do in the most extreme possible manner. An AI could be made to wirehead easier or harder. It could think faster or slower. It could be more creative or less creative. It could be nicer or meaner. I wouldn't begin to know how to build an AI that's improved in all the right ways. It might not even be humanly possible. If it's not humanly possible to build a good AI, it's likely impossible for the AI to be able to improve on itself. There's still a good chance that it would work.
0timtyler12y
Probably true - and few want wireheading machines - but the issues are the scale of the technical challenges, and - if these are non-trivial - how much folk will be prepared to pay for the feature. In a society of machines, maybe the occasional one that turns Buddhist - and needs to go back to the factory for psychological repairs - is within tolerable limits. Many apparently think that making machines value "external reality" fixes the wirehead problem - e.g. see "Model-based Utility Functions" - but it leads directly to the problems of what you mean by "external reality" and how to tell a machine that that is what it is supposed to be valuing. It doesn't look much like solving the problem to me.

FAI is a security risk, not a fix:

"One way to think about Friendly AI is that it's an offensive approach to the problem of security (i.e., take over the world), instead of a defensive one."

Not if the AI itself is vulnerable to penetration. By your own reasoning, we have no reason to think it won't be. It may turn out to be one of the biggest security liabilities, because the way it executes tasks may be very intelligent, and there's no reason to believe it won't be reprogrammed to do unfriendly things.

Friendly AI is only friendly until a human figures out how to abuse it.

2Wei Dai12y
An FAI would have some security advantages. It can achieve physical security by taking over the world and virtualizing everyone else, and ought to also have enough optimization power to detect and fix all the "low level" information vulnerabilities (e.g., bugs in its CPU design or network stack). That still leaves "high level" vulnerabilities, which are sort of hard to distinguish from "failures of Friendliness". To avoid these, what I've advocated in the past is that FAI shouldn't be attempted until its builders have already improved beyond human intelligence via other seemingly safer means. BTW, you might enjoy my Hacking the CEV for Fun and Profit. (Edit to add some disclaimers, since Epiphany expressed a concern about PR consequences of this comment: Here I was implicitly assuming that virtualizing people is harmless, but I'm not sure about this, and if it's not, I would prefer the FAI not to virtualize people. Also, I do not work for SIAI nor am I affiliated with them.)
2Epiphany12y
No go. Four reasons. One: If the builders have increased their intelligence levels that high, then other people of that time will be able to do the same and therefore potentially crack the AI. Two: Also, I may as well point out that your argument is based on the assumption that enough intelligence will make for perfect security. It may be that no matter how intelligent the designers are, their security plans are not perfect. Perfect security looks to be about as likely, to me, as perpetual motion is. No matter how much intelligence you throw at it, you won't get a perpetual motion machine. We'd need to discover some paradigm shattering physics information for that to be so. I suppose it is possible that someone will shatter the physics paradigms by discovering new information, but that's not something to count on to build a perpetual motion machine, especially when you're counting on the perpetual motion machine to keep the world safe. Three: Whenever humans have tried to collect too much power into one place, it has not worked out for them. For instance, communism in Russia. They thought they'd share all the money by letting one group distribute it. That did not work. The founding fathers of the USA insisted on checking and balancing the government's power. Surely you are aware of the reasons for that. If the builders are the only ones in the world with intelligence levels that high, the power of that may corrupt them, and they may make a pact to usurp the AI themselves. Four: There may be unexpected thoughts you encounter in that position that seem to justify taking advantage of the situation. For instance, before becoming a jailor, you would assume you're going to be ethical and fair. In that situation, though, people change. (See also: Zimbardo's Stanford prison experiment). Why do they change? I imagine the reasoning goes a little like this: "Great I'm in control. Oh, wait. Everyone wants to get out. Okay. And they're a threat to me because I'm keepi
1Epiphany12y
Whoever it is that keeps thumbing down my posts in this thread is invited to bring brutal honesty down onto my ideas, I am not afraid. If "virtualizing everyone" means what I think you mean by that, that's a euphemism. That it will achieve physical security implies that the physical originals of those people would not exist after the process - otherwise you'd just have two copies of every person which, in theory, could increase their chances of cracking the AI. It sounds like what you're saying here is that the "friendly" AI would copy everyone's mind into a computer system and then kill them. Maybe it seems to some people like copying your mind will preserve you, but imagine this: Ten copies are made. Do you, the physical original person, experience what all ten copies of you are experiencing at once? No. And if you, the physical original person, ceased to exist, would you continue by experiencing what a copy of you is experiencing? Would you have control over their actions? No. You'd be dead. Making copies of ourselves won't save our lives - that would only preserve our minds. Now, if you meant something else by "virtualize" I'd be happy to go read about it. After turning up with absolutely no instances of the terms "virtualize people" or "virtualize everyone" on the internet (barring completely different uses like "blah blah blah virtualize. Everyone is blah blah.") I have no idea what you mean by "virtualize everyone" if it isn't "copy their minds and then kill their bodies."
Vladimir_Nesov:
The Worst Argument in the World. This is not a typical instance of "dead", so the connotations of typical examples of "dead" don't automatically apply.
Richard_Kennaway:
Tabooing the word "dead", I ask myself: if a copy of myself were made and ran independently of the original, with the original continuing to exist, would either physical copy object to being physically destroyed, provided the other continued in existence? I believe both of us would. Even the surviving copy would object to the other being destroyed. But that's just me. How do other people feel?
Epiphany:
Assuming the copy had biochemistry, or some other way of experiencing emotions, the copy (or copies) of me would definitely object to what had happened. Alternately, if a virtual copy of me was created and was capable of experiencing, I would feel that it was important for the copy to have the opportunity to make a difference in the world - that's why I live - so, yes, I would feel upset about my copy being destroyed.

You know, I think this problem has things in common with the individualism vs. communism debate. Do we view the copies as parts of a whole, unimportant in and of themselves, or do we view them all as individuals? If we were to view them as parts of a whole, then what is valued? We don't feel pain or pleasure as a larger entity made up of smaller entities; we feel it individually. If happiness for as many life forms as possible is our goal, both the originals and the copies should have rights. If the copies are capable of experiencing pain and pleasure, they need to have human rights the same as ours. I would not see it as ethical to let myself be copied if my copies would not have rights.
Vladimir_Nesov:
We should view them as what they actually are, parts of the world with certain morally relevant structure.
Epiphany:
Thank you, Vladimir, for your honest criticism, and more is invited. However, this "Worst Argument in the World" comparison is not applicable here. If we do an exercise where we substitute the words "criminal" and "Martin Luther King" in Yvain's post with "death" and "virtualization", and read the argument that results, I think you'll see my point. The opponent is saying: "Because you don't like death, and being virtualized will cause death, you should not like the idea of being virtualized by an AGI. But virtualization doesn't share the important features of death, like not being able to experience anymore and the inability to enjoy the world, that made us dislike death in the first place. Therefore, even though being virtualized by an AGI will cause death, there is no reason to dislike virtualization." My reply: not being able to experience anymore and not being able to enjoy the world are unacceptable results of "being virtualized". Therefore, we should not like the idea of being virtualized by an AGI.
Wei Dai:
Yes, that's what I was thinking when I wrote that, but if the FAI concludes that replacing a physical person with a software copy isn't a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style.
Epiphany:
Um. Shouldn't we be thinking "how will we get the FAI to conclude that replacing people with software is not harmless" rather than "if the FAI concludes that this is harmless..."? After all, if it's doing something that kills people, it isn't friendly. And to place people into a virtual environment would be to take away their independence. Humans have a need for dignity, and I think that would be bad for them. I think an FAI should know better than to do that, too.
Vladimir_Nesov:
If it's actually a FAI, you should approve of what it decides, not of what people (including yourself) currently believe. If it can't be relied upon in this manner, it's not (known to be) a FAI. You know whether it's a FAI based on its design, not based on its behavior, which it won't generally be possible to analyze (or do something about).

You shouldn't be sure about the correct answers to object-level (very vaguely specified) questions like "Is replacing people with software harmless?". A FAI should use a procedure that's more reliable in answering such questions than you or any other human is. If it's using such a procedure, then what it decides is a more reliable indicator of the correct decision than what you (or I) believe. It's currently unclear how to produce such a procedure.

Demanding that a FAI conform to moral beliefs currently held by people is also somewhat pointless, in the sense that a FAI has to be able to make decisions about much more precisely specified decision problems, ones that humans won't be able to analyze in any useful way, so there are decisions where moral beliefs currently or potentially held by humans don't apply. If it's built so as to be able to make such decisions, it will also be able to answer the questions about which there are currently held beliefs, as a special case. If its procedure for answering moral questions is more reliable in general, it'll also be more reliable for such questions. See Complex Value Systems are Required to Realize Valuable Futures for some relevant arguments.
Risto_Saarelma:
A lot of people here think that the quick assumption that unusual substrate changes necessarily imply death isn't well-founded. Arguing from the assumption that it is obviously true will not be helpful. There should be a good single post or wiki page to link to about this debate (it also comes up constantly in cryonics discussions), but I don't think there is one.
Risto_Saarelma:
This is a pretty long-running debate on LW, and you've just given the starting argument yet again. One recent response asks: how can you tell that sleeping won't kill you?
Epiphany:
Physical walls are superior to logical walls, according to what I've read. Turning everything into logic won't solve the largest of your security problems, and could exacerbate them. That's five.

Security is solving the problem after the fact, and I think that is totally the wrong approach here; we should be asking whether something can be designed into the AI that prevents people from wanting to take the AI over, or prevents takeovers from being disastrous (three suggestions for that are included in this comment).

Perhaps the best approach to security is to solve the problems humans are having that cause them to commit crimes. Of course this appears to be a chicken-or-egg proposition: "Well, the AI can't solve the problems until it's securely built,…

[anonymous]:

This argument seems to be following a common schema:

1. To understand X, it is necessary to understand its relations to other things in the world.
2. But to understand its relations to each of the other things that exist, it is necessary to understand each of those things as well.
3. Y describes many of the things that commonly interact with X.
4. Therefore, the best way to advance our understanding of X is to learn about Y.

Is that a fair description of the structure of the argument? If so, are you arguing that our understanding of superintelligence needs to be advanced th…

[This comment is no longer endorsed by its author]
[anonymous]:

"One Way Functions" aren't strictly one-way; they are just much harder to calculate in one direction than the other. A breakthrough in algorithms, or a powerful enough computer, can solve the problem.

[This comment is no longer endorsed by its author]

Not really an answer to your question, but it seems to me a lot depends on what position I take wrt value drift and the subject-dependence of values.

At one extreme: if I believe that whatever I happen to value right now is what I value, and what I value tomorrow is what I value tomorrow, and it simply doesn't matter how those things relate to each other (I just want to optimize my environment for what I value at any given moment), then it makes sense to concentrate on security without reference to goals. More precisely, it makes sense to concentrate on mech…

DanielLC:
Wouldn't that be a bad idea? If you change your mind as to what you value, then Future!you will optimize for something Present!you doesn't want. Since you're only worried about Present!you's goals, that would be bad.
TheOtherDave:
Sure, if I'm only worried about Present!me's goals, then the entire rest of the paragraph you didn't bother quoting is of course false, and the sentence you quote for which that paragraph was intended as context is also false.
DanielLC:
Sorry. I missed a word when I read it the first time.
torekp:
I don't understand how your hypothetical beliefs of paragraph two differ from those of paragraph four. Or don't they? Please elaborate. Are you saying that Nick Szabo's position depends on (or at least is helped by) viewing one's later values as quite possibly better than current ones?
TheOtherDave:
What I'm referring to in paragraphs 2 and 4 is similar enough that whatever differences may exist between them don't especially matter to any point I'm making. No, and in fact I don't believe that. Better to say that, insofar as an important component of Friendliness research is working out ways to avoid value drift, the OP's preference for Friendliness research over security research is reinforced by a model of the world in which value drift is a problem to be avoided or fixed, rather than simply a neutral feature of the world.