Comment author: [deleted] 21 April 2014 02:23:13PM *  0 points [-]

I agree, but I was taking the work of translation to be entirely on the side of an AGI: it would take whatever sentences it thinks in a meta-language and translate them into human language. Figuring out how to express such thoughts in our language would be a challenging practical problem, but that's exactly where AGI shines. I'm assuming, obviously, that it wants to be understood. I am very ready to agree that an AGI attempting to be obscure to us will probably succeed.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 22 April 2014 01:56:27AM 1 point [-]

That's obvious, and not what I meant. I'm talking about the simplest in-principle expression in human language being that long and complex.

Comment author: EHeller 21 April 2014 10:36:11PM *  1 point [-]

numbers specified with a lot of digits that actually matter

That's why we have paper: I can write it down. "Understanding" and "remembering" seem somewhat orthogonal here. I can't recite Moby Dick from memory, but I understood the book. If you give me a 20-digit number 123... and I can't hold it but retain "a number slightly larger than 1.23 * 10^19," that doesn't mean I can't understand you.

Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.

Print it out for me, and give me enough time, and I will be able to understand it, especially if you give me some context.

Yes, you can encode things in ways that make them harder for humans to understand; no one would dispute that. The question is: are there concepts that are simply impossible to explain to a human? I point out that while I can't remember a 20-digit number, I can derive pretty much all of classical physics, so humans can certainly hold quite complex ideas in their heads, even if those heads aren't optimized for storing long numbers.

Comment author: Armok_GoB 22 April 2014 01:52:26AM 0 points [-]

You can construct a system consisting of a planet's worth of paper and pencils and an immortal version of yourself (or a vast dynasty of successors) that can understand it, if nothing else because that system is Turing-complete and can simulate the AGI. This is not the same as you understanding it while still remaining fully human. Even if you did somehow integrate with the paper-system sufficiently, that'd be just as big a change as uploading and intelligence-augmenting the normal way.

The approximation thing is why I specified the digits mattering. It won't help one bit when talking about something like Gödel numbering.

Comment author: [deleted] 21 April 2014 12:57:29AM *  0 points [-]

Sorry, didn't mean to call you personally any of those adjectives :)

None taken then.

Pretty much, yes, I find it totally possible. I am not saying that I am confident that this is the case, just that I find it more likely than the alternative, which would require an additional reason why it isn't so.

Well, tell me what you think of this argument:

Let's divide the meta-language into two sets: P (the sentences that cannot be rendered in English) and Q (the sentences that can). If you expect Q to be empty, let me know and we can talk about that case. But let's assume for now that Q is not empty, since I assume we both think an AGI will be able to handle human language quite easily. Q is, for all intents and purposes, a 'human' language itself.

Premise one is that translation is transitive: if I can translate language a into language b, and language b into language c, then I can translate language a into language c (though maybe I need to use language b as an intermediate step).

Premise two: If I cannot translate a sentence in language a into an expression in language b, then there is no expression in language b that expresses the same thought as that sentence in language a.

Premise three: Any AGI would have to learn language originally from us, and thereafter either from us or from previous versions of itself.

So by stipulation, every sentence in Q can be rendered in English, and Q is non-empty. Since sentences in P cannot be rendered in English, it follows from premise one that they cannot be rendered in Q (for if they could, they could thereby be rendered into English). It also follows, if you accept premise two, that Q cannot express any sentence in P. So an AGI knowing only Q could never learn to express any sentence in P; for if it could, any speaker of Q (potentially any non-improved human) could in principle learn to express sentences in P, given an arbitrarily large amount of resources: time, questions and answers, etc.

Hence no AGI beginning from a language like English could go on to learn how to express any sentence in P; and by premise three, any AGI must begin from such a language (or from a previous version of itself that did). Therefore no AGI will ever know P.
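
To make the core step explicit, here is just my formalization of the argument's structure, not a claim that the premises hold: write R(s, L) for "sentence s can be rendered in language L", and E for English.

```latex
% Structure of the argument above, in notation I am introducing here.
% R(s, L): sentence s can be rendered in language L;  E: English.
\begin{align*}
&\text{Stipulation: } \forall q \in Q,\ R(q, E) \quad \text{(every sentence of Q renders into English)}\\
&\text{Definition of } P: \; p \in P \implies \lnot R(p, E)\\
&\text{If } R(p, Q) \text{ held, premise one (transitivity via } Q \text{) would give } R(p, E), \text{ a contradiction.}\\
&\text{Hence } \lnot R(p, Q): \text{ no sentence of } P \text{ can be rendered in } Q.
\end{align*}
```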

I'm not super confident this argument is sound, but it seems to me to be at least plausible.

If you agree with Eliezer's definition of intelligence as optimization power

Well, that's a fine definition, but it's tricky in this case. If intelligence is optimization power, and optimizing presupposes something to optimize, then intelligence (on that definition) isn't strictly a factor in (ultimate) goal formation. If that's right, then something's being much more intelligent would (as I think someone else mentioned) just lead to very-hard-to-understand instrumental goals. It would have no direct relationship with terminal goals.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 21 April 2014 02:15:32PM 0 points [-]

Premise one is false assuming finite memory.

Premise three does not hold well either: many new words come from pointing out a pattern in the environment, not from definitions in terms of previous words.

Comment author: [deleted] 20 April 2014 10:51:40PM *  0 points [-]

I am not sure what you mean by "refute" here.

It was just a joke: I meant that I would prove you wrong by showing that I can understand you, despite the difference in our intellectual faculties. I don't really know if we have very different intellectual faculties; it was just a slightly ironic riposte to being called "naive, unimaginative and closed-minded" earlier. You may be right! But then my understanding you is at least a counterexample.

you can sort-of teach a parrot or an ape to recognize words

Can we taboo the 'animals can't be made to understand us' analogy? I don't think it's a good analogy, and I assume you can express your point without it. It certainly can't be the substance of your argument.

Anyway, would you be willing to agree to this: "There are at least some sentences in the meta-language (i.e. the kind of language an AGI might be capable of) such that those sentences cannot be translated into even an arbitrarily complex expression in human language." For example, there will be sentences in the meta-language that cannot be expressed in human language, even if we allow the users of human language (and the AGI) an arbitrarily large amount of time, an arbitrarily large number of attempts at conversation, question and answer, etc., and an arbitrarily large capacity for producing metaphor, illustration, and so on. Is that your view? Or is that far too extreme? Do you just mean to say that the average human being today couldn't get their heads around an AGI's goals given 40 minutes, pencil, and paper? Or something in between these two claims?

I simply follow the chain of goal complexity as it grows with intelligence complexity, from protozoa to primate and on, and note that I do not see a reason why it would stop growing just because we cannot imagine what else a super-intelligence would use for/instead of a goal system.

Why do you think this is a strong argument? It strikes me as very indirect and intuition-driven. I mean, I see what you're saying, but I'm not at all confident that the relations between a protozoan and a fish, a dog and a chimp, an 8th-century dock worker and a 21st-century physicist, and the smartest of (non-uplifted) people and an AGI all fall onto a single continuum of intelligence/complexity of goals. I don't even know what kind of empirical evidence (I mean the sort of thing one would find in a scientific journal) could be given in favor of such a conclusion. I just don't really see why you're so confident in it.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 21 April 2014 02:08:21PM 0 points [-]

Using "even an arbitrarily complex expressions in human language" seem unfair, given that it's turing complete but describing even a simple program in it fully in it without external tools will far exceed the capability of any actual human except for maybe a few savants that ended up highly specialized towards that narrow kind of task.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: shminux 20 April 2014 10:22:39PM -1 points [-]

I will try to refute you by understanding what you say.

I am not sure what you mean by "refute" here. Prove my conjecture wrong by giving a counterexample? Show that my arguments are wrong? Show that the examples I used to make my point clearer are bad examples? If it's the last one, then I would not call it a refutation.

I guess that by 'meta-' you intend to say that at least some sentences in the meta-language couldn't in principle be translated into a non-meta 'human' language. Is that right?

Indeed, at least not without some extra layer of meaning not originally expressed in the language. To give another example (not a proof, just an illustration of my point): you can sort-of teach a parrot or an ape to recognize words, to count, and maybe even to add, but I don't expect it to be possible to teach one to construct mathematical proofs, or even to understand what a proof is, even though a proof can be expressed as a finite string of symbols (a sentence in a language) that a chimp is capable of distinguishing from another string. There is just too much meta there, with symbols standing for other symbols or numbers or concepts.

I agree that my PhD defense example is not a proof, but an illustration meant to show that humans quite often experience a disconnect between a language and an underlying concept, which might well be out of reach despite being expressed with familiar symbols, just as a chimp would in the above example.

What reason do you have for thinking an AGI's goals would be complex at all?

I simply follow the chain of goal complexity as it grows with intelligence complexity, from protozoa to primate and on, and note that I do not see a reason why it would stop growing just because we cannot imagine what else a super-intelligence would use for/instead of a goal system.

Comment author: Armok_GoB 21 April 2014 02:00:26PM *  0 points [-]

I can in fact imagine what else a super-intelligence would use instead of a goal system. A bunch of different ones, even. For example, a lump of incomprehensible super-Solomonoff-compressed code that approximates a hypercomputer simulating a multiverse, with the utility function as an epiphenomenal physical law feeding backwards in time into the AI's actions. Or a carefully tuned decentralized process (think natural selection, or the invisible hand) found to match the AI's previous goals exactly by searching through an infinite platonic space.

(Yes, half of those are not real words; the goal was to imagine something that by definition could not be understood, so it's hard to do better than vaguely pointing in the direction of a feeling.)

Edit: I forgot: "goal system replaced by a completely arbitrary thing that resembles it even less, because it was traded away counterfactually to another part of Tegmark-5".

Comment author: EHeller 21 April 2014 01:43:44AM *  0 points [-]

Sure, but I find "can't understand" sort of fuzzy as a concept. I.e., I wouldn't say I 'understand' compactification and Calabi-Yau manifolds the same way I understand sheet music (or the same way I understand the word green), but I do understand them all in some way.

It seems unlikely to me that there exist concepts that can't be at least broadly conveyed via some combination of those. My intuition is that existing human languages cover, with their descriptive power, the full range of explainable things.

For example: it seems unlikely there exists a law of physics that cannot be expressed as an equation. It seems equally unlikely there exists an equation I would be totally incapable of working with. Even if I'll never have the insight that led someone to write it down, if you give it to me, I can use it to do things.

Comment author: Armok_GoB 21 April 2014 01:40:18PM 1 point [-]

Human languages can encode anything, but a human can't understand most things valid in human languages; most notably, extremely long things, and numbers specified with a lot of digits that actually matter. Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.

Humans seem "concept-complete" in much the same way your desktop PC seems Turing-complete. Except it's much more easily broken, because the human brain has absurdly shitty memory.

Comment author: shminux 21 April 2014 12:06:32AM -1 points [-]

Is your argument, essentially, that you think a machine intelligence can create a mathematics humans cannot understand, even in principle?

"mathematics" may be a wrong word for it. I totally think that a transhuman can create concepts and ideas which a mere human cannot understand even when patiently explained. I am quite surprised that other people here don't find it an obvious default.

Comment author: Armok_GoB 21 April 2014 12:55:48PM 0 points [-]

My impression was that the question was not whether it'd have those concepts, since as you say that's obvious, but whether they'd necessarily be referenced by the utility function.

Comment author: shminux 20 April 2014 07:55:23AM *  -1 points [-]

Re goals, I feel that comparing advanced AGI to humans is like comparing humans to chimps: regardless of how much we want to explain human ethics and goals to a chimp, and how much effort we put in, its mind just isn't equipped to comprehend them. Similarly, even the most benevolent and conscientious AGI would be unable to explain its goal system or its ethical system to even a very smart human. Like chimps, humans have their own limits of comprehension, even though we do not know what they are from the inside.

Comment author: Armok_GoB 21 April 2014 12:48:47PM *  0 points [-]

It might not be possible to "truly comprehend" the AI's advanced meta-meta-ethics, or whatever compact algorithm replaces the goal-subgoals tree, but the AI most certainly can provide a code of behavior and prove that following it is a really good idea, much as humans might train pets to perform a variety of useful tasks whose true purpose they can't comprehend. And it doesn't seem unreasonable that this code of behavior would have the look and feel of an in-depth philosophy of ethics, with some very deep and general compression/procedural mechanisms that seem to humans very much like what you'd expect from a true and meaningful set of metaethics, even if they did not correspond much to what's going on inside the AI. It also probably wouldn't accidentally trigger hypocrisy-revulsion in the humans, although the AI seeming to also be following it is just one of many solutions to that, and probably not a very likely one.

Friendliness is pretty much an entirely tangential issue, and explaining it to equivalent depth would require the solution to several open questions, unless I'm forgetting something right now. (I probably am.)

There, question dissolved.

Edit: I ended up commenting in a bunch of places in this comment tree, so I feel the need to clarify: I consider both sides here to be making errors, and I ended up seeming to favor the shminux side because that's where I was able to make interesting contributions, and because it made some true tangential claims that were argued against and not defended well. I do not agree with the implications for Friendliness, however; you don't need to understand something to be able to construct true statements about it, or even to powerfully direct its expression to have properties you can reference but don't understand, especially if you have access to external tools.

Comment author: D_Malik 08 April 2014 07:05:35PM *  15 points [-]

Should we listen to music? This seems like a high-value thing to think about.* Some considerations:

  • Music masks distractions. But we can get the same effect through alternatives such as white noise, calming environmental noise, or ambient social noise.

  • Music creates distractions. It causes interruptions. It forces us to switch our attention between tasks. For instance, listening to music while driving increases the risk of accidents.

  • We seem to enjoy listening to music. Anecdotally, when I've gone on "music fasts", music starts to sound much better and I develop cravings for it. This may indicate that music enjoyment is a treadmill system, such that listening to music does not produce lasting improvements in mood. (That is, if enjoyment stems from relative change in the quality/quantity of music rather than from absolute quality/quantity, then we likely cannot obtain a lasting benefit.)

  • Frequency of music-listening correlates (.18) with conscientiousness. I'd guess the causation's in the wrong direction, though.

  • Listening to random music (e.g. a multi-genre playlist on shuffle) will randomize emotion and mindstate. Entropic influences on sorta-optimized things (e.g. mindstate) are usually harmful. And the music-listening people do nowadays is very unlike EEA conditions, which is usually bad.

(These are the product of 30 minutes of googling; I'm asking you, not telling you.)

Here are some ways we could change our music-listening patterns:

  • Music modifies emotion. We could use this to induce specific useful emotions. For instance, for productivity, one could listen to a long epic music mix.

  • Stop listening to music entirely, and switch to various varieties of ambient noise. Moderate ambient noise seems to be best for thinking.

  • Use music only as reinforcement for desired activities. I wrote a plugin to implement this for Anki (a rough sketch of the general idea follows this list). Additionally, music benefits exercise, so we might listen to music only at the gym. The treadmill-like nature of music enjoyment (see above) may be helpful here, as it would serve to regulate e.g. exercise frequency: infrequent exercise would create music cravings, which would increase exercise frequency, and vice versa.

  • Listen only to educational music. Unfortunately, not much educational music for adults exists. We could get around this by overlaying regular music with text-to-speeched educational material or with audiobooks.
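
Since the Anki plugin itself isn't reproduced here, here is a minimal sketch of the reinforcement mechanism under stated assumptions (the mpv invocation, the class, and the two callbacks are all mine, not the plugin's actual code): playback starts when the desired activity begins and stops when it ends, so music time stays strictly coupled to the behaviour being reinforced. Wiring the callbacks to whatever signals the activity (Anki's review hooks, a gym timer, etc.) is left open.

```python
# Hedged sketch of "music only as reinforcement" (not the actual Anki
# plugin): music plays only while a desired activity is in progress.
import subprocess

# Assumed player invocation; any CLI-controllable player would do.
PLAYER_CMD = ["mpv", "--no-video", "--shuffle", "/path/to/music/"]

class ReinforcementPlayer:
    def __init__(self):
        self._proc = None  # handle to the running player process, if any

    def activity_started(self):
        # Start playback only if it isn't already running.
        if self._proc is None:
            self._proc = subprocess.Popen(PLAYER_CMD)

    def activity_stopped(self):
        # Stop playback the moment the activity ends, keeping music time
        # strictly coupled to the desired behaviour.
        if self._proc is not None:
            self._proc.terminate()
            self._proc = None
```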

* I've been doing quantitative attention-allocation optimization lately, and "figure out whether to stop listening to music again" has one of the highest expected-utilons-per-time of all the interventions I've considered but not yet implemented.
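
To make that calculation concrete, a toy sketch (my reconstruction of the idea; the interventions and every number below are made-up placeholders, not actual estimates): score each candidate intervention by expected utilons per hour invested, and rank.

```python
# Toy expected-utilons-per-time ranking; every figure below is invented.
interventions = [
    # (name, utilons if it pans out, probability it pans out, hours of effort)
    ("figure out whether to stop listening to music", 50.0, 0.3, 2.0),
    ("switch to moderate ambient noise", 20.0, 0.5, 1.0),
]

def expected_utilons_per_hour(utilons, p, hours):
    return utilons * p / hours

ranked = sorted(interventions,
                key=lambda x: expected_utilons_per_hour(*x[1:]),
                reverse=True)
for name, u, p, h in ranked:
    print(f"{name}: {expected_utilons_per_hour(u, p, h):.1f} expected utilons/hour")
```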

Comment author: Armok_GoB 11 April 2014 12:00:48AM 2 points [-]

Obligatory link: http://mynoise.net/noiseMachines.php

This includes not only noises like white noise, but also soundscapes, music/noise hybrid things, and a surprisingly effective isochronic generator.

Comment author: RichardKennaway 08 April 2014 02:40:36PM 9 points [-]

Being able to patronise the young is the only advantage of age

Failing health is the only disadvantage of age. In every other way, the years just make things better.

Comment author: Armok_GoB 09 April 2014 07:45:35PM -1 points [-]

Other people and governments knowing your age, and changing how rules and expectations apply because of it, are pretty darn big disadvantages for the young, the old, and everyone in between, in different situations and ways.
