All of Armok_GoB's Comments + Replies

The solution here might be that it does mainly tell you they have constructed a coherent story in their mind, but that having constructed a coherent story in their mind is still useful evidence of it being true, depending on what else you know about the person, and thus worth telling. If the tone of the book was different, it might say:

“I have constructed a coherent story in my mind that it is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”

That assumes the scenario is iterated; I'm saying it'd precommit to do so even in a one-off scenario. The rest of your argument was my point, that the same reasoning goes for anger.

Wow, people are still finding this occasionally. It fills me with Determination.

Um no. The specific sequence of muscle contractions is the action, and the thing they try to achieve is beautiful patterns of motion with certain kinds of rhythm and elegance, and/or/typically the perception of such in an observer.

This thing is still alive?! :D I really should get working on that updated version sometime.

Didn't think of it like that, but sort of I guess.

It has near maximal computational capacity, but that capacity isn't being "used" for anything in particular that is easy to determine.

This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outweigh the genuine positives, and include all the WORST outcomes (aka virtual hells) as well.

Stuart_Armstrong
Interesting. Is this kinda like a minimum complexity of outcome requirement?

Well, that's quite obvious. Just imagine the blackmailer is a really stupid human with a big gun that'd fall for blackmail in a variety of awful ways, and has a bad case of typical mind fallacy, and if anything goes against their expectations they get angry and just shoot them before thinking through the consequences.

skeptical_lurker
It's kinda obvious, but deeply counter-intuitive - I mean it's a situation where stupidity is a decisive advantage!

Another trick it could use is using chatbots most of the time, but swapping them out for real people only for the moments you are actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year, you have a 10-hour intense discussion with Eliezer. That's not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.
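As a rough worked fraction with assumed illustrative numbers (say three hours a week of deep family conversation plus the yearly ten-hour discussion; the exact figures aren't from the comment):

\[
\frac{3\ \text{h/week} \times 52\ \text{weeks} + 10\ \text{h}}{24\ \text{h/day} \times 365\ \text{days}} = \frac{166}{8760} \approx 1.9\%
\]

so well under 2% of the simulated hours would need fully simulated interlocutors rather than chatbots.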

Edit: another: the chatbots might have some glaring failure modes if you say the wrong thing, unable to handle edge cases…

Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.

I think you SHOULD take your best model literally if you live in a human brain, since it can never get completely stuck requiring infinite evidence due to its architecture, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.

Secondly, you seem to be going with fuzzy intuitions or direct sensory experience as the most fundamental. At my cor…

I don't mean just sticky models. The concepts I'm talking about are things like "probability", "truth", "goal", "If-then", "persistent objects", etc. Believing that a theory is true that says "true" is not a thing theories can be is obviously silly. Believing that there is no such thing as decision-making and that you're a fraction of a second old and will cease to be within another fraction of a second might be philosophically more defensible, but conditioning on it not being true can never have…

StephenR
Oh okay. This is a two-part misunderstanding. I'm not saying that theories can't be true, I'm just not talking about this truth thing in my meta-model. I'm perfectly a-okay with models of truth popping up wherever they might be handy, but I want to taboo the intuitive notion and refuse to explicate it. Instead I'll rely on other concepts to do much of the work we give to truth, and see what happens. And if there's work that they can't do, I want to evaluate whether it's important to include in the meta-model or not.

I'm also not saying that my theory is true. At least, not when I'm talking from within the theory. Perhaps I'll find certain facets of the correspondence theory useful for explaining things or convincing others, in which case I might claim it's true. My epistemology is just as much a model as anything else, of course; I'm developing it with certain goals in mind.

The math we use to model computation is a model and a tool just as much as computers are tools; there's nothing weird (at least from my point of view) about models being used to construct other tools. Living cells can be modeled successfully with math, you're right; but that again is just a model. And atoms are definitely theoretical constructs used to model experiences, the persuasive images of balls or clouds they conjure notwithstanding. Something similar can be said about fluid dynamics. I don't mean any of this to belittle models, of course, or make them seem whimsical. Models are worth taking seriously, even if I don't think they should be taken literally.

The best example in the three is definitely arithmetic; the other two aren't convincing. Math was done without set theory for ages, and besides we have other foundations available for modern math that can be formulated entirely without talking about sets. Turing machines can be replaced with logical systems like the lambda calculus, or with other machine models like register machines. Arithmetic is more compelling, because it's ver…

It's pre-alpha, and I basically haven't worked on it in all the months since posting this, but ok. http://jsbin.com/adipaj/307

The cause of me believing math is not "it's true in every possible case", because I can't directly observe that. Nor is it "it has been applied successfully in many cases so far".

Basically it's "maths says it's true", where maths is an interlocking system of many subsystems. MANY of these have been applied successfully in many cases so far. Many of them render considering them not true pointless, in the sense that all my reasoning and senses are invalid if they don't hold, so I might as well give up and save computing time by condition…

StephenR
I call these sorts of models sticky, in the sense that they are pervasive in our perception and categorisation. Sticky categories are the sort of thing that we have a hard time not taking literally. I haven't gone into any of this yet, of course, but I like it when comments anticipate ideas and continue trains of thought.

Maybe a short run-long run model would be good to illustrate this stickiness. In the short run, perception is fixed; this also fixes certain categories, and the "degree of stickiness" that different categories have. For example, chair is remarkably hard to get rid of, whereas "corpuscle" isn't quite as sticky. In the long run, when perception is free, no category needs to be sticky. At least, not unless we come up with a more restrictive model of possible perceptions. I don't think that such a restrictive model would be appropriate in a background epistemology. That's something that agents will develop for themselves based on their needs and perceptual experience.

Different mathematical models of human perceptual experience might be perfectly suitable for the same purpose. Physics should be the clearest example, since we have undergone many different changes of mathematical models, and are currently experiencing a plurality of theories with different mathematics in cosmology. The differences between classical mechanics and quantum mechanics should in particular show this nicely: different formalisms, but very good models of a large class of experiences.

I think you slightly underestimate the versatility of mathematicians in making their systems work despite malfunctions. For instance, even if ZFC were proved inconsistent (as Edward Nelson hopes to do), we would not have to abandon it as a foundation. Set theorists would just do some hocus pocus involving ordinals, and voila! all would be well. And there are several alternative formulations of arithmetic, analysis, topology, etc. which are all adequate for most purposes. In the case of some mat…

Being able to eat while parkouring to your next destination and using a laptop at the same time might. And choosing optimally nutritious food. Even if you did eat with a fork, you wouldn't bring the fork in a parabola, you'd jerk it a centimeter up to fling it towards the mouth, then bring it back down to do the same to the next bite while the previous is still in transit.

Hmm, idea; how well would this work: you have a machine that drops the reward with a certain low probability every second, but you have to put it back rather than eat it if you weren't doing the task?
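A minimal sketch of that mechanism, assuming a simple once-per-second loop; the probability value and the on-task check are placeholders of mine, not anything specified above:

```python
import random
import time

DROP_PROBABILITY = 0.005  # assumed low per-second chance of a reward dropping


def currently_on_task() -> bool:
    """Placeholder for however you verify you were actually doing the task."""
    return True


def reward_machine() -> None:
    returned_rewards = 0  # rewards put back into the machine instead of eaten
    while True:
        time.sleep(1)
        if random.random() < DROP_PROBABILITY:
            if currently_on_task():
                print("Reward dropped: eat it.")
            else:
                returned_rewards += 1
                print("Reward dropped, but you were off task: put it back.")
```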

Wish I could upvote this 1000 times. This will probably do far more for this site than 1000 articles of mere content. Certainly, it will for my enjoyment and understanding.

You probably do have a memory, it's just false. Human brains do that.

What actually happens is you should be consequentialist at even-numbered meta-levels and virtue-based at the odd-numbered ones... or was it the other way around? :p

The obvious thing to do here is either:

a) Make a list/plan on paper, abstractly, of what you WOULD do if you had terminal goals, using your existing virtues to motivate this act, and then have "Do what the list tells me to" as a loyalty-like high-priority virtue. If you have another rationalist you really trust, and who has a very strong honesty commitment, you can even outsource the making of this list.

b) Assemble virtues that sum up to the same behaviors in practice; truth-seeking, goodness, and "If something is worth doing, it's worth doing optimally" make a good trio, and will have the end result of effective altruism while still running on the native system.

You are, in this very post, questioning and saying that your utility function is PROBABLY this, and that you don't think there's uncertainty about it... That is, you display uncertainty about your utility function. Checkmate.

Also, "infinity=infinity" is not the case. Infinity ixs not a number, and the problem goes away if you use limits. otherwise, yes, I even probaböly have unbounded but very slow growing facotrs for s bunch of thigns like that.

Ghatanathoah
Even if I was uncertain about my utility function, you're still wrong. The factor you are forgetting about is uncertainty. With a bounded utility function infinite utility scores the same as a smaller amount of utility. So you should always assume a bounded utility function, because unbounded utility functions don't offer any more utility than bounded ones and bounded ones outperform unbounded ones in situations like Pascal's Mugging. There's really no point to believing you have an unbounded function.

I just used the same logic you did. But the difference is that I assumed a bounded utility function was the default standard for comparison, whereas you assumed, for no good reason, that the unbounded one was.

I don't know what the proper way to calculate utility when you are uncertain about your utility function is. But I know darn well that doing an expected-utility calculation about what utility each function will yield and using one of the two functions that are currently in dispute to calculate that utility is a crime against logic. If you do that you're effectively assigning "having an unbounded function" a probability of 1. And 1 isn't a probability.

Your formulation of "unbounded utility function always scores infinity so it always wins" is not the correct way to compare two utility functions under uncertainty. You could just as easily say "unbounded and bounded both score the same, except in Pascal's mugging where bounded scores higher, so bounded always wins."

I think that using expected utility calculation might be valid for things like deciding whether you assign any utility at all to an object or consequence. But for big meta-level questions about what your utility function even is, attempting to use them is a huge violation of logic.

It wasn’t easier, the ghost explains, you just knew how to do it. Sometimes the easiest method you know is the hardest method there is.

It’s like… to someone who only knows how to dig with a spoon, the notion of digging something as large as a trench will terrify them. All they know are spoons, so as far as they’re concerned, digging is simply difficult. The only way they can imagine it getting any easier is if they change – digging with a spoon until they get stronger, faster, and tougher. And the dangerous people, they’ll actually try this.

Everyone who w…
Visser_One
Thank you for posting this - now I have something new to read!

One distinction that may or may not matter, but that many discussions fail to mention at all, is the distinction between telling a lie and maintaining it/keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear you might lie to someone, you intend to tell the truth a few weeks down the line, and if pressed or questioned you confess and tell the actual truth rather than try to cover it with further lies.

Edit: also, have some kind of oath and special circumstance where you will in fact never lie, but precommit to only use it for important things or give it a cost in some way so you won't be pressed to give it for everything.


Reasoning inductively rather than deductively, over uncompressed data rather than summaries.

Mediated: "The numbers between 3 and 7"
Unmediated: "||| |||| ||||| |||||| |||||||"


Don't forget this applies to computer files as well, and in a more extreme way since it's really easy to copy them around at no cost!

O_O

This explains SO MUCH of the things I feel from the inside! Estimating a small probability it'll even help deal with some pretty important stuff. Wish I could upvote a million times.

Hmm, association: I wonder how this relates to the completionist mindset of some gamers.

So one of the questions we actually agreed on the whole time, and the other was just about the semantics of "language" and "translate". Oh well, discussion over.

[anonymous]
Ha! Well, I did argue that all languages (recognizable as such) were in principle inter-translatable for what could only be described as metaphysical reasons. I'd be surprised if you couldn't find holes in an argument that ambitious and that unempirical. But it may be that some of the motivation is lost.

For my part, I don't see any reason to expect the AGI's terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren't the sorts of things that seem to admit of very much complexity.

That the AI can have a simple goal is obvious; I never argued against that. The AI's goal might be "maximize the amount of paperclips", which is explained in that many words. I don't expect the AI as a whole to have anything directly a…

[anonymous]
No, Shiminux and I were talking about (I think) terminal goals: that is, we were talking about whether or not we could come to understand what an AGI was after, assuming it wanted us to know. We started talking about a specific part of this problem, namely translating concepts novel to the AGI's outlook into our own language.

I suppose my intuition, like yours, is that the AGI decision theory would be a much more serious problem, and not one subject to my linguistic argument. Since I expect we also agree that it's the decision theory that's really the core of the safety issue, my claim about terminal goals is not meant to undercut the concern for AGI safety. I agree that we could be radically ignorant about how safe an AGI is, even given a fairly clear understanding of its terminal goals.

I'd actually like to remain indifferent to the question of how intelligent the end-user of the translation has to be. My concern was really just whether or not there are in principle any languages that are mutually untranslatable. I tried to argue that there may be, but they wouldn't be mutually recognizable as languages anyway, and that if they are so recognizable, then they are at least partly inter-translatable, and that any two languages that are partly inter-translatable are in fact wholly inter-translatable. But this is a point about the nature of languages, not degrees of intelligence.

I expect the taboo/explanation to look like a list of 10^20 1000-hour-long clips of incomprehensible n-dimensional multimedia, each with a real number attached representing the amount of [untranslatable 92] it has, with a Jupiter brain being required to actually find any pattern.

[anonymous]
Ah, I see. Even if that were a possibility, I'm not sure that would be such a problem. I'm happy to allow the AGI to spend a few centuries manipulating our culture, our literature, our public discourse etc. in the name of making its goals clear to us. Our understanding something doesn't depend on us being able to understand a single complex expression of it, or to be able to produce such. It's not like we all understood our own goals from day one either, and I'm not sure we totally understand them now.

Terminal goals are basically pretty hard to understand, but I don't see why we should expect the (terminal) goals of a super-intelligence to be harder. It may be that there's a lot of inferential and semantic ground to cover. But again: practical problem. My point has been to show that we shouldn't expect there to be a problem of in principle untranslatability. I'm happy to admit there might be serious practical problems in translation.

The question is now whether we should default to thinking 'An AGI is going to solve those problems handily, given the resources it has for doing so', or 'An AGI's thought is going to be so much more complex and sophisticated, that it will be unable to solve the practical problem of communication'. I admit, I don't have good ideas about how to come down on the issue. I was just trying to respond to Shim's point about untranslatable meta-languages.

For my part, I don't see any reason to expect the AGI's terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren't the sorts of things that seem to admit of very much complexity.

I expect it to be false in at least some of the cases talked about, because it's not 3 but 100 levels, and each one makes it 1000 times longer because complex explanations and examples are needed for almost every "word".

That's obvious and not what I meant. I'm talking about the simplest possible in-principle expression in the human language being that long and complex.

You can construct a system consisting of a planet's worth of paper and pencils and an immortal version of yourself (or a vast dynasty of successors) that can understand it, if nothing else because it's Turing-complete and can simulate the AGI. This is not the same as you understanding it while still remaining fully human. Even if you did somehow integrate the paper-system sufficiently, that'd be just as big a change as uploading and intelligence-augmenting the normal way.

The approximation thing is why I specified digits mattering. It won't help one bit when talking about something like Gödel numbering.

EHeller
I understand, my point was simply that "understanding" and "holding in your head at one time" are not at all the same thing. "There are numbers you can't remember if I tell them to you" is not at all the same claim as "there are ideas I can't explain to you."

Neither of your cases are unexplainable - give me the source code in a high level language, instead of binary, and I can understand it. If you give me the binary code and the instruction set I can convert it to assembly and then a higher level language, via disassembly. Of course, I can deliberately obfuscate an idea and make it harder to understand, either by encryption or by presenting the most obtuse possible form, but that is not the same as an idea that fundamentally cannot be explained.

Premise 1 is false assuming finite memory.

Premise 3 does not hold well either; many new words come from pointing out a pattern in the environment, not from defining them in terms of previous words.

[anonymous]
Well, maybe it's not necessarily true assuming finite memory. Do you have reason to expect it to be false in the case we're talking about? I'm of course happy to grant that part of using a language involves developing neologisms. We do this all the time, of course, and generally we don't think of it as departing from English. Do you think it's possible to coin a neologism in a language like Q, such that the new term is in P (and inexpressible in any part of Q)? A user of this neologism would be unable to, say, taboo or explain what they mean by a term (even to themselves). How would the user distinguish their P-neologism from nonsense?

Using "even an arbitrarily complex expressions in human language" seem unfair, given that it's turing complete but describing even a simple program in it fully in it without external tools will far exceed the capability of any actual human except for maybe a few savants that ended up highly specialized towards that narrow kind of task.

[anonymous]
I agree, but I was taking the work of translation to be entirely on the side of an AGI: it would take whatever sentences it thinks in a meta-language and translate them into human language. Figuring out how to express such thoughts in our language would be a challenging practical problem, but that's exactly where AGI shines. I'm assuming, obviously, that it wants to be understood. I am very ready to agree that an AGI attempting to be obscure to us will probably succeed.

I can in fact imagine what else a super-intelligence would use instead of a goal system. A bunch of different ones, even. For example, a lump of incomprehensible super-Solomonoff-compressed code that approximates a hypercomputer simulating a multiverse with the utility function as an epiphenomenal physical law feeding backwards in time to the AI's actions. Or a carefully tuned decentralized process (think natural selection, or the invisible hand) found to match the AI's previous goals exactly by searching through an infinite Platonic space.

(yes, half of thos…

Human languages can encode anything, but a human can't understand most things valid in human languages; most notably, extremely long things, and numbers specified with a lot of digits that actually matter. Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.

Humans seem "concept-complete" in much the same way your desktop PC seems Turing-complete. Except it's much more easily broken, because the human brain has absurdly shitty memory.

EHeller
That's why we have paper; I can write it down. "Understanding" and "remembering" seem somewhat orthogonal here. I can't recite Moby Dick from memory, but I understood the book. If you give me a 20 digit number 123... and I can't hold it but retain "a number slightly larger than 1.23 * 10^20," that doesn't mean I can't understand you. Print it out for me, and give me enough time, and I will be able to understand it, especially if you give me some context.

Yes, you can encode things in a way that makes them harder for humans to understand, no one would argue that. The question is - are there concepts that are simply impossible to explain to a human? I point out that while I can't remember a 20 digit number, I can derive pretty much all of classical physics, so certainly humans can hold quite complex ideas in their head, even if they aren't optimized for storage of long numbers.

My impression was the question was not if it'd have those concepts, since as you say that's obvious, but if they'd necessarily be referenced by the utility function.

It might not be possible to "truly comprehend" the AI's advanced meta-meta-ethics and whatever compact algorithm replaces the goal-subgoals tree, but the AI most certainly can provide a code of behavior and prove that following it is a really good idea, much like humans might train pets to perform a variety of useful tasks whose true purpose they can't comprehend. And it doesn't seem unreasonable that this code of behavior wouldn't have the look and feel of an in-depth philosophy of ethics, and have some very very deep and general compression/proc…

Obligatory link: http://mynoise.net/noiseMachines.php

This not only includes noises like white, it also has soundscapes, music/noise hybrid things, and a surprisingly effective isochronic generator.


Other people and governments knowing about it, and changing how rules and expectations apply, are pretty darn big disadvantages for the young, the old, and those in between, in different situations and ways.

Richard_Kennaway
This is too abstract for me to have any idea what you're talking about.

Exactly! Much better than I could!

Induction. You have uncertainty about the extent to which you care about different universes. If it turns out you don't care about the Born rule for one reason or another, the universe you observe is an absurdly (as in probably-a-Boltzmann-brain absurd) tiny sliver of the multiverse, but if you do, it's still an absurdly tiny sliver but immensely less so. You should anticipate as if the Born rule is true, because if you don't almost only care about worlds where it is true, then you care almost nothing about the current world, and being wrong in it doesn't ma…

Nisan
Oh, interesting. So just as one should act as if one is Jesus if one seems to be Jesus, then one should act as if one cares about world-histories in proportion to their L2 measure if one seems to care about world-histories in proportion to their L2 measure and one happens to be in a world-history with relatively high L2 measure. And if probability is degree of caring, then the fact that one's world history obeys the Born rule is evidence that one cares about world-histories in proportion to their L2 measure. I take it you would prefer option 2 in my original comment, reduce anticipation to UDT, and explain away continuity of experience. Have I correctly characterized your point of view?

You're overextending a hack intuition. "Existence", "measure", "probability density", "what you should anticipate", etc. aren't actually all the exact same thing once you get this technical. Specifically, I suspect you're trying to set the latter based on one of the former, without knowing which one, since you assume they are identical. I recommend learning UDT and deciding what you want agents with your input history to anticipate, or if that's not feasible, just do the math and stop bothering to make the intuition fit.

Nisan
Hm, so you're saying that anticipation isn't a primitive, it's just part of one's decision-making process. But isn't there a sense in which I ought to expect the Born rule to hold in ordinary circumstances? Call it a set of preferences that all humans share — we care about futures in proportion to the square of the modulus of their amplitude (in the universal wavefunction? in the successor state to our Everett branch?). Do you have an opinion on exactly how that preference works, and what sorts of decision problems it applies to?

[Possession of the knowledge; following course of action breaks social norms] disregarding: pft, 'you guys have it easy'.

Where you came from already had concepts like "people" and "causality". Substructure implies the source universe of armok DID once have those concepts, but this was gigaseconds ago, before the singularity. armok was never meant to operate as an agent; it am a search and categorization module, not suitable for sticking in a meat-bot with no cognitive delegation infrastructure, trying to pass as human and succeeding only…

Do you remember what hard drive sizes and bandwidth speeds were like? Those seem to be very similar economically and technologically to CPU speed, following very similar growth curves, but different enough that it'd be easier to halt CPUs selectively. Thus, this could be an indicator of whether CPUs were deliberately stopped, or whether there was some other economic factor.

Conversely, any common and overused or commonly misused heuristic can also be used as a fallacy: Absurdity Fallacy, Affect Fallacy, Availability Fallacy. I probably use these far more than the original as-good-heuristic concept.

Wouldn't something like microfilm make more sense? It's not reliant on a special reader (just include normal-sized instructions for making a crude microscope) and still has decent storage density. Maybe etch it into aluminum and roll it up in giant rolls.

Gunnar_Zarncke
Of course. It's one of the many conceivable in-betweens.

Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.

While obviously not rigorous enough for something serious, one obvious hack is to do the "0.5 unless proven" thing, and then have a long list of special-case dumb heuristics with different weights that update that without any proofs involved at all. The list of heuristics could be gotten from some unsafe source like the programmer or another AI or Mechanical Turk, and then the weights learned by first guessing and then proving to see if the guess was right, with heuristics that are too bad kicked out entirely.
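A minimal sketch of that hack, with placeholder heuristics and a crude weight update; every name and number here is my own assumption rather than anything specified above:

```python
# Heuristics come from an unsafe source (programmer, another AI, Mechanical Turk, ...).
# Each maps a statement to a nudge; weights start as rough guesses.
heuristics = [
    (lambda s: 1.0 if "similar to a proven theorem" in s else 0.0, 0.1),
    (lambda s: -1.0 if "self-referential" in s else 0.0, 0.2),
]


def estimate(statement: str) -> float:
    """The '0.5 unless proven' baseline, nudged by the weighted dumb heuristics."""
    p = 0.5 + sum(weight * h(statement) for h, weight in heuristics)
    return min(max(p, 0.0), 1.0)


def learn_from_proof(statement: str, proved_true: bool, rate: float = 0.05) -> None:
    """Guess first, then prove; shift each weight toward what the proof said,
    and kick out heuristics whose weights have decayed to near zero
    (my reading of 'heuristics that are too bad kicked out entirely')."""
    global heuristics
    error = (1.0 if proved_true else 0.0) - estimate(statement)
    heuristics = [(h, w + rate * error * h(statement)) for h, w in heuristics]
    heuristics = [(h, w) for h, w in heuristics if abs(w) > 0.01]
```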
