Comment author: Armok_GoB 17 May 2014 08:18:34PM 5 points [-]

It wasn’t easier, the ghost explains, you just knew how to do it. Sometimes the easiest method you know is the hardest method there is.

It’s like… to someone who only knows how to dig with a spoon, the notion of digging something as large as a trench will terrify them. All they know are spoons, so as far as they’re concerned, digging is simply difficult. The only way they can imagine it getting any easier is if they change – digging with a spoon until they get stronger, faster, and tougher. And the dangerous people, they’ll actually try this.

Everyone who will ever oppose you in life is a crazy, burly dude with a spoon, and you will never be able to outspoon them. Even the powerful people, they’re just spooning harder and more vigorously than everyone else, like hungry orphan children eating soup. Except the soup is power. I’ll level with you here: I have completely lost track of this analogy.

What I’m saying, giant talking cat, is that everyone is stupid. They attain a narrow grasp of reality and live their life as though there is nothing else. But you, me, creatures with imagination – we aren’t constrained by our experiences. We’re inspired by them. If we have trouble digging with a spoon, we build a shovel. If we’re stopped by a wall, we make a door. And if we can’t make a door, we ask ourselves whether we really need an opening to pass through something solid in the first place.

You point out that you’re not a ghost, and that you do need an opening to pass through solid objects.

No – that’s your mistake, he replies. That’s why you’re still not thinking like a witchhunter. You’re trying to do things right, and that’s wrong. Mysticism means taking a step back – accepting that the very laws of reason and logic you abide by are merely one option of many. It means knowing you only see half the picture in a world where everyone else thinks they see the whole thing. It means having the sheer arrogance to have humility.

That’s why I’m saying you have to think like a witchhunter. You have to be a little wrong to be completely right – to abandon truth in favor of questioning falsehood. If you think something’s the easiest way, you have to know you’re wrong. You have to understand how to stand against the very stance of understanding! You have to know you are inferior; that your knowledge and perceptions will never stand up to the true scope of all possible reality. You have to be a little further from perfect, and embrace that notion.

Source: http://www.prequeladventure.com/2014/05/3391/

Comment author: fezziwig 12 May 2014 08:08:10PM 1 point [-]

Yes, it's pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that's the point of lying, after all. But right now we're talking about the harm one does to oneself by lying; I submit that there needn't be any.

Comment author: Armok_GoB 14 May 2014 12:15:17AM *  1 point [-]

One distinction (I don't know if it matters, but many discussions fail to mention it at all) is the distinction between telling a lie and maintaining it, i.e. keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear that you might lie to someone, you intend to tell the truth a few weeks down the line, and, if pressed or questioned, you confess and tell the actual truth rather than trying to cover it with further lies.

Edit: also, have some kind of oath or special circumstance under which you will in fact never lie, but precommit to using it only for important things, or give it a cost in some way, so you won't be pressed to give it for everything.

Comment author: RichardKennaway 09 May 2014 06:30:33AM 1 point [-]

Can you give some examples of more and less mediated experiences?

Comment author: Armok_GoB 09 May 2014 02:35:06PM 0 points [-]

Reasoning inductively rather than deductively, over uncompressed data rather than summaries.

Mediated: "The numbers between 3 and 7"

Unmediated: "||| |||| ||||| |||||| |||||||"
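
To make the distinction concrete, here's a minimal Python sketch (my own illustration): the mediated form is a compressed description, and the unmediated form is the raw tallies it expands to.

    # Mediated: a compressed, symbolic description of the data.
    mediated = range(3, 8)  # "the numbers between 3 and 7", inclusive

    # Unmediated: the same data spelled out as raw tallies.
    unmediated = " ".join("|" * n for n in mediated)
    print(unmediated)  # ||| |||| ||||| |||||| |||||||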

Comment author: mare-of-night 30 April 2014 04:58:10AM 6 points [-]

After coming close to being unable to pay at a restaurant once, I do this with money, and it works well. It's not cheap in the same way, so I do have to put it only in places where I'll remember to retrieve it later (usually just in an inner pocket of each of my frequently-used bags). But having extra money with me that doesn't go into my "do I have enough for this outing" calculation has saved me some worrying.

Actually, I guess this is a general strategy for stuff you might unexpectedly need, or might lose the first copy of. I've also done it with travel documents, (non-perishable) snacks, medicine, and a few other things. Usually I just put a couple of copies in each purse or backpack, though; I haven't tried many creative hiding places.

Comment author: Armok_GoB 01 May 2014 02:26:24PM 6 points [-]

Don't forget this applies to computer files as well, and in a more extreme way since it's really easy to copy them around at no cost!
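
For instance, here's a minimal sketch of the same stash-spare-copies strategy applied to files (the file name and backup directories below are hypothetical stand-ins):

    import shutil
    from pathlib import Path

    important = Path("passport_scan.pdf")  # hypothetical file to protect
    stashes = [Path("~/Dropbox/spares").expanduser(),
               Path("/media/usb_stick"),
               Path("~/Documents/spares").expanduser()]

    for stash in stashes:
        if stash.is_dir():  # skip stashes that aren't available right now
            shutil.copy2(important, stash / important.name)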

Comment author: Kaj_Sotala 21 April 2014 08:09:42PM *  14 points [-]

I was feeling lethargic and unmotivated today, but as a way of not-doing-anything, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. It might be of interest to people; it also briefly touches upon meditation.

Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It’s arguing that a large part of the brain is made up of hierarchical systems, where each system uses an internal model of the lower system in an attempt to predict the next outputs of the lower system. Whenever a higher system mispredicts a lower system’s next output, it will adjust itself in an attempt to make better predictions in the future.
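
A toy Python sketch of that prediction-error loop (my own illustration, not the paper's actual model): two stacked levels, where each level tracks the one below and adjusts itself whenever it mispredicts.

    import numpy as np

    rng = np.random.default_rng(0)
    # A noisy input stream standing in for low-level sensory data.
    signal = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

    lr = 0.1       # how strongly each level adjusts after a misprediction
    level1 = 0.0   # level 1 predicts the raw input
    level2 = 0.0   # level 2 predicts level 1's state
    for x in signal:
        err1 = x - level1        # level 1's prediction error
        level1 += lr * err1      # adjust to predict better next time
        err2 = level1 - level2   # level 2's prediction error about level 1
        level2 += lr * err2

    print(f"final prediction errors: {abs(err1):.3f}, {abs(err2):.3f}")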

EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people's general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought... whoa.

Comment author: Armok_GoB 27 April 2014 12:33:38AM 2 points [-]

O_O

This explains SO MUCH of what I feel from the inside! Estimating a small probability it'll even help deal with some pretty important stuff. Wish I could upvote a million times.

Comment author: Armok_GoB 26 April 2014 11:46:02PM 0 points [-]

Hmm, association: I wonder how this relates to the completionist mindset of some gamers.

Comment author: [deleted] 22 April 2014 09:44:39PM *  0 points [-]

I thought we were talking about the AI's decision theory.

No, shminux and I were talking about (I think) terminal goals: that is, we were talking about whether or not we could come to understand what an AGI was after, assuming it wanted us to know. We started talking about a specific part of this problem, namely translating concepts novel to the AGI's outlook into our own language.

I suppose my intuition, like yours, is that the AGI's decision theory would be a much more serious problem, and not one subject to my linguistic argument. Since I expect we also agree that it's the decision theory that's really the core of the safety issue, my claim about terminal goals is not meant to undercut the concern for AGI safety. I agree that we could be radically ignorant about how safe an AGI is, even given a fairly clear understanding of its terminal goals.

The implicit constraint of "translate" is that it's to an already existing specific human, and they have to still be human at the end of the process.

I'd actually like to remain indifferent to the question of how intelligent the end-user of the translation has to be. My concern was really just whether or not there are in principle any languages that are mutually untranslatable. I tried to argue that there may be, but they wouldn't be mutually recognizable as languages anyway, and that if they are so recognizable, then they are at least partly inter-translatable, and that any two languages that are partly inter-translatable are in fact wholly inter-translatable. But this is a point about the nature of languages, not degrees of intelligence.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 23 April 2014 07:23:11PM 0 points [-]

So we actually agreed on one of the questions the whole time, and the other was just about the semantics of "language" and "translate". Oh well, discussion over.

Comment author: [deleted] 22 April 2014 02:24:36PM *  0 points [-]

I'm talking about the simplest possible in-principle expression in the human language being that long and complex.

Ah, I see. Even if that were a possibility, I'm not sure that would be such a problem. I'm happy to allow the AGI to spend a few centuries manipulating our culture, our literature, our public discourse etc. in the name of making its goals clear to us. Our understanding something doesn't depend on us being able to understand a single complex expression of it, or to be able to produce such. It's not like we all understood our own goals from day one either, and I'm not sure we totally understand them now. Terminal goals are basically pretty hard to understand, but I don't see why we should expect the (terminal) goals of a super-intelligence to be harder.

I expect it to be false in at least some of the cases talked about, because it's not 3 but 100 levels, and each one makes it 1000 times longer, since complex explanations and examples are needed for almost every "word".

It may be that there's a lot of inferential and semantic ground to cover. But again: practical problem. My point has been to show that we shouldn't expect there to be a problem of in principle untranslatability. I'm happy to admit there might be serious practical problems in translation. The question is now whether we should default to thinking 'An AGI is going to solve those problems handily, given the resources it has for doing so', or 'An AGI's thought is going to be so much more complex and sophisticated, that it will be unable to solve the practical problem of communication'. I admit, I don't have good ideas about how to come down on the issue. I was just trying to respond to Shim's point about untranslatable meta-languages.

For my part, I don't see any reason to expect the AGI's terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren't the sorts of things that seem to admit of very much complexity.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 22 April 2014 09:32:03PM 0 points [-]

For my part, I don't see any reason to expect the AGI's terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren't the sorts of things that seem to admit of very much complexity.

That the AI can have a simple goal is obvious; I never argued against that. The AI's goal might be "maximize the amount of paperclips", which can be explained in exactly that many words. I don't expect the AI as a whole to have anything directly analogous to instrumental goals at the highest level either, so that's a non-issue. I thought we were talking about the AI's decision theory.

On manipulating culture for centuries and solving it as a practical problem: or it could just install an implant, or guide evolution to increase intelligence, until we were smart enough. The implicit constraint of "translate" is that it's to an already existing, specific human, and that they have to still be human at the end of the process. Not "could something that was once human come to understand it".

Comment author: [deleted] 21 April 2014 02:29:09PM *  0 points [-]

Premise one is false assuming finite memory.

Well, maybe it's not necessarily true assuming finite memory. Do you have reason to expect it to be false in the case we're talking about?

Many new words come from pointing out a pattern in the environment, not from defining in terms of previous words.

I'm of course happy to grant that part of using a language involves developing neologisms. We do this all the time, of course, and generally we don't think of it as departing from English. Do you think it's possible to coin a neologism in a language like Q, such that the new term is in P (and inexpressible in any part of Q)? A user of this neologism would be unable to, say, taboo or explain what they mean by a term (even to themselves). How would the user distinguish their P-neologism from nonsense?

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 22 April 2014 02:04:19AM 0 points [-]

I expect the taboo/explanation to look like a list of 10^20 clips of incomprehensible n-dimensional multimedia, each 1000 hours long and with a real number attached representing the amount of [untranslatable 92] it has, with a Jupiter brain being required to actually find any pattern.

In response to comment by [deleted] on AI risk, new executive summary
Comment author: Armok_GoB 22 April 2014 01:58:40AM 0 points [-]

I expect it to be false in at least some of the cases talked about, because it's not 3 but 100 levels, and each one makes it 1000 times longer, since complex explanations and examples are needed for almost every "word".
