Comment author: helltank 16 November 2014 02:31:43AM 3 points

Okay, thanks for the update, and of course the idea of measuring agentness, while simultaneously being careful not to apply the halo effect to it, is fundamentally sound. I would propose treating the perceived agentness of a certain person as a belief, so that it can be updated quickly with well-known rationalist techniques when the person shifts to another domain.

Let us take the example, given in your post, of a person who is very agenty in managing relationships but bad at time management. In this case, I would observe that this person displays high agentness in managing relationships. That does not equate to high agentness in other fields, yet it may indicate an overall trend of agentness in his life. Therefore, if his relationship agentness level is 10, I might estimate a prior for his agentness in any random domain at, say, 6.

Now, suppose I observe him scheduling his tasks with a supposed agentness of 6, and he screws it up completely because of an inherent weakness in that domain which I didn't know about. After the first few times he was late, I could lower the probability I assign to the belief that his agentness in that domain (time management) is actually 6, and increase the probability that it is, for instance, 3, with slight increases for the adjacent values (2 and 4).
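The update described above can be sketched as a discrete Bayesian update. The prior shape and the lateness-likelihood model below are invented for illustration; only the "prior centred near 6, posterior near 3" pattern comes from the comment itself.

```python
# Discrete prior over agentness levels 1-10 in the time-management
# domain, updated after each observed lateness. All numbers here are
# made up for illustration.

def bayes_update(prior, likelihood):
    """Return the posterior over levels given P(observation | level)."""
    unnorm = {lv: p * likelihood(lv) for lv, p in prior.items()}
    total = sum(unnorm.values())
    return {lv: p / total for lv, p in unnorm.items()}

def expected(dist):
    """Expected agentness level under a distribution."""
    return sum(lv * p for lv, p in dist.items())

# Prior centred near 6, carried over from the relationship domain.
prior = {lv: 1.0 for lv in range(1, 11)}
prior[6], prior[5], prior[7] = 4.0, 2.0, 2.0
total = sum(prior.values())
prior = {lv: p / total for lv, p in prior.items()}

def p_late(lv):
    """Assumed model: lateness is much likelier at low agentness."""
    return max(0.05, 1.0 - lv / 10.0)

posterior = prior
for _ in range(3):  # he shows up late three times
    posterior = bayes_update(posterior, p_late)

print(round(expected(prior), 2))      # about 5.67
print(round(expected(posterior), 2))  # about 2.91
```

Three latenesses drag the expected level from roughly 6 down to roughly 3, matching the informal update in the comment.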

However, cached thoughts do interest me. We have seen clearly that cached thoughts can act against agentness; but in my opinion the correct path is to make cached thoughts for agentness. Say you discover that in situation X, given Y and Z, A is almost always (or with sufficiently high probability) the most agenty option. Then you can use your System 2 to train your System 1 to store this pattern, and in future situations you will reflexively perform A, slowing down to deliberate only when the probability that the agenty option is not A after all, multiplied by the disutility of performing A anyway, is high enough to justify the cost.
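The slow-down rule in the paragraph above amounts to a simple expected-cost comparison. The function and all numbers below are invented for illustration:

```python
# Fire the cached response A unless the expected loss from blindly
# acting exceeds the cost of slowing down to deliberate.

def should_deliberate(p_cached_wrong, disutility_if_wrong, deliberation_cost):
    """Deliberate when expected loss from acting on the cache beats
    the cost of reconsidering."""
    return p_cached_wrong * disutility_if_wrong > deliberation_cost

# A is almost always right and mistakes are cheap: act on the cache.
print(should_deliberate(0.02, 1.0, 0.1))   # False
# Mistakes are catastrophic: slow down even at a 2% error rate.
print(should_deliberate(0.02, 50.0, 0.1))  # True
```

The point of the sketch: whether to override the cache depends not just on how often A is wrong, but on how much a wrong A costs.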

I would say that cached thoughts are a very interesting phenomenon, being able to control the first actions of a human being (and the actions that we, being impulsive creatures, normally take first), and that with proper training it might even be possible to use them for good.

Comment author: helltank 16 November 2014 02:00:31AM 1 point

I will probably read this post in more detail when the font isn't hurting my sleep-deprived eyes. Please fix!

In response to comment by helltank on Belief Chains
Comment author: 27chaos 15 November 2014 11:08:30PM 1 point

You might appreciate this paper: http://www.cs.toronto.edu/~nitish/msc_thesis.pdf

In response to comment by 27chaos on Belief Chains
Comment author: helltank 15 November 2014 11:12:34PM 1 point

27chaos, that is a very interesting paper, and I thank you for the find. It's actually quite a happy coincidence, as neural networks (prompted by the blegg sequence) were next on my to-study list. Glad to be able to add this paper to my queue.

In response to Belief Chains
Comment author: helltank 15 November 2014 11:06:38PM 5 points

Very useful and instructive post. I would like to comment that one of the biggest tests (or so it seems to me) of whether a belief chain is valid is the test of resistance to arbitrary changes.

You write that systems like [I was abused] <-> [people are meanies] <-> [life is horrible] are stable, and this is why people believe them: they seem to hold sound under their own reasoning. But they are inherently not stable, because they are not connected to the unshakable foundation of the source of truth (reality)!

Suppose you apply my test and /arbitrarily change one of the beliefs/. Let's say I decide to change the belief [I was abused] to [I was not abused] (an entirely plausible viewpoint to hold, unless you think that everyone is abused). In that case, the whole chain falls apart: if you were not abused, then nothing supports the claim that people are meanies, which in turn implies a possibly non-terrible world. And therefore the system is only stable on the surface. A house is not called solid if it can merely stand up; it is called solid if it can stand rough weather (arbitrary changes) without falling.

Let's look at the truthful chain [Laws of physics exist] <-> [Gravity exists] <-> [If I jump, I will not float]. In this case we can arbitrarily change the value of ANY belief and still have the chain repair itself. Say I claim that the LAWS OF PHYSICS ARE FALSE. Then I would merely reply, "Gravity, supported by the observation that jumping people fall, proves or at least very strongly evidences the existence of a system of rules that govern our universe," and from there work out the laws of physics from basic principles at caveman level. It might take a long time, but in principle it holds.

Now, if I say that gravity does not exist, a few experiments with the laws of physics -> gravity will prove me wrong. And if I claim that if I jump, I will not float, gravity, supported by the laws of physics, thinks otherwise (and enforces its opinion quite sharply).

The obvious objection here is that in the second example a third person is merely saying "these things are false," as opposed to god actually making a change in the first. But the key point is that genuinely stable (true) belief chains cannot logically absorb such a random change without auto-repairing themselves. It is impossible to imagine the laws of physics existing as they are and yet gravity being arbitrarily different for some reason. The truth of the belief chain holds all the way from the laws of physics down to quantum mechanics, the finest-grained description of reality we have found so far.

It seems clear to me that the ability to withstand and repair an arbitrary change is what differentiates true chains from bad ones.
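The test described above can be sketched as a toy simulation. The representation of a belief chain as a dictionary, and the two helper functions, are invented purely for illustration of the perturb-then-repair idea:

```python
# A belief chain as mutually supporting propositions. A chain anchored
# to observation of reality repairs an arbitrarily flipped belief; a
# free-floating chain simply absorbs the flip.

def perturb(beliefs, flipped):
    """Arbitrarily flip one belief in the chain."""
    result = dict(beliefs)
    result[flipped] = not result[flipped]
    return result

def repair_from_anchor(beliefs, anchor_holds):
    """If the chain is anchored to a true observation, reality
    re-asserts itself and the anchored truth value propagates back
    through the mutually supporting chain."""
    if anchor_holds:
        return {b: True for b in beliefs}
    return dict(beliefs)  # nothing outside the chain pushes back

floating_chain = {"I was abused": True, "people are meanies": True,
                  "life is horrible": True}
anchored_chain = {"laws of physics exist": True, "gravity exists": True,
                  "if I jump, I will not float": True}

# The floating chain stays flipped; the anchored one snaps back.
after_floating = repair_from_anchor(
    perturb(floating_chain, "I was abused"), anchor_holds=False)
after_anchored = repair_from_anchor(
    perturb(anchored_chain, "gravity exists"), anchor_holds=True)

print(after_floating["I was abused"])  # False
print(all(after_anchored.values()))    # True
```

The toy obviously hides all the real work (whether the anchor actually holds is the empirical question), but it makes the asymmetry between the two chains explicit.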

Comment author: SpencerHill 13 November 2014 08:50:44AM 1 point

I have to admit, this is pretty damn impressive, no matter how basic people seem to think those skills should be.

Comment author: helltank 13 November 2014 01:46:02PM 2 points

Thanks a lot. I really appreciated that comment.

Comment author: helltank 10 November 2014 02:58:55AM 0 points

A psychopath would have no problem with this, by the way; he'd just step on the heads of people and be on his merry way, calm as ever.

Comment author: helltank 10 November 2014 02:41:43AM 11 points

I went through an entire evening outing and did not drop the ball once socially: in every event, I successfully carried out all the steps of social interaction, from perfectly (or so I'd like to think) mimicking empathy to adopting the correct facial expressions and words. I'd like to think that this is a huge step forward in my social training. One of the people I went on the outing with even commented that he thought my social skills were improving greatly.

In response to Baysian conundrum
Comment author: helltank 16 October 2014 09:08:49AM 0 points

I'm really having a lot of trouble understanding why the answer isn't just:

a 1000/1001 chance I'm about to be transported to a tropical island, and a 0 chance given I didn't make the oath.

Assuming that the uploaded you memory-blocks his own uploading when running simulations.
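The arithmetic behind the 1000/1001 figure is just observer counting: one original who made the oath plus 1000 simulated copies with indistinguishable experiences. The counts below are the ones in the scenario:

```python
# One original plus 1000 indistinguishable simulated copies; the
# simulations are run only if the oath was made.
originals = 1
simulations = 1000

# Given the oath, 1000 of the 1001 indistinguishable observers are
# simulations about to wake on the island.
p_island_given_oath = simulations / (originals + simulations)

# Without the oath, no simulations are ever run.
p_island_without_oath = 0 / originals

print(round(p_island_given_oath, 6))  # 0.999001
print(p_island_without_oath)          # 0.0
```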

Comment author: pragmatist 14 September 2014 05:57:57AM 2 points

Pushing the button can't make you a psychopath. You're either already a psychopath or you're not. If you're not, you will not push the button, although you might consider pushing it.

Comment author: helltank 14 September 2014 12:51:06PM 1 point

Maybe I was unclear.

I'm arguing that the button will never, ever be pushed. If you are NOT a psychopath, you won't push, end of story.

If you ARE a psychopath, you can choose to push or not to push.

If you push, that's evidence you are a psychopath. And if you are a psychopath, you should not push. Therefore, you will always end up regretting the decision to push.

If you don't push, you don't push and nothing happens.

In all three cases the correct decision is not to push, therefore you should not push.

Comment author: helltank 14 September 2014 12:03:47AM *  -2 points

Most people would die before they think. Most do.

-AC Grayling
