All of Vladimir_Nesov2's Comments + Replies

Your 'epiphenomena' are good old invariants. When you talk about exorcising epiphenomena, you are really talking about establishing invariants as laws that allow you to use fewer degrees of freedom. One can even talk about consciousness being dependent only on the physical makeup of the universe, and hence being an invariant across universes with the same physical makeup. What is the point of reformulating it your way, exactly?

Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including your estimate of how uncertain you are about something, your estimate of how uncertain that estimate is, and so on.

Roland,

Probabilities allow grades of belief, and just as Achilles's pursuit of the tortoise can be considered as consisting of an infinite number of steps, if you note that the steps get infinitely short, you can sum them up to a finite quantity. Likewise, you can join infinitely many infinitely unlikely events into a compound event of finite probability. This is a way to avoid the regress Caledonian was talking about. Evidence can shift probabilities on all meta-levels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).
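To make the analogy concrete (a worked example of my own, not from the original comment): if the n-th meta-level event is assigned probability \(2^{-n}\), the infinitely many events still combine into a compound event of finite probability,

\[
\sum_{n=1}^{\infty} 2^{-n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1,
\]

just as Achilles's infinitely many ever-shorter steps sum to a finite distance.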

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html

I started to seriously think about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways to characterize probabilistic relations between "representation" and "represented", it all started to come together, and later it transferred to human reasoning and beyond. So it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject w...

HA: "Trying cryonics requires a leap of faith straight into the unknown for a benefit with an unestimable likelihood."

That's what probability is for, isn't it? If you don't know and don't have good prior hints, you just choose a prior at random, merely making sure that the mutually exclusive outcomes sum up to 1, and then adjust with what little evidence you've got. In reality, you usually do have some prior predispositions, though. You don't throw up your hands in awe and exclaim that this probability is too shaky to be estimated or even thought about, bec...
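A minimal sketch of that procedure in Python (my illustration; the outcome labels and numbers are placeholders, not estimates from the comment): pick a rough prior over mutually exclusive outcomes, then update it on whatever weak evidence is available.

```python
# Sketch: rough prior over mutually exclusive outcomes, then a Bayes update.

def bayes_update(prior, likelihoods):
    """Return the posterior over outcomes given per-outcome likelihoods."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Mutually exclusive outcomes whose probabilities sum to 1 (placeholder numbers).
prior = {"revival works": 0.1, "revival fails": 0.9}

# What little evidence you've got: P(evidence | outcome) for each outcome.
likelihoods = {"revival works": 0.8, "revival fails": 0.5}

print(bayes_update(prior, likelihoods))
# {'revival works': ~0.151, 'revival fails': ~0.849}
```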

One problem is that the 'you' that can be affected by things you expect to interact with in the future is in principle no different from those space colonists who are sent out. You can't interact with future-you. All the decisions we make shape a future with which we don't directly interact. Future-you is just the result of one more 'default' manufacturing process, where the laws of physics ensure that there is a physical structure very similar to the one that existed in the past. Hunger is a drive that makes you 'manufacture' a fed-future-you, compassion is a dri...

The joy of textbook-mediated personal discovery...

Eliezer,

What do specks have to do with circularity? Whereas in the last posts you explained that certain groups of decision problems are mathematically equivalent, independently of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.

The discount rate takes care of the effect your effort can have on the future, relative to the effect it will have on the present; it has nothing to do with the 'intrinsic utility' of things in the future. The future doesn't exist in the present; you only have a model of the future when you make decisions in the present. Your current decisions are only as good as your anticipation of their effects in the future, and the process Robin described in his blog post reply is how it can proceed: it assumes that you know very little and will be better off just passing resources along for future folk to take care of whatever they need themselves.
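For concreteness (a standard illustration of my own, not from the original comment): with annual discount rate \(r\), a benefit \(U\) realized \(t\) years from now is weighted in today's decision as

\[
PV \;=\; \frac{U}{(1+r)^{t}},
\]

so at \(r = 5\%\) a benefit 100 years out carries weight \(1/1.05^{100} \approx 0.0076\) of the same benefit today. The discount reflects what your present effort can compound into over that interval, not a claim that future things matter intrinsically less.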

Caledonian, I think you are confusing goals with truths. If the truth is that the goal consists in certain things, rationality doesn't oppose it in any way. It is merely a tool that optimizes performance, not an arbitrary moral constraint.

OC: Eliezer, enough with your nonsense about cryonicism, life-extensionism, trans-humanism, and the singularity. These things have nothing to do with overcoming bias. They are just your arbitrary beliefs.

I guess it's the other way around: the point of most of the questions raised by Eliezer is to take a debiased look at controversial issues such as those you list, to hopefully build a solid case for sensible versions of them. For example, existing articles can point at fallacies in your assertions: you assume cryonics, etc. to be separate magisteria outside of...

Eliezer,

Your emphasis on leadership in this context seems strange: it was in no one's interest to leave, so the biased decision was to follow you, not any hesitation in choosing to lead others outside.

It seems there was no explicit rule against asking questions. It would be interesting to know what percentage of subjects actually questioned the process.

Eliezer: If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.

I don't see how a moderate number of other wrong-answering subjects should influence the decision of a rational subject, even if their answers are, strictly speaking, stronger evidence, since uncertainty about your own sanity should be much lower than the probability of alternative explanations for the other subjects' wrong answers.
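A toy Bayesian model of this point (entirely my own construction, with made-up parameters): once an alternative explanation for the others' wrong answers has nontrivial probability, it dominates the likelihood, and going from 3 to 15 dissenters barely moves the posterior that your own perception is wrong.

```python
# Toy model: posterior that my own perception is wrong, given n subjects
# disagreeing, when an alternative explanation (confederates, shared
# systematic error) has prior probability p_alt.

def p_own_error(n_disagree, p_error_prior=0.001, p_alt=0.05):
    # If my perception is wrong, everyone else disagreeing is expected.
    like_if_wrong = 1.0
    # If my perception is right, n honest subjects all being wrong is
    # astronomically unlikely -- unless the alternative explanation holds.
    like_if_right = p_alt + (1 - p_alt) * 0.01 ** n_disagree
    num = p_error_prior * like_if_wrong
    return num / (num + (1 - p_error_prior) * like_if_right)

print(p_own_error(3))   # ~0.0196
print(p_own_error(15))  # ~0.0196 -- the extra 12 conformers add almost nothing
```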

Since there are insanely many slightly different outcomes, the terminal value is also too big to be considered. So it's useless to pose the question of distinguishing terminal values from instrumental values, since you can't reason about specific terminal values anyway. All the things you can reason about are instrumental values.

Eliezer: An intuitive guess is non-scientific but not non-rational

It doesn't affect my point; but do you argue that intuitive reasoning can be made free of bias?

2[anonymous]
An intuitive guess can be made without biasing the result (accept or reject), so long as one does not privilege the hypothesis.

Such speech could theoretically perform a "bringing to attention" function. Chunks of "bringing to attention" are equivalent to any kind of knowledge; it's just an inefficient form, and the abnormality of that speech lies in its utter inefficiency, not in a lack of content. People can bear such talk because similar inefficiency can be present in other talk in different forms. Inefficiency also makes it much simpler to obfuscate the evasion of certain topics.

Phlogiston is not necessarily a bad thing. Concepts are utilized in reasoning to reduce and structure the search space. A concept can be placed in correspondence with a multitude of contexts, selecting a branch with the required properties, which correlate with its usage. In this case, an active 'phlogiston' concept correlates with the presence of fire. Unifying all processes that exhibit fire under this tag can help in the development of induction contexts. The process of this refinement includes examination of protocols that include the 'phlogiston' concept. It's just not a causal model that can rigorously predict nontrivial results through deduction.
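A loose sketch of that idea (entirely my own construction, with made-up data): a concept acts as a tag that prunes the space of candidate processes to the branch whose properties correlate with the concept's usage.

```python
# Sketch: a concept as a tag that restricts the search space of processes.

processes = [
    {"name": "combustion",  "exhibits_fire": True},
    {"name": "rusting",     "exhibits_fire": False},
    {"name": "calcination", "exhibits_fire": True},
    {"name": "freezing",    "exhibits_fire": False},
]

def activate(concept_property, candidates):
    """Restrict the search space to processes matching the active concept."""
    return [p for p in candidates if p[concept_property]]

# An active 'phlogiston' concept correlates with the presence of fire:
print([p["name"] for p in activate("exhibits_fire", processes)])
# ['combustion', 'calcination']
```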

0tristanhaze
More than six years late, but better late than never... 'Concepts are utilized in reasoning to reduce and structure search space' - anyone have any references or ideas for further developments of this line of thought? Seems very interesting and related to the philosophical idea of abduction or inference to the best explanation. (Perhaps the relation is one of justification.) Also, since I find the OP compelling despite this point, I would be interested to see how far they can be reconciled. My guess, loosely expressed, is that the stuff in Eliezer's OP above about the importance of good bookkeeping to prevent update messages bouncing back is sound, and should be implemented in designing intelligent systems, but some additional, more abductionesque process could be carefully laid on top. And when interpreting human reasoning, we should perhaps try to learn to distinguish whether, in a given case of a non-predictive empirical belief, the credence comes from bad bookkeeping, in which case it's illegitimate, or an abductive process which may be legitimate, and indeed may be legitimated along the lines of Vladimir's tantalizing hint in the parent comment.

Just a question of bookkeeping: online confidence updating can be no less misleading, even if all facts are processed once. A million negative arguments can have a negligible total effect if they happen to be dependent in a non-obvious way.
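A toy illustration (my own, with arbitrary numbers): if a million negative arguments secretly restate one underlying consideration, they contribute evidence roughly once, not a million times, so naive one-pass online updating wildly overshoots.

```python
# Sketch: naive per-argument updating vs. recognizing that the arguments
# are dependent and counting the shared evidence once.

import math

def posterior(prior, total_llr):
    """Posterior probability from a prior and a total log-likelihood ratio."""
    odds = math.exp(math.log(prior / (1 - prior)) + total_llr)
    return odds / (1 + odds)

llr_per_argument = -0.1   # each argument, if treated as independent evidence
n_arguments = 1_000_000

naive_llr = n_arguments * llr_per_argument   # counts every argument separately
correlated_llr = llr_per_argument            # all restate one consideration

print(posterior(0.5, naive_llr))        # ~0.0: false certainty from double-counting
print(posterior(0.5, correlated_llr))   # ~0.475: negligible total effect
```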

Some ungrounded concepts can produce your own behavior, which can itself be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something because you experience yourself acting in a way consistent with believing in it. This can define an intrinsic goal system, a point in mind design space as you call it. So one can't abolish all such concepts, only resist acquiring them.