Comment author: Rolf_Nelson2 17 March 2008 12:53:11PM 2 points

Doug S., we get the point: nothing that Ian could say would pry you away from your version of reductionism; there's no need to make any more posts with Fully General Counterarguments. "I defy the data" is a position, but it does not serve as an explanation of why you hold that position, or why other people should hold it as well.

I would agree with reductionism, if phrased as follows:

1. When entity A can be explained in terms of another entity B, but not vice versa, it makes sense to say that A "has less existence" than the fundamental entities that do exist. That is, we can still have A in our models, but we should be aware that it's only a "cognitive shortcut", like a map that draws a road as a homogeneous black line instead of showing microscopic detail.

2. The number of fundamental entities is relatively small, as we live in a lawful universe. If we see a mysterious behavior, our first guess should be that it's probably a result of the known entities, rather than a new entity. (Occam's razor)

3. Reductionism, as a philosophy, doesn't itself say what these fundamental entities are; they could be particles, or laws of nature, or 31 flavors of ice cream. If every particle were composed of smaller particles, there would be no "fundamental particle", but the law that states how this composition occurs would still be fundamental. If we discovered tomorrow that unicorns exist and are indivisible (rather than made of quarks), that would be a huge surprise requiring a rewrite of all known laws of physics, but it would not falsify reductionism; it would just mean that a "unicorn field" (which seems to couple quite strongly with the Higgs boson) gets added to our list of fundamental entities.

4. Reductionism is a logical/philosophical position rather than an empirical observation, and can't be falsified as long as Occam's razor holds.

Comment author: Rolf_Nelson2 14 March 2008 03:38:00AM 0 points

if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.

That's a self-consistent system; it just seems more useful and intuitive to me to say that:

"P" is true => P
"Bob believes P" is true => Bob believes P

but not

"Bob's belief in P" is true => ...er, what exactly?

Also, I frequently need to attach probabilities to facts, where probability ranges over [0, 1] (or, in Eliezer's formulation, (-inf, inf)). But it's rare for me to have any reason to attach probabilities to probabilities. On the flip side, I attach scoring rules in the range (-inf, 0] to probability calculations, but not to facts. So in my current worldview, facts and probabilities are tentatively "made of different substances".
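The three ranges mentioned can be made concrete. A minimal sketch (my own illustration, not from the original discussion): probabilities live in [0, 1], the log-odds transform maps them into (-inf, inf), and the logarithmic scoring rule assigns a calculation a score in (-inf, 0]:

```python
import math

def log_odds(p):
    """Map a probability in (0, 1) to log-odds in (-inf, inf)."""
    return math.log(p / (1 - p))

def log_score(p, happened):
    """Logarithmic scoring rule in (-inf, 0]: near 0 for a confident
    correct forecast, approaching -inf as a confident forecast fails."""
    return math.log(p if happened else 1 - p)

print(log_odds(0.5))         # 0.0 -- even odds
print(log_score(0.9, True))  # about -0.105 -- good forecast
print(log_score(0.9, False)) # about -2.303 -- bad forecast
```

Note the asymmetry the comment points at: probabilities get scored, but the facts themselves never do.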

Comment author: Rolf_Nelson2 13 March 2008 03:41:48AM 0 points

Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong".

If you disagree, consider this: suppose he wins the lottery by chance after all. Can you still claim the next day that his belief was objectively wrong?
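One way to cash this out is with a scoring rule: a forecast of 0.5 on a lottery ticket scores badly in expectation whether or not that particular ticket happens to win. A small simulation sketch (the lottery odds here are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)
P_WIN = 1e-6        # hypothetical true odds of Bob's ticket winning
BOB_FORECAST = 0.5  # Bob's claimed chance of winning

def brier(forecast, outcome):
    """Brier score: squared error between forecast and outcome; lower is better."""
    return (forecast - float(outcome)) ** 2

trials = 100_000
bob_total = 0.0
calibrated_total = 0.0
for _ in range(trials):
    won = random.random() < P_WIN
    bob_total += brier(BOB_FORECAST, won)
    calibrated_total += brier(P_WIN, won)

# A forecast of 0.5 scores 0.25 whether the ticket wins or loses,
# so even a lucky win never vindicates it in expectation.
print(bob_total / trials)         # 0.25
print(calibrated_total / trials)  # roughly 0
```

On this view, the day-after win changes nothing about the score Bob's forecast earns on average, which is one way to keep "unreasonable" separate from "falsified by the outcome".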

Comment author: Rolf_Nelson2 09 March 2008 06:29:46PM 7 points

Most of the proposed models in this thread seem reasonable.

I would write down all the odd things people say about free will, pick the simplest model that explained 90% of it, and then see if I could make novel and accurate predictions based on the model. But, I'm too lazy to do that. So I'll just guess.

Evolution hardwired our cognition to contain two mutually exclusive categories; call them "actions" and "events."

"Actions" match: [rational, has no understandable prior cause]. "Rational" means they are often influenced by reward and punishment. Synonyms for 'has no understandable prior cause' include 'free will', 'caused by élan vital', and 'unpredictable, at least by the prediction process we use for things-in-general like rocks'.

"Events" match: [not rational, always directly caused by some previous and intuitively comprehensible physical event or action]. If you throw a rock up, it will come back down, no matter how much you threaten or plead with it.

We are born axiomatically believing that actions in this innate 'free will' category have no physical cause. In this model, symptoms might include:

* believing there is an interesting category called 'free will'

* believing that whether humans belong to this 'free will' category is an interesting question to argue about

* believing that if we don't have 'free will', it's wrong to punish people

* believing that if we don't have 'free will', we are marionettes, zombies, or in some other way 'subhuman'.

* believing that if we don't understand what causes a thunderstorm or a crop failure or an eclipse, it is the will of a rational agent who can be appeased through the appropriate sacrifices

* believing that if our actions are caused by God's will, fate, spiritual possession, an ancient prophecy, Newtonian dynamics, or some other simple and easily-understandable cause, we do not have 'free will'. However, if our actions are caused by an immaterial soul, spooky quantum mechanics, or anything else that 'lives in another dimension beyond the grasp of intuitive reason', then we retain 'free will'.

I'm not particularly confident my model is correct; the human capacity to spot patterns where there are none works against me here.

Comment author: Rolf_Nelson2 27 February 2008 03:04:24AM 1 point

Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa

There is nothing in the mind that is not first in the census.

Comment author: Rolf_Nelson2 27 February 2008 01:53:22AM 3 points

Another solid essay.

To form accurate beliefs about something, you really do have to observe it.

How do we model the fact that I know the Universe was in a specific low-entropy state (spacetime was flat) shortly after the Big Bang? It's a small region in the phase space, but I don't have enough bits of observations to directly pick that region out of all the points in phase space.

Comment author: Rolf_Nelson2 21 February 2008 02:06:07PM 5 points

Frank, tcpkac:

What do you think of, say, philosophers' endless arguments about what the word "knowledge" *really* means? This seems to me one example where many philosophers don't understand that the word doesn't have any intrinsic meaning apart from how people define it.

If Bob sees a projection of an oasis and thinks there's an oasis, but there's a real oasis behind the projection that creates a projection of itself as a Darwinian self-defense mechanism, does Bob "know" there's an oasis? Presumably Eliezer would ask, "for what purpose do we want to answer the question?" However, many philosophers would prefer to argue unconstructively about which semantics are "correct". So from my personal experience, I don't think Eliezer's attacking a straw man here.

A similar example in grammar: many people think usage of "ain't" is somehow objectively wrong, rather than just a feature of an uncommon and frowned-upon dialect.

In response to Disguised Queries
Comment author: Rolf_Nelson2 11 February 2008 12:51:27AM 7 points

What's really at stake is an atheist's claim of substantial difference and superiority relative to religion

Often semantics matter because laws and contracts are written in words. When "Congress shall make no law respecting an establishment of religion", it's sometimes advantageous to claim that you're not a religion, or that your enemy is a religion. If churches get preferential tax treatment, it may be advantageous to claim that you're a church.

In response to Trust in Bayes
Comment author: Rolf_Nelson2 31 January 2008 04:05:12AM 0 points

@Peter As a human, I can't introspect and look at my utility function, so I don't really know if it's bounded or not. If I'm not absolutely certain that it's bounded, should I just assume it's unbounded, since there is much more at stake in this case?

This has been gnawing at my brain for a while. If the useful Universe is temporally unbounded, then utility arguably goes to aleph-null. Some MWI-type models and Ultimate-ensemble models arguably give you an uncountable number of copies of yourself; does that count as greater than aleph-null, or less than aleph-null (because we normalize to a measure on [0, 1] that "looks" small)? What if someone claims "the Universe is spatially finite, but everyone has an inaccessible cardinal number of invisible copies of themselves?" Given my ignorance and confusion, maybe it makes sense to pick the X most credible utility measures, and give them each an "equal vote" in deciding what to do next at each stage, as a current interim measure. This horrendous muddled compromise is itself non-utilitarian and sub-optimal, but I personally don't have a better answer at the moment.
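The "equal vote" compromise can be sketched concretely: each candidate utility measure casts one vote for whichever action it ranks highest, and the plurality winner is chosen. The measure names and numbers below are made up purely for illustration:

```python
from collections import Counter

actions = ["save", "spend", "donate"]

# Hypothetical candidate utility measures; all values are invented.
utility_measures = {
    "bounded":   {"save": 0.6, "spend": 0.2, "donate": 0.9},
    "unbounded": {"save": 3e9, "spend": 1e2, "donate": 2e9},
    "average":   {"save": 0.4, "spend": 0.5, "donate": 0.7},
}

# Each measure votes for its top-ranked action.
votes = Counter(max(actions, key=scores.get) for scores in utility_measures.values())
winner, _ = votes.most_common(1)[0]
print(winner)  # donate -- two of the three measures rank it first
```

One-measure-one-vote sidesteps the problem that the measures' scales aren't comparable (note the unbounded measure's huge numbers carry no extra weight), at the cost of being exactly the non-utilitarian muddle described above.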

I used to think of my utility function as unbounded, and then after Eliezer's "Pascal's Mugging" post I thought of it as probably bounded. This decision changed the way I live my life... not at all. However, I can understand that if you want to instruct an AGI, you may not be able to allow yourself the luxury of such blissful agnosticism.

@Stephen An intuition in the opposite direction (which I think Rolf agrees with) is that once you reach giant tentacled squillions of units of fun, specifying when/where it happens takes just as much algorithmic complexity as making up a mind from scratch (or interpreting it from a rock).

Alas, I'm not completely sure what you're talking about; the secret decoder ring says "fun = utility", but I think I require an additional cryptogram clue. Is this a UDASSA reference?

In response to Trust in Bayes
Comment author: Rolf_Nelson2 30 January 2008 01:04:00AM 1 point

other way around, I mean.
