Comment author: [deleted] 23 August 2013 08:08:53AM 2 points [-]

Lumifer, you are falling prey to several of the traps detailed in A Human's Guide to Words. So far I have basically parroted EY's 102 material.

Meditation: Taboo "Knowledge" and describe your relation with riding a bicycle.

Meditation: Taboo "Knowledge" and describe your relation with some field of science you are proficient in.

Meditation: Taboo "Knowledge" and describe a religious person's views on god.

...

...

You and I both know what 'knowledge' is in everyday speech. The problem is what constitutes 'knowledge' in extreme situations.

The thing is that "Knowledge" is ambiguous in everyday speech. We misunderstood each other when I initially answered your question: I thought you were speaking about the tired old philosophical issue that has been discussed for ages.

The answer in the Philosophical Issue of Knowledge is: "You philosophers are all morons; you are using the same word to mean different things."

Plato has a famous definition of "Knowledge": Justified True Belief. Notice how he has moved the problem of explaining "Knowledge" into the problem of explaining "Justification." (And "True." And "Belief." None of these concepts was actually well explained while Plato was alive and kicking.)

"Knowledge" can also be a synonym for "Skill," such as knowing how to ride a bicycle. Notice how the grammatical construction "knowing how to <verb phrase>" is different from "knowing <noun phrase> to be true." One could argue that they are the same thing, but I think they are not. So we have at least two types of everyday knowledge: Procedural Knowledge (how to do stuff) and Object Knowledge (facts and stuff).

The distinction between the two is obvious when you really taboo it: Procedural knowledge is like a tool. It is a means to an end, an extension of your primitive action set. Having lots of procedural knowledge is a boon in Instrumental Rationality, but most skills are irrelevant to Epistemic Rationality. (Riding a bicycle will only very rarely tell you the secrets of the universe.)

Object Knowledge, or Facts, are thingies in your mental model of how the world works. This mental model is what you use when you want to predict how the world is going to behave in the future, so that you can make plans. (Because you have goals you want to attain.)

Your world model is updated automatically by processes which you do not control. A sufficiently advanced agent might be able to exercise some control, at least at the design level, over its updating algorithms. In short, you take in sensory data, crunch some numbers, and out comes a Bayesian-esque update.

So my standing viewpoint is: I don't care whether you call it "knowledge" or "hunch" or "divine inspiration"; I care about what your probability distribution over future events is. I don't care whether you call it "skills" or "knowledge" or "talent"; I care about what sort of planning algorithm you implement.

And on the topic of subjectivity: If I have trained skills or observed evidence different from you, then yes we have subjectively different "knowledge." I for instance know 12 programming languages and intimate facts about my significant other.

But the thing is that there is only One Correct Way of updating on evidence: Bayes' Theorem. If you deviate from it, you will have less than optimal predictive power.
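The update rule itself is short enough to write down. Here is a minimal sketch in Python; the sensitivity/specificity numbers are purely illustrative, not anything from this discussion:

```python
# Single Bayesian update: P(H | E) from the prior P(H) and the two
# likelihoods P(E | H) and P(E | ~H).
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior probability P(H | E)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Example: a test that is 90% sensitive and 95% specific, applied to a
# hypothesis with a 1% prior.
posterior = bayes_update(0.01, 0.90, 0.05)
print(round(posterior, 4))  # 0.1538
```

Notice that even strong evidence leaves the posterior well below 50% when the prior is low; deviating from this arithmetic is exactly what "less than optimal predictive power" means.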

I really suggest you go and read some of the core sequences to refresh this.

In response to comment by [deleted] on What Bayesianism taught me
Comment author: skepsci 03 September 2013 03:16:11PM 1 point [-]

I think the dichotomy between procedural knowledge and object knowledge is overblown, at least in the area of science. Scientific object knowledge is (or at least should be) procedural knowledge: it should enable you to A) predict what will happen in a given situation (e.g. if someone drops a mento into a bottle of diet coke) and B) predict how to set up a situation to achieve a desired result (e.g. produce pure L-glucose).

Comment author: RobbBB 27 August 2013 08:51:05PM *  -1 points [-]

I have no idea what these algorithms might be, and neither do you. Accordingly, I don't see any basis for speculating about what they will allow.

Well, let's think about whether we have a proof of concept. What's an example of a generalization about high-complexity algorithms that might show most of them to be easily usefully compressed, for an observer living inside one? At this point it's OK if we don't know that the generalization holds; I just want to know what it could even look like to discover that a universe that looks like ours (as opposed to, say, one that looks like a patchwork or a Boltzmann Braintopia) is the norm for high-complexity sapience-permitting worlds.

ETA: Since most conceivable universes are very very complicated, I'd agree that we probably live in a very very complicated universe, if it could be shown that our empirical data doesn't strongly support nomic simplicity.

The default rule is that "man is the measure of all things," so presumably you are using these words in the context of what is short and simple for the human brain.

No, I'm saying it's short and simple relative to the number of ways a universe could be, and short and simple relative to the number of ways a life-bearing universe could be. There's no upper bound on how complicated a universe could in principle be, but there is a lower bound, and our physics is, even in human terms, not far off from that lower bound.

Comment author: skepsci 03 September 2013 02:55:31PM 3 points [-]

Humans have a preference for simple laws because those are the ones we can understand and reason about. The history of physics is a history of coming up with gradually more complex laws that are better approximations to reality.

Why not expect this trend to continue with our best model of reality becoming more and more complex?

Comment author: RobbBB 27 August 2013 08:15:03PM -1 points [-]

I don't think that most high-complexity algorithms for building a life-permitting observable universe would allow a theory as simple as human physics to approximate the algorithm as well as our observable universe does.

Do you think the observable universe is a lot more complicated than it appears?

Comment author: skepsci 03 September 2013 02:51:56PM 2 points [-]

This is trivially false. Imagine, for the sake of argument, that there is a short, simple set of rules for building a life-permitting observable universe. Now add an arbitrary, small, highly complex perturbation to that set of rules. Voila: infinitely many high-complexity algorithms which can be well approximated by low-complexity algorithms.
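A toy numerical version of this argument (all names illustrative; a seeded pseudorandom table stands in for a genuinely incompressible perturbation, which of course it isn't):

```python
import random

def simple_law(x):
    # Stands in for the "short, simple set of rules."
    return x * x

# An arbitrary perturbation table: in the real argument this would be
# algorithmically incompressible, making the perturbed law high-complexity.
random.seed(0)
noise_table = [random.uniform(-1, 1) for _ in range(1000)]

def perturbed_law(x):
    # Simple law plus a tiny, "complex" perturbation.
    return simple_law(x) + 1e-9 * noise_table[x % 1000]

# The simple law approximates the perturbed one essentially perfectly.
max_error = max(abs(perturbed_law(x) - simple_law(x)) for x in range(1000))
print(max_error < 1e-8)  # True
```

The point is only that high description-length and poor approximability by simple theories are different properties: scaling the perturbation down makes the approximation arbitrarily good without reducing the description length at all.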

Comment author: skepsci 03 September 2013 02:45:46PM 3 points [-]

I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service representatives or bureaucrats. By giving the agent agency, it's easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make helping you with your problem (which is their job, after all) seem like less effort than any other way of getting rid of you. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.

Comment author: Vaniver 25 August 2013 05:02:48AM *  23 points [-]

One of my habits while driving is to attempt to model the minds of many of the drivers around me (in situations of light traffic). One result is that when someone does something unexpected, my first reaction is typically "what does he know that I don't?" rather than "what is that idiot doing?" From talking to other drivers, this part of my driving seems abnormal.

In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me.

One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?

With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"

Do you get more of what you want by blaming people or assigning fault?

Comment author: skepsci 03 September 2013 02:37:30PM 2 points [-]

I do the same sort of thinking about the motivations of other drivers, but it seems strange to me to phrase the question as "what does he know that I don't?" More often than not, the cause of strange driving behaviors is lack of knowledge, confusion, or just being an asshole.

Some examples of this I saw recently include: 1) a guy who immediately cut across two lanes of traffic to get into the exit lane, then just as quickly darted out of it at the beginning of the offramp; 2) a guy on the freeway who slowed to a crawl despite traffic moving quickly all around him; 3) that guy who constantly changes lanes in order to move just slightly faster than the flow of traffic.

I'm more likely to ask "what do they know that I don't?" when I see several people ahead of me act in the same way that I can't explain (e.g. many people changing lanes in the same direction).

Comment author: Wei_Dai 06 June 2012 02:15:46AM 3 points [-]

The idea is: however we define P, how can we be sure that there isn't some kind of uncomputable physics that would allow someone to build a device that can find the lexicographically least object x such that P(x) < 1/3^^^3 and present it to us?

Comment author: skepsci 18 August 2013 04:56:25PM *  -1 points [-]

If there's some uncomputable physics that would allow someone to build such a device, we ought to redefine what we mean by computable to include whatever the device outputs. After all, said device falsifies the Church-Turing thesis, which forms the basis for our definition of "computable".

Comment author: RichardKennaway 14 July 2013 05:50:39PM 0 points [-]

You just run the Prolog (or whatever logic system implements all this), and it either terminates with a failure or does not terminate within the time allowed by the competition. The time limit renders everything practically decidable.

Comment author: skepsci 15 July 2013 05:14:34AM 2 points [-]

Perhaps it terminates within the allowed time, proving that A defects and B cooperates, even though the axioms were inconsistent and one could equally have proved that A cooperates and B defects.

Comment author: RichardKennaway 14 July 2013 07:40:34AM 0 points [-]

Deem both of the agents to have not terminated?

Comment author: skepsci 14 July 2013 02:21:55PM 1 point [-]

How will you know? The set of consistent axiom systems is undecidable. (Though the set of inconsistent axiom systems is computably enumerable.)

Comment author: RichardKennaway 13 July 2013 09:19:05PM 0 points [-]

Thinking about this further, given enough assertions, there's no need to have the programs at all. Let the agents be just the assertions that they make about themselves. Each agent would consist of a set of axioms, perhaps written in Prolog, about who they cooperate or defect against, and running a contest between two agents would just be a matter of taking the union of the two sets of axioms and attempting to deduce (by the standard Prolog proof-search) what choice each one makes.

Comment author: skepsci 14 July 2013 05:10:51AM 2 points [-]

What happens if the two sets of axioms are individually consistent, but together are inconsistent?
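A toy propositional version of the worry (all names illustrative): each "agent" is a set of clauses, each set satisfiable on its own, but their union is contradictory, so a proof search over the combined axioms can derive anything.

```python
from itertools import product

def satisfiable(clauses, variables):
    """Brute-force SAT check. A clause is a set of literals like 'A' or '~A'."""
    for assignment in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, assignment))
        def holds(lit):
            return not model[lit[1:]] if lit.startswith("~") else model[lit]
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

agent_a = [{"A"}]    # A's axioms assert A ("I cooperate with B", say)
agent_b = [{"~A"}]   # B's axioms assert the negation
variables = ["A"]

print(satisfiable(agent_a, variables))            # True
print(satisfiable(agent_b, variables))            # True
print(satisfiable(agent_a + agent_b, variables))  # False
```

With the union unsatisfiable, the deduction step Richard describes would "prove" whichever outcome the proof search happens to reach first.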

Comment author: RichardKennaway 13 July 2013 09:13:30PM *  0 points [-]

What's wrong with "I will cooperate with anyone who verifiably asserts that they cooperate with me"?

The word "me". By Rice's theorem, they can't tell that they're dealing with someone computationally equivalent to me, and there's no other sense of my "identity" that can be referred to.

Although that could be added. Have all competitors choose a unique name and allow them to verifiably claim to be the owner of their name. Then "I will cooperate with anyone who verifiably asserts that they cooperate with me" can work if "me" is understood to mean "the entity with my name." Discovering a blob's name would have to be a second primitive operation allowed on blobs.

ETA: I think I missed your point a little. Yes, a cooperate-bot verifiably asserts that it cooperates with everyone, so it is an entity that "verifiably asserts (something that implies) that they will cooperate with me." And there will be other bots of which this is true. But I'm not sure that I can verifiably express that I will cooperate with the class of all such bots, because "the class of all such bots" looks undecidable.

Comment author: skepsci 14 July 2013 04:57:43AM *  2 points [-]

Your source code is your name. Having an additional name would be irrelevant. It is certainly possible for bots to prove they cooperate with a given bot, by looking at that particular bot's source. It would, as you say, be much harder for a bot to prove it cooperates with every bot equivalent to a given bot (in the sense of making the same cooperate/defect decisions vs. every opponent).

Rice's theorem may not be as much of an obstruction as you seem to indicate. For example, Rice's theorem doesn't prohibit a bot which proves that it defects against all defectbots, and cooperates with all cooperatebots. Indeed, you can construct an example of such a bot. (Rice's theorem would, however, prevent constructing a bot which cooperates with cooperatebots and defects against everyone else.)
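A minimal sketch of the distinction (bot names hypothetical, with literal source-matching standing in for proof search): recognizing the *particular* programs CooperateBot and DefectBot is easy; Rice's theorem only blocks deciding the behavior of every program *equivalent* to them.

```python
import inspect

# Bots receive their opponent's source code and return "C" or "D".
def cooperate_bot(opponent_source):
    return "C"

def defect_bot(opponent_source):
    return "D"

def discriminator_bot(opponent_source):
    # Cooperates with the literal CooperateBot, defects against the
    # literal DefectBot, by syntactic comparison of source text.
    if opponent_source == inspect.getsource(cooperate_bot):
        return "C"
    if opponent_source == inspect.getsource(defect_bot):
        return "D"
    return "D"  # default: defect against anything unrecognized

print(discriminator_bot(inspect.getsource(cooperate_bot)))  # C
print(discriminator_bot(inspect.getsource(defect_bot)))     # D
```

The last branch is where Rice's theorem bites: a behaviorally identical cooperatebot with different source text falls through to the default, and no computable bot can close that gap for all equivalent programs.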
