All of Ape in the coat's Comments + Replies

For example, if I use self-sampling to estimate the number of seconds in the year, I will get a correct answer of around several tens of millions. But using a word generator will never output a word longer than 100 letters.

Using the month of your birth to estimate the number of seconds in the year also won't work well, unless you multiply it by the number of seconds in a month.

Likewise here. You can estimate the number of months in a year by the number of letters in the word and then multiply it by the number of seconds in a month.

I didn't understand your i

... (read more)
2avturchin
I meant that if I know only the total number of seconds which have passed since the beginning of the year (around 15 million as of today) – and I want to predict the total number of seconds in each year. No information about months. As most people are born randomly, and we know it, we can use my date of birth as random. If we have any suspicions about non-randomness, we have to take them into account.

You chose the word length generator as you know that the typical length of words is 1-10. Thus not random.

This is not relevant to my point. After all, you also know that a typical month is 1-12.

No, the point is that I specifically selected a number via an algorithm that has nothing to do with sampling months. And yet your test outputs a positive result anyway. Therefore your test is unreliable.

I didn't reject any results – it works in any test I have imagined

That's exactly the problem. Essentially you are playing a 2,4,6 game, got no negative result yet and are ... (read more)

1avturchin
For example, if I use self-sampling to estimate the number of seconds in the year, I will get a correct answer of around several tens of millions. But using a word generator will never output a word longer than 100 letters. I didn't understand your idea here:

First of all, your experimental method can really benefit from a control group. Pick a setting where a thing is definitely not randomly sampled from a set. Perform your experiment and see what happens.

Consider: I generated a random word using this site https://randomwordgenerator.com/

This word turned out to be "mosaic". It has 6 letters. Let's test whether its length is randomly sampled from the number of months in a year.

As 6*2=12, this actually works perfectly, even better than estimating the number of months in a year based on your birth month!
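To make the control-group point concrete, here is a minimal Python sketch (my own illustration, not from the original comments). The estimator is the usual "double your sample" rule: if k is drawn uniformly from {1..N}, then 2k estimates N. A word length is not drawn from the months of the year, yet it can still pass the test:

```python
import random

# Self-sampling estimator: if k is uniform on {1..N}, then 2k estimates N.
# For months, N = 12, and the estimate is right on average (E[2k] = N + 1).
N = 12
estimates = [2 * random.randint(1, N) for _ in range(100_000)]
print(sum(estimates) / len(estimates))  # ~13, close to N = 12

# Control: a word length is NOT sampled from the months of the year,
# yet 2 * len("mosaic") == 12 still "confirms" the method.
print(2 * len("mosaic"))  # 12 -- a false positive, which is the point
```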

It also ... (read more)

5avturchin
Don't agree. You chose the word length generator as you know that the typical length of words is 1-10. Thus not random. I didn't reject any results – it works in any test I have imagined, and I also didn't include several experiments which have the same results, e.g. the total number of days in a year based on my birthday (got around 500) and the total number of letters in the English alphabet (got around 40). Note that the alphabet letter count is not cyclical, as well as my distance to the equator. Do not understand this: If I were born on the 1st of January, I would get a year duration of 2 days, which is very wrong.

I personally didn't expect Trump to do any tariffs at all

Just curious, how come? Were you simply not paying attention to what he was saying? Or were you not believing his promises?

2Rafael Harth
Trump says a lot of stuff that he doesn't do, the set of specific things that presidents don't do is larger than the set of things they do, and tariffs didn't even seem like they'd be super popular with his base if in fact they were implemented. So "~nothing is gonna happen wrt tariffs" seemed like the default outcome with not enough evidence to assume otherwise. I was also not paying a lot of attention to what he was saying. After the election ended, I made a conscious decision to tune out of politics to protect my mental health. So it was a low-information take -- but I don't know if paying more attention would have changed my prediction. I still don't think I actually know why Trump is doing the tariffs, especially to such an extreme extent.

I think "mediocre" is a quite appropriate adjective when describing a thing that we had high hopes for, but now received evidence, according to which while the thing technically works, it performs worse than expected, and the most exciting use cases are not validated.

I indeed used a single example here, so the strength of the evidence is arguable, but I don't see why this case should be an outlier. I could've searched for more, like this one, which is particularly bad:

In any case, you can consider this post my public prediction that othe... (read more)

I think the problem here is that you do not quite understand the problem.

There is definitely some kind of misunderstanding going on, and I'd like to figure it out.

It's not that we "imagine that we've imagined the whole world, do not notice any contradictions and call it a day". 

How is it not the case? Quoting you from here:

When you are conditioning on an empirical fact, you are imagining a set of logically consistent worlds where this empirical fact is true and asking yourself about the frequency of other empirical facts inside this set.

How do you know which ... (read more)

In this post I've described a unified framework that allows one to reason about any type of uncertainty, be it logical or empirical. I would appreciate engagement from people who think that logical uncertainty is still unsolved.

Are you arguing that the distinction between objective and subjective is "very unhelpful," because the state of people's subjective beliefs is technically an objective fact of the world?

It's unhelpful due to an implicit (and in our case somewhat explicit) assumption that "subjective" and "objective" are in opposition to each other. That it's two different magisteria and things are either one or the other.

why don't you argue that all similar categorizations are unhelpful, e.g. map vs. territory

The map and territory framework lacks this assumption. I... (read more)

1Knight Lee
To me, it looks like the blogger (Coel) is trying to say that morality is a fact about what we humans want, rather than a fact of the universe which can be deduced independently from what anyone wants. In my opinion, Coel makes this clear when he explains, "Subjective does not mean unimportant." "Subjective does not mean arbitrary." "Subjective does not mean that anyone’s opinion is “just as good”." "Separate magisteria" seems to refer to dualism, where people believe that their consciousness/mind exists outside the laws of physics, and cannot be explained by the laws of physics. But my opinion is Coel didn't imply that subjective facts are a "separate magisterium" in opposition to objective facts. He said that subjective morals are explained by objective facts: "Our feelings and attitudes are rooted in human nature, being a product of our evolutionary heritage, programmed by genes. None of that is arbitrary." But I'm often wrong about these things, don't take me too seriously :/

This debate seems hampered by a lack of clarity on what “objective” and “subjective” moralities are.

Absolutely.

Coyne gave a sensible definition of “objective” morality as being the stance that something can be discerned to be “morally wrong” through reasoning about facts about the world, rather than by reference to human opinion.

That's a poor definition. It tries to oppose facts about the world to human opinions, while whether humans have particular opinions or not is also a matter of facts about the world.

The fault here lies with the terms themselves. Such dyc... (read more)

-1TAG
You can claim that subjective attitudes are still part of reality ontologically, but the point is that they function differently epistemologically. Opinions and beliefs and truths and falsehoods are all made of atoms, but all function differently. There are potentially as many subjective attitudes as there are people, and they are variable across time as well. The arbitrariness and lack of coherence is what causes the problems. Objectivity is worth having in ethics, because a world in which prisoners have done something really wrong, and really deserve their punishment, is better than a world in which prisoners just have desires the majority don't like.
1Knight Lee
I'm not 100% sure I know what I'm talking about, but it feels like that's splitting hairs. Are you arguing that the distinction between objective and subjective is "very unhelpful," because the state of people's subjective beliefs is technically an objective fact of the world? In that case, why don't you argue that all similar categorizations are unhelpful, e.g. map vs. territory?

Yes, you are correct! Thanks for noticing it.

1Markvy
Did not expect you to respond THAT fast :)

Actually... I will say it: This feels like a fast rebranding of the Halting Problem, like without actually knowing what it implies.

Being able to rebrand an argument so that it talks about a different problem in a valid way is exactly what it means to understand it - not just repeating the same words in the same context that the teacher used, but generalizing it. We can go into the realm of second order logic and say that

For every property that at least one program has, a universal detector of this property has to itself have this property on at least some input.

M... (read more)

0milanrosko
I realized that your formulating the Turing problem in this way helped me a great deal in expressing the main idea. What I did: Logic -> Modular Logic -> Modular Logic Thought Experiment -> Human Logic -> Lambda Form -> Language -> Turing Form -> Application -> Human. This route is a one-way street... But if you have it in logic, you can express it also as Logic -> Propositional Logic -> Natural Language -> Step-by-step propositions where you can say either yea or nay. If you are logical you must arrive at the conclusion. Thank you for this.
0milanrosko
1. I will say that your rationale holds up in many ways, in some ways it doesn't. I give you that you won the argument. You are right mostly.
2. "Well, I'm not making any claims about an average LessWronger here, but between the two of us, it's me who has written an explicit logical proof of a theorem and you who is shouting "Turing proof!", "Halting machine!" "Godel incompletness!" without going into the substance of them." Absolutely correct. You won this argument too.
3. Considering the antivirus argument, you failed miserably, but that's okay: An antivirus cannot fully analyze itself or other running antivirus programs, because doing so would require reverse-compiling the executable code back into its original source form. Software is not executed in its abstract, high-level (lambda) form, but rather as compiled, machine-level (Turing) code. Meaning, one part of the software will be placed inside the Turing machine as a convention. Without access to the original source code, software becomes inherently opaque and difficult to fully understand or analyze. Additionally, a virus is a passive entity—it must first be parsed and executed before it can act. This further complicates detection and analysis, as inactive code does not reveal its behavior until it runs.
4. This is where it gets interesting. "Maybe there is an actual gear-level model inside your mind how all this things together build up to your conclusion but you are not doing a good job at communicating it. You present metaphors, saying that thinking that we are conscious, while not actually being conscious is like being a merely halting machine, thinking that it's a universal halting machine. But it's not clear how this is applicable." You know what. You are totally right. So here is what I really say: If the brain is something like a computer... It has to obey the rules of incompleteness. So "incompleteness" must be hidden somewhere in the setup. We have a map:

You basically left our other more formal conversation to engage in the critique of prose.

Not at all. I'm doing both. I specifically started the conversation in the post which is less... prose. But I suspect you may also be interested in engagement with the long post that you put so much effort into writing. If that's not the case - never mind, and let's continue the discussion in the argument thread.

These are metaphors to lead the reader slowly to the idea...

If you require flawed metaphors, what does it say about the idea?

Now you might say I have a psychotic fit

Fr... (read more)

So, essentially, it's like trying to explain to a halting machine—which believes it is a universal halting machine—that it is not, in fact, a universal halting machine.

Don't tell me what it's like. Construct the actual argument that is isomorphic to the Turing proof.

Let me give you an example. Let's prove that no perfect antivirus is possible. 

Let a perfect antivirus A be a program that receives some program P and its input X as arguments and returns 1 if P is malevolent on input X and 0 otherwise. And A itself is not malevolent on any input.

Suppose A e... (read more)
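The truncated derivation presumably proceeds by the standard diagonal construction. Here is a minimal Python sketch of it (the names make_adversary and naive_A are my illustrative assumptions, not anything from the comment):

```python
def make_adversary(A):
    """Given any candidate 'perfect antivirus' A(P, X) -> 0 or 1,
    build a program B that A must misclassify on some input."""
    def B(X):
        if A(B, X) == 1:       # A predicts B is malevolent on X...
            return "harmless"  # ...so B behaves harmlessly
        return "MALEVOLENT"    # A predicts harmless, so B misbehaves
    return B

# Try it against a trivial candidate that declares everything harmless:
def naive_A(P, X):
    return 0

B = make_adversary(naive_A)
print(B("any input"))  # "MALEVOLENT": naive_A is wrong about B, and the
                       # same construction defeats any computable A
```

Whatever A does, B does the opposite of A's verdict about B, so A cannot be a perfect antivirus.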

0milanrosko
Actually... I will say it: This feels like a fast rebranding of the Halting Problem, like without actually knowing what it implies. Why? Because it’s unintuitive — almost so much that it's false. How would a virus (B) know what the antivirus (A) predicts about B? That seems artificial. It can't query an antivirus software. No. Fuck that. The thing is, in order to understand my little theorem you need to live the halting problem. But it seems people here are not versed in classical computer science, only shouting "Bayesianism! Bayesianism!", which is proven to be effectively wrong by the sleeping beauty paradox (frequentist "thirders" get more money in simulations). btw I gave up on lesswrong completely. This feels more like where lesser nerds hang out after office. Sad, because the site has a certain beauty in its tidiness and structure.
1milanrosko
1. "Don't tell me what it's like." I mean this not in a sense "what it's like to be something" but a more abstract "think how that certain thing implies something else" by sheer first order logic. 2. Okay so this is you replaced halting machines with programs, and the halting oracle with a virus... and... X as an input?  ah no the virus is that what changes, it is the halting. Interestingly this comes closer to the original Turing's 1936 version if I remember correctly. Okay so... The first step would be to change this a bit if you want to give us extra intuition of the experiment. Because the G Zombie is a double Turing experiment. For that, we need to make it timeless, and more tangible. Often the Halting oracles is explained by throwing it and the virus chained together... like there are two halting oracles machines and a switch, interestingly this happens with the lambda term. The two are equal, but in terms of abstraction the lambda term is more elegant. Okay, now... it seems you understand it perfectly. Now we need to go a bit meta. Church-Turing-Thesis. This implies the following. Think of how you found something out with antivirus program. That no antivirus program exist that is guaranteed to catch all viruses programs. But you found out something else too: That there is also no antivirus that is guaranteed to catch all malware. AND there is no software to catch all cases... You continue this route... and land on "second order logic" There is no case of second order logic that catches all first-order-logic terms (virus). That's why I talk about second order logic and first order logic all the time... (now strictly speaking this is not precise, but almost. You can say first order is complete, second order is incomplete. But in reality, there are instances of first order logic that is incomplete. Formally first order is assumed to be complete) It is the antivirus and the virus. This is profound because it highlights a unique phenomenon: the more c

If you are familiar with it, just say “yes,” and we’ll proceed.

Yes.

-2milanrosko
Perfect. So, essentially, it's like trying to explain to a halting machine—which believes it is a universal halting machine—that it is not, in fact, a universal halting machine. From the perspective of a halting machine that mistakenly believes itself to be universal, the computation continues indefinitely. This isn’t exactly the original argument, but it’s very similar in its implications. However— My argument adds another layer of complexity: we are halting machines that believe we are universal halting machines. In other words, we cannot logically prove that we are not universal halting machines if we assume that we are. That’s why I don't believe that I don’t have qualia. But from a rational, logical perspective, I must conclude that I don’t, according to the principles of first-order logic. And this, I argue, is a profound idea. It explains why qualia feels real—even though qualia, strictly speaking, doesn’t exist within our physical universe. It's a fiction. But as I say this, I laugh—because I feel qualia, and I am not believing my own theory... Which, ironically, is exactly what Turing’s argument would predict.

In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?

Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?

Nothing is preventing us from designing a system consisting of a module generating a red-tinged video stream and image recognition software that looks at the stream and b... (read more)

0milanrosko
You basically left our other more formal conversation to engage in the critique of prose. *slow clap* These are metaphors to lead the reader slowly to the idea... This is not the Argument. The Argument is right there and you are not engaging with it. You need to understand the claim first in order to deconstruct it. Now you might say I have a psychotic fit, but earlier as we discussed Turing, you didn't seem to resonate with any of the ideas. If you are ready to engage with the ideas I am at your disposal.

 "This, if I'm not missing anything" Yes you This is called a Modus tollens. We are not concerned about the boolean of each of the statements.
 

1.
"if I'm not missing anything" it is likely you do let me explain. This is called a Modus Tollens. We are not concerned about Lisas logic as a boolean. We look each proposition its entirety. I advice you to read about Turings proof on the halting problem, because it is the same technique.

I struggle to parse this. In general the coherency of your reply is poor. Are you by chance using an LLM? 

I apprec... (read more)

1milanrosko
I'd like to thank you though for your engagement: This is valuable. You are making it clear how to better frame the problem.
-2milanrosko
So. Let us step back a bit. I am on your side. You are thinking critically, and maybe my tone was condescending. I read your reply carefully, and make proposals because I really believe we can achieve something. But be advised: This is a complicated issue. The problem at heart is self-referential (second-order logic). That is: Something might be true exactly because we can't think of it as being true, because it is connected to our ability to think whether something is true or not. I know it sounds complicated, but it is coherent. Now let's see... Okay, this is an easy one. The argument follows exactly the same syllogistic structure ("If this, then that") as Turing’s proof. On LLMs: Yes, I sometimes use LLMs for grammar checking—sometimes I don't. But know this: the argument I'm presenting is, formally, too complex for an LLM to generate on its own. However, an LLM can still be used—cautiously—as a tool for verification and questioning. Now, if you're not familiar with Turing’s 1936 proof, it's a fascinating twist in mathematics and logic. In it, Turing demonstrated that a Universal Turing Machine cannot decide all problems—that such a machine cannot be fully constructed. If you are unfamiliar with the proof, I strongly recommend looking it up. It is very interesting and is a prerequisite to understanding EN. I don’t believe EN can be fully understood without an intuitive grasp of how Turing employed ideas related to incompleteness. My argument is very similar in structure—so similar, in fact, that certain terms in my argument could be directly mapped to terms in Turing’s. Now, I’ll wait for your response. This isn't me being condescending. Rather, I’m realizing through these discussions that I often assume people are familiar with proof theory—when, in fact, there’s still groundwork to be laid. Otherwise... If you are familiar with it, just say “yes,” and we’ll proceed. For me, you already demonstrated that you are a critical thinker. You might be the s
1[comment deleted]

First of all, I think you are confusing incompleteness with having false beliefs.

A. Lisa is not a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would be complete: Not Possible ✗

C doesn't follow. Lisa would need to be able to formally prove that she is not a P-Zombie, not merely assert that she is not one, for completeness to be relevant at all. Even then it's not clear that Lisa would be complete - maybe there is some other statement that Lisa can't prove which, nonetheless, has to be true?

A. Lisa is a P-Zombie
B. Lisa asserts that she is not a

... (read more)
-9milanrosko

I think picking axioms is not necessary here and in any case inconsequential.

By picking your axioms you logically pinpoint what you are talking about in the first place. Have you read Highly Advanced Epistemology 101 for Beginners? I'm noticing that our inferential distance is larger than it otherwise should be.

"Bachelors are unmarried" is true whether or not I regard it as some kind of axiom or not.

No, you are missing the point. I'm not saying that this phrase has to be an axiom itself. I'm saying that you need to somehow axiomatically define your individual words... (read more)

2cubefox
I read it a while ago, but he overstates the importance of axiom systems. E.g. he wrote: [...] That's evidently not true. Mathematicians studied arithmetic for two thousand years before it was axiomatized by Dedekind and Peano. Likewise, mathematical statisticians have studied probability theory long before it was axiomatized by Kolmogorov in the 1930s. Advanced theorems preceded these axiomatizations. Mathematicians rarely use axiom systems in their work even if they are theoretically available. That's why it is hard to translate proofs into Lean code. Mathematicians just use well-known mathematical facts (that are considered obvious or already sufficiently established by others) as assumptions for their proofs. [...] That's obviously not necessary. We neither do nor need to "somehow axiomatically define" our individual words for "Bachelors are unmarried" to be true. What would these axioms even be? Clearly the sentence has meaning and is true without any axiomatization.

Yes, the meaning of a statement depends causally on empirical facts. But this doesn't imply that the truth value of "Bachelors are unmarried" depends less than completely on its meaning.

I think we are in agreement here.

My point is that if your picking of particular axioms is entangled with reality, then you are already using a map to describe some territory. And then you can just as well describe this territory more accurately.

I think the instrumental justification (like Dutch book arguments) for laws of epistemic rationality (like logic and probability) i

... (read more)
2cubefox
I think picking axioms is not necessary here and in any case inconsequential. "Bachelors are unmarried" is true whether or not I regard it as some kind of axiom. It seems the same holds for tautologies and probabilistic laws. Moreover, I think neither of them is really "entangled" with reality, in the sense that they are compatible with any possible reality. They merely describe what's possible in the first place. That bachelors can't be married is not a fact about reality but a fact about the concept of a bachelor and the concept of marriage.

Suppose you are not instrumentally exploitable "in principle", whatever that means. Then it arguably would still be epistemically irrational to believe that "Linda is a feminist and a bank teller" is more likely than "Linda is a bank teller". Moreover, it is theoretically possible that there are cases where it is instrumentally rational to be epistemically irrational. Maybe someone rewards people with (epistemically) irrational beliefs. Maybe theism has favorable psychological consequences. Maybe Pascal's Wager is instrumentally rational. So epistemic irrationality can't in general be explained with instrumental irrationality as the latter may not even be present.

I don't think we have to appeal to reality. Suppose the concept of bachelorhood and marriage had never emerged. Or suppose humans had never come up with logic and probability theory, and not even with language at all. Or humans had never existed in the first place. Then it would still be true that all bachelors are necessarily unmarried, and that tautologies are true. Moreover, it's clear that long before the actual emergence of humanity and arithmetic, two dinosaurs plus three dinosaurs already were five dinosaurs. Or suppose the causal history had only been a little bit different, such that "blue" means "green" and "green" means "blue". Would it then be the case that grass is blue and the sky is green? Of course not. It would only mean that we say "grass is

Ok, let me see if I'm understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don't know which, so you can't make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.

Basically yes. Strictly speaking it's not just any arbitrary digit, but any digit about whose value your knowledge works the same way as it does about the value of X.

For any digit you can exe... (read more)

3jwfiredragon
Oops, that's my bad for not double-checking the definitions before I wrote that comment. I think the distinction I was getting at was more like known unknowns vs unknown unknowns, which isn't relevant in platonic-ideal probability experiments like the ones we're discussing here, but is useful in real-world situations where you can look for more information to improve your model. Now that I'm cleared up on the definitions, I do agree that there doesn't really seem to be a difference between physical and logical uncertainty.

Is there a formal way you'd define this? My first attempt is something like "information that, if it were different, would change my answer"

I'd say that the rule is: "To construct a probability experiment, use the minimum generalization that still allows you to model your uncertainty".

In the case of the 1,253,725,569th digit of pi, if I try to construct a probability experiment consisting only of checking this particular digit, I fail to model my uncertainty, as I don't know yet what the value of this digit is.

So instead I use a more general probability experime... (read more)
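As a quick empirical illustration of why the generalization to an arbitrary digit licenses a uniform model (a sketch, assuming the mpmath library is available):

```python
from collections import Counter
from mpmath import mp

mp.dps = 10_000          # compute pi to 10,000 decimal places
digits = str(mp.pi)[2:]  # skip the leading "3."

freq = Counter(digits)
print({d: round(freq[d] / len(digits), 3) for d in "0123456789"})
# Every digit appears with frequency ~0.1: the uniform distribution over
# {0,...,9} that the more general probability experiment relies on.
```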

1jwfiredragon
Ok, let me see if I'm understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don't know which, so you can't make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model. The first part about not having a coherent model sounds a lot like the frequentist idea that you can't generate a coherent probability for a coin of unknown bias - you know that it's not 1/2 but you can't decide on any specific value.  This seems equivalent to my definition of "information that would change your answer if it was different", so it looks like we converged on similar ideas? I'd argue that it's physical uncertainty before the coin is flipped, but logical uncertainty after. After the flip, the coin's state is unknown the same way the X-th digit of pi is unknown - the answer exists and all you need to do is look for it.

As soon as you have your axioms you can indeed analytically derive theorems from them. However, the way you determine which axioms to pick, is entangled with reality. It's an especially clear case with probability theory where the development of the field was motivated by very practical concerns. 

The reason why some axioms appear to us appropriate for logic of beliefs and some don't, is because we know what beliefs are from experience. We are trying to come up with a mathematical model approximating this element of reality - an intensional definition ... (read more)

2cubefox
Yes, the meaning of a statement depends causally on empirical facts. But this doesn't imply that the truth value of "Bachelors are unmarried" depends less than completely on its meaning. Its meaning (M) screens off the empirical facts (E) and its truth value (T). The causal graph looks like this:

E —> M —> T

If this graph is faithful, it follows that E and T are conditionally independent given M: E⊥T∣M. So if you know M, E gives you no additional information about T. And the same is the case for all "analytic" statements, where the truth value only depends on its meaning. They are distinguished from synthetic statements, where the graph looks like this:

E —> M —> T
|_________^

That is, we have an additional direct influence of the empirical facts on the truth value. Here E and T are no longer conditionally independent given M.

I think that logical and probabilistic laws are analytic in the above sense, rather than synthetic. Including axioms. There are often alternative axiomatizations of the same laws. So P(A∨B)=P(A)+P(B)−P(A∧B) and P(⊤)=1 are equally analytic, even though only the latter is used as an axiom.

I think the instrumental justification (like Dutch book arguments) for laws of epistemic rationality (like logic and probability) is too weak. Because in situations where there happens to be in fact no danger of being exploited by a Dutch book (because there is nobody who would do such an exploit) it is not instrumentally irrational to be epistemically irrational. But you continue to be epistemically irrational if you have e.g. incoherent beliefs. So epistemic rationality cannot be grounded in instrumental rationality. Epistemic rationality laws being true in virtue of their meaning alone (being analytic) therefore seems a more plausible justification for epistemic rationality.

I think the probability axioms are a sort of "logic of sets of beliefs". If the axioms are violated the belief set seems to be irrational.

Well yes, they are. But how do you know which axioms are the correct axioms for the logic of sets of beliefs? How come violation of some axioms seems to be irrational, while violation of other axioms does not? What do you even mean by "rational" if not "a systematic way to arrive at map-territory correspondence"?

You see, in any case you have to ground your mathematical model in reality. Natural numbers may be logically pinpointe... (read more)

2cubefox
It seems clear to me that statements expressing logical or probabilistic laws like P(A∨B)=P(A)+P(B)−P(A∧B) or ¬(A∧¬A) are "analytic". Similar to "Bachelors are unmarried". The truth of a statement in general is determined by two things: its meaning and what the world is like. But for some statements the latter part is irrelevant, and their meanings alone are sufficient to determine their truth or falsity.

Degrees of belief adhering to the probability calculus at any point in time rule out things like "Mary is a feminist and a bank teller" simultaneously receiving a higher degree of belief than "Mary is a bank teller". It also requires e.g. that if P(A)=0.6 and P(B)=0.5 then 0.1 ≤ P(A∧B) ≤ 0.5. That's called "probabilism" or "synchronic coherence".

What is even the motivation for it? If you are not interested in your map representing a territory, why demand that your map be coherent?

And why not assume some completely different axioms... (read more)

2cubefox
Not to remove all limitations: I think the probability axioms are a sort of "logic of sets of beliefs". If the axioms are violated the belief set seems to be irrational. (Or at least the smallest incoherent subset that, if removed, would make the set coherent.) Conventional logic doesn't work as a logic for belief sets, as the preface and lottery paradox show, but subjective probability theory does work. As a justification for the axioms: that seems a similar problem to justifying the tautologies / inference rules of classical logic. Maybe an instrumental Dutch book argument works. But I do think it does come down to semantic content: If someone says "P(A and B)>P(A)" it isn't a sign of incoherence if he means with "and" what I mean with "or". Regarding the map representing the territory: That's a more challenging thing to formalize than just logic or probability theory. It would amount to a theory of induction. We would need to formalize and philosophically justify at least something like Ockham's razor. There are some attempts, but I think no good solution.

Yes, that's why I only said "less arbitrary".

I don't think I can agree even with that. 

Previously we arbitrarily assumed that a particular sample space corresponds to a problem. Now we are arbitrarily assuming that a particular set of possible worlds corresponds to a problem. In the best case we are exactly as arbitrary as before and have simply renamed our set. In the worst case we are making a lot of extra unfalsifiable assumptions about metaphysics.

You could theoretically believe to degree 0 in the propositions "the die comes up 6" or "the die lands at

... (read more)
2cubefox
For a propositional theory this axiom is replaced with P(⊤)=1, i.e. a tautology in classical propositional logic receives probability 1. Degrees of belief adhering to the probability calculus at any point in time rule out things like "Mary is a feminist and a bank teller" simultaneously receiving a higher degree of belief than "Mary is a bank teller". It also requires e.g. that if P(A)=0.6 and P(B)=0.5 then 0.1 ≤ P(A∧B) ≤ 0.5. That's called "probabilism" or "synchronic coherence". Another assumption is typically that Pnew(A):=Pold(A|B) after "observing" B. This is called "conditionalization" or sometimes "diachronic coherence".
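For what it's worth, a quick sketch of where that range comes from (my own check, not part of the comment): the conjunction can't exceed either conjunct, and the two events must overlap by at least P(A)+P(B)−1.

```python
# Feasible range of P(A and B) given the marginals (the Fréchet bounds):
def conjunction_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    lower = max(0.0, p_a + p_b - 1.0)  # A and B must overlap at least this much
    upper = min(p_a, p_b)              # a conjunction can't beat either conjunct
    return lower, upper

print(conjunction_bounds(0.6, 0.5))  # (0.1, 0.5), as in the comment above
```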

You may assume that's how Albert managed to persuade Barry to continue)

A less arbitrary way to define a sample space is to take the set of all possible worlds.

And how would you know which worlds are possible and which are not?

How would Albert and Barry use the framework of "possible worlds" to help them resolve their disagreement?

But for subjective probability theory a "sample space" isn't even needed at all. A probability function can simply be defined over a Boolean algebra of propositions. Propositions ("events") are taken to be primary instead of being defined via primary outcomes of a sample space.

This simply passes the ... (read more)

2cubefox
Yes, that's why I only said "less arbitrary". Regarding "knowing": In subjective probability theory, the probability over the "event" space is just about what you believe, not about what you know. You could theoretically believe to degree 0 in the propositions "the die comes up 6" or "the die lands at an angle". Or that the die comes up as both 1 and 2 with some positive probability. There is no requirement that your degrees of belief are accurate relative to some external standard. It is only assumed that the beliefs we do have compose in a way that adheres to the axioms of probability theory. E.g. P(A)≥P(A and B). Otherwise we are, presumably, irrational.

Thankfully, rising land prices due to the agglomeration effect are not a thing and the number of people in town is constant...

Don't get me wrong, building more housing is good, actually. But it's going to be only a marginal improvement without addressing the systemic issues: land capturing a huge share of economic gains, the rentier economy, and real-estate speculators. These issues are not solvable without a substantial Land Value Tax.

Yes, if you have 500 000 people in town, you need to produce food for 500 000 people all the time. While if you have 500 000 people in town, you only need to build houses for 500 000 once.

Unless, of course, some people who already have a house, or people who do not even live in your town, can buy houses in your town in order to use them as an investment and/or to rent them to people living in your town. Thankfully, this is a completely ridiculous counterfactual and no one ever does that...

But the logic of "there is a shortage of X, therefore the proper solution

... (read more)
2Viliam
The people outside the town who buy houses here either expect to rent them expensively, or to use them as an investment because they expect the costs of housing to grow. (Or a combination of both.) Refusing to build more houses means doing exactly the thing they want -- it keeps the rents high, and it keeps the costs growing. If you have 500 000 people in the town, and 100 000 houses are owned by people outside the town, you should build more houses until there are 600 000 of them (i.e. not only 500 000). Then the people outside the town will find it difficult to rent their houses expensively, and may start worrying that the costs will not grow sufficiently to justify their investments.

Suppose someone comes to you and shows you a red ball. Then they explain that they had two bags: one with all red balls and one with balls of ten different colors. They randomly picked the bag from which to pick the ball and then randomly picked the ball from this bag. Additionally, they've precommitted to come to you and show you the ball iff it happened to be red. What are the odds that the ball being shown to you has been picked from the all-red bag?
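For concreteness, the standard Bayes computation for this setup, as a sketch (the 0.1 likelihood is my assumption that exactly one of the mixed bag's ten colors is red):

```python
# P(all-red bag | shown a red ball), given the precommitment that the
# ball is shown iff it is red:
p_allred = 0.5            # prior: the bag was picked at random
p_red_given_allred = 1.0
p_red_given_mixed = 0.1   # assumption: one of the ten colors is red

posterior = (p_red_given_allred * p_allred) / (
    p_red_given_allred * p_allred + p_red_given_mixed * (1 - p_allred)
)
print(posterior)  # ~0.909, i.e. 10:1 odds in favor of the all-red bag
```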

To put a bit of a crude metaphor on it, if you were to pick a random number uniformly between 0 and 1,000,000, and pre-commit to having a child iff it's equal to some value X - from the point of view of the child, the probability that the number was equal to X is 100%.

Conditional P(X|X) = 1.

However, unconditional P(X)  = 1/1,000,000. 

Just like you can still reason about unconditional probability of a fair coin even after observing an outcome of the toss, the child can still reason about unconditional probability of their existence even after o... (read more)

Imagine these factors didn't work out for Earth, and it was yet another uninhabitable rock. We'd be standing on some distant shore, having the same conversation, wondering why Florpglorp-iii was so perfectly fine-tuned.

This is valid. Alice is making the usual mistake of confusing P(One Particular Thing) with P(Any Thing from a Huge Set of Things) and Bob is right to correct her.

However, when we decrease the size of our Set of Things, the difference between the two probabilities decreases. In a universe with billions of galaxies:

P(Life on at Least One... (read more)
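The arithmetic behind this point, sketched with purely illustrative numbers (the per-planet probability p is an assumption, not an estimate):

```python
# P(life on at least one planet) vs. P(life on one particular planet):
p = 1e-9                      # assumed chance of life on one given planet
for n in (1, 10**6, 10**9, 10**12):
    p_any = 1 - (1 - p) ** n  # chance of life on at least one of n planets
    print(n, p_any)           # ~1e-9, ~0.001, ~0.63, ~1.0
# With a huge set of planets the two probabilities diverge wildly; as the
# set shrinks toward a single planet, they converge to the same tiny number.
```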

1Fraser
If the observer is distinct from Alice, absolutely. If the observer is Alice, nothing needs explaining in either case. To put a bit of a crude metaphor on it, if you were to pick a random number uniformly between 0 and 1,000,000, and pre-commit to having a child iff it's equal to some value X - from the point of view of the child, the probability that the number was equal to X is 100%. Apologies if there's something more subtle with your answer that I've missed.

If you can't generate your parents' genomes and everything from memory, then yes, you are in a state of uncertainty about who they are, in the same qualitative way you are in a state of uncertainty about who your young children will grow up to be.

Here you seem to confuse "which person has quality X" with "what are all the other qualities of a person who has quality X".

I'm quite confident about which people are my parents. I'm less confident about all the qualities that my parents have. The former is relevant to the Doomsday argument, the latter is not... (read more)

I think that the core of the problem this post points out actually has very little to do with utility functions. The core problem is the use of the extremely confusing term "possible world" for an element of a sample space.

Now, I don't mind when people use weird terminology for historical reasons. If everybody understood that "possible world" is simply a synonym for "mutually exclusive outcome of a probability experiment", there wouldn't be an issue.

But at this point:

The sample space Ω of a rational agent's beliefs is, more or less, the set of possib

... (read more)

As soon as we've established the notion of a probability experiment that approximates our knowledge about the physical process we are talking about - we are done. This works exactly the same way whether you are unsure about the outcome of a coin toss, the oddness or evenness of a digit of pi unknown to you, or whether you live on the tallest or the coldest mountain.

And if you find yourself unable to formally express some reasoning like that - this is a feature, not a bug. It shows when your reasoning becomes incoherent.

A root confusion may be whether differ

... (read more)
1Lorec
If you can't generate your parents' genomes and everything from memory, then yes, you are in a state of uncertainty about who they are, in the same qualitative way you are in a state of uncertainty about who your young children will grow up to be. Ditto for the isomorphism between your epistemic state w.r.t. never-met grandparents vs your epistemic state w.r.t. not-yet-born children. It may be helpful to distinguish the subjective future, which contains the outcomes of all not-yet-performed experiments [i.e. all evidence/info not yet known] from the physical future, which is simply a direction in physical time.

Not necessarily. There may be a fast solution for some specific cases, related to vulnerabilities in the protocol. And then there is the question of brute-force computational power, due to having a Dyson swarm around the Sun.

I don't think you got the question.

You see, if we define "shouldness" as optimization of human values, then it does indeed logically follow that people should act altruistically:

People should do what they should

Should = Optimization of human values

People should do what optimizes human values

Altruism ∈ Human Values

People should do altruism

Is it what you were looking for?

1Perry Cai
By "should" I mean any currently accepted model that you can derive alturism from, of which the only one I know of so far is evolution or stems from evolution.

I mean, at some point AI will simply be able to hack all crypto and then there is that. But that's probably not going to happen very soon, and when it does happen it will probably be among the 25% least important things going on.

1ProgramCrafter
That's all conditional on P = NP, isn't it? Also, which part do you consider weaker: digital signatures or hash functions?

my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.

Then we can kill all the birds with the same stone. If you provide a substantial correction to my imaginary dialogue, showing which place of your post this correction is based on, you will be able to demonstrate how I indeed failed to understand your post, satisfy my curiosity, and I'll be able to earn your good faith by acknowledging my mistake.

Once again, there is no need to go on any unnecessary tangents. You should just address the substan... (read more)

-6lumpenspace

This clearly marks me as the author, as separated from Land.

I mark you as the author of this post on LessWrong. When I say:

You state Pythia mind experiment. And then react to it

I imply that in doing so you are citing Land. And then I expect you to make a better post and create some engagement between Land's ideas and the Orthogonality thesis, instead of simply citing how he fails to grasp it.

More importantly this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writ... (read more)

-4lumpenspace
er - this defeats all rules of conversational pragmatics but look, i concede if it stops further more preposterous rebuttals. of course it doesn't. my opinion on your good faith depends on whether you are able to admit having  deeply misunderstood the post. saying something of substance: i did, in the post. id respond to object-level criticism if you provided some - i just see status-jousting, formal pedantry, and random fnords. have you read The Obliqueness Thesis btw? as i mentioned above, that's a gloss on the same texts that you might find more accessible - per editor's note, i contributed this to help those who'd want to check the sources upon reading it, so im not really sure how writing my own arguments would help.

let’s not be overly pedantic.

It's not about pedantry, it's about you understanding what I'm trying to communicate and vice versa.

The point was that if your post not only presented the position that you or Nick Land disagree with, but also engaged with it in a back-and-forth dynamic with authentic arguments and counterarguments, that would've been an improvement over its current status.

This point still stands no matter what definition for ITT or its purpose you are using.

anyway, you failed the Turing test with your dialogue

Where exactly? What is ... (read more)

wait - are you aware that the texts in question are nick land's?

Yes, this is why I wrote this remark in the initial comment:

Most of the blame of course goes to the original author, Nick Land, not @lumpenspace, who simply has reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn't be rewarded and I'd like to see less of it on this site.

But as an editor and poster you still have the responsibility to present ideas properly. This is true regardless of the topic, but especially so when presenting ideologies promoting systematic gen... (read more)

-4lumpenspace
Look friend. You said you understood from the beginning that the text in question was Land's. In your first comment, though, you clearly show that not to be the case: > I do not see how you are doing that. You state Pythia mind experiment. And then react to it: "You go girl!". I suppose both the description of the mind experiment and the reaction are faithful. But there is no actual engagement between orthogonality thesis and Land's ideas.  This clearly marks me as the author, as separated from Land. I find it hard to keep engaging under an assumption of good faith on these premises.
-5lumpenspace

What is this "should" thingy you are talking about? Do you by chance have some definition of "shouldness" or are you open to suggestions?

1Perry Cai
Yes, I'm open to any framework that describes altruism in a way other than an evolutionary process.

I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?

Propaganda of Nick Land's ideas. Let me explain.

The first thing that we get after the editor's note is a preemptive attempt at deflection against accusations of fascism, accepting the better-sounding label of social Darwinism, and a proclamation that many intelligent people actually agree with this view but are just afraid to think it through.

It's not an invitation to discuss which labels actually are appropriate to this ideology, there is no exploration of argum... (read more)

1lumpenspace
wait - are you aware that the texts in question are nick land's? i think it should be pretty clear from the editor's note. besides, in the first extract, the labels part was entirely incidental - and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text. i definitely see the issue of fixating on labels, now, tho - and i thank you for providing an object lesson. the purpose of the ideological turing test is to represent the opposing views in ways that your opponent would find satisfactory. I have it from reliable sources that Bostrom found the opening paragraphs, until "sun's eventual expansion", satisfactory. i really cannot shake the feeling that you hadn't read the post to begin with, and that now you are simply scanning it in order to find rebuttals to my comments. your grasp of basic, factual statements seems to falter, to the point of suggesting that my engagement with what purport to be more fundamental points might be a suboptimal allocation of resources.

Logic simply preserves truth. You can arrive at a valid conclusion that one should act altruistically if you start from some specific premises, and can't if you start from some other premises.

What are the premises you start from?

1Perry Cai
I guess most arguments would need to start from Cogito, ergo sum to make much sense, and you couldn't do much of anything without accepting that our observations of the world exist. But is there a set of premises that is generally accepted that can determine what one's actions should be without stating them outright?

Sleeping Beauty is a more subtle problem, so it's less obvious why the application of centred possible worlds fails.

But in principle we can construct a similar argument. If we suppose that, in terms of the paper, one's epistemic state on awakening in Sleeping Beauty should follow function P' instead of P, we get ourselves into this precarious situation:

P'(Today is Monday|Tails) = P'(Today is Tuesday|Tails) = 1/2

as this estimate stays true for both awakenings:

P'(At Least One Awakening Happens On Monday|Tails) = 1 -  P'(Today is Tuesday|Tails)^2 = 3/4 ... (read more)
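To make the contradiction concrete, here is a minimal Monte Carlo of the standard protocol (my sketch; under Tails the participant is awakened on both Monday and Tuesday):

```python
import random

tails_runs, monday_hits = 0, 0
for _ in range(100_000):
    if random.random() < 0.5:                   # Tails
        tails_runs += 1
        awakening_days = {"Monday", "Tuesday"}  # both awakenings happen
        if "Monday" in awakening_days:          # trivially true every run
            monday_hits += 1

print(monday_hits / tails_runs)  # always 1.0, not the 3/4 that squaring
                                 # P'(Today is Tuesday|Tails) = 1/2 suggests
```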

There is, in fact, no way to formalize "Today" in a setting where the participant doesn't know which day it is, multiple days happen in the same iteration of the probability experiment, and the probability estimate should be different on different days. Which the experiment I described demonstrates pretty well.

The framework of centered possible worlds is deeply flawed and completely unjustified. It's essentially talking about a different experiment instead of the stated one, or a different function instead of probability.

For your purposes, however, it's not particularly important. All you need is to explicitly add the notion that propositions should be well-defined events. This will save you from all such paradoxical cases.

5Jack VanDrunen
You might be interested in this paper by Wolfgang Spohn on auto-epistemology and Sleeping Beauty (and related) problems (Sleeping Beauty starts on p. 388). Auto-epistemic models have more machinery than the basic model described in this post has, but I'm not sure there's anything special about your example that prevents it being modeled in a similar way.

I'm not sure if I fully understand why this is supposed to pose a problem, but maybe it helps to say that by "meaningfully consider" we mean something like, is actually part of the agent's theory of the world. In your situation, since the agent is considering which envelope to take, I would guess that to satisfy richness she should have a credence in the proposition. 

Okay, then I believe you definitely have a problem with this example and would be glad to show you where exactly.

I think (maybe?) what makes this case tricky or counterintuitive is that t

... (read more)
5Daniel Herrmann
Thanks for raising this important point. When modeling these situations carefully, we need to give terms like "today" a precise semantics that's well-defined for the agent. With proper semantics established, we can examine what credences make sense under different ways of handling indexicals. Matthias Hild's paper "Auto-epistemology and updating" demonstrates how to carefully construct time-indexed probability updates. We could then add centered worlds or other approaches for self-locating probabilities. Some cases might lead to puzzles, particularly where epistemic fixed points don't exist. This might push us toward modeling credences differently or finding other solutions. But once we properly formalize "today" as an event, we can work on satisfying richness conditions. Whether this leads to inconsistent attitudes depends on what constraints we place on those attitudes - something that reasonable people might disagree about, as debates over sleeping beauty suggest.  

I had an initial impulse to simply downvote the post based on ideological misalignment even without properly reading it, caught myself in the process of thinking about it, and made myself read the post first. As a result I strongly downvoted it based on its quality. 

Most of it is a low-effort propaganda pamphlet. Vibes-based word salad instead of clear reasoning. Theses mostly without justifications. And where there is some justification, it's so comically weak that there is not much to have a productive discussion about, like the idea that the existence of instrume... (read more)

2lumpenspace
how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above? I think the argument is stated as clearly as it’s appropriate under the assumption of a minimally charitable audience; in particular, I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon? I cannot shake the feeling that the commenter might have only read the first extract and either fell victim to fnords or found it expedient to leave a couple of them for the benefit of less sophisticated readers - in particular, has the commenter not noticed that the whole first part of Pythia unbound is an ideological Turing test, passed with flying colours?

Richness: The model must include all the propositions the agent can meaningfully consider, including those about herself. If the agent can form a proposition “I will do X”, then that belongs in the space of propositions over which she has beliefs and (where appropriate) desirabilities.

I see a potential problem here, depending on what exactly is meant by "can meaningfully consider".

Consider this set up:

You participate in the experiment for seven days. Every day you wake up in a room and can choose between two envelopes. One of them has $100, the other is emp

... (read more)
5Daniel Herrmann
Thanks for this example. I'm not sure if I fully understand why this is supposed to pose a problem, but maybe it helps to say that by "meaningfully consider" we mean something like, is actually part of the agent's theory of the world. In your situation, since the agent is considering which envelope to take, I would guess that to satisfy richness she should have a credence in the proposition.  I think (maybe?) what makes this case tricky or counterintuitive is that the agent seems to lack any basis for forming beliefs about which envelope contains the money - their memory is erased each time and the location depends on their previous (now forgotten) choice. However, this doesn't mean they can't or don't have credences about the envelope contents. From the agent's subjective perspective upon waking, they might assign 0.5 credence to each envelope containing the money, reasoning that they have no information to favor either envelope. Or they might have some other credence distribution based on their (perhaps incorrect) theory of how the experiment works. The richness condition simply requires that if the agent does form such credences, they should be included in their algebra. We're not making claims about what propositions an agent should be able to form credences about, nor about whether those credences are well-calibrated. The framework aims to represent the agent's actual beliefs about the world, as in, how things are or might be from the agent's perspective, even in situations where forming accurate beliefs might be difficult or impossible. This also connects to the austerity condition - if the agent truly believes it's impossible for them to have any credence about the envelope contents, then such propositions wouldn't be in their algebra. But that would be quite an unusual case, since most agents will form some beliefs in such situations, even if those beliefs end up being incorrect or poorly grounded.