You chose the word length generator because you know that the typical length of a word is 1-10. Thus it's not random.
This is not relevant to my point. After all, you also know that a typical month number is 1-12
No, the point is that I specifically selected a number via an algorithm that has nothing to do with sampling months. And yet your test outputs a positive result anyway. Therefore your test is unreliable.
I didn't reject any results – it works in any test I have imagined
That's exactly the problem. Essentially you are playing a 2-4-6 game: you've got no negative result yet and are ...
First of all, your experimental method can really benefit from a control group. Pick a setting where a thing is definitely not randomly sampled from a set. Perform your experiment and see what happens.
Consider. I generated a random word using this site https://randomwordgenerator.com/
This word turned out to be "mosaic". It has 6 letters. Let's test whether its length is randomly sampled from the number of months in a year.
As 6*2=12, this actually works perfectly, even better than estimating the number of months in a year based on your birth month!
It also ...
I personally didn't expect Trump to do any tariffs at all
Just curious, how come? Were you simply not paying attention to what he was saying? Or did you not believe his promises?
I think "mediocre" is quite an appropriate adjective for describing a thing that we had high hopes for, but have now received evidence that, while the thing technically works, it performs worse than expected and the most exciting use cases are not validated.
I indeed used a single example here, so the strength of the evidence is arguable, but I don't see why this case should be an outlier. I could've searched for more, like this one, which is particularly bad:
In any case, you can consider this post my public prediction that othe...
I think the problem here is that you do not quite understand the problem.
There is definitely some kind of misunderstanding going on, and I'd like to figure it out.
It's not that we "imagine that we've imagined the whole world, do not notice any contradictions and call it a day".
How is it not the case? Citing you from here:
When you are conditioning on an empirical fact, you are imagining the set of logically consistent worlds where this empirical fact is true and asking yourself about the frequency of other empirical facts inside this set.
How do you know which ...
In this post I've described a unified framework that allows one to reason about any type of uncertainty, be it logical or empirical. I would appreciate engagement from people who think that logical uncertainty is still unsolved.
Are you arguing that the distinction between objective and subjective are "very unhelpful," because the state of people's subjective beliefs are technically an objective fact of the world?
It's unhelpful due to an implicit (and in our case somewhat explicit) assumption that "subjective" and "objective" are in opposition to each other. That they're two different magisteria and things are either one or the other.
why don't you argue that all similar categorizations are unhelpful, e.g. map vs. territory
Map and territory framework lacks this assumption. I...
This debate seems hampered by a lack of clarity on what “objective” and “subjective” moralities are.
Absolutely.
Coyne gave a sensible definition of “objective” morality as being the stance that something can be discerned to be “morally wrong” through reasoning about facts about the world, rather than by reference to human opinion.
That's a poor definition. It tries to oppose facts about the world to human opinions, while whether humans hold particular opinions or not is also a matter of facts about the world.
The fault here lies with the terms themselves. Such dyc...
Yes, you are correct! Thanks for noticing it.
Actually... I will say it: This feels like a fast rebranding of the Halting Problem, like without actually knowing what it implies.
Being able to rebrand an argument so that it can talk about a different problem in a valid way is exactly what it means to understand it – not just repeating the same words in the same context the teacher used, but generalizing it. We can go into the realm of second order logic and say that
For every property that at least one program has, a universal detector of this property has to itself have this property on at least some input.
M...
You basically left our other more formal conversation to engage in the critique of prose.
Not at all. I'm doing both. I specifically started the conversation in the post which is less... prose. But I suspect you may also be interested in engagement with the long post that you put so much effort into writing. If that's not the case – never mind, and let's continue the discussion in the argument thread.
These are metaphors to lead the reader slowly to the idea...
If you require flawed metaphors, what does it say about the idea?
Now you might say I have a psychotic fit
Fr...
So, essentially, it's like trying to explain to a halting machine—which believes it is a universal halting machine—that it is not, in fact, a universal halting machine.
Don't tell me what it's like. Construct the actual argument that is isomorphic to Turing's proof.
Let me give you an example. Let's prove that no perfect antivirus is possible.
Let a perfect antivirus A be a program that receives some program P and its input X as arguments and returns 1 if P is malevolent on input X and 0 otherwise, while A itself is not malevolent on any input.
Suppose A e...
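For concreteness, here is a sketch of the diagonalization step, with "malevolence" reduced to returning a marker string:

```python
def diagonal_against(A):
    """Given a claimed perfect antivirus A(P, X) -> 1 iff program P is
    malevolent on input X, build a program D that defeats A by doing the
    opposite of A's own verdict about D (Turing-style diagonalization)."""
    def D(X):
        if A(D, X) == 1:       # A claims D is malevolent on X...
            return "benign"    # ...so D behaves benignly: A was wrong.
        return "MALWARE"       # A claims D is benign, so D misbehaves.
    return D

# Any concrete candidate A is defeated; e.g. one that flags nothing:
naive_A = lambda P, X: 0
D = diagonal_against(naive_A)
print(D(0))  # MALWARE -- precisely because naive_A said D was benign
```

Whatever A answers about D, D does the opposite, so no A can be a perfect antivirus.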
If you are familiar with it, just say “yes,” and we’ll proceed.
Yes.
In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?
Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?
Nothing is preventing us from designing a system consisting of a module generating a red-tinged video stream and image recognition software that looks at the stream and b...
"This, if I'm not missing anything" Yes you This is called a Modus tollens. We are not concerned about the boolean of each of the statements.
1.
"if I'm not missing anything" it is likely you do let me explain. This is called a Modus Tollens. We are not concerned about Lisas logic as a boolean. We look each proposition its entirety. I advice you to read about Turings proof on the halting problem, because it is the same technique.
I struggle to parse this. In general the coherency of your reply is poor. Are you by chance using an LLM?
I apprec...
First of all, I think you are confusing incompleteness with having false beliefs.
A. Lisa is not a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would be complete: Not Possible ✗
C doesn't follow. Lisa would need to be able to formally prove that she is not a P-Zombie, not merely assert it, for completeness to be relevant at all. Even then it's not clear that Lisa would be complete – maybe there is some other statement that Lisa can't prove which, nonetheless, has to be true?
...A. Lisa is a P-Zombie
B. Lisa asserts that she is a not
I think picking axioms is not necessary here and in any case inconsequential.
By picking your axioms you logically pinpoint what you are talking about in the first place. Have you read Highly Advanced Epistemology 101 for Beginners? I'm noticing that our inferential distance is larger than it should be otherwise.
"Bachelors are unmarried" is true whether or not I regard it as some kind of axiom or not.
No, you are missing the point. I'm not saying that this phrase has to be axiom itself. I'm saying that you need to somehow axiomatically define your individual words...
Yes, the meaning of a statement depends causally on empirical facts. But this doesn't imply that the truth value of "Bachelors are unmarried" depends less than completely on its meaning.
I think we are in agreement here.
My point is that if your picking of particular axioms is entangled with reality, then you are already using a map to describe some territory. And then you can just as well describe this territory more accurately.
...I think the instrumental justification (like Dutch book arguments) for laws of epistemic rationality (like logic and probability) i
Ok, let me see if I'm understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don't know which, so you can't make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.
Basically yes. Strictly speaking it's not just any arbitrary digit, but any digit about whose value your knowledge works the same way as it does about the value of X.
For any digit you can exe...
Is there a formal way you'd define this? My first attempt is something like "information that, if it were different, would change my answer"
I'd say that the rule is: "To construct a probability experiment, use the minimum generalization that still allows you to model your uncertainty".
In the case of the 1,253,725,569th digit of pi, if I try to construct a probability experiment consisting only of checking this particular digit, I fail to model my uncertainty, as I don't yet know the value of this digit.
So instead I use a more general probability experime...
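A minimal sketch of such a generalized experiment in code, treating the unknown digit as uniform over {0, ..., 9}:

```python
from fractions import Fraction

# The generalized probability experiment: a uniform draw from the digits
# 0..9 stands in for "an arbitrary digit I know nothing specific about".
p = {d: Fraction(1, 10) for d in range(10)}

prob_even = sum(p[d] for d in range(10) if d % 2 == 0)  # P(digit is even)
prob_specific = p[7]                                    # P(digit is 7)
print(prob_even, prob_specific)  # 1/2 1/10
```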
As soon as you have your axioms you can indeed analytically derive theorems from them. However, the way you determine which axioms to pick, is entangled with reality. It's an especially clear case with probability theory where the development of the field was motivated by very practical concerns.
The reason why some axioms appear to us appropriate for logic of beliefs and some don't, is because we know what beliefs are from experience. We are trying to come up with a mathematical model approximating this element of reality - an intensional definition ...
I think the probability axioms are a sort of "logic of sets of beliefs". If the axioms are violated the belief set seems to be irrational.
Well yes, they are. But how do you know which axioms are the correct axioms for the logic of sets of beliefs? How come violation of some axioms seems to be irrational, while violation of other axioms does not? What do you even mean by "rational" if not "a systematic way to arrive at map-territory correspondence"?
You see, in any case you have to ground your mathematical model in reality. Natural numbers may be logically pinpointe...
Degrees of belief adhering to the probability calculus at any point in time rules out things like "Mary is a feminist and a bank teller" simultaneously receiving a higher degree of belief than "Mary is a bank teller". It also requires e.g. that if P(A) = 1 and P(B) = 1 then P(A ∧ B) = 1. That's called "probabilism" or "synchronic coherence".
What is even the motivation for it? If you are not interested in your map representing a territory, why demand that your map be coherent?
And why not assume some completely different axioms...
Yes, that's why I only said "less arbitrary".
I don't think I can agree even with that.
Previously we arbitrarily assumed that a particular sample space corresponds to a problem. Now we are arbitrarily assuming that a particular set of possible worlds corresponds to a problem. In the best case we are exactly as arbitrary as before and have simply renamed our set. In the worst case we are making a lot of extra unfalsifiable assumptions about metaphysics.
...You could theoretically believe to degree 0 in the propositions "the die comes up 6" or "the die lands at
You may assume that this is how Albert managed to persuade Barry to continue)
A less arbitrary way to define a sample space is to take the set of all possible worlds.
And how would you know which worlds are possible and which are not?
How would Albert and Barry use the framework of "possible worlds" to help them resolve their disagreement?
But for subjective probability theory a "sample space" isn't even needed at all. A probability function can simply be defined over a Boolean algebra of propositions. Propositions ("events") are taken to be primary instead of being defined via primary outcomes of a sample space.
This simply passes the ...
Thankfully, rising land prices due to agglomeration effects are not a thing and the number of people in town is constant...
Don't get me wrong, building more housing is good, actually. But it's going to be only a marginal improvement without addressing the systemic issues: land capturing a huge share of economic gains, the rentier economy, and real-estate speculation. These issues are not solvable without a substantial Land Value Tax.
Yes, if you have 500,000 people in town, you need to produce food for 500,000 people all the time, while you only need to build houses for them once.
Unless, of course, some people who already have a house, or people who do not even live in your town, can buy houses in your town in order to use them as an investment and/or rent them out to people living in your town. Thankfully, this is a completely ridiculous counterfactual and no one ever does that...
...But the logic of "there is a shortage of X, therefore the proper solution
Suppose someone comes to you and shows you a red ball. Then they explain that they had two bags: one with all red balls and one with balls of ten different colors. They randomly picked the bag from which to pick the ball and then randomly picked the ball from this bag. Additionally, they've precommitted to come to you and show you the ball iff it happened to be red. What are the odds that the ball being shown to you has been picked from the all-red bag?
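Assuming the bag is chosen with probability 1/2 and the mixed bag shows red 1/10 of the time, a direct Bayes computation gives the answer:

```python
from fractions import Fraction

# Priors and likelihoods from the stated setup (equal-probability bag
# choice and ten equally represented colours are assumptions):
p_red_bag = Fraction(1, 2)           # prior on the all-red bag
p_red_given_red_bag = Fraction(1)    # all-red bag always yields red
p_red_given_mixed = Fraction(1, 10)  # ten colours, one of them red

p_red = p_red_bag * p_red_given_red_bag + (1 - p_red_bag) * p_red_given_mixed
posterior = p_red_bag * p_red_given_red_bag / p_red
print(posterior)  # 10/11
```

The precommitment means that being shown the ball at all is equivalent to observing that it is red, so the standard update applies.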
To put a bit of a crude metaphor on it, if you were to pick a random number uniformly between 0 and 1,000,000, and pre-commit to having a child iff it's equal to some value X – from the point of view of the child, the probability that the number was equal to X is 100%.
Conditional P(X|X) = 1.
However, unconditional P(X) = 1/1,000,000.
Just like you can still reason about unconditional probability of a fair coin even after observing an outcome of the toss, the child can still reason about unconditional probability of their existence even after o...
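The two probabilities can be written down directly (X = 42 stands in for the precommitted value):

```python
from fractions import Fraction

N = 1_000_000
X = 42  # hypothetical stand-in for the precommitted value

# Unconditional: X is one of N equally likely draws.
p_unconditional = Fraction(1, N)
# Conditional on the child existing: the draw equalled X by construction.
p_conditional = Fraction(1)

print(p_unconditional, p_conditional)  # 1/1000000 1
```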
Imagine these factors didn't work out for Earth, and it was yet another uninhabitable rock. We'd be standing on some distant shore, having the same conversation, wondering why Florpglorp-iii was so perfectly fine-tuned.
This is valid. Alice is making the usual mistake of confusing P(One Particular Thing) with P(Any Thing from a Huge Set of Things), and Bob is right to correct her.
However, when we decrease the size of our Set of Things the difference between the two probabilities decreases. In the universe with billions of galaxies:
P(Life on at Least One...
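With made-up numbers, you can see the gap between the two probabilities shrink as the candidate set shrinks:

```python
# Toy numbers (entirely made up): p is a per-planet chance of life,
# n is the number of candidate planets.
p = 1e-9
for n in (10**11, 10**9, 1):
    p_any = 1 - (1 - p) ** n  # P(life on at least one of n planets)
    print(f"n={n:<13} P(particular)={p:.1e}  P(at least one)={p_any:.3f}")
```

With a huge candidate set, P(at least one) can be near 1 while P(this particular one) stays tiny; with n = 1 the two coincide.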
If you can't generate your parents' genomes and everything from memory, then yes, you are in a state of uncertainty about who they are, in the same qualitative way you are in a state of uncertainty about who your young children will grow up to be.
Here you seem to confuse "which person has quality X" with "what are all the other qualities of a person who has quality X".
I'm quite confident about which people are my parents. I'm less confident about all the qualities that my parents have. The former is relevant to the Doomsday argument, the latter is not.
I think that the core of the problem this post points out actually has very little to do with utility functions. The core problem is the use of the extremely confusing term "possible world" for an element of a sample space.
Now, I don't mind when people use weird terminology for historical reasons. If everybody understood that "possible world" is simply a synonym for "mutually exclusive outcome of a probability experiment", there wouldn't be an issue.
But at this point:
...The sample space of a rational agent's beliefs is, more or less, the set of possib
As soon as we've established the notion of a probability experiment that approximates our knowledge about the physical process we are talking about – we are done. This works exactly the same way whether you are not sure about the outcome of a coin toss, the oddness or evenness of a digit of pi unknown to you, or whether you live on the tallest or the coldest mountain.
And if you find yourself unable to formally express some reasoning like that – this is a feature, not a bug. It shows when your reasoning becomes incoherent.
...A root confusion may be whether differ
Not necessarily. There may be a fast solution for some specific cases, related to vulnerabilities in the protocol. And then there is the question of brute-force computational power, given a Dyson swarm around the Sun.
I don't think you got the question.
You see, if we define "shouldness" as optimization of human values, then it does indeed logically follow that people should act altruistically:
People should do what they should
Should = Optimization of human values
People should do what optimizes human values
Altruism ∈ Human Values
People should do altruism
Is it what you were looking for?
I mean, at some point AI will simply be able to hack all crypto, and then there is that. But that's probably not going to happen very soon, and when it does happen it will probably be among the 25% least important things going on.
my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
Then we can kill all the birds with the same stone. If you provide a substantial correction to my imaginary dialogue, showing which part of your post this correction is based on, you will be able to demonstrate how I indeed failed to understand your post and satisfy my curiosity, and I'll be able to earn your good faith by acknowledging my mistake.
Once again, there is no need to go on any unnecessary tangents. You should just address the substan...
This clearly marks me as the author, as separated from Land.
I mark you as the author of this post on LessWrong. When I say:
You state Pythia mind experiment. And then react to it
I imply that in doing so you are citing Land. And then I expect you to make a better post and create some engagement between Land's ideas and the Orthogonality thesis, instead of simply citing how he fails to grasp it.
More importantly this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writ...
let’s not be overly pedantic.
It's not about pedantry, it's about you understanding what I'm trying to communicate and vice versa.
The point was that if your post had not only presented a position that you or Nick Land disagree with, but also engaged with it in a back-and-forth dynamic with authentic arguments and counterarguments, that would've been an improvement over its current state.
This point still stands no matter what definition for ITT or its purpose you are using.
anyway, you failed the Turing test with your dialogue
Where exactly? What is ...
wait - are you aware that the texts in question are nick land's?
Yes, this is why I wrote this remark in the initial comment:
Most of the blame of course goes to the original author, Nick Land, not @lumpenspace, who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn't be rewarded and I'd like to see less of it on this site.
But as an editor and poster you still have the responsibility to present ideas properly. This is true regardless of the topic, but especially so while presenting ideologies promoting systematic gen...
What is this "should" thingy you are talking about? Do you by chance have some definition of "shouldness" or are you open to suggestions?
I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?
Propaganda of Nick Land's ideas. Let me explain.
The first thing that we get after the editor's note is a preemptive attempt at deflecting accusations of fascism, accepting the better-sounding label of social darwinism, and a proclamation that many intelligent people actually agree with this view but are just afraid to think it through.
It's not an invitation to discuss which labels are actually appropriate for this ideology; there is no exploration of argum...
Logic simply preserves truth. You can arrive at the valid conclusion that one should act altruistically if you start from some specific premises, and can't if you start from some other premises.
What are the premises you start from?
Sleeping Beauty is a more subtle problem, so it's less obvious why the application of centred possible worlds fails.
But in principle we can construct a similar argument. If we suppose that, in terms of the paper, one's epistemic state on awakening in Sleeping Beauty should follow the function P' instead of P, we get ourselves into this precarious situation:
P'(Today is Monday|Tails) = P'(Today is Tuesday|Tails) = 1/2
as this estimate stays true for both awakenings:
P'(At Least One Awakening Happens On Monday|Tails) = 1 - P'(Today is Tuesday|Tails)^2 = 3/4 ...
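The mismatch is easy to check mechanically: every Tails run contains a Monday awakening, so the miss probability is 0, not the 1/4 that treating the two awakenings as independent events implies:

```python
# Sketch of the reductio: if "Today is Tuesday" were an independent 1/2
# event per awakening, as P' treats it, a Tails run would contain no
# Monday awakening with probability (1/2)^2 = 1/4. In the actual
# experiment, every Tails run wakes the participant on Monday.
tails_runs = 10_000
runs_missing_monday = 0
for _ in range(tails_runs):
    awakening_days = {"Monday", "Tuesday"}  # both always occur on Tails
    if "Monday" not in awakening_days:
        runs_missing_monday += 1

miss_rate = runs_missing_monday / tails_runs
print(miss_rate)  # 0.0 -- not the 1/4 that P' implies
```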
There is, in fact, no way to formalize "Today" in a setting where the participant doesn't know which day it is, multiple days happen in the same iteration of the probability experiment, and the probability estimate should be different on different days. Which the experiment I described demonstrates pretty well.
The framework of centered possible worlds is deeply flawed and completely unjustified. It's essentially talking about a different experiment instead of the stated one, or a different function instead of probability.
For your purposes, however, it's not particularly important. All you need is to explicitly add the requirement that propositions be well-defined events. This will save you from all such paradoxical cases.
I'm not sure if I fully understand why this is supposed to pose a problem, but maybe it helps to say that by "meaningfully consider" we mean something like, is actually part of the agent's theory of the world. In your situation, since the agent is considering which envelope to take, I would guess that to satisfy richness she should have a credence in the proposition.
Okay, then I believe you definitely have a problem with this example and I would be glad to show you where exactly.
...I think (maybe?) what makes this case tricky or counterintuitive is that t
I had an initial impulse to simply downvote the post based on ideological misalignment even without properly reading it, caught myself in the process of thinking about it, and made myself read the post first. As a result I strongly downvoted it based on its quality.
Most of it is a low-effort propaganda pamphlet. Vibes-based word salad instead of clear reasoning. Theses mostly without justification. And where there is some, it's so comically weak that there is not much to have a productive discussion about, like the idea that the existence of instrume...
Richness: The model must include all the propositions the agent can meaningfully consider, including those about herself. If the agent can form a proposition “I will do X”, then that belongs in the space of propositions over which she has beliefs and (where appropriate) desirabilities.
I see a potential problem here, depending on what exactly is meant by "can meaningfully consider".
Consider this set up:
...You participate in the experiment for seven days. Every day you wake up in a room and can choose between two envelopes. One of them has $100, the other is emp
Using the month of your birth to estimate the number of seconds in the year also won't work well, unless you multiply it by the number of seconds in a month.
Likewise here. You can estimate the number of months in a year by the number of letters in the word and then multiply it by the number of seconds in a month.
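Spelling out the arithmetic of this joke estimate (using the average Julian month):

```python
# Toy arithmetic for the estimate chain: word length -> months -> seconds.
letters_in_mosaic = 6
months_estimate = letters_in_mosaic * 2      # 6 * 2 = 12, spot on
seconds_per_month = 365.25 * 24 * 3600 / 12  # average Julian month
seconds_per_year_estimate = months_estimate * seconds_per_month
print(seconds_per_year_estimate)  # 31557600.0, exactly a Julian year
```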