First of all, your experimental method can really benefit from a control group. Pick a setting where a thing is definitely not randomly sampled from a set. Perform your experiment and see what happens.
Consider: I generated a random word using this site: https://randomwordgenerator.com/
This word turned out to be "mosaic". It has 6 letters. Let's test whether its length is randomly sampled from the number of months in a year.
As 6*2=12, this actually works perfectly, even better than estimating the number of months in a year based on your birth month!
It also works decently for estimating several other things: the number of hours in a day, the number of days in a month, the number of minutes in an hour. It gets the order of magnitude right! If we use the minutes from your 15:14 timestamp to estimate the number of minutes in an hour, we are similarly off.
It works worse for the number of days in a week and the number of days in a year. But it's mistaken by only one order of magnitude, so maybe it's also okay?
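To make this control-group failure concrete, here is a minimal simulation sketch. It assumes, purely hypothetically, that random English word lengths are roughly uniform between 3 and 10 letters (a stand-in for the generator site, not data from it). Under that assumption, the doubling rule lands within an order of magnitude of every one of these calendar constants on every trial, which is exactly why a positive result proves nothing:

```python
import random

random.seed(0)

# Hypothetical stand-in for randomwordgenerator.com: assume word lengths
# are roughly uniform between 3 and 10 letters. This is an assumption
# for illustration, not data from the site.
def random_word_length():
    return random.randint(3, 10)

TARGETS = {"months/year": 12, "hours/day": 24,
           "days/month": 30, "minutes/hour": 60}

def within_order_of_magnitude(estimate, truth):
    # "Right order of magnitude" = ratio between 1/10 and 10.
    return truth / 10 <= estimate <= truth * 10

trials = 10_000
for name, truth in TARGETS.items():
    # The tested procedure: double the sampled value, as with a birth month.
    hits = sum(within_order_of_magnitude(2 * random_word_length(), truth)
               for _ in range(trials))
    print(f"{name}: {hits / trials:.0%} within an order of magnitude")
    # Every target prints 100%: doubled lengths span 6..20, which sits
    # inside [truth/10, truth*10] for all four constants.
```

A test that can never fail carries no evidence, so "passing" it tells you nothing about whether the quantity was really sampled from the set in question.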
At this point the problems with your methodology should be clear:
Secondly, if we want to talk about DA or SH, there is the whole jump from
"I can approximate my birth date as a random sample from all the days in the month/months in a year"
to
"I can approximate my birth rank as a random sample from all the births of people throughout history/starting from the moment of knowing about DA".
The latter doesn't follow from the former, even though in semantic terms both can potentially be described as random sampling of a person throughout time.
The cyclical nature of days in a month and months in a year never allows you to be off by more than an order of magnitude. Even if your parents specifically timed your conception so that you would be born on the first of January, thereby putting you in a very specific "reference class", you won't be extremely mistaken about these numbers when following your methodology.
On the other hand, there is no such guarantee for birth ranks of people, which behave like natural numbers, not like residues modulo a fixed number.
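The difference in error bounds can be sketched explicitly (the population figures below are illustrative assumptions, not data). Doubling a uniform sample from a cycle of length N can never be off by more than a bounded factor, while doubling a birth rank has no bound at all:

```python
# Doubling a uniform sample from a cycle of length N gives an estimate
# in [2, 2N], so the error ratio is bounded no matter where in the
# cycle you happen to land.
def worst_case_ratio_cyclic(n):
    estimates = [2 * k for k in range(1, n + 1)]
    return max(max(e / n, n / e) for e in estimates)

# Birth month -> months in a year: at most 6x off, even for a
# January 1st birth.
print(worst_case_ratio_cyclic(12))  # 6.0

# Birth ranks are natural numbers, not residues modulo anything, so the
# same doubling rule has no error bound at all.
def error_ratio_rank(my_rank, true_total):
    estimate = 2 * my_rank
    return max(estimate / true_total, true_total / estimate)

# Hypothetical numbers: an early human of rank 1000 doubling their rank,
# against roughly 100 billion people ever born.
print(error_ratio_rank(1_000, 100_000_000_000))  # off by a factor of 50 million
```

The cyclic case has a hard worst-case ceiling; the birth-rank case can be made arbitrarily wrong just by choosing an earlier observer or a larger total population.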
I personally didn't expect Trump to do any tariffs at all
Just curious, how come? Were you simply not paying attention to what he was saying? Or did you not believe his promises?
I think "mediocre" is quite an appropriate adjective for a thing we had high hopes for, when the evidence we've since received shows that, while the thing technically works, it performs worse than expected and the most exciting use cases are not validated.
I indeed used a single example here, so the strength of the evidence is arguable, but I don't see why this case should be an outlier. I could've searched for more, like this one, which is particularly bad:
In any case, you can consider this post my public prediction that other policy prediction markets will follow a similar trend.
I think the problem here is that you do not quite understand the problem.
There is definitely some kind of misunderstanding going on, and I'd like to figure it out.
It's not that we "imagine that we've imagined the whole world, do not notice any contradictions and call it a day".
How is it not the case? Quoting you from here:
When you are conditioning on empirical fact, you are imaging set of logically consistent worlds where this empirical fact is true and ask yourself about frequency of other empirical facts inside this set.
How do you know which worlds are logically consistent with your observations and which are not? For that you would need to hold them in your mind one by one, with all their details, and check them for inconsistencies. That would require you to be a logically omniscient supercomputer with unlimited memory. And none of us is that.
So you have to be doing something else: validating consistency only to the best of your cognitive resources, which is exactly "imagine that we've imagined the whole world, do not notice any contradictions and call it a day".
It's that we know there exists idealized procedure which doesn't produce stupid answers, like, it can't be money-pumped.
Well, yes. That's the goal. What I'm doing is trying to pinpoint this procedure without the framework of possible worlds, which, among other things, doesn't allow reasoning about logical uncertainty. I replace it with a better framework, iterations of a probability experiment, which does allow that.
The whole computationally unbounded Bayesian business is more about "here is an idealized procedure X, and if we don't do anything visibly for us stupid from perspective of X, then we can hope that our losses won't be unbounded from certain notion of boundedness". It is not obvious that your procedure can be understood this way.
The Bayesian procedure is the same; we've just got rid of all the bizarre metaphysics and are now explicitly talking about values of a function approximating something in the real world. What is not obvious to you here? Do you expect that there is some case in which my framework fails where the framework of possible worlds doesn't? If so, I'd like to see this example. But I'm also curious where such a belief would even come from, considering that, once again, we simply talk about iterations of a probability experiment instead of possible worlds.
In this post I've described a unified framework that allows reasoning about any type of uncertainty, be it logical or empirical. I would appreciate engagement from people who think that logical uncertainty is still unsolved.
Are you arguing that the distinction between objective and subjective are "very unhelpful," because the state of people's subjective beliefs are technically an objective fact of the world?
It's unhelpful due to an implicit (and in our case somewhat explicit) assumption that "subjective" and "objective" are in opposition to each other: that they are two different magisteria and things are either one or the other.
why don't you argue that all similar categorizations are unhelpful, e.g. map vs. territory
The map-and-territory framework lacks this assumption. Its core insight is that maps can be, and indeed quite often are, embedded in the territory. Of course, if one does not understand this and uses "map and territory" simply as synonyms for "subjective and objective", then it doesn't matter which terms are used and they are equally unhelpful.
This debate seems hampered by a lack of clarity on what “objective” and “subjective” moralities are.
Absolutely.
Coyne gave a sensible definition of “objective” morality as being the stance that something can be discerned to be “morally wrong” through reasoning about facts about the world, rather than by reference to human opinion.
That's a poor definition. It tries to oppose facts about the world to human opinions, even though whether humans hold particular opinions is itself a matter of fact about the world.
The fault here lies with the terms themselves. Dichotomies such as "objective/subjective", "real/non-real" or "stance-independent/stance-dependent" are very unhelpful: artifacts of ancient philosophy, which didn't understand map-territory relations, treated mind and matter as separate magisteria, and was therefore tremendously confused about basically everything, unable to separate the baby from the bathwater.
I don't think we need content on LessWrong that keeps perpetuating this confusion. Therefore, I'm downvoting this post even though I seem to agree with about 80% of it.
Yes, you are correct! Thanks for noticing it.
This is not relevant to my point. After all you also know that typical month is 1-12
No, the point is that I specifically selected a number via an algorithm that has nothing to do with sampling months, and yet your test outputs a positive result anyway. Therefore your test is unreliable.
That's exactly the problem. Essentially, you are playing the 2-4-6 game: you haven't got a negative result yet and are already confident about the rule.
Distance to the equator is in fact cyclical in a very literal sense. Alphabet letters have nothing to do with random sampling of you through time.
It's no more wrong for a person whose parents specifically tried to give birth on this date than for a person who just happened to be born at this time without any planning. And even in this extreme situation your mistake is limited to two orders of magnitude. There is no such guarantee in DA.