What I keep coming back to here is: doesn't the entire point of this post come down to the situations where the parameters in question, the biases of the coins, are not independent? And doesn't that contradict this:

estimate 100 independent unknown parameters

Which leads me to read the latter half of this post as: we can (in principle, perhaps not computably) estimate 1 complex parameter from 100 data sets better than we can estimate 100 independent unknown parameters from their individual data sets. This shouldn't be surprising. I certainly don't find it so.
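To illustrate the extreme version of that reading, here's a minimal sketch (the setup and numbers are my own invention, not from the post): 100 coins whose biases are not independent, in fact identical, where pooling all the data to estimate the one shared parameter beats estimating each coin separately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Degenerate case of dependence: all 100 coins share one unknown bias p.
n_coins, flips_per_coin = 100, 10
p_true = 0.3
heads = rng.binomial(flips_per_coin, p_true, size=n_coins)

# Estimate each coin's bias independently from its own 10 flips.
independent = heads / flips_per_coin

# Estimate the single shared parameter from all 1000 flips at once.
pooled = heads.sum() / (n_coins * flips_per_coin)

print("MSE, 100 independent estimates:", np.mean((independent - p_true) ** 2))
print("MSE, 1 pooled estimate:        ", (pooled - p_true) ** 2)
```

The pooled estimate's error shrinks with all 1000 flips behind it, while each independent estimate only ever sees 10.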

The first half just points out that, in the independent case of this particular example, the Bayesian and frequentist approaches perform equivalently under relatively similar assumptions. But cousin_it made a general claim about the frequentist approach, so this isn't worth much weight on its own.

This post is a decent first approximation. But it is important to remember that even successful communication almost always occurs on more than one of these levels at once.

Personally, I find it useful to think of communication as having spontaneous layers of information, which may include things like asserting social context, acquiring knowledge, reinforcing beliefs, practicing skills, indicating and detecting levels of sexual interest, and even play. By spontaneous layers, I mean that we each contribute to the scope of a conversation, and those contributions then come to be discerned as patterns (whether intended or not).

Then iterate this process a few times, with me attempting to perceive and affect your patterns and you attempting to perceive and affect mine. Add some habitual or built-in (it's extremely hard to tell the difference) models in the mind to start from, and it seems simple (to me) how something as complex and variable as human communication can arise.

In retrospect, spelling words out loud, something I do with moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill for the task, as I tend to error-correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sensory modality to the spelling process, except in terms of feedback.

As for my language skills, they are at least adequate. However, I have devoted special attention to improving them, so I can't claim to be an unbiased case.

When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.

This is something of an oversimplification. Categories are one possible first step, but eventually you will need more nuance than that. I suggest forming estimates by treating the communication itself as a sequence of experiments, and being very strict about not ruling things out, especially if you have not managed to beat down your typical mind fallacy.

And that's just for a simple dialogue. Communication in a public forum, with other audiences and even other participants, is even more complex.

Arguably, given how seminal the Sequences are treated as being, why are the "newbies" the only ones who should be (re)reading them?

The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge.

Especially given that these are likely loose lower bounds, and don't account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.

What I find intriguing about this result is that it is essentially one of the few I've seen that gives a limit description of consciousness: on one hand you have a rating of the complexity of your "conscious" cognitive system, and on the other you have world adherence based on the population of your assertions. Consciousness is maintained if, as you increase complexity, you maintain the variety of the assertion population.

It is possible, however, that the convergence rates for humans and prospective GAI will simply be different. Which makes a certain amount of sense: ideal consciousness in this model is unachievable, and approaching it faster is more costly, so there are good evolutionary reasons for our brains to be as meagerly conscious as possible, even to fake consciousness when the resources would not otherwise be missed.

This should not be underestimated as an issue. Status, as we use the term here and at Overcoming Bias, tends to be simplified into something not unlike a monetary model.

It is possible to try to treat things like status reductively, but in the current discussion it will hopefully suffice to characterize it with more nuance than "social wealth".

If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.

Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and assess each cluster's degree of empirical correctness. Presupposing a particular structure will bias which discoveries you can make.
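As a concrete sketch (entirely made-up data; the belief coding and the planted cluster count are assumptions for illustration), the structure-finding step might look like plain k-means over belief vectors, with the number of clusters estimated rather than fixed at one in advance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each row is one contrarian's stance on 5 beliefs,
# coded in [0, 1].  Two latent clusters are planted for illustration.
group_a = rng.normal(0.2, 0.1, size=(50, 5))
group_b = rng.normal(0.8, 0.1, size=(50, 5))
beliefs = np.clip(np.vstack([group_a, group_b]), 0, 1)

def kmeans(x, k, steps=20):
    """Plain k-means; makes no assumption about which cluster is 'correct'."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(steps):
        labels = ((x[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == i].mean(0) if (labels == i).any()
                            else centers[i] for i in range(k)])
    return labels, centers

labels, centers = kmeans(beliefs, k=2)
print("cluster centers:\n", centers.round(2))
```

Each recovered cluster can then be scored against the evidence separately, instead of asking which single contrarian cluster is the empirically correct one.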

there's really no reason those numbers should be too much higher than they are for a random inhabitant of the city

Actually, simply being in the local social network of the victim should increase the probability of involvement by a significant amount. The size of the increase would of course depend on population, murder rates, and so on, and likely also on estimates from criminology models for the crime in question.
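A toy version of that update (all numbers invented for illustration, including the fraction of murders committed by someone the victim knew):

```python
city_population = 1_000_000
social_network_size = 150       # rough Dunbar-scale circle around the victim

# Hypothetical criminology-style assumption: half of murders are
# committed by someone in the victim's social network.
p_killer_in_network = 0.5

p_random_resident = 1 / city_population
p_network_member = p_killer_in_network / social_network_size

print(p_network_member / p_random_resident)   # ~3300x the base rate
```

The exact factor depends entirely on the assumed inputs, but under almost any plausible values it is far from "no reason those numbers should be too much higher."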

Proof of how dangerous this sort of list can be.

I entirely forgot about:

  • act effectively

After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?
