All of Maximum_Skull's Comments + Replies

I think your list of conditions is very restrictive, to the point where it's really difficult to find something matching it.
Most (all?) modern difficult strategy games rely on some version of "game knowledge" as part of their difficulty, expecting you to experiment with different approaches to find out what works best -- this is a core part of the game loop, something specifically designed into the game to make it more fun. It is baked into the design on a fundamental level and is extremely difficult to separate out.
Combine that with the one-shot natur... (read more)

And of course there is a correlation between math knowledge and success at tests. The problem is that at the extremes the tails come apart. Kids who are better at multiplication will be better at the tests. But the kids who score maximum at the tests and the kids who win math olympiads are probably two distinct groups. I am saying this as a former kid who did well at math olympiads, but often got B's in high school because of some stupid numeric mistake.


This is culture-dependent; e.g. in (some parts of) Russian grading culture, making a numeric mistake migh... (read more)

3Viliam
The advantage of the "a mistake is a mistake" approach is that it makes it much cheaper to evaluate. Imagine having to grade tests from 100 students -- how much time would it take if you only check the results, and how much if you also check the process?

It also depends on how final the teacher's verdict is, or how much the students are allowed to argue. Imagine that out of those 100 students, at least twenty want to debate you individually on whether their process was only 60% correct or 80% correct (because a partial point here and a partial point there might together make a difference in their final grade). It would drive me crazy. Also, it would be unfair to the less "litigious" students.

A mistake of the same magnitude can have a different overall impact, depending on where you make it. If at the last step of a long computation you accidentally do 20+50 instead of 20-50, but everything else was correct, it is obvious that you would have solved the problem correctly. But if you made exactly the same mistake at the beginning of the problem... it could have sent you on a completely different track. Like, maybe you needed to calculate a square root of that, and then you solve a task with real numbers, while all your classmates were solving a task with complex numbers. If the point of the test was to verify your knowledge of complex numbers, then it completely failed its purpose. And yet, it was the same mistake.

I think my preferred option (assuming a "veil of ignorance" where I can either be the student or the teacher) would be: a mistake is a mistake, but you can take the test again (with slightly different questions).

* At math olympiads, both your process and your answer matter. The process matters in the sense that it has to be mathematically sound (but it can be completely different from what other people would use). If the answer is wrong because of some stupid mistake but the process is generally sound, you get partial points; so I think a t
4Dirichlet-to-Neumann
There's a big gap between "you have to complete the task in exactly this way" and "a mistake is a mistake, only the end result counts". I routinely give full marks if the student made a small computation mistake but the reasoning is correct. My colleagues tend to be less lenient but follow the same principle. I always give full marks for correct reasoning, even if it is not the method seen in class (but I quite insistently warn my students that they should not complain if they make mistakes using a different method).

The post I commented on is about a justification of induction (unless I have committed some egregious misreading, which is a surprisingly common error mode of mine; feel free to correct me on this part). It seemed natural to me to respond by linking to the strongest justification I know -- although, again, I might have misread myself into understanding this differently from what was written.

[This is basically the extent to which I mean that the question is resolved; I am conceding on ~everything else.]

I find the questions of "how and when can you apply induction" vastly more interesting than the "why it works" question. I am more of a "this is a weird trick which works sometimes, how and when does it apply?" kind of guy.

Bayesianism is probably the strongest argument for the "it works" part I can provide: here are the rules you can use to predict future events. Easily falsifiable by applying the rules, making a prediction and observing the outcomes. All wrapped up in an elegant axiomatic framework.

[The answer is probabilistic because the nature of the problem is (unless you possess complete information about the universe, which coincidentally makes induction redundant).]

4TAG
Maybe you are personally not interested in the philosophical aspects, ...but then why say that Bayes fully answers them, when it doesn't and you don't care anyway? Says who? For a long time, knowledge was associated with certainty.

I might have an unusual preference here, but I find the "why" question uninteresting.

It's fundamentally non-exploitable, in the sense that I do not see any advantage to be gained from knowing the answer (there is no straightforward empirical way of finding out which of the variants I should pay attention to).

2dr_s
Oh, I mean, I agree. I'm not asking "why" really. I think in the end "I will assume empiricism works because if it doesn't then the fuck am I gonna do about it" is as respectable a reason to just shrug off the induction problem as they come. It is in fact the reason why I get so annoyed when certain philosophers faff about how ignorant scientists are for not asking the questions in the first place. We asked the questions, we found as useful an answer as you can hope for, now we're asking more interesting questions. Thinking harder won't make answers to unsolvable questions pop out of nowhere, and in practice, every human being lives according to an implicit belief in empiricism anyway. You couldn't do anything if you couldn't rely on some basic constant functionality of the universe. So there's only people who accept this condition and move on and people who believe they can somehow think it away and have failed one way or another for the last 2500 years at least. At some point, you gotta admit you likely won't do much better than the previous fellows.

Bayesian probability theory fully answers this question from a philosophical point of view, and answers much of it from a practical point of view (doing calculations on probability distributions is computationally intensive and can get intractable pretty quickly, so it's not a magic bullet in practice).

It extends logic to be able to uniformly handle both probabilistic statements and statements made with complete certainty. I recommend Jaynes's "Probability Theory: The Logic of Science" as a good guide to the subject in case you are interested.

6TAG
There's more than one problem of induction. Bayesian theory doesn't tell you anything about the ontological question, what makes the future resemble the past, and it only answers the epistemological question probabilistically.
3dr_s
It just pushes the question further. The essential issue with inference is "why should the universe be so nicely well-behaved and have regular properties?". Bayesian probability theory assumes it makes sense to e.g. assign a fixed probability to the belief that swans are white based on a certain number of swans that we've seen being white, which already bakes in assumptions like e.g. that the swans don't suddenly change colour, or that there is a finite number of them and you're sampling them in a reasonably random manner. Basically, "the universe does not fuck with us". If the universe did fuck with us, empirical inquiry would be a hopeless endeavour. And you can't really prove for sure that it doesn't. The strongest argument in favour of the universe really being so nice IMO is an anthropic/evolutionary one. Intelligence is the ability to pattern-match and perform inference. This ability only confers a survival advantage in a world that is reasonably well-behaved (e.g. constant rules in space and time). Hence the existence of intelligent beings at all in a world is in itself an update towards that world having sane rules. If the rules did not exist or were too chaotic to be understood and exploited, intelligence would only be a burden.

Oh, so human diseases in the form of bacteria/viruses! And humans working on gain-of-function research.

6lc
I actually have no problem calling either of those systems intelligent, as long as the bacteria/viruses are evolving on a short enough timescale. I guess you could call them slow.

Reminds me of a discussion I've had recently about whether humans solve complex systems of [mechanical] differential equations while moving. The counter-argument was "do you think that a mercury thermometer solves differential equations [while 'calculating' the temperature]?"

This one is a classic, so I can just copy-paste the solution from Google. The more interesting point is that this is one of those cases where math doesn't correspond to reality.

In the spirit of "trying to offer concrete models and predictions" I propose a challenge: write a bot which would consistently beat my ROB implementation in the long run, by enough of a margin that it would show up in numerical experiments. I need some time to work on implementing it (and might disappear for a while, in which case consider this one forfeited by me).

One of the rules I propose tha... (read more)

9Oscar_Cunningham
A simple way to do this is for ROB to output the pair of integers {n, n+1} with probability (1/K)((K-1)/(K+1))^|n|, where K is some large number. Then even if you know ROB's strategy, the best probability you have of winning is 1/2 + 1/(2K). If you sample an event N times, the variance in your estimate of its probability is about 1/N. So if we pick K >> √N then our probability of success will be statistically indistinguishable from 1/2. The only difficulty is implementing code to sample from a geometric distribution with a parameter so close to 1.
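A sketch of how ROB could do that sampling in practice, assuming the distribution normalizes with a 1/K factor: the difference of two iid geometric variables with ratio r = (K-1)/(K+1) has exactly the two-sided distribution (1/K)·r^|n|, and `log1p` keeps the computation stable when r is very close to 1. (`sample_rob_pair` is a hypothetical name, not from the comment.)

```python
import math
import random

def sample_rob_pair(K, rng=random):
    """Sample ROB's pair {n, n+1}, where integer n is drawn with
    probability (1/K) * ((K-1)/(K+1))**abs(n).

    Uses the fact that the difference of two iid geometric variables
    with ratio r = (K-1)/(K+1) has exactly this two-sided distribution.
    """
    log_r = math.log1p(-2.0 / (K + 1))  # log((K-1)/(K+1)), stable for large K

    def geometric():
        # Inverse-CDF sampling: P(G >= k) = r**k for k = 0, 1, 2, ...
        u = 1.0 - rng.random()  # uniform on (0, 1]
        return int(math.log(u) / log_r)

    n = geometric() - geometric()
    return (n, n + 1)
```

For K = 3 this gives P(n = 0) = (1-r)/(1+r) = 1/3, matching the claimed distribution; for the K >> √N regime in the comment only the `log1p` trick changes, not the logic.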

It does work for negative bases. Representation of a number in any base is in essence a sum of base powers multiplied by coefficients. The geometric series just has all coefficients equal to 1 after the radix point (and a 1 before it, if we start addition from the 0th power).
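To illustrate the "sum of base powers times coefficients" point, here is a minimal sketch of digit expansion in a negative base (base -2 chosen arbitrarily; `to_negabase` and `from_digits` are hypothetical helper names, not anything from the thread):

```python
def to_negabase(n, base=-2):
    """Digits of integer n in a negative base, least significant first.
    Every integer has a unique expansion with digits in {0, ..., |base|-1}."""
    digits = []
    while n != 0:
        n, rem = divmod(n, base)
        if rem < 0:
            rem -= base  # shift the remainder into {0, ..., |base|-1}
            n += 1
        digits.append(rem)
    return digits or [0]

def from_digits(digits, base=-2):
    """Reconstruct the number as a sum of base powers times coefficients."""
    return sum(d * base ** i for i, d in enumerate(digits))
```

Note that in base -2 every integer, negative ones included, gets a representation with digits 0 and 1 and no sign, e.g. 6 = 1·16 + 1·(-8) + 1·(-2).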

Oh, thanks, I did not think about that! Now everything makes much more sense.

Those probabilities are multiplied by s, which makes things more complicated.
If I try running it with s being the real numbers (which is probably the most popular choice for utility measurement), the proof breaks down. If I, for example, allow negative utilities, I can rearrange the series from a divergent one into a convergent one and vice versa, trivially leading to a contradiction just from the fact that I am allowed to do weird things with infinite series, and not because of proposed axioms being contradictory.
EDIT: concisely, your axioms do n... (read more)
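The rearrangement pathology is easy to see numerically. A minimal sketch using the conditionally convergent alternating harmonic series (a standard textbook example, chosen for illustration, not anything from the axioms under discussion):

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..., which converges to ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """The same terms reordered in (one positive, two negative) blocks:
    1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...  This converges to (ln 2)/2."""
    return sum(
        1 / (2 * j - 1) - 1 / (4 * j - 2) - 1 / (4 * j)
        for j in range(1, n_blocks + 1)
    )
```

Same terms, different order, different limit; absolute convergence is exactly the condition that rules this behaviour out.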

8LGS
The rearrangement property you're rejecting is basically what Paul is calling the "rules of probability" that he is considering rejecting. If you have a probability distribution over infinitely (but countably) many probability distributions, each of which is of finite support, then it is in fact legal to "expand out" the probabilities to get one distribution over the underlying (countably infinite) domain.  This is standard in probability theory, and it implies the rearrangement property that bothers you.

The correct condition for real numbers would be absolute convergence (otherwise the sum after rearrangement might become different and/or infinite) but you are right: the series rearrangement is definitely illegal here.

4paulfchristiano
But in the post I'm rearranging a series of probabilities, 1/2, 1/4, …, which is very legal. The fact that you can't rearrange infinite sums is an intuitive reason to reject Weak Dominance, and then the question is how you feel about that.

It actually would, as long as you reject a candidate password with probability proportional to its relative frequency. "password" in the above example would be almost certainly rejected, as it's wildly more common than one of those 1000-character passwords.
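A minimal sketch of that rejection step (`sample`, `prob`, and `p_min` are hypothetical names: `prob(w)` is the candidate distribution's probability of `w`, and `p_min` lower-bounds it over the support you want to keep):

```python
import random

def uniformize(sample, prob, p_min, rng=random):
    """Rejection sampling: draw candidates from sample() and accept
    candidate w with probability p_min / prob(w).  Accepted outputs are
    uniform over the support, at the cost of extra draws when the
    candidate distribution is very uneven."""
    while True:
        w = sample()
        if rng.random() < p_min / prob(w):
            return w
```

The expected number of draws per accepted password is 1 / (|support| · p_min), which is James Payor's point below: with an extremely skewed distribution, p_min is tiny and the waiting time blows up.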

3James Payor
Well, the counterexample I have in mind is: flip a coin until it comes up tails; your password is the number of heads you got in a row. While you could rejection-sample this to e.g. a uniform distribution on the numbers [0, k), this would take 2^k samples, and isn't very feasible. (As you need 2^(2^n) work to get n bits of strength!) I do agree that your trick works in all reasonable cases, whenever it's not hard to reach k different possible passwords.

Stamp collecting (e.g. "history" and "English literature") does not count.

Interesting to see your perspective change from this post and its comments, which suggested that history is a useful source of world models. Or am I misinterpreting past/current you?

3lsusr
My perspective hasn't changed. I value history. I'm currently reading The Gunpowder Age: China, Military Innovation and the Rise of the West in World History (2016) by Tonio Andrade. I just feel history is more of a "field of knowledge" than a "skill" with immediate practical applications. This post is about immediate practical applications.

You cannot falsify mathematics by experiment (except in the subjective Bayesian sense).

Actually, that's technically false. The statements mathematical axioms make about reality are bizarre, but they exist and are actually falsifiable.

One of the fundamental properties we want from our axiomatic systems is consistency — the fact that it does not lead to a logical contradiction. We would certainly reject our current axiomatic foundations in case we found them inconsistent.

Turns out it's possible to write a program which would halt if and only if ZFC is consis... (read more)

3lsusr
This is a good point. Mathematical axioms must be consistent.

I would suggest E.T. Jaynes' excellent Probability Theory: The Logic of Science. While this is a book about Bayesian probability theory and its applications, it contains a great discussion of entropy, including, e.g., why entropy "works" in thermodynamics.

I would probably move the "spoilers ahead" section before the "Japanese history" one. Unsure if it's possible to make this non-spoilery somehow, but the history section is written as if to make the ending twist obvious.

2lsusr
Done.

One way to keep bots out is to validate real-world identities.


Currently, the actual use case is more akin to an assistant for human writers, so validating the identity would not do much good. Additionally, if the demand for real-life-tethered online identities ever gets high, a market for people selling theirs would appear. I have a friend who found a Chinese passport online because a (Chinese) online game required one as part of its registration data.
Use of social media as a marketing platform for small, tightly-knit communities is probably the way to largely mitigate this problem.

3lsusr
If this remains the case then the application of ALMs is a difference of degree rather than kind, and there is little to worry about.

two her brain

This gave me the chills. I have never thought about digital brain modification safety before, although now the idea seems obvious. I wonder what else I am missing.

6lsusr
I have fixed the typo; "two her brain" is now "to her brain". There's all sorts of stuff that can go wrong when you're modifying human brains.
3Vanilla_cabs
You mean it's not a typo? I still don't get the hidden meaning.

The meritocratic part is that the best are significantly more likely to rise to the top; the real world is best thought of as a stochastic place, full of imperfect information and surprises.
Being the best at content creation is not the same as being the best at YouTube: size of one's target demographic matters, the ability to self-promote matters, ability to network matters, ad-friendliness of content matters... Akin to evolution, the system does not select the *best* creators in the conventional sense of creating the best videos, being the best at writing and so o... (read more)

3Adam Zerner
That makes sense, I agree.

I agree, but my reasoning for it is different.
Given that the simulacra levels framework is fake, I care mostly about the way it pumps my intuition. For me it has more impact with fewer levels. Grouping everything in levels 4+ as a single thing does speed processing up, and doesn't seem to meaningfully change my conclusions.
There likely exists some context where those extra levels are useful and offer new insights, but I've not seen it yet.

Excellent as always!
 

Throughout the entire process it is implied that words if you specify a value and if you specify a criterion...

 

By training children in the traditional of adversarial competitive rhetoric...

2lsusr
Thanks. Fixed.

"Can I get you a coffee?" a young quant named said.

It's really good; I'm waiting for the next part, keep it up!

3lsusr
Fixed typo. Also thanks.

There won't be any more harm done to Oliver by spreading the story, so, at least from a utilitarian-ish point of view, the case is clear.

Right, but that’s why it’s interesting.

From a utilitarian perspective, is Oliver’s outing morally redeemed by using him as an example in journalistic ethics classes? Or would it be, if it helped reduce the incidence of future privacy invasions?

If so, then Harvey Milk is a hero in this story. He not only made Oliver into a gay hero, probably saving more than one life in the long run by advancing the cause of gay rights, but he also gave us a great example of the consequences of privacy invasion that we can use in ethics classes. A two-fer!

That doesn’t feel ... (read more)