In my experience teachers tend to only give examples of typical members of a category. I wish they'd also give examples along the category border, both positive and negative. Something like: "this seems to have nothing to do with quadratic equations, but it actually does, and this is why" and "this problem looks like it can be solved using quadratic equations, but this is misleading because XYZ". This is obvious in subjects like geography (when you want to describe where China is, don't give a bunch of points around Beijing as examples, but instead d...
Thank you very much for posting this! I've been thinking about this topic for a while now and feel like it is criminally overlooked. There are so many resources on how to teach other people effectively, but virtually none on how to learn things effectively from other people (not just from textbooks). Yet we are often surrounded by people who know something that we currently don't and who might not know much about teaching or how to explain things well. Knowing what questions to ask and how to ask them turns these people into great teachers, while you reap the benefits! This feels like a superpower.
While I agree that the algorithm might output 5, I don't share the intuition that it's something that wasn't 'supposed' to happen, so I'm not sure what problem it was meant to demonstrate. I thought of a few ways to interpret it, but I'm not sure which one, if any, was the intended interpretation:
a) The algorithm is defined to compute argmax, but it doesn't output argmax because of false antecedents.
- but I would say that it's not actually defined to compute argmax, and therefore the fact that it doesn't output argmax is not a problem.
b) Regardless of th...
I don't quite follow why the 5/10 example presents a problem.
Conditionals with false antecedents seem nonsensical from the perspective of natural language, but why is this a problem for the formal agent? Since the algorithm as presented doesn't actually try to maximize utility, everything seems to be alright. In particular, there are 4 valid assignments.
The algorithm doesn't try to select an assignment with the largest U, but ...
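For concreteness, here is a tiny runnable rendering of the vacuous-truth point (my own toy version, not the formalism from the post): a material conditional with a false antecedent holds no matter what its consequent says.

```python
# Material conditional: "p -> q" is false only when p holds and q fails.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Toy 5/10 universe: the utility just equals the action taken.
for a in (5, 10):
    u = a
    print(a,
          implies(a == 5, u == 5),   # "A() = 5  -> U() = 5"
          implies(a == 10, u == 0))  # "A() = 10 -> U() = 0" (looks spurious)
# When a == 5, the second conditional comes out True purely because its
# antecedent is false - the natural-language weirdness, with no contradiction.
```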
Could someone explain why this doesn't degenerate into an entirely circular concept when we postulate a stronger compiler, or why it doesn't become entirely dependent on the choice of the compiler?
Now we have a set of sequences that we'd like to encode: S ...
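(For what it's worth, the standard non-circularity answer I've seen is the invariance theorem: for any two universal languages/compilers $U$ and $V$, the description lengths they assign differ by at most an additive constant, namely the length of an interpreter for one written in the other, so the choice of compiler matters only up to $O(1)$:

$$K_U(x) \le K_V(x) + c_{UV} \quad \text{for all } x,$$

where $c_{UV}$ depends on $U$ and $V$ but not on the sequence $x$ being encoded.)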
I thought of a slightly different exception for the use of "rational": when we talk about conclusions that someone else would draw from their experiences, which are different from ours. "It's rational for Truman Burbank to believe that he has a normal life."
Or if I had an extraordinary experience that I couldn't communicate to you with enough fidelity, then it might be rational for you not to believe me. Conversely, if you had the experience and tried to tell me, I might answer with "Based only on the information that I received from you, which is p...
Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother. I believed him.
Curiously enough, there is a recording of an interview with him where he argues almost exactly the opposite, namely that he can't explain something in sufficient detail to laypeople because of the long inferential distance.
It seems that the mistake people commit is imagining that the second scenario is a choice between 0.34*24000 = 8160 and 0.33*27000 = 8910. Yes, if that were the case, then you could imagine a utility function that is approximately linear in the region 8160 to 8910, but sufficiently concave in the region 24000 to 27000, such that the difference between 8160 and 8910 feels greater than the difference between 24000 and 27000... But that's not the actual scenario with which we are presented. We don't actually get to see 8160 or 8910. The slopes of the ...
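To make the distinction concrete, here's a quick sketch (sqrt is just an arbitrary concave utility function I picked for illustration):

```python
from math import sqrt

# The second scenario: 34% chance of 24000 vs 33% chance of 27000.
p1, x1 = 0.34, 24000
p2, x2 = 0.33, 27000

print(p1 * x1, p2 * x2)              # 8160.0 8910.0 -- the expected values
# The mistaken comparison: utility applied to the expected values...
print(sqrt(p1 * x1), sqrt(p2 * x2))  # u(8160) vs u(8910)
# ...versus what expected utility actually compares:
print(p1 * sqrt(x1), p2 * sqrt(x2))  # 0.34*u(24000) vs 0.33*u(27000)
```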
But is Occam's Razor really circular? The hypothesis "there is no pattern" is strictly simpler than "there is this particular pattern", for any value of 'this particular'. Occam's Razor may expect simplicity in the world, but it is not the simplest strategy itself.
Edit: I'm talking about the hypothesis itself, as a sequence of symbols in some logic, not about what the hypothesis asserts. What it asserts is max entropy - the most complex world.
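In minimum-description-length terms (my gloss): for binary strings of length $n$,

$$K(\text{uniform over } \{0,1\}^n) = O(\log n), \qquad H(\text{uniform}) = n \text{ bits},$$

i.e. a few symbols suffice to state the "no pattern" hypothesis, even though the world it predicts has maximal entropy, whereas "there is this particular pattern $s$" costs roughly $K(s)$ to state.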
Originally I thought of an exception where the thing that we don't know is a constructive question, e.g. given more or less complete knowledge of material science, how do we construct a decent bridge? But that's an obvious limitation; no self-proclaimed reductionist would actually try to apply reductionism in such a situation.
It seems to me that you're describing the reverse scenario: suppose we have an already constructed object and want to figure out how it works - can reductionism still be used? I'd still say yes.
Take an airplane, for example. Knowing relevant...
Something felt off about this example and I think I can put my finger on it now.
My model of the world gives the event with the blue tentacle probability ~0. So when you ask me to imagine it, and I do so, what it feels like to me is that I'm coming up with a new model to explain it, one which gives a higher probability to that outcome than my current model does. This seems to be the root of the apparent contradiction: it appears that I'm violating the invariant. But I don't think that's what is actually happening. Consider this fictional exchange:
...EY: Imagi
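(The way I'd cash out "not actually violating the invariant", as a toy sketch of my own: the tentacle-explaining model was implicitly in my hypothesis mixture all along, just with a tiny prior.)

```python
# P(event) = sum over models m of P(m) * P(event | m)
priors = {"everyday_model": 1 - 1e-9, "tentacle_model": 1e-9}
likelihoods = {"everyday_model": 1e-12, "tentacle_model": 0.5}  # P(blue tentacle | m)

p_event = sum(priors[m] * likelihoods[m] for m in priors)
print(p_event)  # still ~0: merely imagining tentacle_model changed nothing

# Only after actually observing the event does the posterior shift toward
# the model that explains it - which is updating, not a violation:
posterior = {m: priors[m] * likelihoods[m] / p_event for m in priors}
print(posterior)
```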
Care to elaborate? Also, that's not really an exception but a boundary - it's exactly what you would expect if there are finitely many layers of composition, i.e. the world is not like an infinite fractal.
Of course it doesn't work for problems where the objects in question are already fundamental and cannot be reduced any further. But that's what I meant in the original post - reductionist frameworks would fail to produce any new insights if we were already at the fundamental level.
If reductionism were wrong, then I would expect reductionist approaches to be ineffective. Every attempt at gaining knowledge using a reductionist framework would fail to discover anything new, except by accident on very rare occasions. Or experiments would fail to replicate because the conservation of energy was routinely violated in unpredictable ways.
Conservation laws or not, you ought to believe in the existence of the photon because you continue having the evidence of its existence - it's your memory of having fired the photon! Your memory is entangled with the state of the universe, not perfectly, but still, it's Bayesian evidence. And if your memory got erased, then indeed, you'd better stop believing that the photon exists.
That seems unlikely. There is already a certain difficulty in showing that the illusion of free will is an illusion. "It seems like you have free will, but actually, it doesn't seem." - The seeming is self-evident, so what does it mean to say that something doesn't actually seem if it feels like it seems? As far as I understand it, the claim is not that it doesn't really seem so, but that you're mistaken about it and think that it actually seems so, and then mindfulness meditation clears up that mistake for you and you stop thinking that it seems that you have free will. I...
As Sam Harris points out, the illusion of free will is itself an illusion. It doesn't actually feel like you have free will if you look closely enough. So then why are we mistaken about things when we don't examine them closely enough? That seems like too open-ended a question.
Update: a) is just wrong and b) is right, but unsatisfying because it doesn't address the underlying intuition which says that the stopping criterion ought to matter. I'm very glad that I decided to investigate this issue in full detail and run my own simulations instead of just accepting some general principle from either side.
MacKay presents it as a conflict between frequentism and Bayesianism and argues why frequentism is wrong. But I started out with a Bayesian model and still felt that motivated stopping would have some influence. I'm going ...
Fixing my predictions now, before going to investigate this issue further (I have MacKay's book within arm's reach and would also like to run some Monte-Carlo simulations to check the results; going to post the resolution later):
a) It seems that we ought to treat the results differently, because the second researcher in effect admits to p-hacking his results. b) But on the other hand, what if we modify the scenario slightly: suppose we get the results from both researchers 1 patient at a time. Surely we ought to update the priors by the same amo...
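(For anyone who wants to play along, this is the kind of Monte-Carlo check I have in mind - my own toy setup with Bernoulli outcomes and a uniform Beta prior, not MacKay's exact example:)

```python
import random

random.seed(0)
p_true = 0.7
outcomes = []
# Researcher 2, "optional stopping": sample until the data look good
# (>= 60% successes after at least 20 patients), or give up at 100.
while True:
    outcomes.append(random.random() < p_true)
    n, k = len(outcomes), sum(outcomes)
    if (n >= 20 and k / n >= 0.6) or n >= 100:
        break

# Under a uniform Beta(1, 1) prior the posterior depends only on (k, n):
print(n, k, "posterior: Beta(%d, %d)" % (1 + k, 1 + n - k))
# Researcher 1, who fixed n in advance and happened to see the same k
# successes, ends up with the exact same posterior: the stopping rule
# only scales the likelihood by a constant.
```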
Fascinating subject indeed!
I tried to reason through the riddles before reading the rest, and I made the same mistake as the jester did. It is really obvious in hindsight; I had thought about this concept earlier and really thought I had understood it. Did not expect to make this mistake at all, damn.
I even invented some examples of my own, like: in the programming language Python, a statement like print("Hello, World!") is an instruction to print "Hello, World!" on the screen, but "print(\"Hello, World!\")" is merely a string that represents the former statement; it's completely inert. (i...
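Here is that example as a runnable snippet (using exec to "un-quote" the string is my own addition for illustration):

```python
print("Hello, World!")          # an instruction: actually prints the text
s = "print(\"Hello, World!\")"  # a string that merely *mentions* the instruction
print(s)                        # prints the instruction's source code, inert
exec(s)                         # only an interpreter turns mention back into use
```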
(Of course I don't know how the authors actually came up with the hypothesis and I could be wrong, and the conclusions seem very plausible anyway, but..) The study seems to be susceptible to stopping bias.
If the correlation was very strong right away, they could've said "Parental grief directly correlates with reproductive potential, Q.E.D!"
It wasn't, but they found a group resembling early hunter-gatherers, leading to the conclusion "Parental grief directly correlates with reproductive potential from back then, Q.E.D!"
If this didn't turn out either, and th...
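To illustrate the worry with a toy simulation of my own (all numbers arbitrary): if every failed fit licenses a new subgroup or re-framing, pure noise eventually "confirms" the hypothesis.

```python
import random
random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# "grief" vs "reproductive potential" in ever-new subgroups - here both
# are independent noise, so any correlation found is spurious.
tries = 0
while True:
    tries += 1
    grief = [random.gauss(0, 1) for _ in range(30)]
    potential = [random.gauss(0, 1) for _ in range(30)]
    if abs(corr(grief, potential)) > 0.4:  # "strong enough, Q.E.D!"
        break
print(tries)  # noise obliges after a modest number of re-framings
```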
I'm not sure whether the explanation at the end was right, but this is a very powerful technique nonetheless. I've observed a similar problem many times but couldn't quite put my finger on it.
Arguing against consistency itself: "I was trying to be consistent when I was younger, but now I'm wiser than that."
This feels very important.
Suppose that something *was* deleted. What was it? What am I failing to notice?
Maybe learning to 'regenerate' the knowledge that I currently possess is going to help me 'regenerate' the knowledge that 'was deleted'.