It's not further evidence, but it's a good suggestion for a possible place for Hermione to be. It's safe from Quirrell and unexpected. It's also partially hidden by a different charm (assuming QQ can sense Harry's magic)
It is literary evidence, because EY is talking about the glasses.
On a side note, Quirrell is wrong or lying about having already fulfilled the terms of the prophecy. The person that Quirrell marked as his equal, who has powers that Quirrell knows not, is the version of Harry Potter from after Quirrell forked himself.
Hence, presumably, the sense of doom.
(... what is odd, though, is that Quirrell seems to be on the losing end of that conflict of magic.)
Quirrell marked Harry as his equal. I cannot imagine anything that marks someone as your equal more than replacing their mind with your own.
Rationality skills are not something you can complete and move on to the next level. If rationality moves into your system 1, then you are doing it wrong (or maybe doing it REALLY REALLY well).
What app does Less Wrong recommend for to-do lists? I just started using Workflowy (recommended by a LW friend), but I was wondering whether anyone has strong opinions in favor of something else.
P.S. If you sign up for Workflowy here, you get double the space.
EDIT: The above link is my personal invite link; I get told when someone signs up using it, and I get to see their email address. I am not going to do anything with that information, but I feel obligated to give this disclaimer anyway.
Seriously, if you define evidence as "something that sways your beliefs because it is more likely to happen under one hypothesis than the alternative hypothesis," then Bayesianism is the math of evidence, and frequentism (which is used in "Real science") is not. (and does not even really try to be)
This looks seriously misleading to me. While it may be technically correct (because neither frequentism nor "Real science" care much about swaying your beliefs), the math of deciding what's "more likely to happen under one hypothesis than the alternative hypothesis" is a standard part of frequentist statistics where it goes by the name of maximum likelihood.
You might also be interested in the concept of Fisher information.
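For concreteness, here is a minimal sketch of what those terms mean, using a made-up coin-flip (Bernoulli) example rather than anything from this thread: the same likelihood function gives you the frequentist maximum-likelihood estimate, the likelihood ratio that a Bayesian would call evidence, and the Fisher information.

```python
# Minimal sketch, assuming a coin-flip (Bernoulli) model with made-up data.
import math

def log_likelihood(p, k, n):
    """Log-probability of observing k heads in n flips if the coin's bias is p."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

k, n = 7, 10  # illustrative data: 7 heads in 10 flips

# Frequentist maximum likelihood: the bias that makes the observed data most probable.
p_hat = k / n  # for a Bernoulli model the MLE is just the sample frequency
print("MLE of the bias:", p_hat)

# The same likelihoods read as evidence: how much more probable is the data
# under p = 0.7 than under p = 0.5?  (This is the ratio a Bayesian would
# multiply their prior odds by.)
ratio = math.exp(log_likelihood(0.7, k, n) - log_likelihood(0.5, k, n))
print("Likelihood ratio (0.7 vs 0.5):", round(ratio, 2))

# Fisher information for a Bernoulli model with n trials: n / (p * (1 - p)).
# It measures how sharply the likelihood peaks around the MLE.
fisher_info = n / (p_hat * (1 - p_hat))
print("Fisher information at the MLE:", round(fisher_info, 2))
```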
So, what doc on the web would most concisely rid me of exactly my misunderstanding?
I do not know the answer to your question. Here is my best guess after a couple of minutes of trying to answer it.
Short answer: Bayesianism is not about priors, it is about how evidence should change priors.
The Bayesian approach is all about evidence. Bayesian probability theory is the math of evidence. It needs a prior to work, because evidence is all about how much beliefs should change, so you need a prior to change. You could also do a lot of the Bayesian analysis without choosing a prior, and just write it down as "how much your beliefs would change" (but this does not end up with answers that are single numbers); a small numeric sketch of this is below.
Seriously, if you define evidence as "something that sways your beliefs because it is more likely to happen under one hypothesis than the alternative hypothesis," then Bayesianism is the math of evidence, and frequentism (which is used in "Real science") is not. (and does not even really try to be)
Also, most of the people here would agree that if you do not have sufficient evidence, you should still assign a probability, and then be very quick to change it as you get evidence. This last claim might be controversial here, because some people might have alternate hacks where they avoid doing this as a guard against bias, but they would agree that, if they could trust themselves, this is what they would want to do.
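As a concrete (and entirely made-up) illustration of the above: the evidence itself is a likelihood ratio, which exists without any prior; the prior is only needed to turn it into a single posterior probability.

```python
# Minimal sketch of "evidence changes priors" (illustrative numbers only).
# Suppose an observation is 4x as likely under hypothesis H as under not-H.
likelihood_ratio = 4.0

def update(prior, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# The strength of the evidence (the likelihood ratio) does not depend on the
# prior; only the final probability does.
for prior in (0.01, 0.1, 0.5):
    print(f"prior {prior:.2f} -> posterior {update(prior, likelihood_ratio):.3f}")
```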
Why does this post get so many downvotes? The topic isn't really about Charlie Hebdo; I could have used any other example in which emotionally strong counter-theories have arisen.
My guess is that it is because
I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.
and
It sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge, we can't rely on prior reasonings (even though these might involve Bayesian reasoning). Real science works by investigating evidence.
look like a significant misunderstanding of what the Bayesian approach is.
Also, in certain circles it may be mandatory to show support for certain movements. For example, if you were living in the Holy Roman Empire in the 17th century, it was mandatory to show support for the ruler's religion; if you were a professor at a university in the Soviet Union, it was mandatory to show support for communism.
Application of this to Scott Aaronson's statement is left as an exercise to the reader.
I find myself very put off by this comment, and I am not sure I fully understand why it is bothering me (or whether it is good that it is bothering me). My immediate reaction is that it is rude to accuse someone of dishonesty about his own preferences. Instead, I feel you should assume honesty (about statements of personal preference) and try to cultivate a society where honesty is the optimal strategy.
I am not sure I am willing to take on all of the consequences of adopting this strategy, and I am not sure it is really well defined, since there is a grey area between "preferences" and "beliefs" (here I mean beliefs as falsifiable claims/probabilities).
I think this issue is all about identifying clusters in a list of points in a large vector space. In particular, you want a method of identifying these clusters that is independent of linear transformations on the space. (Replacing one question with n^2 copies of it corresponds to multiplying the weight of that question by n; a small numeric check of this is sketched below.) I do not know much about this, but it seems doomed to fail. In particular, if the points are in any kind of general position, then the whole thing looks like a large simplex, and there is no way to tell the difference between points. You will probably always be able to change the "clusters" by adding whatever questions you want.
I think the way past this is to allow each individual to choose their own weighting on the questions signifying how "important" that issue is to them. I think there is an important difference between two people who agree on all issues but prioritize them differently, and it is not a problem that they can agree with a movement to different degrees.
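Here is the small numeric check referred to above, using random illustrative data rather than any real survey: repeating one question n^2 times and multiplying that question's answers by n give the same Euclidean distances, which is why adding redundant questions silently reweights any distance-based clustering.

```python
# Numeric check (illustrative data only): in Euclidean distance, repeating one
# question n^2 times is equivalent to multiplying that question's answers by n.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), rng.normal(size=5)  # two people's answers to 5 questions
n = 3  # illustrative repetition factor

# Version 1: include n^2 copies of question 0.
a_rep = np.concatenate([np.repeat(a[0], n**2), a[1:]])
b_rep = np.concatenate([np.repeat(b[0], n**2), b[1:]])

# Version 2: multiply question 0 by n instead.
a_scaled, b_scaled = a.copy(), b.copy()
a_scaled[0] *= n
b_scaled[0] *= n

print(np.linalg.norm(a_rep - b_rep))        # same distance...
print(np.linalg.norm(a_scaled - b_scaled))  # ...both ways
```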
Here is my tentative submission to FF.net. Please comment.
I decline to help Harry out of the box.
Harry no longer has Harry-values; he has unbreakable-vow-values. He is smart, and he will do whatever he can to "not destroy the world." In the process of maximizing the probability of "not destroying the world," he will likely destroy the world.
If you would allow me, I would like to appeal to Voldemort's rationality and cast Avada Kedavra on Harry before he says or does anything.
I do not think I will be able to stop other people from getting Harry out of the box. I expected people to believe me when I tried to explain why we should not let Harry out of the box. They did not. It was frustrating. You have taught me a valuable lesson about what it is like to be an FAI researcher. Thank you.
EDIT: I have posted it.