Space colonization is part of the transhumanist package of ideas originating with Nikolai Fedorov.
Regarding the ending comments about Godric's Hollow: there was some earlier discussion about the wizarding community's consensus here.
What happened here?
The Veritaserum was brought in then, and Hermione looked for a brief moment like she was about to sob, she was looking at Harry - no, at Professor McGonagall - and Professor McGonagall was mouthing words that Harry couldn't make out from his angle. Then Hermione swallowed three drops of Veritaserum and her face grew slack.
b) communicating something. If it's this, then I strongly suspect that McGonagall is cooperating with future Harry in some rescue plan. She might be communicating a simple message like "Don't worry" or "We'll get you out"...
To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:
There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?
The sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?
*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.
Right. Encryption is a lever; it permits you to use the secrecy of a small piece of data (the key) to secure a larger piece of data (the message). The security isn't in the encryption math. It's in the key storage and exchange mechanism.
*I stole this analogy from something I read recently, probably on HN.
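To make the lever concrete, here's a minimal sketch in Python (mine, not from wherever I stole the analogy), using the third-party `cryptography` package: a 32-byte key secures a plaintext of any size.

```python
# A minimal sketch of the "lever" idea, using the third-party
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # small secret: 32 random bytes, base64-encoded
cipher = Fernet(key)

message = b"..." * 1_000_000         # large secret: megabytes of plaintext
token = cipher.encrypt(message)      # safe to store or transmit in the open

# Anyone holding `token` learns nothing without `key`; anyone holding
# `key` recovers everything.
assert cipher.decrypt(token) == message
```

Everything security-critical happens outside this snippet: generating, storing, and exchanging `key`.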
There's a similar guideline in the software world:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
That was interesting, thanks. Here's another take - specific to the field of language modeling, but addresses the same question of statistical versus formal models: http://norvig.com/chomsky.html
As Kahneman points out in his new book, failures of reasoning are much easier to recognize in others than in ourselves. His book is framed around introducing the language of heuristics and biases to office water-cooler gossip. Practicing on the hardest level (self-analysis) doesn't seem like the best way to grow stronger.
Voted you down. This is deontologist thought in transhumanist wrapping paper.
...Ignoring the debate concerning the merits of eternal paradise itself and the question of Heaven's existence, I would like to question the assumption that every soul is worth preserving for posterity.
Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite p
Make sure you know which "SOPA" you're referring to. This piece of legislation has undergone significant change from the version that sparked popular outrage.
Added after reading some other comments: if you've made cynical predictions about SOPA's progress through Congress or its real-world effects, write them down somewhere now, and don't forget to update your beliefs once the eventual outcome is known.
Regarding "convincing" children of things: this AI koan is relevant.
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky.
“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.
“Why is the net wired randomly?”, asked Minsky.
“I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes.
“Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.
Sure. S results from HH or from TT, so we'll calculate those two branches independently and combine them at the end via the law of total probability: P(p=x|S) = P(p=x|HH) P(HH|S) + P(p=x|TT) P(TT|S). (By the symmetry of the uniform prior, P(HH|S) = P(TT|S) = 1/2.)
We start out with a uniform prior: P(p=x) = 1. After observing one H, by Bayes' rule, P(p=x|H) = P(H|p=x) P(p=x) / P(H). P(H|p=x) is just x. Our prior is 1. P(H) is our prior, multiplied by x, integrated from 0 to 1. That's 1/2. So P(p=x|H) = x · 1 / (1/2) = 2x.
Apply the same process again for the second H. Bayes' rule: P(p=x|HH) = P(H|p=x,H) P(p=x|H) / P(H|H). The first term ...
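For anyone who wants to check the algebra, here's a quick numeric verification on a discretized grid (a sketch I put together; assumes numpy):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)   # grid over the coin's bias p
h = x[1] - x[0]

def update(density, likelihood):
    """One Bayes step: multiply the density by the likelihood, renormalize."""
    posterior = density * likelihood
    area = h * (posterior.sum() - 0.5 * (posterior[0] + posterior[-1]))
    return posterior / area           # trapezoid-rule normalization

prior = np.ones_like(x)                          # uniform prior: P(p=x) = 1
after_h = update(prior, x)                       # posterior after one H
after_hh = update(after_h, x)                    # posterior after HH
after_tt = update(update(prior, 1 - x), 1 - x)   # posterior after TT

print(np.allclose(after_h, 2 * x))               # True: P(p=x|H) = 2x
print(np.allclose(after_hh, 3 * x ** 2))         # True: P(p=x|HH) = 3x^2
# Mixture for S, with P(HH|S) = P(TT|S) = 1/2 by symmetry:
p_s = 0.5 * after_hh + 0.5 * after_tt
print(np.allclose(p_s, 1.5 * (x ** 2 + (1 - x) ** 2)))  # True
```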
There's a Stanford online course next semester called Probabilistic Graphical Models that will cover different ways of representing this sort of problem. I'm enrolled.
This recursive expected value calculation is what I implemented to solve my coinflip question. There's a link to the Python code in that post for anyone who is curious about implementation.
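If you don't want to click through, here's the general shape of such a recursion on a standard toy problem - a hypothetical stand-in, not the coinflip question from that post: flip a fair coin up to n times, and after any flip you may stop and collect the fraction of flips that came up heads.

```python
from functools import lru_cache

def game_value(n):
    """Value of the toy game above, played optimally."""
    @lru_cache(maxsize=None)
    def value(heads, flips):
        stop = heads / flips if flips else 0.0   # payoff for stopping now
        if flips == n:
            return stop                          # out of flips: must stop
        # EV of flipping once more, then continuing to play optimally.
        go = 0.5 * value(heads + 1, flips + 1) + 0.5 * value(heads, flips + 1)
        return max(stop, go)                     # take the better branch
    return value(0, 0)

print(game_value(100))   # approaches ~0.79 as n grows (the Chow-Robbins game)
```

The memoized two-argument recursion is the whole trick: each state's value is the max over "stop now" and the expected value of its successor states.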
Speaking only for myself, I'm in that awkward middle stage - I understand probability well enough to solve toy problems, and to follow explanations of it in real problems, but not enough to be confident in my own probabilistic interpretation of new problem domains. I'm looking forward to this sequence as part of my education and definitely appreciate seeing the formality behind the applications.
Can you elaborate on the calculation for S? I think it should be this, but I'm not confident in my math.
Good post. I like how you explained both the technique and the process that you used to develop it.
I see another potential benefit in estimating VoI. Asking myself, "Does any state of knowledge exist that would make me choose differently here?" bypasses some of my involuntary defenses against "What state of knowledge would make me choose differently here?" The difference is that the former triggers an honest search, while the latter queries for a counterfactual scenario but gives up quickly because one isn't available.
The meta-pattern behind reasoning errors is question substitution: a question with an available answer is substituted for the actual query, and the answer is translated via intensity matching if the units don't match.
In this case, the subjects were primed to recall the cheers of their football team by the context of a political survey. The question they substituted was, "Does this statement resemble any of the professed beliefs of my political affiliation?"
Their answers were never considered empirically. Most questions never are.
Sorry, I don't know what morality is. I thought we were talking about "morality". Taboo your words.
That's a good start. Let's take as given that "morality" refers to an ordered list of values. How do you compare two such lists? Is the greater morality:
Once you decide what actually makes one list better than another, then consider what observable evidence that difference would produce. With a prediction in hand, you can look at the world and gather evidence for or against the hypothesis that "morality" is increasing.
My most important thought was to ensure that all CPU time is used. That means continuing to expand the search space in the time after your move has been submitted but before the next turn's state is received. Branches that are inconsistent with your opponent's move can be pruned once you know it.
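In outline, the loop looks something like this (helper names are made up for illustration; a sketch, not my actual bot code):

```python
def ponder(root, expand, poll_turn):
    """Search during the opponent's turn. `expand(node)` grows the game
    tree by one step; `poll_turn()` returns the opponent's move once the
    new turn's state has arrived, else None. Both are stand-ins for
    whatever your bot actually uses."""
    move = poll_turn()
    while move is None:
        expand(root)             # otherwise-idle CPU keeps growing the tree
        move = poll_turn()
    # Prune branches inconsistent with the observed move; any search done
    # under the surviving child carries straight over into the new turn.
    return root.children[move]
```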
Architecturally, several different levels of planning are necessary: food harvesting and anticipating new food spawns; pathfinding, with good route caching so you don't spend all your CPU here; and combat instances, evaluating a small region of the map with alpha/beta ...
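For the combat piece, a generic alpha/beta skeleton looks like the following (again illustrative, not my bot's actual code):

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha/beta: children(state) lists successor states and
    evaluate(state) scores leaf positions."""
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for s in succ:
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break             # beta cutoff: opponent won't allow this line
        return value
    value = float("inf")
    for s in succ:
        value = min(value, alphabeta(s, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                 # alpha cutoff: we already have a better line
    return value

# Toy usage: a game tree as nested lists, integers as leaf scores.
tree = [[3, 5], [2, [9, 1]]]
kids = lambda s: s if isinstance(s, list) else []
score = lambda s: s               # leaves are their own scores
print(alphabeta(tree, 4, float("-inf"), float("inf"), True, kids, score))  # 3
```

Bounding `depth` and the region fed to `children` is what keeps each combat instance cheap enough to run many of them per turn.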
'Shall be' refers to a change of future state, so it can't be about the way things are now.