Ordinary probability theory and expected utility are sufficient to handle this puzzle. You just have to calculate the expected utility of each strategy before choosing one. In this puzzle a strategy is more complicated than simply putting some number of coins in the machine: it requires deciding what to do after each coin either succeeds or fails to release two coins.
In other words, a strategy is a choice of what you'll do at each point in the game tree - just like a strategy in chess.
We don't expect to do well at chess if we deci...
Right: a game where you repeatedly put coins in a machine and decide whether or not to put in another based on what occurred is not a single 'event', so you can't sum up your information about it in just one probability.
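To make this concrete, here's a minimal sketch of that calculation. All the specific numbers are my assumptions, not part of the original puzzle: a two-point prior (the machine pays out with probability 0.9 or 0.1, equally likely), a cost of 1 coin per play, a payout of 2 coins on success, and a horizon of at most 5 plays. Backward induction over the game tree then values the optimal strategy:

```python
from functools import lru_cache

PS = (0.9, 0.1)      # hypothetical payout probabilities
PRIOR = (0.5, 0.5)   # prior over the two hypotheses
MAX_PLAYS = 5        # horizon: at most 5 plays

@lru_cache(maxsize=None)
def value(wins, losses, plays_left):
    """Expected utility (in coins) of playing optimally from this node."""
    if plays_left == 0:
        return 0.0
    # Posterior over the two hypotheses, given the record so far.
    likes = [pr * p**wins * (1 - p)**losses for p, pr in zip(PS, PRIOR)]
    total = sum(likes)
    p_win = sum((l / total) * p for l, p in zip(likes, PS))
    # Utility of inserting one more coin, then continuing optimally:
    # pay 1 coin; with probability p_win get 2 back and update on a win,
    # otherwise update on a loss.
    play = (-1
            + p_win * (2 + value(wins + 1, losses, plays_left - 1))
            + (1 - p_win) * value(wins, losses + 1, plays_left - 1))
    return max(0.0, play)  # 0.0 is the utility of stopping here

print(value(0, 0, MAX_PLAYS))  # expected utility of the optimal strategy
```

The point about strategies is visible in the code: the decision at each node depends on the record of wins and losses so far, so a strategy is the whole decision rule over the game tree, not a single number of coins.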
Once you assume:
1) the equations describing gravity are invariant under all coordinate transformations,
2) energy-momentum is not locally created or destroyed,
3) the equations describing gravity involve only the flow of energy-momentum and the curvature of the spacetime metric (and not powers or products or derivatives of these),
4) the equations reduce to ordinary Newtonian gravity in a suitable limit,
then Einstein's equations for general relativity are the only possible choice... except for one adjustable parameter, the cosmological constant.
(First Ei...
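For concreteness, the equations these four assumptions pin down, in standard notation with the cosmological constant Λ as the one adjustable parameter:

```latex
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} \, T_{\mu\nu}
```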
I agree that math can teach all these lessons. It's best if math is taught in a way that encourages effort and persistence.
One problem with putting too much time into learning math deeply is that math is much more precise than most things in life. When you're good at math, with work you can usually become completely clear about what a question is asking and when you've got the right answer. In the rest of life this isn't true.
So, I've found that many mathematicians avoid thinking hard about ordinary life: the questions are imprecise and the answers m...
As of December 31, 2012, the Treasury had received over $405 billion in total cash back on Troubled Assets Relief Program investments, equaling nearly 97 percent of the $418 billion disbursed under the program.
But TARP was just a small part of the whole picture. What concerns me is that there seem to have been somewhere between $1.2 trillion and $16 trillion in secret loans from the Fed to big financial institutions and other corporations. Even if they've been repaid, the low interest rates might represent a big transfer of w...
Very nice article!
I too wonder exactly what you mean by
effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact.
Which kinds of qualitative analysis do you think are important, and why? Is that what you're talking about when you later write this:
...Estimating the cost-effectiveness of health interventions in the developing world has proved to be exceedingly difficult, and this is in favor of giving more weight to inputs for which it's possible...
Maybe this is not news to people here, but in England, a judge has ruled against using Bayes' Theorem in court - unless the underlying statistics are "firm", whatever that means.
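For reference, the theorem at issue is just the elementary identity

```latex
P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)}
```

so presumably the ruling is about when the inputs P(H) and P(E|H) count as "firm" enough to put before a jury.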
I studied particle physics for a couple of decades, and I would not worry much about "mirror matter objects". Mirror matter is just one of many possibilities that physicists have dreamt up: there's no good evidence that it exists. Yes, maybe every known particle has an unseen "mirror partner" that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets - more theoretical possibilities not backed up by any good evidence. True, mirror m...
If you make choices consistently, you are maximizing the expected value of some function, which we call "utility".
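To be precise, this is the von Neumann-Morgenstern theorem: if your preferences ≽ over lotteries satisfy completeness, transitivity, continuity, and independence, then there is a utility function u, unique up to positive affine transformations, with

```latex
L \succsim M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u]
```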
Unfortunately in real life many important choices are made just once, taken from a set of choices that is not well-delineated (because we don't have time to list them), in a situation where we don't have the resources to rank all these choices. In these cases, the hypotheses of von Neumann-Morgenstern utility theorem don't apply: the set of choices is unknown and so is the ordering, even on the elements we know are members of the s...
Baez replies with "Ably argued!" and presumably returns to his daily pursuits.
Please don't assume that this interview with Yudkowsky, or indeed any of the interviews I'm carrying out on This Week's Finds, are having no effect on my activity. Last summer I decided to quit my "daily pursuits" (n-category theory, quantum gravity and the like), and I began interviewing people to help figure out my new life. I interviewed Yudkowsky last fall, and it helped me decide that I should not be doing "environmentalism" in the customar...
Since XiXiDu also asked this question on my blog, I answered over there.
If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?
I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.
In my interview of Gregory Benford I wrote:
If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!
If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!
It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.
For what it's worth, I'd take only one box.
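For what it's worth as arithmetic: suppose the predictor is right with probability p (the parameter p is my framing, not something from the interview). Then the expected payoffs are

```latex
\mathbb{E}[\text{one-box}] = p \cdot \$1{,}000{,}000, \qquad
\mathbb{E}[\text{two-box}] = \$1{,}000 + (1 - p) \cdot \$1{,}000{,}000
```

so one-boxing has the higher expected value whenever (2p - 1) · $1,000,000 > $1,000, i.e. whenever p > 0.5005. The second argument is a dominance argument (two boxes beat one whatever is in box B), which is exactly why the two collide.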
XiXiDu wrote:
I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than risks from AI.
Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.
(Week 311 is just the first part of a multi-part interview.)
For now, you might be interested to read about Gregory Benford's assessment of...
What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way?
I don't know. If I did, I'd probably try to do something about it. But my subconscious mind seems to have prevented me from noticing examples. I don't doubt that they exist, lurking behind the blind spot of self-delusion. But I can't name a single one.
Feynman: "The first principle is that you must not fool yourself - and you are the easiest person to fool."
Thurston said:
Quickness is helpful in mathematics, but it is only one of the qualities which is helpful.
Gowers said:
The most profound contributions to mathematics are often made by tortoises rather than hares.
Gelfand said it in a funnier way:
You have to be fast only to catch fleas.
It's a bit easier in math than other subjects to know when you're right and when you're not. That makes it a bit easier to know when you understand something and when you don't. And then it quickly becomes clear that pretending to understand something is counterproductive. It's much better to know and admit exactly how much you understand.
And the best mathematicians can be real masters of "not understanding". Even when they've reached the shallow or rote level of understanding that most of us consider "understanding", they are di...
The author of this post pointed out that he said "it's noticeably less common for mathematicians of the highest caliber to engage in status games than members of the general population do." Somehow I hadn't noticed that.
I'm not sure how this affects my reaction, but I wouldn't have written quite what I wrote if I'd noticed that qualifier.
In my 25 years of being a professional mathematician I've found many (though certainly not all) mathematicians to be acutely aware of status, particularly those who work at high-status institutions. If you are a research mathematician your job is to be smart. To get a good job, you need to convince other people that you are smart. So, there is quite a well-developed "pecking order" in mathematics.
I believe the appearance of "humility" in the quotes here arises not from lack of concern with status, but rather from various other factors:
1...
It's some sort of mutant version of "just because you're paranoid doesn't mean they're not out to get you".
My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.
(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)
Or: it says "This is undecidable in Zermelo-Fraenkel set theory plus the axiom of choice". In the case of P=NP, I might believe it.
Ask again, with another famously unsolved math problem. Repeat until it stops saying that or you run out of problems you know.
I would not believe a purported god if it said all six remaining Clay math prize problems are undecidable.
Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.
So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility. Why not?
While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle.
Why not?
It seems like you have some pre-established moral principles which you are using in your arguments against utilitarianism. Right?
...I don't see how you can compromise on these principles. Either each
Thanks for writing this - I've added links in my article recommending that people read yours.