All of John_Baez's Comments + Replies

John_Baez

Thanks for writing this - I've added links in my article recommending that people read yours.

John_Baez

Ordinary probability theory and expected utility are sufficient to handle this puzzle. You just have to calculate the expected utility of each strategy before choosing one. In this puzzle a strategy is more complicated than simply putting some number of coins in the machine: it requires deciding what to do after each coin either succeeds or fails in releasing two coins.

In other words, a strategy is a choice of what you'll do at each point in the game tree - just like a strategy in chess.

We don't expect to do well at chess if we deci…
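To make the point concrete, here is a minimal sketch of such a calculation. All the numbers are mine, purely illustrative, not from the original puzzle: suppose each play costs 1 coin, a success returns 2 coins, and the machine's per-play success probability p is either 0.1 or 0.9, with prior probability 0.5 each.

    # Expected value of an adaptive strategy on a small game tree:
    # "pay for one play; if it succeeds, pay for n_more further plays,
    # otherwise stop."  Computed by averaging over the two machine types.
    PRIOR = {0.1: 0.5, 0.9: 0.5}  # P(machine's per-play success chance = p)

    def ev_try_then_continue(n_more):
        ev = 0.0
        for p, weight in PRIOR.items():
            first = -1 + 2 * p                 # cost 1, win 2 with prob. p
            cont = p * n_more * (-1 + 2 * p)   # continue only after a success
            ev += weight * (first + cont)
        return ev

    for n in (0, 10, 100):
        print(n, ev_try_then_continue(n))      # approximately 0.0, 3.2, 32.0

Note that the single collapsed probability of success, 0.5·0.1 + 0.5·0.9 = 0.5, makes every individual play look like an even bet (expected value 0), while the adaptive strategy above is clearly profitable: the calculation needs the whole distribution over p, not just its mean.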

David_Chapman
So, let me try again to explain why I think this is missing the point... I wrote "a single probability value fails to capture everything you know about an uncertain event." Maybe "simple" would have been better than "single"? The point is that you can't solve this problem without somehow reasoning about probabilities of probabilities. You can solve it by reasoning about the expected value of different strategies. (I said so in the OP; I constructed the example to make this the obviously correct approach.) But those strategies contain reasoning about probabilities within them. So the "outer" probabilities (about strategies) are meta-probabilistic. [Added:] Evidently, my OP was unclear and failed to communicate, since several people missed the same point in the same way. I'll think about how to revise it to make it clearer.

Right: a game where you repeatedly put coins in a machine and decide whether or not to put in another based on what occurred is not a single 'event', so you can't sum up your information about it in just one probability.

John_Baez

Once you assume:

1) the equations describing gravity are invariant under all coordinate transformations,

2) energy-momentum is not locally created or destroyed,

3) the equations describing gravity involve only the flow of energy-momentum and the curvature of the spacetime metric (and not powers or products or derivatives of these),

4) the equations reduce to ordinary Newtonian gravity in a suitable limit,

then Einstein's equations for general relativity are the only possible choice... except for one adjustable parameter, the cosmological constant.
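In symbols (a standard presentation of the result, not quoted from the comment): the only equations surviving constraints 1)–4) are

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu},

where G_{\mu\nu} is the Einstein tensor built from the curvature of the metric, T_{\mu\nu} is the stress-energy tensor describing the flow of energy-momentum, and the cosmological constant \Lambda is the one adjustable parameter.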

(First Ei…

pragmatist
I can see why Einstein would assume 1), 2) and 4), but what was his motivation for assuming 3)? Just some intuition about simplicity?
John_Baez

I agree that math can teach all these lessons. It's best if math is taught in a way that encourages effort and persistence.

One problem with putting too much time into learning math deeply is that math is much more precise than most things in life. When you're good at math, with work you can usually become completely clear about what a question is asking and when you've got the right answer. In the rest of life this isn't true.

So, I've found that many mathematicians avoid thinking hard about ordinary life: the questions are imprecise and the answers m…

[anonymous]
Thanks for pointing me toward the Azimuth Project. I used to follow your "this week" blog for a while, but I must have lost track of it a few years ago. Must have been before this showed up on the radar.
private_messaging
Yes, but it is genuinely the case that imprecision and low quality of answers indicate lower utility of an activity, or lower gains from mathematical skill. Furthermore, what you are saying contradicts the existence of mathematicians who did contribute to philosophy (e.g. Gödel). Edit: I mostly meant the stories of such; it seems to me that mathematicians who come up with important insights not infrequently try to apply them.
Risto_Saarelma
Have you noticed any difference between pure mathematicians and theoretical physicists in this regard?
John_Baez

According to Wikipedia:

As of December 31, 2012, the Treasury had received over $405 billion in total cash back on Troubled Assets Relief Program investments, equaling nearly 97 percent of the $418 billion disbursed under the program.

But TARP was just a small part of the whole picture. What concerns me is that there seem to have been somewhere between $1.2 trillion and $16 trillion in secret loans from the Fed to big financial institutions and other corporations. Even if they've been repaid, the low interest rates might represent a big transfer of w…
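A quick sanity check of the quoted recovery figure (just arithmetic on the two numbers above):

    # Does $405B back on $418B disbursed really equal "nearly 97 percent"?
    received = 405e9
    disbursed = 418e9
    print(f"{received / disbursed:.1%}")   # 96.9%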

Alsadius
The Fed's balance sheet isn't anywhere near $16T - the last time I looked it was under $2T. About half of that is growth since the recession hit, in the form of the Fed printing money and loaning it out. If the $16T number is anything more than a fever dream, it comes from something like "We loaned $1 to you yesterday, you paid it back, now we're loaning $1 to my mother, so that's $2 of loans" - literally true, but it counts the same dollar multiple times.
Larks
They might represent a transfer from taxpayers to bondholders and shareholders of banks, but not to the tune of $9 billion. Also, thank you for providing the reference I was in too much of a hurry to find.
CronoDAS
Taxpayers aren't on the hook for "loans" made by the Federal Reserve. And we're in a liquidity trap - until the economy recovers and nominal interest rates rise above the zero lower bound, the Fed can print all the cash it wants without causing inflation (because people are going to save, rather than spend or invest, that cash). Which is pretty much what happened. The U.S. monetary base tripled between 2008 and 2011 and yet prices have not risen accordingly.

Very nice article!

I too wonder exactly what you mean by

effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact.

Which kinds of qualitative analysis do you think are important, and why? Is that what you're talking about when you later write this:

Estimating the cost-effectiveness of health interventions in the developing world has proved to be exceedingly difficult, and this [argues] in favor of giving more weight to inputs for which it's possible …
JonahS
Thanks John! Yes. See also the first section of my response to wdcrouch.

* Empirically, best-guess cost-effectiveness estimates as measured in lives directly saved have consistently moved in the direction of worse cost-effectiveness. So, taking the outside view, one would expect more such updates. Thus one should expect the factors that could give rise to less cost-effectiveness as measured in lives directly saved to outweigh the factors that could give rise to more.

* I didn't make a concerted effort to look for ways in which the cost-effectiveness as measured in lives directly saved could be better rather than worse. But I also don't know of any compelling hypotheticals. I would welcome any suggestions here.

I agree that these could be very significant. See the second section of my response to wdcrouch's comment.
John_Baez

Maybe this is not news to people here, but in England, a judge has ruled against using Bayes' Theorem in court - unless the underlying statistics are "firm", whatever that means.

I studied particle physics for a couple of decades, and I would not worry much about "mirror matter objects". Mirror matter is just one of many possibilities that physicists have dreamt up: there's no good evidence that it exists. Yes, maybe every known particle has an unseen "mirror partner" that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets - more theoretical possibilities not backed up by any good evidence. True, mirror m…

Will_Newsome
Mirror matter is indeed very speculative, but surely not less than 4.3 out of a million speculative, no? Mirror matter is significantly more worrisome than Apophis. I have no idea whether it's more or less worrisome than the entire set of normal-matter Apophis-like risks; does anyone have a link to a good (non-alarmist) analysis of impact risks for the next century? Snippets of Global Catastrophic Risks seem to indicate that they're not a big concern relatively speaking. ETA: lgkglgjag anthropics messes up everything

If you make choices consistently, you are maximizing the expected value of some function, which we call "utility".

Unfortunately, in real life many important choices are made just once, taken from a set of choices that is not well-delineated (because we don't have time to list them), in a situation where we don't have the resources to rank all these choices. In these cases the hypotheses of the von Neumann–Morgenstern utility theorem don't apply: the set of choices is unknown and so is the ordering, even on the elements we know are members of the s…
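For reference, here is the standard statement of the theorem in question (notation mine): if a preference order on lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function u, unique up to positive affine transformations, with

    L \preceq M \iff \sum_i p_i \, u(x_i) \le \sum_j q_j \, u(y_j),

where lottery L yields outcome x_i with probability p_i and lottery M yields y_j with probability q_j. Completeness is exactly the hypothesis that fails when the set of options isn't even known.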

cousin_it
It seems to me that you give up on VNM too early :-)

1) If you don't know about option A, it shouldn't affect your choice between known options B and C.

2) If you don't know how to order options A and B, how can you justify choosing A over B (as you do)?

Not trying to argue for FAI or against environmentalism here, just straightening out the technical issue.
XiXiDu
The problem is that the expected utility of an outcome often grows faster than its probability shrinks. If the utility you assign to a galactic civilization does not outweigh the low probability of success, you can just take into account all beings that could be alive until the end of time in the case of a positive Singularity. Whatever you care about now, there will be so much more of it after a positive Singularity that it always outweighs the tiny probability of it happening.
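A toy version of this argument (numbers mine, purely illustrative): if the probability of success shrinks like 1/N while the utility at stake grows like N^2, the expected utility

    E = P \cdot U = \frac{1}{N} \cdot N^2 = N

still grows without bound as N does, so the low probability never wins the argument. This is the structure of a "Pascal's mugging".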
John_Baez

Baez replies with "Ably argued!" and presumably returns to his daily pursuits.

Please don't assume that this interview with Yudkowsky, or indeed any of the interviews I'm carrying out on This Week's Finds, are having no effect on my activity. Last summer I decided to quit my "daily pursuits" (n-category theory, quantum gravity and the like), and I began interviewing people to help figure out my new life. I interviewed Yudkowsky last fall, and it helped me decide that I should not be doing "environmentalism" in the customar…

XiXiDu
If you can't work directly on something, you could just use your skills to earn as much money as possible and donate that.

Since XiXiDu also asked this question on my blog, I answered over there.

If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.

XiXiDu
XiXiDu
That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me that means I can either hope to be much smarter than you (so as to understand the evidence myself) or conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?

In my interview of Gregory Benford I wrote:

If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!

If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!

It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.

For what it's worth, I'd take only one box.
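A back-of-the-envelope calculation shows why the first argument has teeth (a sketch; the predictor's accuracy a is a free parameter I'm introducing, not something from the interview):

    # Expected value of each Newcomb strategy given predictor accuracy a.
    # Box A always holds $1,000; box B holds $1,000,000 iff the predictor
    # predicted you would take only box B.
    def ev_one_box(a):
        return a * 1_000_000                      # predictor right: B is full

    def ev_two_box(a):
        return a * 1_000 + (1 - a) * 1_001_000    # predictor right: B is empty

    for a in (0.5, 0.5005, 0.9, 0.99):
        print(a, ev_one_box(a), ev_two_box(a))

The break-even accuracy is a = 0.5005: one-boxing has the higher expected value for any predictor even slightly better than a coin flip.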

XiXiDu
It doesn't make sense given the rules. The rules say that there will only be a million in box B iff you take only box B. I'm not the kind of person who calls the police when faced with the trolley-problem thought experiment. Besides that, the laws of physics obviously do not permit you to deliberately take both boxes if a nearly perfect predictor knows that you'll take only box B. Therefore considering that counterfactual makes no sense (even less sense than a nearly perfect predictor does).
John_Baez

XiXiDu wrote:

I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than risks from AI.

Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.

(Week 311 is just the first part of a multi-part interview.)

For now, you might be interested to read about Gregory Benford's assessment of…

XiXiDu
Would you be willing to write a blog post reviewing his arguments and explaining why you either reject them, don't understand them, or accept them and start working to mitigate risks from AI? It would be valuable to have someone like you, who is not deeply involved with the SIAI (Singularity Institute) or LessWrong.com, write a critique of their arguments and objectives. I myself don't have the education (yet) to do so and welcome any reassurance that would help me to take action.

If you don't have the time to write a blog post, maybe you can answer just the following question: if someone was going to donate $100k and you could pick the charity, would you choose the SIAI? A yes/no answer if you're too busy, a short explanation if you have the time. Thank you!

You mean, "before we take on the galaxy, let's do a smaller problem"? So you don't think that we'll have to face risks from AI before climate change takes a larger toll? You don't think that working on AGI means working on the best possible solution to the problem of climate change? And even if we had to start taking active measures against climate change in the 2020s, don't you think we should rather spend that time on AI, because we can survive a warmer world but not a runaway AI?

Gregory Benford writes that "we still barely glimpse the horrors we could be visiting on our children and their grandchildren's grandchildren". That sounds to me like he assumes that there will be grandchildren, which might not be the case if some kind of AGI doesn't take care of a lot of other problems we'll have to face soon.

If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way?

I don't know. If I did, I'd probably try to do something about it. But my subconscious mind seems to have prevented me from noticing examples. I don't doubt that they exist, lurking behind the blind spot of self-delusion. But I can't name a single one.

Feynman: "The first principle is that you must not fool yourself - and you are the easiest person to fool."

John_Baez

Thurston said:

Quickness is helpful in mathematics, but it is only one of the qualities which is helpful.

Gowers said:

The most profound contributions to mathematics are often made by tortoises rather than hares.

Gelfand said it in a funnier way:

You have to be fast only to catch fleas.

John_Baez

It's a bit easier in math than in other subjects to know when you're right and when you're not. That makes it a bit easier to know when you understand something and when you don't. And then it quickly becomes clear that pretending to understand something is counterproductive. It's much better to know and admit exactly how much you understand.

And the best mathematicians can be real masters of "not understanding". Even when they've reached the shallow or rote level of understanding that most of us consider "understanding", they are di…

John_Baez

The author of this post pointed out that he said "[i]t's noticeably less common for mathematicians of the highest caliber to engage in status games than members of the general population do." Somehow I hadn't noticed that.

I'm not sure how this affects my reaction, but I wouldn't have written quite what I wrote if I'd noticed that qualifier.

John_Baez

In my 25 years of being a professional mathematician I've found many (though certainly not all) mathematicians to be acutely aware of status, particularly those who work at high-status institutions. If you are a research mathematician your job is to be smart. To get a good job, you need to convince other people that you are smart. So, there is quite a well-developed "pecking order" in mathematics.

I believe the appearance of "humility" in the quotes here arises not from a lack of concern with status, but rather from various other factors:

1…

ThomasR
My experience, as a kind of outsider who is just curious about some themes in math and asks around for info, explanations, and preprints/slides, is that mathematicians are by far the easiest science community to communicate with. I conclude from this that status is of little relevance.

It's some sort of mutant version of "just because you're paranoid doesn't mean they're not out to get you".

John_Baez

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

Vladimir_Nesov
Link to John Baez's blog
[anonymous]
It's new? I've already been following it for some time. Can't remember how I came across it in the first place, though... very cool but over my head. Thanks.
cousin_it
Wow. Hello. I didn't expect that. It feels like summoning Gauss, or something. Thanks a lot for TWF!
Paul Crowley
The markup syntax here is a bit unusual and annoying - click the "Help" button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!

Or: it says "This is undecidable in Zermelo-Fraenkel set theory plus the axiom of choice." In the case of P=NP, I might believe it.

Ask again, with another famously unsolved math problem. Repeat until it stops saying that or you run out of problems you know.

I would not believe a purported god if it said all 6 remaining Clay math prize problems are undecidable.
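For concreteness (a standard definition, not something from the thread): a statement \varphi is undecidable, i.e. independent of ZFC, when

    \mathrm{ZFC} \nvdash \varphi \quad \text{and} \quad \mathrm{ZFC} \nvdash \neg\varphi,

so a purported god claiming this for all the remaining problems would owe you a pair of non-derivability proofs for each, the way Gödel and Cohen together established the independence of the continuum hypothesis.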

Tasky
If it really is undecidable, God must be able to prove that. However, I think an easier way to establish whether something is just your hallucination or a real (divine) being is to ask them about something you couldn't possibly know and then check whether it's true.
John_Baez

Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.

So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility. Why not?

While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle.

Why not?

It seems like you have some pre-established moral principles which you are using in your arguments against utilitarianism. Right?

I don't see how you can compromise on these principles. Either each …
JohnH
Utilitarianism itself requires the use of some pre-established moral principles.