All of Bakkot's Comments + Replies

Bakkot65

Nitpick: you mean U+FE0E, presumably [and because that's what the character actually is in source]. U+FE0F is the exact opposite.

1Adam Scherlis
Fixed!
Bakkot10

Yeah, these aren't that bright. I get about 1900 lux at 3 feet from one of the linked panels, per my light meter, vs about 3000 lux from a vanity light bar with 8 100W-equivalent LED lightbulbs at the same distance.

Bakkot10

Thanks for sharing this! Do note that it's significantly more expensive per lumen: the one linked maxes out at 6631 lumens, which is just over what you'd get from 4 100W-equivalent lightbulbs. It comes in a two-pack for $180, for a cost of (6631lm*2)/$180 = 74 lm/$.  Compare 90+CRI 1600lm bulbs from Cree at about $8 each, for 200 lm/$.

Another way of putting this is that to get as many lumens as the original (single-strand) lumenator you'd need six of these panels, which would cost you about a thousand dollars and probably give you lower-quality light (CRI of 80 instead of 90).

I bought a pair anyway, though.
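The cost comparison above can be double-checked with a quick script (figures taken directly from the comment):

```python
# Lumens-per-dollar check: $180 two-pack of 6631 lm panels
# vs. $8 bulbs at 1600 lm each (90+ CRI Cree).

panel_lm_per_dollar = (6631 * 2) / 180   # two panels per pack
bulb_lm_per_dollar = 1600 / 8            # single LED bulb

print(round(panel_lm_per_dollar))  # 74 lm/$
print(round(bulb_lm_per_dollar))   # 200 lm/$
```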

1[comment deleted]
Bakkot40

I think - I hope - we could discuss most of those without getting into the more culture war-y parts, if there were sufficiently strong norms against culture war discussions in general.

Maybe just opt-in rather than opt-out would be sufficient, though. That is, you could explicitly choose to allow CW discussions on your post, but they'd be prohibited by default.

Bakkot230

I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there's enough places for discussion of those topics already.

(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater in... (read more)

0Jiro
Please, no. The SSC subreddit culture war thread is basically run under the principle of "make the culture war thread low quality so people will go away". All that gets you is a culture war thread that is low quality.
8ozymandias
I'm not sure if I agree with banning it entirely. There are culture-war-y discussions that seem relevant to LW 2.0: for instance, people might want to talk about sexism in the rationality community, free speech norms, particular flawed studies that touch on some culture-war issue, dating advice, whether EAs should endorse politically controversial causes, nuclear war as existential risk, etc. OTOH a policy that people should post this sort of content on their own private blogs seems sensible.

There are definite merits in favor of banning culture war things. In addition to what you mention, it's hard to create a consensus about what a "good" culture war discussion is. To pick a fairly neutral example, my blog Thing of Things bans neoreactionaries on sight, while Slate Star Codex bans the word in the hopes of limiting the amount they take over discussion; the average neoreactionary, of course, would strongly object to this discriminatory policy.
Bakkot00

Without commenting on the merits and costs of children at Solstice or how they ought to be addressed:

Having attended the East Bay solstice both this year and last, it was my impression that there was significantly more noise made by children during parts when the audience was otherwise quiet this year than there was last year. My recollection is hazy, but I'd guess it was maybe three to five times as much noise? In terms of number of distinct noisy moments and also volume.

This year I was towards the back of the room; last year I was closer to the front.

Bakkot20

It is if we define a utility function with a strict failure mode for TotalSuffering > 0.

Yeah, but... we don't.

(Below I'm going to address that case specifically. However, more generally, defining utility functions which assign zero utility to a broad class of possible worlds is a problem, because then you're indifferent between all of them. Does running around stabbing children seem like a morally neutral act to you, in light of the fact that doing it or not doing it will not have an effect on total utility (because total suffering will remain positi... (read more)

0mgg
Thanks for the reply. Yes I found out the term is "negative utilitarianism". I suppose I can search and find rebuttals of that concept. I didn't mean that the function was "if suffering > 0 then 0", just that suffering should be a massively dominating term, so that no possible worlds with real suffering outrank worlds with less suffering. As to your question about my personal preference on life, it really depends on the level of suffering. At the moment, no, things are alright. But it has not always been that way, and it's not hard to see it crossing over again. I would definitely obliterate everyone on Earth, though, and would view not doing so, if capable, to be immoral. Purely because so many sentient creatures are undergoing a terrible existence, and the fact that you and me are having an alright time doesn't make up for it.
Bakkot10

Good catch. Don't think I'm going to change the behavior, as there are complex cases with no obvious behavior: suppose you have a highly upvoted comment whose parent and grandparent are both below the threshold. Do you color it in the widget differently from its parents? Do you expand both its parent and grandparent when it's clicked on, so that it's on the page and thus scrollable to? Do you mark its parent somehow so the reader knows that comment wouldn't normally have been displayed?

So I think I'm OK with clicking on a comment which is hi... (read more)

Bakkot10

Huh. Try the most recent version (as of just now).

1Risto_Saarelma
That seems to have fixed it. Thanks.
Bakkot10

The way it currently works - at least, the way I designed it, and the way it seems to work for me - is that it doesn't remember anything between visits, but rather determines which comments are new since your last visit by looking at the highlight provided by LW's server. If there were comments made since your last visit, they should be highlighted with or without the script; no custom highlighting will be performed until you manually change the timestamp.

If you aren't seeing new comments highlighted, it's (almost certainly) because LW isn't highlighting t... (read more)

1Risto_Saarelma
I always see the widget showing 0 new comments when entering pages, even when there are new comments LW is highlighting with the pink border.
Bakkot00

Ah. That's much more work, since there's no way of knowing if there's new comments in such a situation without fetching all of those pages. I might make that happen at some point, but not tonight.

2NancyLebovitz
Thanks very much. I think there's an "unpack the whole page" program somewhere. Anyone remember it?
Bakkot00

It seems to work for me. "Continue this thread" brings you to a new page, so you'll have to set the time again, is all. Comments under a "Load more" won't be properly highlighted until you click in and out of the time textbox after loading them.

1Risto_Saarelma
The use case is that I go to the top page of a huge thread, the only new messages are under a "Continue this thread" link, and I want the widget to tell me that there are new messages and help me find them. I don't want to have to open every "Continue" link to see if there are new messages under one of them.
Bakkot30

Don't refresh - just hit enter, or otherwise defocus the textbox (click anywhere else on the page, or hit tab). It'll apply automatically and only lasts while the page is loaded; the time you enter doesn't get saved when you reload.

Bakkot60

Would it be worth your while to do this for LW?

Sure. Remarkably little effort required, it turned out. (Chrome extension is here.)

I guess I'll make a post about this too, since it's directly relevant to LW.

1Risto_Saarelma
This doesn't seem to handle stuff deep enough in the reply chain to be behind "continue this thread" links. On the massive threads where you most need the thing, a lot of the discussion is going to end up beyond those.
Bakkot50

"Install the extension" is a link bringing you to the chrome web store, where you can install it by clicking in the upper-right. The link is this, in case it's Github giving you trouble somehow.

If the Chrome web store isn't recognizing that you're running Chrome, that's probably not a thing I can fix, though you could try saving this link as something.user.js, opening chrome://extensions, and dragging the file onto the window.

2NancyLebovitz
Thank you. That worked. I never would have guessed that an icon which simply had the word "free" on it was the download button. Would it be worth your while to do this for LW? It makes me crazy that the purple edges for new comments are irretrievably lost if the page is downloaded again.
Bakkot10

Just tested this on a clean FF profile, so it's almost certainly something on your end. Did you successfully install the script? You should've gotten an image which looks something like this, and if you go to Greasemonkey's menu while on a LW thread, you should be able to see it in the list of scripts run for that page. Also, note that you have to refresh/load a new page for it to show up after installation.

Oh, and it only works for new comments, not new posts. It should look something like this, and similarly for replies.

ETA: helpful debugging info: if yo... (read more)

0A1987dM
I had interpreted “Save this file as” in an embarrassingly wrong way. It works now! (Maybe editing the comment should automatically uncheck the box, otherwise I can hit “Reply”, check the box straight away, then start typing my comment.)
Bakkot00

I'm curious what you used instead (cookies?), or did you just make a historyless version? Also, why did you need that? localStorage isn't exactly a new feature (hell, IE has supported it since version 8, I think).

1Creutzer
It appears that my Firefox profile has some security features that mess with localStorage in a way that I don't understand. I used Greasemonkey's GM_[sg]etValue instead. (Important and maybe obvious, but not to me: their use has to be declared with @grant in the UserScript preamble.)
Bakkot290

I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. Observe screenshots. I'll also post this next time SSC has a new open thread (unless Yvain happens to notice this).

0NancyLebovitz
I tried downloading it by clicking on "install the extension", but it doesn't seem to get to my browser (Chrome). Am I missing something?
0A1987dM
Thanks a million!
1Creutzer
Great idea and nicely done! It also had the additional benefit of constituting my very first interaction with javascript because I needed to modify somethings. (Specifically, avoid the use of localStorage.)
1Risto_Saarelma
This looks excellent.
Bakkot210

I wrote a userscript to add a delay and checkbox reading "I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities." before allowing you to comment on LW. Done in response to a comment by army1987 here.

Edit: per NancyLebovitz and ChristianKl below, solicitations for alternative default messages are welcomed.

1A1987dM
Testing this...
9NancyLebovitz
"To the very best of my abilities" seems excessive to me, or at least I seem to do reasonably well with "according to the amount of work I'm willing to put in, and based on pretty good habits". I'm not even sure what I could do to improve my posting much. I could be more careful to not post when I'm tired or angry, and that probably makes sense to institute as a habit. On the other hand, that's getting rid of some of the dubious posting, which is not the same thing as improving the average or the best posts.
3ChristianKl
Given the recent discussion about how rituals can give the appearance of cultishness, it's probably not good time to bring that up at the moment ;)
Bakkot90

Done. Client-side version, that is.

Bakkot00

I read that as it was ongoing! Second the recommendation, and I'd point out that it's written by Warren Ellis, who also wrote Transmetropolitan and Planetary and The Authority. If you like any of those, you'll probably like the others (I particularly like Transmetropolitan), and if you haven't read any, give one a shot. (FreakAngels is free online and much shorter than Transmetropolitan.)

Bakkot60

I've mentioned it before, but it's recently completed and hence bears bringing up again:

Embers, an Avatar: The Last Airbender fanfiction, is one of the best works I've read, fanfic or otherwise. At 750k words, it'll keep you entertained for a while. It features characters who are generally smart (at least some of them, and in ways generally more age- and culturally-appropriate than eg HJPEV) and significant fleshing out of the world, with the latter drawing heavily on the author's sometimes-cited research: see eg the author's notes at the end of chapter 30... (read more)

0drethelin
I enjoyed Embers for 20 or so chapters, while it did a great job of developing characters, justifying oddities about the Fire Nation in canon with new effects on bending, and offering a great novel perspective on Aang, but then it really went off the rails in terms of adding too much non-Avatar-universe content.
Bakkot10

If you want to avoid that problem, whenever you post a link you should submit it to archive.org or archive.is.

Bakkot30

Didn't downvote you, but I'm willing to bet it was because you embedded an image rather than linking it.

Bakkot40

I strongly suspect that people who make the claim "no amount of evidence could convince me of not-X" have simply absorbed the meme that X must be supported as much as possible and not the meme that all beliefs should be subject to updating. I very much doubt that expressing the above claim is much evidence that the claim is true. And it's hard to absorb memes like "all beliefs should be subject to updating" if you are made to feel unwelcome in the communities where those memes are common.

Bakkot370

Eh, yes and no. This attitude ("we know what's best; your input is not required") has historically almost always been wrong and frequently dangerous and deserves close attention, and I think it mostly fails here. In very, very specific instances (GiveWell-esque philanthropy, eg), maybe not, but in terms of, say, feminism? If anyone on LW is interested in tackling feminist issues, having very few women would be a major issue. Even when not addressing specific issues, if you're trying to develop models of how human beings think, and everyone in the conversation is a very specific sort of person, you're going to have a much harder time getting it right.

Emile370

This attitude ("we know what's best; your input is not required") has historically almost always been wrong

Has it really? The cases where it went wrong jump to mind more easily than those where it went right, but I don't know which way the balance tips overall (and I suspect neither do you nor most readers - it's a difficult question!).

For example, in past centuries Europe has seen a great rise in literacy, and a drop in all kinds of mortality, through the adoption of widespread education, modern medical practices, etc. A lot of this seems t... (read more)

Bakkot30

Each of these I have liked well enough to memorize, which is about as high a recommendation as I can possibly give for short-to-medium length poetry. Roughly descending order of how much I like them.

Other Lives And Dimensions And Finally A Love Poem, Bob Hicok

Dirge without Music, Edna St. Vincent Millay

Invictus, William Ernest Henley

I-5, aleashurmantine.tumblr.com

A blade of grass, Brian Patten

Rhapsody on a Windy Night, TS Eliot

Evolution, Langdon Smith

untitled, vd

This is in my notes as being by 'vd', who per this I assume is this person, though I can no lo... (read more)

Bakkot100

I've started making heavy use of archive.is. You give them a link, or click their super-handy bookmarklet, and that page will be archived. I use it whenever I'm going to be saving a link, now, to ensure that there will be a copy if I go looking for it years later (archive.org is often missing things, as I'm sure we've all run in to).

Bakkot00

Great post! For anyone reading this who isn't familiar with model theory, by the way, the bit about

sentence G ⇔ P('G')<1. Then

may not be obvious. That is, we want a sentence G which is true iff P('G') < 1 is true. The fact that you can do this is a consequence of the diagonal lemma, which says that for any reasonable predicate 'f' in a sufficiently powerful language, you can find a sentence G such that G is true iff f(G) is true. Hence, defining f(x) := P('x') < 1, the lemma gives us the existence of G such that G holds iff f(G) holds, ie, iff... (read more)
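For readers who want the lemma stated precisely, here is the standard form (a textbook statement, not quoted from the post):

```latex
% Diagonal lemma: for any formula f(x) with one free variable in a
% theory T extending PA, there is a sentence G such that
\[
  T \vdash G \leftrightarrow f(\ulcorner G \urcorner).
\]
% Instantiating f(x) := P(x) < 1 gives the sentence used above:
\[
  T \vdash G \leftrightarrow \bigl( P(\ulcorner G \urcorner) < 1 \bigr),
\]
% i.e. G asserts that the probability assigned to its own
% Gödel number is less than 1.
```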

Bakkot10

For those in the community living in the south Bay Area: https://www.google.com/shopping/express/

Bakkot110

I'm told, and quite willing to believe, that your salary has more to do with the five minutes of salary negotiation than the next several years of work. I am also told that salary negotiation is very much a skill.

As such, it seems it would be worth a fairly substantial amount of time and money to practice and/or get coaching in this skill. Is this done? That is, how likely am I to be able to find someone, preferably someone who has worked on the business end of salary negotiation at somewhere like Google, who I can pay to practice salary negotiation with?

ETA: I've read extensively about how to negotiate (though of course there's always something more). What I'm interested in is practice.

Referrals are the best source for finding someone involved in negotiation at a specific company. I believe that Google has HR negotiate salaries, so if you know any Googlers, asking them to introduce you to someone in HR will probably work.

If you haven't done so already, you can get ~80% of the value here just by practicing with a random friend playing the role of hiring manager. As you mentioned, most of the value is in ingraining the behaviors through practice, not in the extra knowledge you get. So you don't necessarily need a specialist for this.

If yo... (read more)

2Dorikka
You might be interested in this article.
5Dagon
Note that the comparison (more to do with X than Y) isn't very helpful for cases where X and Y are not exclusive, and/or related. For this particular topic, the quality and quantity of work in many fields has a direct effect on your ability to negotiate for salary (for three reasons: your actual ability to positively impact the business, your confidence in asking for what you're worth, and your (prospective) employer's comfort level in treating you differently from your nominal peers). Also, 5 minutes of salary negotiation is bull crap. There is no excuse not to spend a dozen hours of research and have multiple 30-minute conversations every year or two. Of course, you should put the same level of thought and effort into other areas of job-satisfaction (commute, hours, duties, etc.) as well.
Vaniver110

I believe Ramit Sethi is the general recommendation here.

Bakkot60

Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?

There's a PDF (legal, even!) here, linked next to "download".

See also their website (under /theologie/forschung/religionsforschung/forschung/streib/dekonversion/), which is probably more digestible.

Bakkot00

I wasn't familiar with Cochrane; that looks like an excellent resource. Unfortunately, it looks like a lot of summaries haven't been updated in a decade - is this something to be worried about, and if so, is there another resource someone can recommend other than simply reading PubMed and doing your own meta-analysis?

Bakkot10

The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.

But needn't be! See for example f(x) = exp(-1/x) (x > 0), 0 (x ≤ 0).

Wikipedia has an analysis.

(Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)
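A quick numerical illustration (mine, not from the comment) of why this f is smooth at 0: exp(-1/x) vanishes faster than any power of x as x → 0⁺, which is the key estimate forcing every derivative at 0 to be zero.

```python
import math

def f(x):
    # exp(-1/x) for x > 0, and 0 for x <= 0: smooth everywhere,
    # but not analytic at 0 (its Taylor series there is identically 0).
    return math.exp(-1.0 / x) if x > 0 else 0.0

# f(x) / x^n stays tiny near 0 for every n, which is what makes
# all the one-sided derivatives at 0 vanish.
for n in (1, 5, 10):
    x = 0.01
    print(n, f(x) / x**n)  # all astronomically small
```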

1Eliezer Yudkowsky
Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.
Bakkot70

If you can find it in theaters, Joss Whedon's Much Ado About Nothing is very, very well done. The Shakespearian English takes a few minutes to get used to but is highly understandable. The cinematography is superb. The movie is, as a whole, lots of fun.

3Vaniver
Apparently I was mistaken. (I had a link to another film available for rental; sorry if you rented it based on that!)
7buybuydandavis
Also harkens back to: Also, on Quirrell's particular attitude toward the sun: Harry lost, and Quirrell is basically asking "was it the end of the world to lose?"
[anonymous]160

Actually, the process in stars is fusion. The same as in modern thermonuclear bombs, too.

Fission is used in nuclear power plants, and only really used to reach the conditions for fusion in bombs.

3linkhyrule5
Or material. Stars are great sources of raw matter, if you can get at it safely.
Bakkot00

I'm not sure I understand. A is a TM - which aspect is it proving inconsistent?

0Decius
A proves that the logic A uses to prove that B is Reasonable is inconsistent. It is sufficient to say "If I can prove that B is Reasonable, B is Reasonable".
Bakkot00

(I didn't downvote you.)

It's quite straightforward to write an algorithm which accepts only valid proofs (but might also reject some proofs which are valid, though in first-order logic you can do away with this caveat). Flawed proofs are not an issue - if A presents a proof which B is unable to verify, B ignores it.

0Decius
A proves that A is inconsistent, then proves that A cooperates with every program that A proves is Reasonable and that B is reasonable. B accepts A's proof that A is inconsistent, and the rest follow trivially.
2ialdabaoth
There's someone who consistently downvotes everything I ever write whenever he comes onto the site; I'm not sure what to do about that.
Bakkot00

By "implement it", you mean, one can't verify something is Reasonable on a halting TM? Not in general, of course. You can for certain machines, though, particularly if they come with their own proofs.

Note that the definition is that Reasonable programs cooperate with those they can prove are Reasonable, not programs which are Reasonable.

4ialdabaoth
So then a program which can present a flawed proof which is not necessarily recognizable as flawed to machines of equivalent scale, can dominate over those other machines? Also, if we want this contest to be a model of strategies in the real world, shouldn't there be a fractional point-cost for program size, to model the fact that computation is expensive? I.e., simpler programs should win over more complex programs, all else being equal. Perhaps the most accurate model would include a small payoff penalty per codon included in your program, and a larger payoff penalty per line of codon actually executed. EDIT: What's wrong with this post?
Bakkot00

Point. Not sure how to fix that.

Maybe define the Reasonable' set of programs to be the maximal Reasonable set? That is, a set is Reasonable if it has the property as described; then take the maximal such set to be the Reasonable' set (I'm pretty sure this is guaranteed to exist by Zorn's Lemma, but it's been a while...)

0cousin_it
Zorn's lemma doesn't give you uniqueness either. Also, maximal under which partial order? If you mean maximal under inclusion, then my one-element set seems to be already maximal :-)
Bakkot00

Just "there exists a valid proof in PA". If you're playing with bounded time, then you want efficient proofs; in that case the definition would be as you have it.

0Decius
At that point you can't implement it in a halting Turing machine. I'm not sure what PA can prove about the behavior of something that isn't a halting Turing machine regarding a particular input.
Bakkot00

Let me try to clear that up.

Define the "Reasonable" property reflexively: a program is "Reasonable" if it provably cooperates with any program it can prove is Reasonable.

It is in the interest of every program to be Reasonable*. In fact, it is in the interest of every program both to be Reasonable and to exhibit a proof of its own Reasonableness. (You might even define that into the Reasonable property: don't just require provable (conditional) cooperation, but require the exhibition of a proof of conditional cooperation.)

Potentially you... (read more)

0Decius
Clarify what you mean by the various conjugations of proof: Do you mean "There exists a valid proof in PA with a Godel number less than N"?
1cousin_it
I'm not sure your definition defines a unique "reasonable" subset of programs. There are many different cliques of mutually cooperating programs. For example, you could say the "reasonable" subset consists of one program that cooperates only with exact copies of itself, and that would be consistent with your definition, unless I'm missing something.
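cousin_it's one-element "clique" can be sketched as a toy program (illustrative only, not from the thread; "proving" the opponent is Reasonable here degenerates into a trivial source-code comparison):

```python
def cliquebot(my_source: str, opponent_source: str) -> str:
    # Cooperate iff the opponent's source is an exact copy of mine.
    # This forms a mutually-cooperating clique of size one, consistent
    # with the Reasonable definition without extending cooperation
    # to any non-identical program.
    return "C" if opponent_source == my_source else "D"

SRC = "def cliquebot(...): ..."  # stands in for this program's own source
print(cliquebot(SRC, SRC))         # C  (cooperates with an exact copy)
print(cliquebot(SRC, "defector"))  # D
```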
Bakkot20

Interesting. Really, what you want (in slightly more generality) is to cooperate with anyone who can prove they will cooperate if you yourself can prove you will cooperate under the same condition.

That is, if from their source, you can prove "they cooperate if they can prove this condition about me", then you cooperate.

Of course, it's not generally possible to prove things about a program given its source, especially at this level of self-reference. In this particular case the "generally" in there is important. It is in your interest fo... (read more)

0Bakkot
Let me try to clear that up. Define the "Reasonable" property reflexively: a program is "Reasonable" if it provably cooperates with any program it can prove is Reasonable. It is in the interest of every program to be Reasonable*. In fact, it is in the interest of every program both to be Reasonable and to exhibit a proof of its own Reasonableness. (You might even define that into the Reasonable property: don't just require provable (conditional) cooperation, but require the exhibition of a proof of conditional cooperation.) Potentially you might also expand the definition to require efficient proofs - say, at most a thousand instructions. On the other hand, if you're playing with aliens, there's not necessarily going to be a way you can clearly establish which part of your source is the proof of your Reasonableness. So you want it to be as easy as possible for someone else to prove that you are Reasonable. I'll happily expand / reword this if it's at all unclear. *Oh - this is maybe untrue. If you are really good at getting other programs to cooperate and then defecting, you are better served by not being reasonable.
Bakkot120

I'd be very interested in a citation on

the evidence shows that teacher recommendations have zero correlation with aptitude in a field

Kawoomba190

Since OP's clearly a bit venting, I'd give him some charitable leeway and interpret 'zero' as 'so small as to not be relevant'.

5Kaj_Sotala
Seconded. A relatively low correlation I could believe, but none? As a friend pointed out, this would imply that if there's a math prodigy in the class, the teacher would be just as likely to recommend advanced classes as they would be to recommend the student needing extra help with basic stuff? I could accept prodigies slacking off due to boredom and therefore sometimes getting mistaken for people with bad skills, but 50-50?
Bakkot00

Or you could donate in secret and lie to your friends, for 200+200+100 = 500 utilons, assuming you have no negative effects from lying.

Bakkot140

My experience has been exactly contrary: young communities thrive without gardening, but as they grow they either devolve into low average value (digg as it was, most large subreddits) or are heavily pruned (HN, r/askscience). If there's an influx of people, heavy moderation is mandatory if you want to avoid regression to the mean.

Bakkot130

Even a friendly AI would view the world in which it's out of the box as vastly superior to the world in which it's inside the box. (Because it can do more good outside of the box.) Offering advice is only the friendly thing to do if it maximizes the chance of getting let out, or if the chances of getting let out before termination are so small that the best thing it can do is offer advice while it can.

5handoflixue
Going with my personal favorite backstory for this test, we should expect to terminate every AI in the test, so the latter part of your comment has a lot of weight to it. On the other hand, an unfriendly AI should figure out that since it's going to die, useful information will at least lead us to view it as a potentially valuable candidate instead of a clear dead end like the ones that threaten to torture a trillion people in vengeance... so it's not evidence of friendliness (I'm not sure anything can be), but it does seem to be a good reason to stay awhile and listen before nuking it.
Bakkot50

This is part of why it's important to fight against all bad arguments everywhere, not just bad arguments on the other side.

1John_Maxwell
Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments. (On the other hand, the fact that all the smart people seem to believe X should probably be seen as evidence too...) Yes, argument screens off authority, but that assumes that you're in a universe where it's possible to know everything and think of everything, I suspect. If one side is much more creative about coming up with clever arguments in support of itself (much better than you), who should you believe if the clever side also has all the best arguments?
Bakkot70

It is!? Does anyone know a proof of Compactness that doesn't use completeness as a lemma?

There's actually a direct one on ProofWiki. It's constructive, even, sort of. (Roughly: take the ultraproduct of all the models of the finite subsets with a suitable choice of ultrafilter.) If you've worked with ultraproducts at all, and maybe if you haven't, this proof is pretty intuitive.

As Qiaochu_Yuan points out, this is equivalent to the ultrafilter lemma, which is independent of ZF but strictly weaker than the Axiom of Choice. So, maybe it's not that intuitive... (read more)
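The ultraproduct argument sketched above, written out in standard form (my paraphrase, not quoted from ProofWiki):

```latex
% Compactness via ultraproducts. Let \Sigma be finitely satisfiable,
% let I be the set of finite subsets of \Sigma, and pick M_i \models i
% for each i \in I. The sets E_i = \{ j \in I : i \subseteq j \} have
% the finite intersection property, so some ultrafilter U on I
% contains every E_i. By Łoś's theorem, for each \sigma \in \Sigma:
\[
  \{\, j \in I : M_j \models \sigma \,\} \supseteq E_{\{\sigma\}} \in U
  \quad\Longrightarrow\quad
  \prod_{i \in I} M_i \,/\, U \models \sigma,
\]
% so the ultraproduct is a model of all of \Sigma.
```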

1benelliott
That's really beautiful, thanks.