All of Halfwitz's Comments + Replies

I would be interested in organizing this if no one else will. Would like the tips

Not really. May be worth listening to while washing dishes or something but nothing essential.

If people agree the test is fair and the randomization is fair, I'm not convinced it would be unstable after a generation or two. Pure sortition does retain that advantage; the IQ filter reduces it, but the filter could be adjusted to increase stability. For example, say it admitted only those above the 50th percentile. At that level, coordination against the system would be difficult, as no one would want to publicly admit they weren't eligible for sortition. Perhaps this would remain true if only those above the 90th percentile were eligible, if not the 99th.
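For concreteness, a minimal sketch of the mechanism, assuming a standardized test score as the filter; the 50th-percentile cutoff is from the example above, while the population size, score distribution, and number of seats are illustrative assumptions:

```python
# Minimal sketch of sortition with a test-score percentile filter.
# Population size, score distribution, and seat count are assumptions.
import random

random.seed(1)
population = [random.gauss(100, 15) for _ in range(100_000)]  # stand-in test scores
cutoff = sorted(population)[len(population) // 2]             # 50th-percentile cutoff
eligible = [score for score in population if score >= cutoff]
assembly = random.sample(eligible, k=500)                     # fair random draw of seats
print(f"cutoff {cutoff:.1f}, eligible pool {len(eligible)}, seats {len(assembly)}")
```

Raising the cutoff to the 90th or 99th percentile is just a change to the index used for `cutoff`.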

0Lumifer
The question is whether people agree the whole system is fair. Let's imagine that I'm a person below the whatever IQ cutoff you prefer. Please explain to me (ELI5, preferably) how your proposed system is fair to me.

If anyone is interested in playing an AI box experiment game, I'd be interested in being the gatekeeper.

1)

Just to be sure I'm understanding you correctly: what you're saying is that average utilitarianism prescribes creating lives that are not worth living, so long as they are less horrible than average. This does seem weird. Creating a life that is not worth living should be proscribed by any sane rule!

2)

I don't find this objection super compelling. Isn't the reason average utilitarianism was proposed that people find mere addition unattractive?

3)

Another fine point. People with lives worth living shouldn't feel the need to suicide when they learn they are d... (read more)

[This comment is no longer endorsed by its author]
0entirelyuseless
Average utilitarianism does not prescribe "creating lives that are not worth living" as long as they are better than average. Rather, it says that a life is worth living if it is better than average, and not worth living if it is worse than average. Which of course is one of the most absurd claims ever made.
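A minimal worked step behind both framings, assuming a population of $n$ people with average welfare $\bar{a}$ and a candidate new life with welfare $u$:

$$\bar{a}' = \frac{n\bar{a} + u}{n+1} > \bar{a} \iff u > \bar{a}$$

So adding the life raises the average exactly when $u$ exceeds the current average: if $\bar{a} < 0$, any $u$ with $\bar{a} < u < 0$ still raises it, which is the "less horrible than average" case above, and reading the inequality as a criterion for whether a life should exist gives the "worth living iff better than average" gloss.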

200, or 400 if you count matching.

4So8res
Thanks!

I watched it based on this recommendation. I'll second it - great fun, great animation, but I don't mind CGI. I thought I detected some Hannu Rajaniemi influences, too.

Gubhtu V guvax vg fubhyq unir raqrq jvgu gur gjb wbvavat gur iblntr. Qvatb'f bowrpgvbaf gb fvzhyngvba jrer cheryl n znggre bs gur cbyvgvpf naq Natryn pyrneyl cersreerq yvsr nf na rz. Gur erny jbeyq ybbxrq cerggl pehzzl.

Halfwitz
100

I imagine a lot of the selection was indirect selection for neoteny. I think it would be much, much harder to select for domestication in octopi, as they do not raise their young.

I've been looking for a good Anime/Manga podcast. The ones I've found have been OK but not exactly what I'm hoping for. Anyone know of one?

2lsusr
Trash Taste covers lots of topics but the hosts come from an anituber background so they talk about anime a lot.

I agree, there is some magic to NGE that RahXephon doesn't have - but I'm not sure how much of that is caused by the fact that I saw NGE first and it was the first Anime I ever watched. I love Neuromancer, but much of my love for it comes from the fact that it was the first science fiction novel I ever read. I had no antibodies. If I had read Vinge first, it's likely I wouldn't have been too impressed with Neuromancer, which has as many flaws as NGE.

I can't justify giving NGE a higher score for the reasons you described, but I do slightly prefer it - though less so after re-watching RahXephon.

Read The Martian - not bad I guess, but a sort of celebration of terrible ethics.

5gwern
The ethical dimension is lampshaded at a few points and it's pointed out that it's not quite as clearcut as 'we are wasting billions of dollars to riskily save one volunteer'; I felt he implied that the death might also kill or set back the space program, which makes the choice a bit different. I'm not sure he's entirely wrong: the public has really weird beliefs and attitudes (as this absurd 'Cecil the Lion' dustup has reminded us yet again) and it's entirely possible things might play out as depicted in Weir's novel. The last shuttle deaths did kill that program, after all.

Watched a lot of robot anime last month.

Rewatched RahXephon. I'd tie it with NGE at 9/10. I especially liked the first two episodes. I thought the romance in it was quite good, too. The animation goes off-model from time to time, but it's serviceable. The music is wonderful, especially the closing theme https://www.youtube.com/watch?v=8aTUy44JA8w

I also watched Eureka Seven and found it vastly inferior to RahXephon - maybe 5/10 and that's pushing it.

I've been enjoying Knights of Sidonia [slight spoilers] - a half-and-half mix of neat science fiction and an... (read more)

0ShardPhoenix
I still enjoyed the setting and some of the fights but I thought the second season was a fair bit worse than the first due to (as you mention) too much focus on harem antics, combined with a reduction in the overall sense of urgency/plot momentum.
0gwern
What I find interesting about RahXephon is that, having watched it and NGE several times and read staff interviews for both, in almost every respect RX is clearly better thought out and more competently executed*, and yet it's NGE which ultimately somehow turns out to be greater than the sum of its parts and a part of anime history, while RX is 'merely' one of the best mechas around, especially from that era. * with the exception of the music - RX's is quite good but Sagisu's NGE work is still considered one of the best, up there with Kanno's Cowboy Bebop.

I'm with Yvain on measure, I just can't bring myself to care.

0jacob_cannell
Relative measure matters, but it's equivalent to probability and thus adds up to normality.

I'm confused. What were you referring to when you said, "on this assumption"?

0Fivehundred
That you find yourself randomly selected from a pool of all conceivable observers, rather than a pool with probabilities assigned to them. EDIT: Actually, the former option is flatly impossible, because my mindstate would jump to any conceivable one that could be generated from it. I would have an infinitesimal chance of becoming coherent enough to have anything resembling a 'thought.'

If you make Egan's assumption, I think it is an extremely strong argument.

Why don't you buy it?

0Fivehundred
I don't reject it; I simply think that Dust Theory based on this assumption is so unlikely that we may as well assume the opposite: that different patterns can be more common (have more measure) than others.

It isn't me at all anymore.

There will be a "thread" of subjective experience that identifies with the state of you now, no matter what insult or degeneration you experience. I assumed you were pro-teleporter. If you're not, why are you even worried about dust theory?

0Fivehundred
What is 'me?' I'm not an ontologically basic thing. As long as it is a process, I don't see why I wouldn't just die.

Well, it might be that such observers are less 'dense' than ones in a stable universe

In that case most of your measure is in stable universes and dust theory isn't anything to worry about.

But that can't be the case: isn't the whole point of dust theory that basically any set of relations can be construed as a computation implementing your subjective experience, and that this experience is self-justifying? If that's the case, the majority of your measure must be dust.

Dust theory has a weird pulled-up-by-your-own-bootstraps taste to it and I have a strong... (read more)

0TheAncientGeek
There are different ways of defining measure. DT guarantees that lack of continuity, and therefore low density, won't be subjectively noticeable... at least, it will look like chaotic observations, not feel like "I'm dead". Maybe you could include: 1. construed as a computation BY WHOM? 2. Computation is a process, and not just any process, so the idea of an instantaneous computational state is suspect. (There is a possible false dichotomy there: consciousness isn't the output of a computation that takes a lifetime to perform, but there could still be millions of computations required to generate a "specious present")
0Fivehundred
Not necessarily to you. It doesn't have to make much sense to you at all. But our observations are orderly, and that is something that can't be explained by the majority of our measure being dust. Why would it default to this? If you make Egan's assumption, I think it is an extremely strong argument.

That doesn't seem very airtight. There is still a world where a "you" survives or avoids all forms of degradation. It doesn't matter if it's non-binary. There are worlds where you never crossed the street without looking and very, very, very, very improbable worlds where you heal progressively. It's probably not pleasant, but it is immortality.

0Fivehundred
How would I contact a version of me in another branch? It isn't me at all anymore. You can receive and experience permanent brain damage, so why would a death experience be any different? And what about sleep? If this was true it seems like you wouldn't be able to let go of any of your mental faculties at all.

Dust theory is beautiful and terrifying, but what do you say to Egan's argument against it: http://gregegan.customer.netspace.net.au/PERMUTATION/FAQ/FAQ.html

-1Fivehundred
Erm, you mean his argument that we would expect to find ourselves in a more chaotic universe? Well, it might be that such observers are less 'dense' than ones in a stable universe (I never grasped the mathematics of it), and if that's the case then I don't see how the argument works. But then the opposite problem applies- our universe is far too complex, and relies upon contingencies that are highly improbable, merely for observers to exist. The only solution for the 'Big World' is that universes like these have a high base rate, and that this really is the most common type of scenario that produces life. But that can't save Dust Theory, and probably not Ultimate Ensemble either. On the other hand, if Egan is right, losing mental awareness in a chaotic universe could have the opposite effect of what I first thought- propelling you into more stable worlds by virtue of your continued existence. This may explain our current observations. But that's a very cloudy line of thinking. Maybe such beings "join" with sleeping human infants if the observations match theirs. But still, this universe seems too stable; why would they have defaulted to this one? And why would the most common type of observer be similar enough to humans even to have that much in common? I'm pretty sure someone who knows what they're talking about could put this question to rest. But no one here will even understand it.

Do you have a link to Max Tegmark's rebuttal? What I've read so far seemed like a confused dodge.

0Fivehundred
https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality#Max_Tegmark.27s_work

If you're interested in robotics, this video is a must-see: https://youtu.be/EtMyH_--vnU?t=32m34s

I have to say I'm baffled. I was genuinely shocked watching the thing. Its speed is incredible. I remember writing off general-purpose robots after closely following Willow Robotics' work. That was only three years ago. Again, I'm pretty shocked.

0Houshalter
This is six years old.

This forum doesn't allow you to comment if you have <2 karma. How does one get their first 2 karma then?

5philh
Doesn't it? IIRC you need 2 karma to post a top-level thread in discussion, more than that to post a top-level thread in main, but none to comment.
0Gondolinian
I'm not sure I accept your premises? I could certainly be wrong, but I have not gotten the impression that comments can be prevented by low karma, only posts to Discussion or Main. (And I recall the minimum as 20, not 2.*) The most obvious way to get the karma needed to post is by commenting on existing posts (including open threads and welcome threads), and new users with zero initial karma regularly do this without any apparent difficulty, so unless I'm missing something, I don't think it's a problem? *ETA: It seems that 2 is the minimum for Discussion, while 20 is the minimum for Main.
Halfwitz
-40

I doubt there's much to be done. I wouldn't be surprised if MIRI shut down LessWrong soon. It's something of a status drain because of the whole Roko thing and no one seems to use it anymore. Even the open threads seem to be losing steam.

We still get most of the former value from the SlateStarCodex, Gwern.net, and the tumblr scene. Even for rationality, I'm not sure LessWrong is needed now that we have CFAR.

It's true that Less Wrong has a reputation for crazy ideas. But as long as it has that reputation, we might as well continue posting crazy ideas here, since crazy ideas can be quite valuable. If LW was "rebooted" in some other form, and crazy ideas were discussed there, the new forum would probably acquire its own reputation for crazy ideas soon enough.

The great thing about LW is that it allows a smart, dedicated, unknown person to share their ideas with a bunch of smart people who will either explain why it's wrong or change their actions base... (read more)

3[anonymous]
Eh, it is just useful to have a generic discussion forum on the Internet with a high average IQ and a certain culture of epistemic sanity / trying to avoid at least the worst fallacies and biases. If, out of the many ideas in the sequences, even just "tabooing" got out into the wild, so that people on other forums grew more used to discussing actual things instead of labels and categories, it could become bearable out there. For example, you can hardly have a sane discussion on economics.reddit.com because labels like capitalism and socialism are used as rallying flags.
D_Malik
160

I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

9raydora
I recently joined this site after lurking for a while. Are blog contributions of that sort the primary purpose of Less Wrong? It seems like it fulfills a niche that the avenues you listed do not: specifically, in the capacity of a community rather than an individual, academic, or professional endeavor. There are applications of rational thought present in these threads that I don't see gathered anywhere else. I'm sure I'm missing something here, but could viewing Less Wrong as a potential breeding ground for contributors of that kind be useful? I realize it's a difficult line to follow without facing the problems inherent to any community, especially one that preaches a Way. I haven't encountered the rationalist tumblr scene. Is such a community there?

If you’re looking for a useful major, computer science is the obvious choice. I also think statistics majors are undersupplied, though I have only anecdotal data there. I know a few stats majors (none overly clever) who have done far more with the degree than I would have guessed as an undergraduate. But this could have changed since, markets being anti-inductive. If your goal is effective egoism, you’re probably not in the best major. Probably the best way to go about your goal is to follow the advice of effective altruists and then donate all the money to your future self, via a Vanguard fund. If this sounds too evil, paying a small tithe, 1%, would more than make up for it at a manageable cost.
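For a sense of the numbers, here is a minimal sketch of the "donate to your future self" arithmetic; the 5% real return, 30-year horizon, and $10,000/year contribution are illustrative assumptions, not from the comment:

```python
# Illustrative only: compound growth of contributions to an index fund,
# with and without a 1% tithe skimmed off each year's contribution.
# Return rate, horizon, and contribution size are assumptions.
def future_value(annual, years, rate, tithe=0.0):
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual * (1 - tithe)
    return total

base = future_value(10_000, 30, 0.05)
tithed = future_value(10_000, 30, 0.05, tithe=0.01)
print(f"no tithe: {base:,.0f}")
print(f"1% tithe: {tithed:,.0f} ({(base - tithed) / base:.1%} less)")
```

Under these assumptions the tithe costs exactly 1% of the final balance, which is the sense in which it is "manageable".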

2RowanE
I'm not really considering a change in major as on the table, for various reasons, mostly personal. I'm more thinking of what career to try for given the degree I'm on track for and that I've rejected the obvious choices for that degree. The difference with the "effective egoist" approach is the diminishing returns of money - altruists want to earn as much as they can over the course of their lives, I want to earn a set amount in as little time as possible, and might want to earn more if I'm making lots of money quickly or without stress. That's the main reason the "get PhD, become quant" track is ruled out - the "teaching sounds horrible" aside was referring to actually becoming a teacher, which is a common suggestion for what to do with a physics degree when ruling out science; I wasn't actually considering how bad teaching undergrads would be. And there's not really a "too evil" for me: my response to the ethical obligation to donate to efficient charity is to notice that I don't feel guilty even though the logic seems perfectly sound, say "well, I guess I'm already an unrepentant murderer, and therefore evil", and then functionally be an egoist while still using utilitarianism for actual moral questions.

After reading a biography of Hugh Everett, I checked out his son's music and was pleasantly surprised. I especially liked this one: https://www.youtube.com/watch?v=ZYvj7oeIMCc

Good rec. And not just for the education - the whole show is very charming, though I agree nothing too special.

1diegocaleiro
Doesn't refer to Principal Agency.

If you liked the visual style,

I liked it, but I think the static textures should have been used with a bit more subtlety.

Mononoke and Ayakashi.

I'll check those out, looks like they're both on Crunchyroll.

Finished it last night.

Gehr, V gubhtug pnfgvat Rqjneq nf zber ivyynvabhf guna pnaaba jnf vafcverq - gur snpg gung ur jnf cbffrffrq fbeg bs ehvarq gung.

Still one of the better animes I've seen recently, and probably the best adaptation of The Count of Monte Cristo I've ever seen - though I haven't seen many.

Now I need a new anime.

0lmm
If you like visual weirdness in anime I highly recommend Kaiba; it's a rather cool sci-fi story too.
2gwern
If you liked the visual style, you could check out Mononoke and the last part of Ayakashi.

I've been enjoying Gankutsuou: The Count of Monte Cristo

0moonshadow
It's awesome until the plot does a 90-degree turn near the end. Unfortunately the authors just aren't as good as Dumas and oynzvat Rqzbaq'f npgvbaf ba zvaq pbagebyyvat fcnpr nyvraf xvaq bs erzbirf gur cbvag.

After all Eliezer's warnings, you constructed a superintelligence in your own house.

3Gondolinian
And it looks to be a candy maximizer. :)

Or is it the apparent resemblance to Pascal's wager?

That, and believing in hell is lower status than believing in heaven. Cryonics pattern-matches to a belief in a better life after death, the basilisk to hell.

Halfwitz
160

I remain impressed by how much awareness one high-status academic can raise by writing a book.

Halfwitz
232

Those two quotes that are dated before 2004 are the least outrageous.

This is the most outrageous one to me:

I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.

And it's clearly the exact opposite of what present Eliezer believes.

Halfwitz
100

The stuff that bothers me is the Usenet and mailing list quotes (they are equivalent to passing notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date it was written. And if he has explicitly disclaimed the statement, perhaps that should be mentioned, too.

Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you're criticising someone who no longer exists.

Also, the page where you try to diagnose him with narcissism just seems mean.

-1XiXiDu
I can clarify this. I never intended to write that post but was forced to do so out of self-defense. I replied to this comment whose author was wondering why Yudkowsky is using Facebook more than LessWrong these days. To which I replied with an on-topic speculation based on evidence. Then people started viciously attacking me, to which I had to respond. In one of those replies I unfortunately used the term "narcissistic tendencies". I was then again attacked for using that term. I defended my use of that term with evidence, the result of which is that post. What do you expect me to do when I am mindlessly attacked by a horde of people? Just leave it at that and let my name be dragged through the dirt? Many of my posts and comments are direct responses to personal attacks on me from LessWrong members.
Halfwitz
100

As far as I can tell, Yudkowsky basically grew up on the internet. I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently. I think this goes against some notion of journalistic tact.

3XiXiDu
This is exactly the kind of misrepresentation that makes me avoid deleting my posts. Most of the most outrageous things he said were written in the past ten years. I suppose you are partly referring to the quotes page? Please take a look: there are only two quotes that are older than 2004, for one of which I explicitly note that he doesn't agree with it anymore, and a second which I believe he still agrees with. Those two quotes that are dated before 2004 are the least outrageous. They are there mainly to show that he has long believed in singularitarian ideas and in his own ability to save the world. This is important in evaluating how much of the later arguments are rationalizations of those early beliefs. Which is in turn important because he's actually asking people for money and giving a whole research field a bad name with his predictions about AI.
Halfwitz
120

For the record, I genuinely object to being thought of as a "highly competent CEO."

But that's exactly what the Dunning-Kruger effect would lead us to expect a highly competent CEO to say! /s

non-natural CEO working hard and learning fast and picking up lots of low-hanging fruit but also making lots of mistakes along the way because he had no prior executive experience

To be honest, I didn't mean much by it. Just that MIRI has been more impressive lately, and presumably a good portion of this is due to your leadership.

But that's exactly what the Dunning-Kruger effect would lead us to expect a highly competent CEO to say!

No. It would lead us to expect that the top quartile would rank themselves well above the median but below their actual scores.

(And then to ask why we're thinking in such coarse granularity as quartiles.)
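A minimal simulation of that prediction, assuming the standard reading in which self-assessments regress toward a common anchor; the 65th-percentile anchor and the 0.4 weight on actual skill are illustrative assumptions, not figures from the studies:

```python
# Dunning-Kruger pattern modeled as regression toward a common anchor.
# Anchor (65) and weight (0.4) are illustrative assumptions.
import random

random.seed(0)
n = 10_000
actual = [random.uniform(0, 100) for _ in range(n)]   # true percentile score
estimate = [0.4 * a + 0.6 * 65 for a in actual]       # self-assessed percentile

pairs = sorted(zip(actual, estimate))                 # sort by actual skill
q = n // 4
for i, label in enumerate(["bottom", "second", "third", "top"]):
    chunk = pairs[i * q:(i + 1) * q]
    mean_actual = sum(a for a, _ in chunk) / q
    mean_est = sum(e for _, e in chunk) / q
    print(f"{label} quartile: actual {mean_actual:.0f}, self-estimate {mean_est:.0f}")
```

The top quartile's mean self-estimate comes out well above the median (50) but below its mean actual score, matching the claim.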

Halfwitz
470

To be honest, I had you pegged as being stuck in a partisan spiral. The fact that you are willing to do this is pretty cool. Have some utils on the house. I don’t know if officially responding to your blog is worth MIRI’s time; it would imply some sort of status equivalence.

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors. Mining someone’s juvenilia for outrageous statements is not productive – I mean he was 16 when he wrote some of the... (read more)

-2XiXiDu
If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I highlight an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone might highlight something they said. This is not a smear campaign.
6John_Maxwell
Still, it could be nice if XiXiDu put some kind of disclaimer he came up with himself at the top of his posts.
lukeprog
480

For the record, I genuinely object to being thought of as a "highly competent CEO." I think "non-natural CEO working hard and learning fast and picking up lots of low-hanging fruit but also making lots of mistakes along the way because he had no prior executive experience" is more accurate. The good news is that I've been learning even more quickly since Matt Fallshaw joined the Board, since he's able and willing to put in the time to transfer to me what he's learned from launching and running multiple startups.

So the claim isn’t so much that traditionalism is great, only that enlightenment is worse than traditionalism after controlling for technology? I was thinking of neoreactionaries as deformed utopians, but the tone is more like, “let’s reset social ‘progress’ and then very carefully consider positive proposals.”

-1MichaelAnissimov
Sort of. Traditionalism is great, though. You have the tone right. When people see the headline "monarchy!" they're missing the 2-3 years of thinking and 2,000+ pages of reading that go between step 1 (let's reset social progress and then very carefully consider positive proposals) and step 2 (maybe, in some specific contexts, something like a certain class of monarchies would be useful for certain small-to-medium states). Monarchy is just a tentative positive proposal (with limited potential application) I came to after several years of searching after the Cathedral mind virus had been dispelled. Moldbug seems to have come to something closer to anarchocapitalist seasteading-type city state proposals. Land leans even more anarchocapitalist than Moldbug. So, the positive recommendations vary widely. We are definitely not utopians, and admit our proposals are flawed just like any other.

That makes sense, but now that I think about it I don’t find this claim particularly neoreactionary: Enlightenment memes induce a sort of agnosia that prevents the rational design of non-enlightenment social structures. Treating this agnosia will increase the amount of possible social structures we are able to consider and the chances that we will be able to design something better.

What I see proposed are specific forms of monarchy or corporate-like governmental structures. More exotic proposals like futarchy and liquid democracy are dismissed, at least by Moldbug. So pre-enlightenment (or maybe anti-enlightenment) does feel like a better label to my non-expert ears.

1MichaelAnissimov
First and foremost, neoreaction is about a critique. Positive proposals are less frequently discussed and there is great disagreement about them within neoreaction. So, many people involved in neoreaction are involved primarily for the negative critique, and make no commitment to any specific positive proposals.
2MichaelAnissimov
As long as it's clear that the term isn't doing any semantic heavy-lifting here, it's safe in this context. No flattering claims are being made about non-Enlightenment principles in general, just that they correspond to a vast space.

You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn’t strike me as particularly out there.

The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs]

These seem like conservative conclusions derived from conservative assumptions. You don't even have to buy recursive self-improvement at all.

Ironically, I thin... (read more)

It takes years of study to write as poorly as he does.

Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars

You’re confusing peoples’ goals with their expectations.

The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time to take seriously.

Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me ... (read more)

4XiXiDu
I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field? The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don't just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power...you don't do that if all you have are arguments that you personally find convincing. What you do is hard empirical science in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted. I am aware of many of the tactics that the sequences employ to dismiss the above paragraph. Tactics such as reversing the burden of proof, conjecturing arbitrary amounts of expected utility etc. All of these tactics are suspect. Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon. Mostly the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all the implicit assumptions you make turn out to be true, but which might actually be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than, e.g., within the context of a gradual development of tools that become more autonomous due to their increased ability to understand and do what humans mean.
1Punoxysm
I feel like citing Malthus as striking you as starkly true is a poor argument.
Halfwitz
250

Do not spam high-status people. That's a recipe for an ugh field. I'm pretty confident that Elon Musk is capable of navigating this terrain, including finding a competent guide if needed. He's obviously read extensively on the topic, something that’s not possible to do without discovering MIRI and its proponents.

-4[anonymous]
Who is talking about spamming anyone? You are completely missing my point. The goal is not to help Elon navigate the terrain. I know he can do that. The point is to humbly ask for his advice as to what we could be doing given his track record of good ideas in the past.

Singularity 1 on 1 is a podcast that has interviewed people associated with this forum, like Lukeprog, Robin Hanson, and James Miller. However, there seems to be a lot of inferential distance between the host and his guests. I think someone like James Miller or Yvain would make a better host for this type of podcast.

Side note, if you find podcasts almost unlistenable at normal speed, you should use Overcast, which has the best speed-up effects of any app I've tried.

1Capla
I concur. I started using Overcast a few months ago. Will look into S1o1.

I just watched Tim's Vermeer. It was a very good, fun documentary.

Good call here, btw. I've been going through random reddit comments on posts that link to LessWrong (http://www.reddit.com/domain/lesswrong.com), discarding threads on /r/hpmor, /r/lesswrong, and other affiliated subs. The basilisk is brought up far more than I expected – and widely mocked. This seems to occur on Hacker News, too – on which LessWrong was once quite popular. I wasn’t around when the incident occurred, but I’m surprised by how effective it’s been at making LessWrong low status – and by its odd persistence years after its creation. Unless ... (read more)

4gwern
It works much better than the previous go-to slur, cryonics and freezing heads, ever did. I'm not sure why - is it the censorship aspect? Or is it the apparent resemblance to Pascal's wager?