All of JoachimSchipper's Comments + Replies

I've used microcovid occasionally, to check that my intuitive feelings about risk were not completely crazy (and that did cause some updates; notably, putting numbers to staying outdoors had an influence). I'm not a heavy user, but I do appreciate the work you've done!

I'd basically like to see more of the same - update microcovid.org for omicron and keep it going.

(FWIW, I'm in the Netherlands, where we just entered a new lockdown for omicron. So COVID unfortunately isn't "over".)

Sameerishere
Agreed - I'm struggling to figure out how to apply microcovid estimates in the wake of omicron. Without an adjustment for that, it seems like no other improvements would matter. I would be willing to pay $1000 if microcovid were updated to reflect omicron. (Please agree with my post if you have a similar willingness to pay, and agree with JoachimSchipper's post if you just generally support updates for omicron but don't have a similar willingness to pay.) (I'm a little confused as to why it's not clear that this is the best next step for microcovid; if anyone has suggestions for making ad-hoc adjustments to use microcovid given omicron, I would appreciate them!)
Answer by JoachimSchipper

You're right that negative affect toward NFTs in particular (and blockchain stuff in general) is part of the reaction, but I don't see the reasoning error in:

  • "<X> causes greater electricity consumption;
  • on the margin, greater electricity consumption currently causes <more pollution / finite resources to be consumed faster / more birds to die due to windmills / ...>, which is bad;
  • this is a downside to <X>."

It's probably the case that NFTs do not directly cause greater electricity consumption, but NFTs do plausibly indirectly cause greater electricity consumption, e.g. via making Ethereum more valuable, thus increasing mining rewards, thus increasing competition.

sxae
Thanks for your thoughtful reply, Joachim! I am definitely not saying that NFTs are not bad for the environment - they are, as is anything that draws on the mains grid and creates demand. It is absolutely correct to judge cryptocurrency negatively, just as it is correct to judge things like driving a gas guzzler or flying 20 times a year. But it seems like we completely ignore some ways in which the energy we produce is wasted, and hyperfocus on others, and we just end up with a completely screwed up valuation of what needs changing.

Although I've heard the advice to leave after a year, my experience has been different - after three years, I'm still learning a lot and I'm beginning to tackle the really hard problems. Basically, I find myself agreeing with Yossi Kreinin's reply to Patrick McKenzie's advice, at least so far. (Both links are very much worth reading.)

Of course, you do need to push for interesting assignments and space to learn. Also, be sure to pick a company that actually does something interesting in the first place - I work on embedded crypto devices for the government ...

Viliam_Bur
I was thinking about the programmers unable to do the FizzBuzz test, and the interesting insight that Eliezer linked long ago -- that maybe it's not that so many programmers are extremely stupid; maybe it's just that the non-extremely-stupid ones sooner or later get a job, while the extremely stupid ones remain in circulation for a long time, so more recruiters have the misfortune of meeting them.

Maybe it also works the other way round. When a great IT company is hiring, the employees are happy to tell their friends, so the positions are filled quickly. When a crappy IT company is hiring, it takes them much longer to find someone, and meanwhile other programmers are quitting, so they have to keep hiring to replace them. Which means: if you apply to a random IT company that is hiring at the moment, you are not going to see an average company; you are most likely to see a crappy one. Of course, you can be lucky. But most people are not.

In a good company you can keep learning even after the one year. But those are rare. A higher-level piece of advice is to be strategic and find the good company. For this you want to maintain many contacts with programmers who share your values (who recognize the type of job you would like to have). But how do you get these contacts? Keep in touch with your university classmates (I failed to do this, because at university I had no idea how much I would care about this one day), or take a few crappy jobs and keep in touch with the former colleagues (who may change their jobs later and find something better). Alternatively, do something to become famous (e.g. contribute to open-source software, or write a popular programming blog) and then people will contact you.
  • Batman is a murderer no less than the Joker, for all the lives the Joker took that Batman could've saved by killing him. (ch. 85)
  • "It's not fair to the innocent bystanders to play at being Batman if you can't actually protect everyone under that code." (ch. 91)
  • Harry had no intention of saying it out loud, of course, but now that he'd failed decisively to prevent any deaths during his quest, he had no further intention of being restrained by the law or even the code of Batman. (ch. 97)
dspeyer
More immediately relevant: Even in the world of comic books, the only reason a superhero like Batman even looks successful is that the comic-book readers only notice when Important Named Characters die, not when the Joker shoots some random nameless bystander to show off his villainy.

Thanks, Nancy, for putting in this effort.

Some people do need to see that link, but note that it, too, is rather dangerous.

And, of course, encouraging homeownership makes this worse. Good thing that most of the Western world hasn't made that an explicit policy goal for the past decade...

Strange7
Homeownership makes employees less willing to relocate, but also more tolerant of short-term decreases in the demand for their skills, since they can postpone maintenance on a house (or perform it inefficiently themselves with the surplus time) more safely than they can miss rent payments to a landlord.
[anonymous]
Try three.

I was pretty happy about that, actually.

I assume that TheAncientGeek has actually submitted the survey; in that case, their comment is "proof" that they deserve karma.

I, too, took the survey. (And promptly forgot to claim my karma; oh well.)

I didn't exactly disagree with the content, right?

Part of the problem is just that writing something good about epistemic rationality is really hard, even if you stick to the 101 level - and, well, I don't really care about 101 anymore. But I have plenty of sympathy for those writing more practical posts.

undermind
No, you didn't. And kudos (in the form of an upvote) to you for suggesting something to improve the niceness of rationalists -- as has been pointed out many times, that's something we should work on. Yeah, instrumental rationality is (epistemically) easier -- on the writer as well as on the reader. Epistemic rationality requires rigor, which usually implies a lot of math. Instrumental rationality can be pretty successful with a few examples and a moderately useful analogy.

This is not nice - could you try to find a more pleasant way to say this?

Also, LW does do epistemic rationality - but it's easier to say something useful and new about practical matters, so there are more posts of that kind.

undermind
Sure, it was snarky, but I thought it was funny. It's a decent criticism of a decent chunk of LW, such that I don't have a great response to it. Check your accuracy at a meta-level to determine when to lie to yourself? That seems to be how this technique is used, but it feels like an unsatisfactory response.

Note, though, that (a) "Lisp doesn't look like C" isn't as much of a problem in a world where C and C-like languages are not dominant, and (b) something like Common Lisp doesn't have to be particularly functional - that's a favored paradigm of the community, but it's a pretty acceptable imperative/OO language too.

"Doesn't run well on my computer" was probably a bigger problem. (Modern computers are much faster; modern Lisp implementations are much better.)

Edit: still, C is clearly superior to any other language. ;-)

lmm
I suspect the main reason Lisp failed is the syntax, because the first thing early computer users would try to do is get the computer to do arithmetic. In C/Fortran/etc. you can write arithmetic expressions that look more-or-less like arithmetic expressions, e.g. (a + b/2) ** 2 / c. In Lisp you can't - the same expression comes out as something like (/ (expt (+ a (/ b 2)) 2) c).

Surveyed.

Also, spoiler: the reward is too small and unlikely for me to bother thinking through the ethics of defecting; in particular, I'm fairly insensitive to the multiplier for defecting at this price point. (Morality through indecisiveness?)

Lumifer
Doesn't help me much. The purpose of weapons -- all weapons -- is to kill. What exactly is the moral difference between a nuclear bomb and a conventional bomb?

Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.

Unless I am badly mistaken, "indemnify" would mean that Harry has to pay etc. if e.g. Dumbledore decides to demand recompense of his own. (Note that Dumbledore may well have similar power over her as he has over Harry himself.)

This is obviously much worse than just giving up his own claim ("exonerate").

Relatedly, most TCP congestion-control implementations are variants of the Reno algorithm, which basically means that they increase the transmission rate until (the network is so congested that) packets begin dropping. In contrast, Vegas-type algorithms increase the transmission rate until packets begin to take longer to arrive, which starts happening in congested networks shortly before packets are actually lost. A network of Vegas machines has considerably lower latency, and only minimally worse throughput, than a network of Reno machines.

Unfortunately, since Reno backs off later than V...
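To make the contrast concrete, here is a toy model of the two control loops - purely an illustrative sketch (the window arithmetic and the alpha/beta thresholds are stand-ins), not how any real TCP stack is written:

    # Toy model of the two congestion signals (illustrative only; real
    # TCP stacks are far more involved).

    def reno_step(cwnd, packet_lost):
        """Reno-style: grow the window until packets drop, then halve it."""
        if packet_lost:
            return max(1.0, cwnd / 2)  # multiplicative decrease, after loss
        return cwnd + 1.0              # additive increase per round trip

    def vegas_step(cwnd, base_rtt, current_rtt, alpha=2.0, beta=4.0):
        """Vegas-style: back off when RTTs rise, i.e. when queues build."""
        # Estimated packets sitting in queues: the gap between expected
        # throughput (cwnd / base_rtt) and actual throughput
        # (cwnd / current_rtt), scaled by the base RTT.
        queued = cwnd * (1.0 - base_rtt / current_rtt)
        if queued < alpha:
            return cwnd + 1.0          # little queueing: speed up
        if queued > beta:
            return cwnd - 1.0          # delay rising: back off before loss
        return cwnd                    # in the target band: hold steady

Reno's only brake is packet loss, so Reno flows fill every queue to overflowing before slowing down; Vegas brakes on the earlier, gentler delay signal.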

sketerpot
Unless they're idle most of the time, that is. Anybody who's run a modern BitTorrent client alongside a web browser has been in this situation: the congestion control protocol used by most BitTorrent clients watches for packet delays and backs off before TCP does, so it's much lower-priority than just about everything else. Even so, it can end up using the vast majority of bandwidth, because nobody else is using it.

Is she particularly powerful, though? She's extraordinarily talented, very knowledgeable for her age, and has more raw power than anyone in her year including Draco; but Rita is more experienced, and most importantly older - it has been repeatedly pointed out that HP lacks the raw power for something-or-other, and the twins are far stronger than he despite not being particularly talented. It seems that Rita should have an edge in the "raw power" department, and I'd expect this effect to key off raw power.

Note that it's also sufficient to assume that Quirrell and/or Mary's Room can suppress this effect.

This is a bit un-LW-ian, but: I'm earnestly happy for you. You sound, if not happier, more fulfilled than in your first post on this site. (Also, ambition is good.)

katydee

If this is in fact un-LW-ian, it shouldn't be. :)

Sounds like the Buddha and his followers to me.

patio11 is something of a "marketing engineer", and his target audience is young software enthusiasts (Hacker News). What makes you think that this isn't pretty specific advice for a fairly narrow audience?

Spoiler: Gura ntnva, gur nyvra qbrf nccneragyl znantr gb chg n onpxqbbe va bar bs gur uhzna'f oenvaf.

I agree that the AI you envision would be dangerously likely to escape a "competent" box too; and in any case, even if you manage to keep the AI in the box, attempts to actually use any advice it gives are extremely dangerous.

That said, I think your "half an inch" is off by multiple orders of magnitude.

My comment was mostly inspired by (known effective) real-world examples. Note that relieving anyone who shows signs of being persuaded is a de-emphasized but vital part of this policy, as is carefully vetting people before trusting them.

Actually implementing an "N people at a time" rule can be done using locks, guards and/or cryptography (note that many such schemes are provably secure even against an adversary with unlimited computing power - "information-theoretic security").
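As a concrete (if simplified) illustration of the cryptographic option: a minimal sketch of n-of-n XOR secret sharing, which is information-theoretically secure. The gatekeeper scenario below is my hypothetical framing, and a real deployment would more likely use a threshold scheme such as Shamir's.

    import secrets
    from functools import reduce

    def _xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, n: int) -> list[bytes]:
        """n-of-n XOR sharing: all n shares are needed to reconstruct,
        and any n-1 of them are statistically independent of the secret."""
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        shares.append(reduce(_xor, shares, secret))
        return shares

    def reconstruct(shares: list[bytes]) -> bytes:
        return reduce(_xor, shares)

    # Hypothetical use: a release code held by three mutually-suspicious
    # gatekeepers, no subset of whom learns anything on their own.
    code = b"release-code"
    parts = split(code, 3)
    assert reconstruct(parts) == code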

Note that the AI box setting is not one which security-minded people would consider "competent"; once you're convinced that AI is dangerous and persuasive, the minimum safeguard would be to require multiple people to be present when interacting with the box, and to only allow release with the assent of a significant number of people.

It is, after all, much harder to convince a group of mutually-suspicious humans than to convince one lone person.

(This is not a knock on EY's experiment, which does indeed test a level of security that really was proposed by several real-world people; it is a knock on their security systems.)

I think this is making a five-inch fence half an inch higher. It's just not relevant on the scale of an agent to which a human is a causal system made of brain areas and a group of humans is just another causal system made of several interacting copies of those brain areas.

accolade
That sounds right. Would you have evidence to back up the intuition? (This knowledge would also be useful for marketing and other present-life persuasion purposes.)

TL;DR: Mo' people - mo' problems? I can think of effects that could theoretically make it easier to convince a group:
  • For some reason, Boxy might be better skilled at manipulating social/group dynamics than at influencing a lone wolf.
  • More people make the system more complex. Complexity generally increases the likelihood of security holes.
  • Every extra person makes another target and will bring new soft spots to the table, which the AI could pounce on.
  • Supposing that the most competent person available would get the position of the lone Gatekeeper, the average competence would fall when adding more staff.
  • Then the machine could go for an inductive approach - convince the weakest link first, proceed from there with this human ally on her side.
  • Persuaded humans could principally be employed as actuators, e.g. for pressuring, even attacking opposing group members.
  • The lone wolf could be strong against a computer but weak against fellow humans.
  • Surely you will say "But any communication with the terminal will be supervised by everyone!" But that does not strictly make such influence impossible as far as I can tell.
  • Also the superintelligence could get creative, e.g. instill a discussion among the colleagues so that most of them are distracted.

(You could take preemptive measures against these worries, but Boxy might find security holes in every 'firewall' you come up with - an arms race we could win?)

For me, high (insight + fun) per (time + effort).

(Are you sure you want this posted under what appears to be a real name?)

AlanCrowe
When should I seek the protection of anonymity? Where do I draw the line? On which side do pro-bestiality comments fall?
MugaSofer
Don't be absurd. How could advocating population control via shotgun harm one's reputation?

I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe - it's just a very peculiar kind of wireheading.

As you point out, it'd be obvious, on reflection, that one's sense of rightness has changed; but that doesn't necessarily make it a different qualia, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.

I don't think it's unfair to put some restrictions on the universes you want to describe. Sure, reality could be arbitrarily weird - but if the universe cannot even be approximated within a number of bits much larger than the number of neurons (or even atoms, quarks, whatever), "rationality" has lost anyway.

(The obvious counterexample is that previous generations would have considered different classes of universes unthinkable in this fashion.)

Eugine_Nier
Why? If the universe has features that our current computers can't approximate, maybe we could use those features to build better computers.

It's not too hard to write Eliezer's 2^48 (possibly invalid) games of non-causal Life to disk; but does that make any of them real? As real as the one in the article?

Bugmaster
I am having trouble figuring out what the word "real" means when applied to the game of Life. I do know, however, that if my Life game client had a "load game" function, then it would accept any valid string of bits, regardless of where they came from -- a previously saved game, or a random number generator.
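(A quick sketch of that point, assuming a minimal Life stepper: the rules accept any grid of bits, and nothing in a state records whether it came from a saved game or a random number generator.)

    import random

    def life_step(grid):
        """One tick of Conway's Life (cells outside the grid count as dead)."""
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                n = sum(grid[r + dr][c + dc]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr or dc)
                        and 0 <= r + dr < rows and 0 <= c + dc < cols)
                nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return nxt

    # A "load game" function can't tell these apart:
    saved_game  = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # a blinker from a real run
    random_bits = [[random.randint(0, 1) for _ in range(3)] for _ in range(3)]
    print(life_step(saved_game))
    print(life_step(random_bits))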

It's true that intelligence wouldn't do very well in a completely unpredictable universe; but I see no reason why it doesn't work in something like HPMoR, and there are plenty of such "almost-sane" possibilities.

CCC
Wouldn't HPMoR count as "highly, but not completely, causal"?

Mostly, what David_Gerard says, better than I managed to express it; in part, "be nice to whatever minorities you have"; and finally, yes, "this is a good cause; we should champion it". "Arguments as soldiers" is partly a valid criticism, but note that we're looking at a bunch of narratives, not a logical argument; and note that very little "improvement of the other's arguments" seem to be going on.

All of what you say is true; it is also true that I'm somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on "How minimal is our intelligence". However, the comments here do seem to pick far more nits than, say, the comments on "How to have things correctly".

The first post is heavily fact-based and defends a thesis based on - of necessity - incomplete data and back-projection of mecha...

If a post has 39 short comments saying "I want to see more posts like this post" and 153 nitpicks, that says something about the community reaction. This is especially relevant since "but this detail is wrong" seems to be a common reaction to these kinds of issues on geek fora.

(Yes, not nearly all posts are nitpicks, and my meta-complaining doesn't contribute all that much signal either.)

Vaniver

This is especially relevant since "but this detail is wrong" seems to be a common reaction to these kinds of issues on geek fora.

It feels to me like we both have an empirical disagreement about whether or not this behavior is amplified when discussing "these kind of issues" and a normative disagreement about whether this behavior is constructive or destructive.

For any post, one should expect the number of corrections to be related to the number of things that need to be corrected, modulated by how interesting the post is. A post whic...

One relevant datum: when I started my studies in math, about 33% of the students were female. In the same year, about 1% (i.e. one) of the computer science students was female.

It's possible to come up with other reasons - IT is certainly well-suited to people who don't like human interaction all that much - but I think that's a significant part of the problem.

A1987dM
IME maths is the most feminine STEM field excluding life sciences. The first few math students I know personally that spring to my mind are all female. (Of course, since I am a straight guy, "springs to my mind" will be a biased criterion, but if I do the same with (say) engineering students, most of the first few are male.)
Morendil
Uh, I'm pretty sure this assertion is the result of the particular culture that's developed in IT, rather than its truth being a cause of it. Is this claim actually even close to true?

To the extent that there are in fact professions "well-suited to people who don't like human interaction", by virtue of which problems the professionals are working to solve, I would think of farming or legal medicine first, not IT. IT jobs require constant interaction with people, because they are mainly about turning vague desiderata into working solutions; on the "solution" end you are interacting a lot with machines, but you absolutely can't afford to ignore the "desiderata" side of things, and that is primarily a matter of human communication. Our current IT culture has managed to make it the norm that much of this communication can take place over cold channels, such as email or Word documents. I think of that as pathological; but more importantly, this still counts as human interaction!

Then there's the extra implication in your statement - that jobs "well-suited to people who don't like human interaction" will attract males more. That may well be true, but it'll take actual evidence to convince me.

CS and IT have become less gender-balanced (more male) in the past 20-30 years — over the same time frame that the lab sciences have gotten more balanced.

Nornagest

I never consciously noticed that, but you're right. From what I remember the proportion of women in my CS classes wasn't quite that low, but it was still south of 10%. 33% also sounds about right for non-engineering STEM majors in my (publicly funded, moderately selective) university in the early-to-mid-Noughties, though that's skewed upward a bit by a student body that's 60% female.

It seems implausible, though, that a poor professional culture regarding gender would skew numbers that heavily in a freshman CS class -- most of these students are going to ...

It bothers me how many of these comments pick nits ("plowing isn't especially feminine", "you can't unilaterally declare Crocker's Rules") instead of actually engaging with what has been said.

(And those are just women's issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)

wedrifid
Those are things that actually are said. If a point is blatantly wrong or the entire usage of "Crocker's Rules" is, in fact, inappropriate then those things are wrong and inappropriate and can be declared as such. If it happened that nobody engaged with the intended point of the article that would perhaps just indicate that people weren't interested (or weren't interested in discussing it here). That is... not the case.
[anonymous]

Have you read the comment sections on this site before? I don't think LWers were any more nitpicky than usual.

JoshuaZ
It is possible for people to criticize or comment on specific (possibly minor issues) while still learning from or getting the overall set of points made by something.
Emile

I don't know what you expect when you say "actually engaging what has been said" - the post is a collection of interesting and well-written anecdotes, but it doesn't actually have a strong central point that is asking for a reaction.

It's not saying "you should change your behavior in such-and-such a way" or "doing such-and-such a thing is wrong and we should all condemn it" or asking for help or advice or an answer or even opinions ...

It sounds like you are complaining that people are treating arguments as logical constructions that stand or fall based on their own merit, rather than as soldiers for a grand and noble cause which we must endorse lest we betray our own side.

If that's not what you mean, can you clarify your point better?

Vaniver

It bothers me how many of these comments pick nits ("plowing isn't especially feminine", "you can't unilaterally declare Crocker's Rules") instead of actually engaging with what has been said.

What would differentiate picking nits and engaging with what was said?

Like SaidAchmiz points out, there's not all that much to say when someone shares information. I'm certainly not going to share the off-site experiences of female friends that were told to me in confidence, and my experiences are not particularly relevant, and so I don't have m...

Perhaps an instance of Why Our Kind Can't Cooperate; people who agree, do not respond... as for me, I find myself with two kinds of responses to these anecdotes. For some, I think "Wow, what an unfortunate example of systemic sexism etc.; how informative, and how useful that this is here." Other people have already commented to that effect. I'm not sure what I might say in terms of engaging with such content, but perhaps something will come to me, in which case I'll say something.

For others... well, here's an example:

It's lunchtime in fourth gr

...

From AlexanderD's comment:

"The point, though, is that the narrowness of focus in the adventure precluded exploration of a large set of options."

If playing D&D with a bunch of girls consistently leads to solutions being proposed that do not fit the traditional D&D mold, that can teach us something about how well that mold fits a bunch of girls. More generally, the author is a pretty smart woman who thought this was a good example - you'd do well to take a second look.

If you interpret the father's statement as "all else being equal, being a better cook is good" and you completely divorce it from a historical and cultural context, it is indeed not really problematic. But given that we are, in fact, talking culture here, I do not think that this is the interpretation most likely to increase your insight.

Emile
(not disagreeing, but note that I'm not saying the statement isn't problematic, merely saying that some objections are better than others)

That's not a high bar. I love my IT job, but IT is shamefully bad at this.

JoshuaZ
You know, I've noticed issues and heard about problems of this sort in math and the sciences before, but it seems like much more of a problem in IT. Any idea why?

Automatic dishwashers are really cheap per hour saved. The actual costs will vary widely (esp. in the US, where the cost of electricity is much lower than where I live), but our best estimate at the time of buying was $2/hour saved (based on halving the 30 minutes we need to do the dishes, and assuming it breaks the moment it's out of warranty - not entirely unreasonable, since we pretty much bought the cheapest option). Locally, about half of that is depreciation of the dishwasher and half is electricity/washing powder (water is negligible).
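The back-of-the-envelope version, with every figure below a stand-in chosen to reproduce the ~$2/hour and half-depreciation/half-running split described above (not the original numbers):

    # Illustrative cost-per-hour-saved estimate; all figures are stand-ins.
    purchase_price = 130.0  # cheapest option, assumed dead when warranty ends
    warranty_years = 2.0
    loads_per_week = 5.0
    cost_per_load  = 0.25   # electricity + washing powder (water negligible)
    minutes_saved  = 15.0   # half of the 30 minutes of doing dishes by hand

    total_loads       = warranty_years * 52 * loads_per_week  # 520 loads
    depreciation_load = purchase_price / total_loads          # $0.25 per load
    hours_saved_load  = minutes_saved / 60                    # 0.25 h per load

    cost_per_hour = (depreciation_load + cost_per_load) / hours_saved_load
    print(f"${cost_per_hour:.2f} per hour saved")  # -> $2.00 per hour saved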

(I've brought this up before: http://lesswrong.com/lw/9pk/rationality_quotes_february_2012/5tsb)

Computers have revolutionized most fields of science. I take it as a general "yay science/engineer/computers" quote.

Jayson_Virissimo
Babbage's prediction isn't about computers in general; it is about the Analytical Engine (which, as I pointed out, has never been constructed in its entirety).

Sure, thorium reactors do not appear to immediately allow nuclear weapons - but the scientific and technological advances that lead to thorium reactors are definitely "dual-use".

I'm not entirely convinced of either the feasibility or the ethics of the "physicists should never have told politicians how to build a nuke" argument that's been made multiple times on LW (and in HPMOR), but the existence of thorium reactors doesn't really constitute a valid argument against it - an industry capable of building thorium reactors is very likely able to think up, and eventually build, nukes.

Aren't you just confusing distributions (2d2) and samples ('3') here?
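(To make the distinction concrete - a small sketch, assuming standard dice notation where 2d2 means the sum of two two-sided dice:)

    import random
    from collections import Counter

    def roll_2d2():
        """One *sample* from 2d2: a single number such as 3."""
        return random.randint(1, 2) + random.randint(1, 2)

    # The *distribution* itself: every outcome with its probability.
    distribution_2d2 = {2: 0.25, 3: 0.50, 4: 0.25}

    # Many samples approximate the distribution; no single sample *is* it.
    counts = Counter(roll_2d2() for _ in range(10_000))
    print({k: round(v / 10_000, 3) for k, v in sorted(counts.items())})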

CCC
Thank you, I shall suitably edit my post.

This is true in theory, but do you think it's an accurate description of our real world?

(Nuclear power is potentially great, but with a bit more patience and care, we could stretch our non-nuclear resources quite a bit further, which would have given us more time to build stable(r) political systems.)

elspood
I think you set up a false dichotomy here - we can generate relatively safe nuclear power (thorium reactors) without existential risk, and without creating the byproducts necessary to create nuclear weapons. This is not an argument against the root comment, however.
Nominull
No, I was responding to the "no one in their right mind" bit. It seems to me that when you are in your right mind is precisely the time to build artifacts that could destroy your civilization, and it doesn't seem to me that you could conclude from building such artifacts that you are not in your right mind. Rather, I think there's other evidence that humanity can't be trusted with e.g. nuclear weaponry, and this suggests that we should not build it. lukeprog's quote seems to me to be of the form "Humanity can't be trusted with nuclear weapons, yet builds them anyway, so it must be crazy, so it can't be trusted with nuclear weapons."

I'm not completely aware of the correct protocol here, but "with what gender do you primarily identify? ... M (transgender f -> m) ..." is not something I would expect a transgender person to say - if I'd made that much of an effort to be (fe)male, I'd want to be "(fe)male", not "(fe)male (transgender ...)".

Splitting out blog referrals from general referrals seems odd; is there a reason you cannot use "[ ] some blog, [ ]" and "[ ] other, [ ]"?

I see no benefit to "revealed" in "Wh...

A1987dM
One more reason to support splitting the question.

We already have a poll about whether this is useful content, and it's currently at +32. I can imagine a few reasons why you made this second poll, but none of them are exactly complimentary.
