You're right that a negative affect toward NFTs in particular / blockchain stuff in general is part of the reaction, but I don't see the reasoning error in:
It's probably the case that NFTs do not directly cause greater electricity consumption, but NFTs do plausibly indirectly cause greater electricity consumption, e.g. via making Ethereum more valuable, thus increasing mining rewards, thus increasing competition.
Although I've heard the advice to leave after a year, my experience has been different - after three years, I'm still learning a lot and I'm beginning to tackle the really hard problems. Basically, I find myself agreeing with Yossi Kreinin's reply to Patrick McKenzie's advice, at least so far. (Both links are very much worth reading.)
Of course, you do need to push for interesting assignments and space to learn. Also, be sure to pick a company that actually does something interesting in the first place - I work on embedded crypto devices for the government ...
Thanks, Nancy, for putting in this effort.
Some people do need to see that link, but note that it, too, is rather dangerous.
And, of course, encouraging homeownership makes this worse. Good thing that most of the Western world hasn't made that an explicit policy goal for the past decade...
I was pretty happy about that, actually.
I assume that TheAncientGeek has actually submitted the survey; in that case, their comment is "proof" that they deserve karma.
I, too, took the survey. (And promptly forgot to claim my karma; oh well.)
I didn't exactly disagree with the content, right?
Part of the problem is just that writing something good about epistemic rationality is really hard, even if you stick to the 101 level - and, well, I don't really care about 101 anymore. But I have plenty of sympathy for those writing more practical posts.
This is not nice - could you try to find a more pleasant way to say this?
Also, LW does do epistemic rationality - but it's easier to say something useful and new about practical matters, so there are more posts of that kind.
Note, though, that (a) "Lisp doesn't look like C" isn't as much of a problem in a world where C and C-like languages are not dominant, and (b) something like Common Lisp doesn't have to be particularly functional - that's a favored paradigm of the community, but it's a pretty acceptable imperative/OO language too.
"Doesn't run well on my computer" was probably a bigger problem. (Modern computers are much faster; modern Lisp implementations are much better.)
Edit: still, C is clearly superior to any other language. ;-)
The Dutch figures [are closer to yours than I expected|https://www.swov.nl/ibmcognos/cgi-bin/cognos.cgi?b_action=powerPlayService&m_encoding=UTF-8&BZ=1AAAB7pUZHH542oVOXW~CIBT9M1C3F3Oh1o_HPtBSo8ummzXZM7PXhrUFQxuX7NePWhNjlmU3cM7J4cAhyLfjfL~dZWsZt511uJYPlHM9S6eMQyKFWLIJiGweZnKazMVSzESSJNJnHoP_biZ26epV7Fcx5cuDNR2azqujrQt0NEroBIxqkIZytEFv1coU7YhG8o~QTrf6YK_BkzpUqsT7xDu6Cmv9WSHloJTpVO1FYQs0ngdwpu106dW5D6NrS~yyZkicfCWHpi48OtTfuvTnla5tg51XfXUg83ScbjebLN2vPYmXLL6rtc1ZmfL~x4LkLT4CEAYAjAEhBMg0isLoikB67xm7FuvLpyksnpRyngjlc8pDoBwZ5R_ULwaD3Qzya9hl9WIovezb~ACpc4u...
Surveyed.
Also, spoiler: the reward is too small and unlikely for me to bother thinking through the ethics of defecting; in particular, I'm fairly insensitive to the multiplier for defecting at this price point. (Morality through indecisiveness?)
Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.
Unless I am badly mistaken, indemnify would mean that Harry has to pay etc. if e.g. Dumbledore decides to demand recompense of his own. (Note that Dumbledore may well have similar power over her as he has over Harry himself.)
This is obviously much worse than just giving up his own claim ("exonerate").
Relatedly, most TCP congestion-control algorithms are variants of the Reno algorithm, which basically means that they increase the transmission rate until (the network is so congested that) packets begin dropping. In contrast, Vegas-type algorithms increase the transmission rate until packets begin to take longer to arrive, which starts happening in congested networks shortly before packets are actually lost. A network of Vegas machines has considerably lower latency, and only minimally worse throughput, than a network of Reno machines.
Unfortunately, since Reno backs off later than V...
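The difference can be sketched with a toy model (one flow, window-based sending; all constants are illustrative, not taken from any real TCP stack):

```python
# Toy model of loss-based (Reno-style) vs delay-based (Vegas-style)
# congestion control. The link carries `bdp` packets in flight;
# anything beyond that sits in the bottleneck buffer.

def simulate(policy, steps=2000, bdp=100.0, buffer=50.0):
    """Return (average standing queue, average throughput) per tick."""
    cwnd = 1.0
    total_queue = total_sent = 0.0
    for _ in range(steps):
        queue = max(cwnd - bdp, 0.0)    # packets waiting in the buffer
        if policy == "reno":
            if queue > buffer:          # buffer overflow: packets dropped
                cwnd /= 2               # multiplicative decrease on loss
            else:
                cwnd += 1               # additive increase otherwise
        else:  # "vegas"
            if queue > 3:               # delay rising: back off before loss
                cwnd -= 1
            else:
                cwnd += 1
        cwnd = max(cwnd, 1.0)
        total_queue += max(cwnd - bdp, 0.0)
        total_sent += min(cwnd, bdp)
    return total_queue / steps, total_sent / steps

reno_delay, reno_tput = simulate("reno")
vegas_delay, vegas_tput = simulate("vegas")
```

In this toy run the Vegas-style policy keeps the standing queue (and hence latency) several times smaller, at essentially the same throughput, because it reacts to the queue building up instead of waiting for it to overflow.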
Is she particularly powerful, though? She's extraordinarily talented, very knowledgeable for her age, and has more raw power than anyone in her year including Draco; but Rita is more experienced, and most importantly older - it has been repeatedly pointed out that HP lacks the raw power for something-or-other, and the twins are far stronger than he is despite not being particularly talented. It seems that Rita should have an edge in the "raw power" department, and I'd expect this effect to key off raw power.
Note that it's also sufficient to assume that Quirrel and/or Mary's room can suppress this effect.
This is a bit un-LW-ian, but: I'm earnestly happy for you. You sound, if not happier, more fulfilled than in your first post on this site. (Also, ambition is good.)
If this is in fact un-LW-ian, it shouldn't be. :)
Sounds like the Buddha and his followers to me.
patio11 is something of a "marketing engineer", and his target audience is young software enthusiasts (Hacker News). What makes you think that this isn't pretty specific advice for a fairly narrow audience?
Spoiler: Gura ntnva, gur nyvra qbrf nccneragyl znantr gb chg n onpxqbbe va bar bs gur uhzna'f oenvaf.
I agree that the AI you envision would be dangerously likely to escape a "competent" box too; and in any case, even if you manage to keep the AI in the box, attempts to actually use any advice it gives are extremely dangerous.
That said, I think your "half an inch" is off by multiple orders of magnitude.
My comment was mostly inspired by (known effective) real-world examples. Note that relieving anyone who shows signs of being persuaded is a de-emphasized but vital part of this policy, as is carefully vetting people before trusting them.
Actually implementing an "N people at a time" rule can be done using locks, guards, and/or cryptography (note that many such algorithms are provably secure even against an adversary with unlimited computing power: "information-theoretic security").
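For the N-of-N case, the classic construction is XOR-based secret splitting, which has exactly that information-theoretic property: any N-1 shares are uniformly random and carry zero information about the secret. A minimal sketch (the function names are mine):

```python
import secrets

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are required to reconstruct.
    The first n-1 shares are uniform random pads, so any subset of
    fewer than n shares is statistically independent of the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = _xor(last, s)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = _xor(out, s)
    return out

# Three guards each hold one share; no one or two of them learn anything.
shares = split(b"box release code", 3)
assert combine(shares) == b"box release code"
```

For a k-of-N threshold (any k guards suffice), Shamir's secret sharing gives the same information-theoretic guarantee.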
Note that the AI box setting is not one which security-minded people would consider "competent"; once you're convinced that AI is dangerous and persuasive, the minimum safeguard would be to require multiple people to be present when interacting with the box, and to only allow release with the assent of a significant number of people.
It is, after all, much harder to convince a group of mutually-suspicious humans than to convince one lone person.
(This is not a knock on EY's experiment, which does indeed test a level of security that really was proposed by several real-world people; it is a knock on their security systems.)
I think this is making a five-inch fence half an inch higher. It's just not relevant on the scale of an agent to which a human is a causal system made of brain areas and a group of humans is just another causal system made of several interacting copies of those brain areas.
For me, high (insight + fun) per (time + effort).
(Are you sure you want this posted under what appears to be a real name?)
I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe - it's just a very peculiar kind of wireheading.
As you point out, it'd be obvious, on reflection, that one's sense of rightness has changed; but that doesn't necessarily make it a different quale, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.
I don't think it's unfair to put some restrictions on the universes you want to describe. Sure, reality could be arbitrarily weird - but if the universe cannot even be approximated within a number of bits much larger than the number of neurons (or even atoms, quarks, whatever), "rationality" has lost anyway.
(The obvious counterexample is that previous generations would have considered different classes of universes unthinkable in this fashion.)
It's not too hard to write Eliezer's 2^48 (possibly invalid) games of non-causal-Life to disk; but does that make any of them real? As real as the one in the article?
It's true that intelligence wouldn't do very well in a completely unpredictable universe; but I see no reason why it doesn't work in something like HPMoR, and there are plenty of such "almost-sane" possibilities.
Mostly, what David_Gerard says, better than I managed to express it; in part, "be nice to whatever minorities you have"; and finally, yes, "this is a good cause; we should champion it". "Arguments as soldiers" is partly a valid criticism, but note that we're looking at a bunch of narratives, not a logical argument; and note that very little "improvement of the other's arguments" seem to be going on.
All of what you say is true; it is also true that I'm somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on How minimal is our intelligence. However, the comments here do seem to pick far more nits than, say, the comments on How to have things correctly.
The first post is heavily fact-based and defends a thesis based on - of necessity - incomplete data and back-projection of mecha...
If a post has 39 "short comments saying "I want to see more posts like this post."" and 153 nitpicks, that says something about the community reaction. This is especially relevant since "but this detail is wrong" seems to be a common reaction to these kinds of issues on geek fora.
(Yes, not nearly all posts are nitpicks, and my meta-complaining doesn't contribute all that much signal either.)
This is especially relevant since "but this detail is wrong" seems to be a common reaction to these kinds of issues on geek fora.
It feels to me like we both have an empirical disagreement about whether or not this behavior is amplified when discussing "these kind of issues" and a normative disagreement about whether this behavior is constructive or destructive.
For any post, one should expect the number of corrections to be related to the number of things that need to be corrected, modulated by how interesting the post is. A post whic...
One relevant datum: when I started my studies in math, about 33% of the students were female. In the same year, about 1% (i.e. one person) of the computer science students was female.
It's possible to come up with other reasons - IT is certainly well-suited to people who don't like human interaction all that much - but I think that's a significant part of the problem.
CS and IT have become less gender-balanced (more male) in the past 20-30 years — over the same time frame that the lab sciences have gotten more balanced.
I never consciously noticed that, but you're right. From what I remember the proportion of women in my CS classes wasn't quite that low, but it was still south of 10%. 33% also sounds about right for non-engineering STEM majors in my (publicly funded, moderately selective) university in the early-to-mid-Noughties, though that's skewed upward a bit by a student body that's 60% female.
It seems implausible, though, that a poor professional culture regarding gender would skew numbers that heavily in a freshman CS class -- most of these students are going to ...
It bothers me how many of these comments pick nits ("plowing isn't especially feminine", "you can't unilaterally declare Crocker's Rules") instead of actually engaging with what has been said.
(And those are just women's issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
Have you read the comment sections on this site before? I don't think LWers were any more nitpicky than usual.
I don't know what you expect when you say "actually engaging with what has been said" - the post is a collection of interesting and well-written anecdotes, but it doesn't actually have a strong central point that is asking for a reaction.
It's not saying "you should change your behavior in such-and-such a way" or "doing such-and-such a thing is wrong and we should all condemn it" or asking for help or advice or an answer or even opinions ...
It sounds like you are complaining that people are treating arguments as logical constructions that stand or fall based on their own merit, rather than as soldiers for a grand and noble cause which we must endorse lest we betray our own side.
If that's not what you mean, can you clarify your point better?
It bothers me how many of these comments pick nits ("plowing isn't especially feminine", "you can't unilaterally declare Crocker's Rules") instead of actually engaging with what has been said.
What would differentiate picking nits and engaging with what was said?
Like SaidAchmiz points out, there's not all that much to say when someone shares information. I'm certainly not going to share the off-site experiences of female friends that were told to me in confidence, and my experiences are not particularly relevant, and so I don't have m...
Perhaps an instance of Why Our Kind Can't Cooperate; people who agree, do not respond... as for me, I find myself with two kinds of responses to these anecdotes. For some, I think "Wow, what an unfortunate example of systemic sexism etc.; how informative, and how useful that this is here." Other people have already commented to that effect. I'm not sure what I might say in terms of engaging with such content, but perhaps something will come to me, in which case I'll say something.
For others... well, here's an example:
...It's lunchtime in fourth gr
From AlexanderD's comment:
"The point, though, is that the narrowness of focus in the adventure precluded exploration of a large set of options."
If playing D&D with a bunch of girls consistently leads to solutions being proposed that do not fit the traditional D&D mold, that can teach us something about how well that mold fits a bunch of girls. More generally, the author is a pretty smart woman who thought this was a good example - you'd do well to take a second look.
If you interpret the father's statement as "all else being equal, being a better cook is good" and you completely divorce it from a historical and cultural context, it is indeed not really problematic. But given that we are, in fact, talking culture here, I do not think that this is the interpretation most likely to increase your insight.
Automatic dishwashers are really cheap per hour saved. The actual costs will vary widely (esp. in the US, where the cost of electricity is much lower than where I live), but our best estimate at the time of buying was $2/hour saved (based on halving the 30 minutes we need to do the dishes, and assuming it breaks the moment it's out of warranty - not entirely unreasonable, since we pretty much bought the cheapest option). Locally, about half of that is depreciation of the dishwasher and half is electricity/washing powder (water is negligible).
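The back-of-the-envelope arithmetic looks like this (the purchase price, run count, and per-run cost below are round illustrative assumptions, not our actual bills):

```python
# Rough dishwasher cost-per-hour-saved estimate.
purchase_price = 250.0       # cheapest model available (assumed)
warranty_years = 2           # pessimistically assume it dies out of warranty
runs_per_year = 365          # one run per day (assumed)
cost_per_run = 0.20          # electricity + detergent; water is negligible (assumed)
minutes_saved_per_run = 15   # half of a 30-minute hand-wash

total_runs = warranty_years * runs_per_year
total_cost = purchase_price + total_runs * cost_per_run
hours_saved = total_runs * minutes_saved_per_run / 60
cost_per_hour = total_cost / hours_saved
print(round(cost_per_hour, 2))  # → about 2.17
```

With these numbers, depreciation (250) and running costs (146) each make up roughly half of the total, matching the split above.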
(I've brought this up before: http://lesswrong.com/lw/9pk/rationality_quotes_february_2012/5tsb.)
Computers have revolutionized most fields of science. I take it as a general "yay science/engineering/computers" quote.
Sure, thorium reactors do not appear to immediately allow nuclear weapons - but the scientific and technological advances that lead to thorium reactors are definitely "dual-use".
I'm not entirely convinced of either the feasibility or the ethics of the "physicists should never have told politicians how to build a nuke" argument that's been made multiple times on LW (and in HPMOR), but the existence of thorium reactors doesn't really constitute a valid argument against it - an industry capable of building thorium reactors is very likely able to think up, and eventually build, nukes.
Aren't you just confusing distributions (2d2) and samples ('3') here?
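To make the distinction concrete: 2d2 is a probability distribution over {2, 3, 4}, while "3" is just one draw from it. Enumerating the four equally likely outcomes:

```python
from itertools import product
from collections import Counter

# All four equally likely outcomes of rolling two d2s.
counts = Counter(a + b for a, b in product([1, 2], repeat=2))
probs = {total: count / 4 for total, count in counts.items()}
print(probs)  # {2: 0.25, 3: 0.5, 4: 0.25}
```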
This is true in theory, but do you think it's an accurate description of our real world?
(Nuclear power is potentially great, but with a bit more patience and care, we could stretch our non-nuclear resources quite a bit further, which would have given us more time to build stable(r) political systems.)
I'm not completely aware of the correct protocol here, but "with what gender do you primarily identify? ... M (transgender f -> m) ..." is not something I would expect a transgender person to say - if I'd made that much of an effort to be (fe)male, I'd want to be "(fe)male", not "(fe)male (transgender ...)".
Splitting out blog referrals from general referrals seems odd; is there a reason you cannot use "[ ] some blog, [ ]" and "[ ] other, [ ]"?
I see no benefit to "revealed" in "Wh...
We already have a poll about whether this is useful content, and it's currently at +32. I can imagine a few reasons why you made this second poll, but none of them are exactly complimentary.
I've used microcovid occasionally, to make sure my intuitive feelings about risk were not completely crazy (and that did cause some updates; notably, putting numbers to staying outdoors had an influence.) I'm not a heavy user, but I do appreciate the work you've done!
I'd basically like to see more of the same - update microcovid.org for omicron and keep it going.
(FWIW, I'm in the Netherlands, where we just entered a new lockdown for omicron. So COVID unfortunately isn't "over".)