Great idea! When everyone has inhaled the gas, Harry can truthfully say in Parseltongue that if he dies, everyone present will die (because that would cancel the transfiguration).
Edit: This works well with all the early foreshadowing about how transfiguration is extremely dangerous. In Ghostbusters we establish early on that you're not supposed to cross the streams because that is extremely dangerous. And then, at the end of the movie, when all is lost, what you do is deliberately cross the streams.
It's of course possible that this Bock guy knows what he's doing on the hiring front. But in these interviews he has no incentive to give Google's competitors coherent helpful information on how to hire people - and every incentive to send out obfuscated messages which might flatter the preconceptions of NYT readers.
I've pointed out in the past that in the Google context, range restriction is a problem (when everyone applying to Google is ultra-smart, smartness ceases to be a useful predictor). So Bock could be saying something true & interesting in picking out some other traits which vaguely sound like IQ but aren't (maybe 'processing speed'?), but then he or the writer is being very misleading (intentionally or unintentionally). I don't know which of these possibilities might be true.
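For anyone who hasn't seen range restriction in action, here's a minimal simulation sketch; all the numbers, including the hiring bar, are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: job performance is partly driven by "smartness".
n = 100_000
smartness = rng.normal(100, 15, n)                    # IQ-like score
performance = 0.5 * smartness + rng.normal(0, 15, n)  # noisy outcome

# Correlation in the full applicant pool
print(np.corrcoef(smartness, performance)[0, 1])      # roughly 0.45

# Now restrict the range: only ultra-smart applicants get in the door
cutoff = 130                                          # made-up hiring bar
mask = smartness > cutoff
print(np.corrcoef(smartness[mask], performance[mask])[0, 1])
# much smaller, roughly 0.15 to 0.2 with these made-up numbers
```

The underlying relationship hasn't changed at all; only the pool you get to observe has.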
Bock said ... that learning ability was a much more important indicator of whether someone would be a good fit for Google than I.Q.
I have limited trust in a source which says things like that.
Edited to add: More on Bock's learning ability:
For every job, though, the No. 1 thing we look for is general cognitive ability, and it’s not I.Q. It’s learning ability. It’s the ability to process on the fly. It’s the ability to pull together disparate bits of information.
Yeah, nope.
Truth has her throne on the shadowy back of doubt.
-- Sri Aurobindo (1872-1950), Savitri: A Legend and a Symbol
...Ye say that those ancient prophecies are true. Behold, I say that ye do not know that they are true.
Ye say that this people is a guilty and a fallen people, because of the transgression of a parent. Behold, I say that a child is not guilty because of its parents.
And ye also say that Christ shall come. But behold, I say that ye do not know that there shall be a Christ. And ye say also that he shall be slain for the sins of the world –
And thus ye lead away this people after the foolish traditions of your fathers, and according to your own desires; and ye ke
I'm hardly an expert on the Book of Mormon, but this quote surprised me so I googled it. It appears to be an accurate quote but is not fully attributed. As best I can make out, the speaker is the antichrist (or some such evil character; not sure on the exact mythology in play here).
Failure to note that means this quote gives either an incorrect view of the Book of Mormon, or of the significance of the text, or both.
When quoting fiction, I recommend identifying both the character and the author. E.g.
...Ye say that those ancient prophecies are true. Behold,
No, if I were inclined to go ahead and believe in ghosts, I would not then proceed to dismiss their threat so easily.
I agree, that seems to be the weakest step. What I guess he means is that if there are ghosts they seem to be quite wispy and unobtrusive. If they went around and did a lot of stuff we would presumably have good evidence for their existence.
...You don't believe in ghosts, right? Well, neither do I. But how would you like to spend a night alone in a graveyard? I am subject to night fears, and I can tell you that I shouldn't like it at all. And yet I am perfectly well aware that fear of ghosts is contrary to science, reason, and religion. If I were sentenced to spend a night alone in a graveyard, I should know beforehand that no piece of evidence was going to transpire during the night that would do anything to raise the infinitesimal prior probability of the hypothesis that there are ghosts. I s
This was my attempt to make up a story where the math would match something real:
Statistically comparing two samples of equids would make some sense if Dr. Yagami had sampled 2987 horses and 8 zebras while Dr. Eru had sampled 2995 horses and 0 zebras. Then Fisher's exact test could tell us that, with high probability, they did not sample the same population with the same methods.
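For concreteness, here's a quick sketch of that comparison using scipy; the counts are just the hypothetical ones above:

```python
from scipy.stats import fisher_exact

# Hypothetical samples: (horses, zebras) for each researcher
yagami = (2987, 8)
eru = (2995, 0)

odds_ratio, p_value = fisher_exact([yagami, eru])
print(p_value)  # roughly 0.008: unlikely that both sampled the same
                # population with the same methods
```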
But in the actual case what we have is just a "virtual sample". I'm wondering if there are any conceivable circumstances where a virtual sample would make sense.
Yes, I'd prefer not to give Dr. Yagami's exact words so as not to make it too easy to find him - or for him to stumble on this post. I, too, worry that I may have left something essential out - but I can't for the life of me see what.
If I can swear you to secrecy, I'd be happy to send you a scan of the actual couple of pages from the actual book.
The main reason I posted this is that I am sometimes wrong about things. Maybe the zebra example turns out to make sense in some way I hadn't thought of. Maybe Yagami is using some sort of standard method. Maybe there's some failure mode I haven't thought of. It would be really good to know this before I make an ass of myself with the review. And talking about asses - there are some wild asses in Mongolia which got left out of my parable - but they're kind of cute so here is a link.
But do we come with pre-programmed methods for moving around - or do we just pick it up as we go along? I noticed that my two children used very different methods for moving around as babies. My daughter sat on her butt and pushed herself around. My son somehow jumped around on his knees. Both methods were surprisingly effective. There's supposedly a "crawling stage" in development but neither of my kids did any crawling to speak of. I guess this isn't as straightforwardly innate as one might think. Maybe Esther Thelen had it right.
These people white-knuckle it, constantly engaging their full slow and unsuitable System 2 in the loop, and consequently they find the normal driving activity exhausting, rather than relaxing
There's some of that in me. I probably am an overcautious driver.
Thus your hope for a safe AGI ... seems misplaced
Fair enough. Your regularly scheduled doom and gloom will resume shortly.
(e.g. moon-landing, relativity theory, computers, science in general, etc).
Or nuclear weapon design. Chicago Pile-1 did work. Trinity did work. Little Boy did work. Burster-Able failed - but not catastrophically. Who knows if whatever the North Koreans cobbled together worked as intended - but it doesn't seem to have destroyed anything it wasn't supposed to. No-one has yet accidentally blown up a city. That's something. Anyway, I'll edit the post.
I'm occasionally still amazed that traffic works as well as it does. I must say I'm hesitant to use this example to claim that people are more capable than you might think.
I actually agree. I'm not sure what lesson to draw from the fact that humans can drive. But it's interesting that so many of you seem to share my intuition that this is surprising or counterintuitive.
By the way, this is a good example showing that social life and human behaviour in general is much more "law-like" and indeed predictable than many "anti-positivists" in the social sciences would have it.
A good point - compare with this comic.
While I do like that visualization a lot, I think it is misleading in some ways. It is trivial to add to the sum of human knowledge. Go and count the coins in your wallet. I don't know how many are in mine so I'll go and check. Okay, there are 18 coins in my wallet. Now we know something we didn't know before.
"Oh, but that's not knowledge, that's just data - and just one datum at that. By 'knowledge' we mean stuff you can get published in research papers - something containing analysis and requiring insight." Bah, I tell you. You totally could ge...
So... you value following duty as a character trait?
I guess you could spin it that way - but let me take an example.
For the last couple of weeks, my wife and I have been involved in some drama in our extended family. When we discuss in private and try to decide how we should act, I've noticed my wife keeps starting off with "If we were to do X, what would happen?". She likes to try to predict different outcomes and she wants to pick the action that leads to the best one. So maybe she is a consequentialist through and through.
I tend to see the ...
OK, so you use virtue ethics (doing one's duty is virtuous) and deontology as shortcuts for consequentialism, given that you lack resources and data to reliably apply the latter. This makes perfect sense. Your wife applies bounded consequentialism, which also makes sense. Presumably your shortcuts will keep her schemes in check, and her schemes will enlarge the list of options you can apply your rules to.
Lots of good points here. In addition to the Matrix analogy (which, as you point out, is hardly a neutral way to frame the divide), keep in mind that in the US, blue and red are also the conventional colors of the left and the right.
We continue to have our little 'reactionary paradox' in that the census results show overwhelming support for feminism, but the discussion on the ground seems oddly 'red'. As you have already suggested, this effect might be partially explained by LessWrong's fondness for contrarians.
I wasn't aware of these sub factions. Are they real?
It's an idealization, to be sure. And I don't think there are cliques meeting in smoke-filled IRC-channels to plot downvoting sprees. But still, I think my comment above describes something real.
Previous discussion here.
It's not that people hate your ex and want to downvote all sympathy for her. Rather, this is just one of many manifestations of our ongoing culture war. Roughly speaking, we have two teams:
Team Blue is on board with romantic love and feminism and emphasizes personal autonomy. On this view, a successful love relationship is about finding a person you click with, which could mean any number of quirky things. The problem with your marriage is that your wife was never that into you - which sucked for her. Now that she's found a person she clicks with, the seed...
Methinks Team Red are right about certain people and Team Blue are right about other people. I guess the latter are a majority among the general population but the former are a majority among the kind of people who read LW.
I've downvoted comments that sound overconfident about what kind of person the OP's ex-wife is.
As I mentioned elsewhere, beware of other-optimizing.
My instinct is to agree with this. I spent decades learning the intricacies of North-European politeness and I think I've finally more or less got it. Now that I've learned it, I might be motivated to think that there is some actual point to all this dancing around!
I like Stefan's idea of connecting guess/ask with wait/interrupt. We might also want to bring the guilt/shame axis into this.
It sounds like ask/interrupt/shame should make for a more honest and efficient society. The guess/wait/guilt stuff sounds pretty frakked up when it is described. But in pr...
This is a very hard field to work in, psychologically, because there's no reliable process for producing valuable work (this might be true generally, but I get the sense that in the sciences it's easier to get moving in a worthwhile direction).
I think you're right that philosophy is particularly difficult in this respect. In many fields you can always go out, gather some data and use relatively standard methodologies to analyze your data and produce publishable work from it. This is certainly true in linguistics (go out and record some conversations or ...
worlds where outright complex hallucination is a normal feature of human experience
What sort of hallucinations are we talking about? I sometimes have hallucinations (auditory and visual) with sleep paralysis attacks. One close friend has vivid hallucinatory experiences (sometimes involving the Hindu gods) even outside of bed. It is low status to talk about your hallucinations so I imagine lots of people might have hallucinations without me knowing about it.
I sometimes find it difficult to tell hallucinations from normal experiences, even though my reas...
You're right, it's a horrible term. For one thing, the methods involved are pretty well-established by now. I just use it by habit. As for that old Marlowe/Shakespeare hubbub, here's a recent study which finds their style similar but definitely not identical.
I no longer try to steelman BETA-MEALR [Ban Everything That Anyone Might Experience And Later Regret] arguments as utilitarian. When I do, I just end up yelling at my interlocutor, asking how she could possibly get her calculations so wrong, only for her to reasonably protest that she wasn't making any calculations and what am I even talking about?
I've always wanted to know more about how authorship attribution is done; is this, found with a quick search, a reasonable survey of current state of the art, or perhaps you'd recommend something else to read?
The Stamatatos survey you linked to will do fine. The basic story is "back in the day this stuff was really hard but some people tried anyway, then in 1964 Mosteller and Wallace published a landmark paper showing that you really could do impressive stuff, then along came computers and now we have a boatload of different algorithms, most of whi...
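To give a flavour of the simpler end of those algorithms, here is a toy sketch in roughly the Mosteller-and-Wallace spirit: profile each text by the relative frequencies of a handful of function words and attribute a disputed text to the nearest candidate. Real studies use hundreds of features and proper statistical models; the word list and distance measure here are just illustrative.

```python
from collections import Counter
import math

# A tiny, illustrative set of function words (real studies use many more).
FUNCTION_WORDS = ["the", "of", "and", "to", "upon", "while", "whilst"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    """Euclidean distance between the two function-word profiles."""
    pa, pb = profile(text_a), profile(text_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))

def attribute(disputed, candidates):
    """Attribute the disputed text to whichever candidate author's known
    writing sits closest to it (a toy nearest-profile method)."""
    return min(candidates, key=lambda name: distance(disputed, candidates[name]))
```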
Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded...
I claim the bet is fair if both players expect to make the same profit on average.
I like this idea. As you say, it's not the only way to define it but it does seem like a very reasonable way. The two players have come upon a situation which seems profitable to both of them and they simply agree to "split the profit".
In order to give the players incentives to be honest, the algorithm seems to "use up" some of the total potential profit. For example, in the OP, the players are instructed to bet $2.72 and $13.28 when each was actually willing to bet up to $25. I think this also means that this method of coming up with bet amounts is not strategy-proof if players are able to lie about their maximum bet amounts.
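Just to make the "same expected profit" criterion concrete, here is a minimal sketch. It is not the scheme from the OP (it won't reproduce those dollar amounts), and the probabilities in the example are made up; it only shows stakes chosen so that both players expect the same profit, each by their own lights:

```python
def equal_profit_stakes(p_a, p_b, max_stake):
    """Stakes (x for the 'yes' bettor, y for the 'no' bettor) chosen so that
    both players expect the same profit, each by their own probability.

    p_a: probability the 'yes' bettor assigns to the event (assumed p_a > p_b)
    p_b: probability the 'no' bettor assigns to the event
    max_stake: the most either player is willing to risk
    """
    # Equal expected profit pins down the ratio y/x:
    #   p_a*y - (1 - p_a)*x  =  (1 - p_b)*x - p_b*y
    ratio = (2 - p_a - p_b) / (p_a + p_b)   # = y / x
    x = max_stake if ratio <= 1 else max_stake / ratio
    y = x * ratio
    return x, y

# Made-up example: one player thinks the event is 80% likely, the other 30%.
x, y = equal_profit_stakes(0.8, 0.3, 25)
print(round(x, 2), round(y, 2))             # 25.0 20.45
# 'Yes' bettor expects 0.8*y - 0.2*x = 11.36; 'no' bettor expects
# 0.7*x - 0.3*y = 11.36, so both expect the same profit.
```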
Can you talk about your specific field in linguistics/philology?
I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.
...what are the main chal
Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is "boring and opaque" to beginners now?
I think that the estimates cannot be undertaken independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.
I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."
And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of human...
Correct, but then you shouldn't handwave into existence an assertion which is really at the core of the dispute.
The argument I am trying to approach is about proposals which make sense under the assumption of little or no relevant technological development but may fail to make sense once disruptive new technology enters the picture. I'm assuming the tree plan made sense in the first way - the cost of planting and tending trees is such and such, the cost of quality wood is such and such and the problems with importing it (our enemies might seek to contro...
As usual, gwern has made a great comment. But I'm going to bite the bullet and come out in favor of the tree plan. Let's go back to the 1830s.
My fellow Swedes! I have a plan to plant 34,000 oak trees. In 120 years we will be able to use them to build mighty warships. My analysis here shows that the cost is modest while the benefits will be quite substantial. But, I hear you say, what if some other material is used to build warships in 120 years? Well, we will always have the option of using the wood to build warships and if we won't take that option it wi...
The distinction you are making between robustness and resilience was not previously familiar to me but seems useful. Thank you.
Obviously, "no significant technological advances" is a basically impossible scenario. I just mean it as a baseline. If you're able to handle techno-stagnation in all domains you're able to handle any permutation of stagnating domains.
I doubt Eliezer - champion of truth and science - would permit himself artistic license with this sort of thing. I think it is more likely that this is a genuine mistake on his part.