It's funny, I wrote a blog post arguing against humility not too long ago. I had a somewhat different picture of humility than you:
...People internalize norms in very different ways and to very different degrees. There are people out there who don’t seem to internalize the norms of humility at all. We usually call these people “arrogant jerks”. And there are people – probably the vast majority of people – who internalize them in reasonable, healthy ways. We usually call these people “normal”.
But then there are also people who internalize the norms of humili
Just wanted to say that I really appreciate your link roundups and look forward to them every month.
I just posted a comment on facebook that I'm going to lazily copy here:
...At this point I have no idea what's going on and I'm basically just waiting for astrophysicists to weigh in. All I can say is that this is fascinating and I can't wait for more data to come in.
Two specific things I'm confused about:
Apparently other astronomers already looked at this data and didn't notice anything amiss. Schaefer quotes them as saying "the star did not do anything spectacular over the past 100 years." But as far as I can tell the only relevant difference b
Wait, I'm confused. How does this practice resistance to false positives? If the false signal is designed to mimic what a true detection would look like, then it seems like the team would be correct to identify it as a true detection. I feel like I'm missing something here.
Well, it's both redundant and anti-redundant, which I always liked. But I don't think there's anything more to it than that.
I've had similar thoughts before:
...Now imagine you said this [that some people are funnier than others] to someone and they indignantly responded with the following:
“You can’t say that for sure – there are different types of humour! Everyone has different talents: some people are good at observational comedy, and some people are good at puns or slapstick. Also, most so-called “comedians” are only “stand-up funny” – they can’t make you laugh in real life. Plus, just because you’re funny doesn’t mean you’re fun to be around. I have a friend who’s not funny a
The first thing to come to mind is that selecting is simply much cheaper than grooming. If a company can get employees of roughly the same quality level without having to pay for an expensive grooming process over many years, they're going to do that. There's also less risk with selecting, because a groomed candidate can always decide to up and leave for another company (or die, or join a cult, or have an epiphany and decide to live a simple life in the wilderness of Alaska, or whatever), and then the company is out all that grooming money. I feel as though groomed employees would have to be substantially better than selected ones to make up for these disadvantages.
Thanks for the great suggestions everyone. To follow up, here's what I did as a result of this thread:
-Put batteries back in my smoke detector
-Backed up all of my data (hadn't done this for many months)
-Got a small swiss army knife and put it on my keychain (already been useful)
-Looked at a few fire extinguishers to make sure I knew how to use them
-Put some useful things in my messenger bag (kleenex, pencil and paper) - I'll probably try to keep adding things to my bag as I think of them, since I almost always have it with me
All of the car-related suggesti...
ROT13: V thrffrq pbafreingvirf pbeerpgyl, nygubhtu V'z cerggl fher V unq urneq fbzrguvat nobhg gur fghql ryfrjurer.
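(For anyone who wants to decode spoiler text like the above: ROT13 is its own inverse, and Python's standard codecs module handles it directly. A minimal sketch, with a neutral example string so the spoiler stays a spoiler:

import codecs

# ROT13 rotates each letter 13 places; applying it twice returns the original,
# so "decoding" is just applying the same transform again.
example = "Uryyb, jbeyq!"
print(codecs.decode(example, "rot13"))  # prints: Hello, world!

Any ROT13 tool will do the same job.)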
I don't know, it feels like I see more people criticizing perceived hero worship of EY than I see actual hero worship. If anything the "in" thing on LW these days seems to be signalling how evolved one is by putting down EY or writing off the sequences as "just a decent popular introduction to cognitive biases, nothing more" or whatever.
I agree with this. "Half-baked" was probably the wrong phrase to use - I didn't mean "idea that's not fully formed or just a work in progress," although in retrospect that's exactly what half-baked would convey. I just meant an idea that's seriously flawed in one way or another.
Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don't think it's exaggerating to say that the sleeper cell "already exists". I'm willing to own up to the analogy to that extent.
As for Knightian uncertainty: either the AI will be an existential threat, or it won't. I already think that it will be (or could be), so I think I'm already being pretty conservative from a Knightian point of view, given the stakes at hand. Wors...
When I first read this post back in ~2011 or so, I remember remembering a specific scene in a book I had read that talked about this error and even gave it the same name. I intended to find the quote and post it here, but never bothered. Anyway, seeing this post on the front page again prompted me to finally pull out the book and look up the quote (mostly for the purpose of testing my memory of the scene to see if it actually matched what was written).
So, from Star Wars X-Wing: Isard's Revenge, by Michael A. Stackpole (page 149 of the paperback edition):
...T
I mean, I don't really disagree; it's not a very scientific theory right now. It was just a blog post, after all. But if I was trying to test the theory, I would probably take a bunch of people who varied widely in writing skill and get them to write a short piece, and then get an external panel to grade the writing. Then I would get the same people to take some kind of test that judged ability to recognize rather than generate good writing (maybe get some panel of experts to provide some writing samples that were widely agreed to vary in writing quality, ...
I wrote a couple posts on my personal blog a while ago about creativity. I was considering cross-posting them here but didn't think they were LessWrong-y enough. Quick summary: I think because of the one-way nature of most problems we face (it's easier to recognize a solution than it is to generate it), pretty much all of the problem solving we do is guess-and-check. That is, the brain kind of throws up solutions to problems blindly, and then we consciously check to see if the solutions are any good. So what we call "creativity" is just "tho...
Fair.
What contingencies should I be planning for in day to day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that th...
(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)
Often being prepared simply means that nobody notices anything amiss. Don't optimize for flashy solutions.
What to do when things get lost:
1) Your credit card
2) Your mobile phone
3) Your keys
What to do when things you rely on break:
1) Your computer
2) Your car
Who to call when:
1) The police arrest you and charge you with a criminal act
2) You have a medical emergency (also set up an ICE contact list entry on your smartphone)
What contingencies should I be planning for in day to day life?
Those related to what you do and where you go in day to day life. The only people who need to worry about a micrometeorite punching a hole in the spaceship get training for it already.
These might include such things as: locking yourself out of your house, having an auto breakdown, being confronted by a mugger, being in an unfamiliar building when the fire alarm goes off, coming upon the scene of a serious accident, where to go and how to get there when widespread flooding is imminent, being ...
I am by no means an expert, but here are a couple of options that come to mind. I came up with most of these by thinking "what kinds of emergencies are you reasonably likely to run into at some point, and what can you do to mitigate them?"
Learn some measure of first aid, or at least the Heimlich maneuver and CPR.
Keep a seat belt cutter and window breaker in your glove compartment. And on the subject, there are a bunch of other things that you may want to keep in your car as well.
Have an emergency kit at home, and have a plan for dealing with
I feel like there are interesting applications here for programmers, but I'm not exactly sure what. Maybe you could link up a particular programming language's syntax to our sense of grammar, so that programs that wouldn't compile would seem as wrong to you as the sentence "I seen her". Experienced programmers probably already have something like this, I suppose, but it could make learning a new programming language easier.
I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from a crackpot.
For what it's worth, these recent comments of yours have been working on me, at least sort of. I used to think you were just naively arrogant, but now it's seeming more plausible that you're actually justifiably arrogant. I don't know if I buy everything you're s...
Fair.
So, random anecdote time: I remember when I was younger my sister would often say things that would upset my parents; usually this ended up causing some kind of confrontation/fight. And whenever she would say these upsetting things, the second the words left her mouth I would cringe, because it was extremely obvious to me that what she had said was very much the wrong thing to say - I could tell it would only make my parents madder. And I was never quite sure (and am still not sure) whether she also recognized that what she was saying would only worse...
...But my focus here is on the meta-level: I perceive a non-contingency about the situation, where even if I did have extremely valuable information to share that I couldn't share without signaling high status, people would still react negatively to me trying to share it. My subjective sense is that to the extent that people doubt the value of what I have to share, this comes primarily from a predetermined bottom line of the type "if what he's saying were true, then he would get really high status: it's so arrogant of him to say things that would make h
I've been trying to be more "agenty" and less NPC-ish lately, and having some reasonable success. In the past month I've:
-Gone to a SlateStarCodex meetup
This involved taking a greyhound bus, crossing the border into a different country, and navigating my way around an unfamiliar city - all things that would have stopped me from even considering going a few years ago. But I realized that none of those things were actually that big of a deal, that what was really stopping me was that it just wasn't something I would normally do. And since there was...
Sure, I understand the identity now of course (or at least I have more of an understanding of it). All I meant was that if you're introduced to Euler's identity at a time when exponentiation just means "multiply this number by itself some number of times", then it's probably going to seem really odd to you. How exactly does one multiply 2.718 by itself sqrt(-1)*3.14 times?
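(The resolution, of course, is that complex exponentiation isn't repeated multiplication at all - it's defined by extending the power series for e^x to complex arguments. A standard sketch of the derivation:

e^{ix} = \sum_{n=0}^{\infty} (ix)^n / n!
       = (1 - x^2/2! + x^4/4! - \cdots) + i(x - x^3/3! + x^5/5! - \cdots)
       = \cos x + i \sin x

Setting x = \pi then gives e^{i\pi} = \cos\pi + i\sin\pi = -1. But none of that machinery is available when exponentiation still just means repeated multiplication.)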
I remember my mom, who was a math teacher, telling me for the first time that e^(i*pi) = -1. My immediate reaction was incredulity - I literally said "What??!" and grabbed a piece of paper to try to work out how that could be true. Of course I had none of the required tools to grapple with that kind of thing, so I got precisely nowhere with it. But that's the closest I've come to having a reaction like you describe with Scott and quintics. I consider the quintic thing far more impressive of course - the weirdness of Euler's identity isn't exactly...
Since much of this sequence has focused on case studies (Grothendieck, Scott Alexander), I'd be curious as to what you think of Douglas Hofstadter. How does he fit into this whole picture? He's obviously a man of incredible talent in something - I don't know whether to call it math or philosophy (or both). Either way it's clear that he has the aesthetic sense you're talking about here in spades. But I distinctly remember him writing something along the lines of how, upon reaching graduate mathematics he hit a "wall of abstraction" and couldn't pro...
You seem to be discussing in good faith here, and I think it's worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it's best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I'm still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the Chi...
I think we might be working with different definitions of the term "causal structure"? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency - if neuron A hadn't fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn't call that a different causal structure as long as they both implement a system that behaves the same un...
Well, since I'm on LW the first article to come to mind was Outside the Laboratory, although that's not really arguing for the proposition per se.
As for the stooping thing, I'm not entirely sure what you mean, but the first thing that came to mind was that maybe you have a rule-out rather than a rule-in criterion for judging intelligence? As in: someone can say a bunch of smart things, but at best that just earns them provisional smart status. On the other hand, if they say one sufficiently dumb thing, that's enough to rule them out as being truly intelligent.
Well, I signed up for an interview (probably won't amount to anything, but it's too good of an opportunity to just ignore). After signing up though it occurred to me that this might be a US-only deal. Would my being Canadian be a deal-breaker?
Oh hey, convenient. Someone already wrote my reply.
In my experience hostels are a lot more like the fictional bars you describe.
Any more of this sequence forthcoming? I was looking forward to it continuing.
I tried for maybe thirty seconds to solve it, but couldn't see anything obvious, so I decided to just truncate the fraction to see if it was close to anything I knew. From that it was clear the answer was root 2, but I still couldn't see how to solve it. Once I got into work though I had another look, and then (maybe because I knew what the answer was and could see that it was simple algebraically) I was able to come up with the above solution.
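(For anyone following along - and assuming the puzzle was the standard continued fraction for \sqrt{2}, since the original isn't reproduced here - the algebra is short. If

x = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cdots}}

then the tail repeats, so 1 + x equals the inner expression and x = 1 + 1/(1 + x). Multiplying through by (1 + x) gives x + x^2 = x + 2, i.e. x^2 = 2, and taking the positive root, x = \sqrt{2}.)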
Also how I did it. FWIW I know it took me more than a minute, but definitely less than five.
As a MIRI donor, glad to hear it! Good luck to you guys, you're doing important work, to say the least.
I'm curious, has this recent series of papers garnered much interest from the wider (or "mainstream") AI community? It seems like MIRI has made a lot of progress over the past few years in getting very smart people to take their ideas seriously (and in cultivating a more respectable, "serious" image). I was wondering if similar progress had been made in creating inroads into academia.
Seven of the papers (every one except the annotated bibliography) are referenced in the FLI research priorities document attached to the open letter which received a whole lot of recent publicity :-) Beyond that, it's a bit too early to guess how these particular papers will affect academia more broadly, but recent progress looks promising.
Did anyone else immediately try to come up with ways Davis' plan would fail? One obvious failure mode would be in specifying which dead people count - if you say "the people described in these books," the AI could just grab the books and rewrite them. Hmm, come to think of it: is any attempt to pin down human preferences by physical reference rather than logical reference vulnerable to tampering of this kind, and therefore unworkable? I know EY has written many times before about a "giant logical function that computes morality", but th...
A number of SSC posts have gone viral on Reddit or elsewhere. I'm sure he's picked up a fair number of readers from the greater internet. Also, for what it's worth, I've turned two of my friends on to SSC who were never much interested in LW.
But I'll second it being among my favourite websites.
See, if anything I have the exact opposite problem (which, ironically, I also attribute to arrogance). I almost never engage in arguments with people because I assume I'll never change their mind. When I do get into a debate with someone, I'm extremely quick to give up and write them off as a lost cause. This probably isn't a super healthy attitude to have (especially since many of these "lost causes" are my friends and family) but at least it keeps me out of unproductive arguments. I do have a few friends who are (in my experience) unusually good at listening to new arguments and changing their mind, so I usually wind up limiting my in-depth discussions to just them.
I can empathize to an extent - my fiancée left me about two months ago (two months ago yesterday actually, now that I check). I still love her, and I'm not even close to getting over her. I don't think I'm even close to wanting to get over her. And when I have talked to her since it happened, I've said things that I wish I hadn't said, upon reflection. I know exactly what you mean about having no control over what you say around her.
But, with that being said...
Well, I certainly can't speak for the common wisdom of the community, but speaking for myself, I thi...
Survey complete! I'd have answered the digit ratio question, but, of all things, I don't have a ruler at home. Ooh, now to go check my answers for the calibration questions.
Scott is a LW member who has posted a few articles here
This seems like a significant understatement given that Scott has the second-highest karma of all time on LW (after only Eliezer). Even if he doesn't post much here directly anymore, he's still probably the biggest thought leader the broader rationalist community has right now.
It's been a while; any further updates on this project? All the BGI website says is that my sample has been received.
Okay, fair enough, forget the whole increasing of measure thing for now. There's still the fact that every time I go to the subway, there's a world where I jump in front of it. That for sure happens. I'm obviously not suggesting anything dumb like avoiding subways, that's not my point at all. It's just...that doesn't seem very "normal" to me, somehow. MWI gives this weird new weight to all counterfactuals that seems like it makes an actual difference (not in terms of any actual predictions, but psychologically - and psychology is all we're talkin...
I've never been entirely sure about the whole "it should all add up to normality" thing in regards to MWI. Like, in particular, I worry about the notion of intrusive thoughts. A good 30% of the time I ride the subway I have some sort of weak intrusive thought about jumping in front of the train (I hope it goes without saying that I am very much not suicidal). And since accepting MWI as being reasonably likely to be true, I've worried that just having these intrusive thoughts might increase the measure of those worlds where the intrusive thoughts ...
Whatever argument you have in mind about "the measure of those worlds" will go through just the same if you replace it with "the probability of the world being that way". You should be exactly equally concerned with or without MWI.
The question that actually matters to you should be something like: Are people with such intrusive thoughts who aren't generally suicidal more likely to jump in front of trains? I think I remember reading that the answer is no; if it turns out to be yes (or if you find those thoughts disturbing) then you might want to look into CBT or something; but MWI doesn't have anything to do with it except that maybe something about it bothers you psychologically.
Perhaps (and I'm just thinking off the cuff here) rationality is just the subset of general intelligence that you might call meta-intelligence - i.e., the skill of intelligently using your first-order intelligence to best achieve your ends.
I remember being inordinately relieved/happy/satisfied when I first read about determinism around 14 or 15 (in Sophie's World, fwiw). It was like, thank you, that's what I've been trying to articulate all these years!
(although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)
I mean, Laffer Curve-type reasons if nothing else.