So, several years ago I was moved by my primary dissatisfaction with HPMoR and my enjoyment of MLP to start a rationalist MLP fanfic. (There are at least two others, that occupied very different spheres, which I will get to in a bit.)
My main dissatisfaction with HPMoR was that Harry is almost always the teacher, not the student; relatedly, his cleverness far outstrips his wisdom. It is only at the very end, after he nearly loses everything, that he starts to realize how difficult it is to get things right, and even then he does not fully get it. Harry is the sort of character that the careful reader can learn from, but not the sort of character one should try to emulate.
MLP's protagonist, Twilight Sparkle, is in many ways the opposite character: instead of being overconfident and arrogant, she is anxious and (generally) humble. Where Harry has difficulty seeing others as equals or useful, Twilight genuinely relies on her friends. Most of Harry's positive characteristics, though, Twilight shares--or could plausibly share with little modification. (In HP terms, she's basically what would have happened if bookish Hermione had been the Girl-Who-Lived, with the accompanying leadership pot...
I think I recall seeing somewhere that the open thread is a good place for potentially silly questions. So I've got one to ask.
For as long as I can remember, small things have given me the willies. Objects around the size of a penny or smaller trigger a kind of revulsion response if I have to handle them: small coins, those paper circles created by a hole punch, those stickers they put on fruit. I'm not typically bothered by handling a lot of the objects at the same time; a handful of pennies wouldn't bother me.
One thing that's odd (well, aside from everything else about it) is that it seems to be especially triggered by jewelry: rings, basically any piercings, even smallish necklaces. I'm alright as long as they don't get too close to me, but I start feeling weird if I have to interact with them.
Anyway, I've always thought this was pretty strange and it recently occurred to me that someone here probably has some idea of what's going on. Thanks in advance for any thoughts.
Interesting, and great that you shared it; I've never heard of anything like this. To me it looks like a basic fear-pattern-matching system gone wrong (wired differently than usual in the brain). I mean, there must be some pre-wiring of object recognition in the brain that triggers on e.g. spider-like and snake-like forms. Why shouldn't such wiring occasionally go wrong (through mutation or whatever) and pattern-match against small, ring-like objects?
See also "What universal experiences are you missing without realizing it?", where people mention a lot of unusual personal experiences; maybe you can find something comparable to yours there.
I'm going to be doing a Rationality / Sequences reading group. Sorry I've been busy the last few days since the book came out, but I'll be making an introductory post soon. The plan is to cover one sequence every two weeks, working through the whole book over the course of a year.
What resources would you recommend for skilled, highly-specialized, employed EU citizens looking for employment in the US?
Gates goes into a little bit more detail on his views on AI.
Interviewer:
Yesterday there was a lot of talk here on machine intelligence and I know you had expressed some concerns about how machines are making leaps in processing ability. What do you think we should be doing?
Gates:
There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.
Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem.
If Gates were to really get on board, that would be huge, I think. Fingers crossed.
To the old ask and guess thread: I grew up under the impression that it is a gender thing.
My mother would be "guess": she would expect me to notice that the trash needed taking out; I didn't, because I was lazy, and then she would do it herself and act hurt, telling me she was tired of always needing to remind me to do my share of the housework. She would rather do it herself, but she was bitter and hurt about it.
On the occasional days she was ill and my father had to deal with the housework (in his defense, he tended to have 10-11 hour workdays while my mother was at home, so it made sense for him not to), he would do it in the clear "tell" style of a military drill sergeant: "get that effing trash out, on the double, you've got five effing seconds to finish it", that kind of style. However, he was NEVER angry or hurt about this; he actually looked amused, as if he were having fun with the verbal rudeness. I think he figured that if you order people to do things and they do them on the bounce, then things are right, even if you need to give that order every day: you just say it ruder and ruder until they learn, easy enough.
While I know ask and guess cultures exist in general, for me it got really tie...
I think power imbalance leads to passive aggression much more than the Hint or Ask character of the culture.
Hint and Ask are basically preferred communication protocols and most Hint people I know will adjust if the hints are clearly not working. But there is a big difference between
and
I don't understand why I find certain kinds of goodness, kindness, and compassion annoying. Of all the publications out there, The Guardian seems to rank highest in pissing me off with kindness. Consider this:
Ocean Howell, a former skateboarder and assistant professor of architectural history at the University of Oregon, who studies such anti-skating design, says it reveals wider processes of power. “Architectural deterrents to skateboarding and sleeping are interesting because – when noticed – they draw attention to the way that managers of spaces are always designing for specific subjects of the population, consciously or otherwise,” he says. “When we talk about the ‘public’, we’re never actually talking about ‘everyone’.”
Does anyone have any idea why I may find it annoying? Putting it differently, why do I experience something similar to Scott, i.e. while I don't have many problems with most contemporary left-leaning ideas, I seem to have a problem with left-leaning people?
For example, I don't find anything inherently bad about starting a discussion about making design more skateboar...
The World Weekly covers superintelligent AI.
It's one of the better media pieces I've read on the topic.
Bostrom, Yudkowsky, and Russell are quoted, among many others.
What do you link someone to if you want to persuade them to start taking cryonics seriously instead of immediately dismissing it as ridiculous nonsense? There's no single LW post that I know of which you can send someone to.
I just realized that some people object to hedonistic utilitarianism (which I've traditionally favored) on the grounds that "pleasure" and "suffering" are meaningless and ill-defined concepts, whereas I tend to find preference utilitarianism absurd on the grounds that "preference" is a meaningless and ill-defined concept.
This seems to point to a difference in how people's motivational systems appear from the inside: maybe for some, "pleasure" is an obvious, atomic concept which they can constantly observe as driving ...
I'll buy you sequences.
Sorry, I feel like a jerk repeating myself but this is the last time. I bought the three pack of the audio sequences on Kickstarter because there were multiple people who said they wanted it but for whom $50 was too dear. I just got the final "give us the names" email. Any takers?
On spaced repetition / Anki:
When I started to work after college, I was surprised when people asked "How come you don't know X? Haven't you read the manual?" I was surprised because in college it took more than one reading, a form of repetition, for me to learn, know, and remember things. I would reply "I have read it, but I haven't memorized it yet."
Interestingly, later on, I managed to remember things after one reading, not details, but the general idea.
I wonder about the popularity of Anki and spaced repetition here. I am experimenting with it ...
In Our Own Image: Will artificial intelligence save or destroy us? by George Zarkadakis was published by Random House on 5 March. I haven't read it, but from a search on Google Books, there's no mention of "Yudkowsky" or "MIRI", while "Bostrom" only appears once, in a discussion of the Simulation Argument. I nearly gave up at that point, but then I thought to search for "Hawking", and indeed, there is a discussion of the Hawking/Tegmark/Russell/Wilczek letter; this seems to me to be evidence on how carefully the auth...
Could someone help me out with the LessWrong wiki? I made an account called Tryagainslowly on it; it wouldn't let me use my LessWrong account, instead making me register for the wiki independently. I wanted to post in the discussion for the wiki page entitled "Rationality". The discussion page didn't have anything posted in it. I wrote out my post, and attempted to post it, but it wouldn't let me, telling me new pages cannot be created by new editors. What do I need to do in order to submit my post? I'm happy to show what I was intending to post here if anyone wants me to.
Do dating conventions fall victim to Positive Bias?
It seems that people are always looking for positive evidence, and that looking for negative evidence (I suspect my vocabulary might be incorrect?) is socially unacceptable. I.e. "Let's see if we can find something in common" seems typical and acceptable, while "Let's see if either of us possesses any characteristics that would make us incompatible" seems socially unacceptable.
Note: I have zero experience with dating and romance so these are just my impressions, although I suspect that they're true.
Animals getting smarter in cities, or at least better at living in cities.
Also, does a beagle moving a chair to get up on a counter count as tool use?
I'm running an Ideological Turing Test about religion, and I need some people to try answering the questions. I'm giving a talk at UPenn this week on how to have better fights about religion, and the audience is going to try to sort out honest/faked Christian and atheist answers and see where both sides have trouble understanding the other.
In April, I'll share all the entries on my blog, so you can play along at home and see whether you can distinguish the impostors from the true (non-)believers.
In the last few years I've been thinking about all the separate mental modules that influence productivity, procrastination, akrasia etc. in their own unique ways. (The one thing that's for sure is that the ability to get stuff done isn't monolithic.) This is what my breakdown of the psychology of productivity looks like, and I have a hunch that these are all separate and generate their own effects independently (more or less) of the others:
Wrist computer: To Buy or Not To Buy
I'm considering whether or not to buy an Android phone in a wristwatch form-factor, and am hesitating on whether it's the best use of my money. Would anyone here care to offer their opinion?
One of my goals: Go camping and enjoy it. One of my constraints: A limited budget. I suspect that taking a watch-phone, such as an Omate Truesmart or one of its clones ( eg, http://www.dx.com/p/imacwear-m7-waterproof-android-4-2-dual-core-3g-smart-watch-phone-w-1-54-5-0mp-black-373360 ), and filling a 32 gigabyte SD card with offline ...
Would anyone here care to offer their opinion?
Sure :-D Smartwatches are computers miniaturized to the point of uselessness because of the tiny screen and UI issues. Specifically for camping or backpacking you'd be much better off with a bigger-screen device like a regular smartphone. In fact, if you're serious about backpacking I would recommend a dedicated GPS unit.
DAE know The Haze? The Haze is the brain fog I get whenever there's a subject I entertain comfortable lies about, where the truth would be too painful, e.g. something negative about myself. Whenever I approach the subject, my brain deals with the cognitive dissonance by avoiding the painful truth through reducing my IQ; but instead of becoming wooden and thick like normal stupidity, it becomes foggy. This fogginess is not actually felt or noticed at the time, but when I later face the painful truth, it feels like a fog, a haze, lifting. It feels a lot like as i...
From 2008: "Readers born to atheist parents have missed out on a fundamental life trial"
Not really, in my experience. First of all, there are plenty of other silly things to believe in; my parents, for example, tended to believe in feel-good liberal adages like "violence never solves anything".
But actually the experience made me learn quite a lot from religious people, for this reason: as for most modern secular liberal Europeans, for my parents the kind of history we live in began not so long ago. A few centuries ago. Or maybe 1945. Everythin...
Suppose I wanted to predict the likelihood of and degree of delays and cost over-runs associated with a nuclear plant currently under construction. How would people recommend I do so?
What is the name of the logical fallacy where you rhetorically invalidate an argument by providing an unflattering explanation of why someone might hold that viewpoint, rather than addressing the claim itself? I seem to remember there being a word for that sort of thing.
I labeled an exam question as "tricky" because if you applied the solution method we used in class to solve similar-looking problems, you would get the wrong answer. But it occurred to me that if the question had been identical to one given in class but I still labeled it as "tricky", the "tricky" label would have been accurate, because the trick would have been students thinking that the obvious answer wasn't correct when indeed it was. So is it always accurate to label a question as "tricky"?
That's kind of a Hofstadter-esque question. I think the answer is "no", but the reason why depends on what meta-level you're looking at: if the label refers only to the object-level question, then it's straightforwardly true or false; but if you construe it as applying to the entire context of the question including its labeling, then it's possible to imagine a trick question that's transparent enough that labeling it as such exposes the trick and stops it from being tricky. It can be a self-fulfilling or a self-defeating prophecy.
That makes as much sense as having a class about political corruption and requiring that students pass the test by bribing the teacher.
Just because the class is about X doesn't mean that grades in the class should be measured by X.
But there's a difference between "this is how you do X" and "doing X is appropriate in this situation". Deciding that because a class is about bribery, you should get your grade in it by bribery, confuses these two things--you've given the students an opportunity to use the lessons from the class, but it's not a situation where most people think you should have an opportunity to use the lessons from the class. If your class was about some field of statistics related to randomness would you insist that your students roll dice to determine their exam score? If your class was about male privilege, would you automatically give all female students a grade one rank lower?
Tests are not just for evaluation, but should also be learning exercises.
While tests can have purposes, such as learning, that are orthogonal to evaluation, that's different from giving the test an additional purpose that is counterproductive to evaluation.
Also, I'd hate to be the student who had to explain to a prospective employer that the employer should add a percentage point to his GPA when considering him for employment, on the grounds that he scored poorly in your class for reasons unrelated to evaluation.
Generating artificial gravity on spaceships using centrifuges is a common idea in hard sci-fi and in speculation about space travel, but no-one seems to consider them for low gravity on e.g. Mars. Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so that the net force experienced from gravity and (illusory) centrifugal force points straight "down" into it?
I realise there'd be other practical problems with centrifuge-induced artificial gravity on Mars, since it's full of dust and not the best environment, but that doesn't seem to be the right kind of objection to explain it never being brought up where I've seen it.
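As a sanity check on the angled-floor idea, here is a back-of-envelope sketch (my own numbers, not from the thread): Mars surface gravity is about 3.71 m/s², so to make the combined gravito-centrifugal acceleration equal 1 g, the centripetal term must be sqrt(9.81² − 3.71²) ≈ 9.08 m/s², and the floor must be banked perpendicular to the net force. The 50 m radius below is a purely hypothetical choice.

```python
import math

G_EARTH = 9.81   # m/s^2, target net acceleration (1 g)
G_MARS = 3.71    # m/s^2, Mars surface gravity

# Centripetal acceleration needed so the vector sum of Mars gravity
# (vertical) and the centrifugal term (horizontal) has magnitude 1 g.
a_centripetal = math.sqrt(G_EARTH**2 - G_MARS**2)

# Bank angle of the floor from horizontal: the floor must be
# perpendicular to the net force vector.
bank_deg = math.degrees(math.atan2(a_centripetal, G_MARS))

# Spin rate for an assumed (hypothetical) 50 m radius centrifuge:
# a = omega^2 * r  =>  omega = sqrt(a / r)
radius = 50.0
omega = math.sqrt(a_centripetal / radius)     # rad/s
rpm = omega * 60 / (2 * math.pi)

print(f"centripetal accel:   {a_centripetal:.2f} m/s^2")
print(f"floor bank angle:    {bank_deg:.1f} deg")
print(f"spin rate at r=50 m: {rpm:.2f} rpm")
```

The bank angle comes out near 68 degrees, i.e. the floor is closer to a wall than a floor, which suggests the "angled floor" is really a mostly-vertical drum wall with a slight tilt.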
I've written a post on my blog covering some aspects of AGI and FAI.
It probably has nothing new for most people here, but could still be interesting.
I'll be happy for feedback - in particular, I can't remember if my analogy with flight is something I came up with or heard here long ago. Will be happy to hear if it's novel, and if it's any good.
How many hardware engineers does it take to develop an artificial general intelligence?
I occasionally see people move their fingers on a flat surface while thinking, as if they were writing equations with their fingers. Does anyone do this, and can anyone explain why people do this? I asked one person who does it, and he said it helps him think about problems (presumably math problems) without actually writing anything down. Can this be learned? Is it a useful technique? Or is it just an innate idiosyncrasy?
Is avoiding death possible in principle? In particular, is there a solution to the problem of the universe no longer being able to support life?
Because obviously the only valid response to knowing death is inevitable is despair during your non-dead time...
If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.
I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize to strings of any quantity of symbols. I'm struggling to find one with the right properties under the log-odds transformation that still obeys the laws of probability. The one I like the most is P(len(x)) = 1/(x+2), as under log-odds it requires log(x)+1 bits of evidence for strings of len(x) to meet even odds. For...
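For what it's worth, here is a quick numerical check of that candidate (my own sketch; I take the comment's notation P(len(x)) = 1/(x+2) at face value, with x standing in for the length). The odds are 1/(x+1), so reaching even odds takes log2(x+1) bits of evidence; but the candidate isn't normalizable, since its total mass diverges like the harmonic series.

```python
import math

def p_len(n):
    """Candidate prior from the comment: P(length = n) = 1/(n + 2)."""
    return 1.0 / (n + 2)

def bits_to_even_odds(n):
    """Bits of evidence needed to bring length-n strings to even odds.

    odds = p / (1 - p) = 1/(n+1), so the log-odds are -log2(n+1):
    you need log2(n+1) bits in favor to reach odds of 1:1.
    """
    p = p_len(n)
    return -math.log2(p / (1.0 - p))

# Evidence requirement grows logarithmically with length.
for n in (0, 1, 7, 63):
    print(n, bits_to_even_odds(n))

# But the candidate is not a proper distribution: the partial sums
# diverge like the harmonic series instead of converging to 1.
partial = sum(p_len(n) for n in range(10**6))
print("mass over first 10^6 lengths:", partial)
```

So the log-odds behavior is as described, but some renormalization (or a faster-decaying tail, e.g. ~1/n²) is needed before it obeys the laws of probability.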
Perhaps beliefs are exaggerated partially due to the chance of those who disagree with the belief expressing their disagreement being less than the chance of those who agree with it expressing their agreement with it.
Justification: It seems the main incentive for expressing one’s agreement or disagreement (and the reasons for it) is to make the person more likely to hold your belief and thus more likely to hold a more accurate belief. If you agree with the person, expressing your agreement has little cost, as you probably won’t get into a lengthy argumen...
It appears to me that the differences between System 1 and System 2 reasoning can be used as leverage to change one's mind.
For example, I am rather risk-averse and sometimes find myself unwilling to take a relatively minor risk (even if I think that doing that would be in line with my values). If that happens, I point out to myself that I already take comparable risks which my System 1 doesn't perceive as risks because I'm acclimated to these - such as navigating road traffic. That seems to confirm to System 1 the idea of "taking a minor risk for a good reason is no big deal".
Are there any English words with the property that if you rot13 them, they flip backwards? For example, "ly" becomes "yl," but "ly" isn't a word.
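A brute-force search settles this quickly. This is a sketch that assumes a word list at /usr/share/dict/words (a common but not universal location), with a tiny built-in fallback so the property itself is still demonstrated. One pair that works: rot13("gnat") = "tang", which is "gnat" reversed, and the same holds in the other direction for "tang".

```python
import codecs
import os

def flips_backwards(word):
    """True iff rot13(word) equals word reversed."""
    w = word.lower()
    return codecs.encode(w, "rot13") == w[::-1]

def find_examples(words):
    return [w for w in words if len(w) > 1 and w.isalpha() and flips_backwards(w)]

# Try a system word list if present (the path is an assumption);
# otherwise fall back to a tiny demo list.
path = "/usr/share/dict/words"
if os.path.exists(path):
    with open(path) as f:
        words = f.read().split()
else:
    words = ["gnat", "tang", "robe", "cat"]

print(find_examples(words))  # "gnat" and "tang" both qualify
```

Note the property is symmetric under the rot13-of-reverse map, so such words come in pairs (like "gnat"/"tang") unless a word maps to itself.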
I have a random physics question:
A solid sphere, in ordinary atmosphere, with a magical heating element at one pole and a magical refrigeration element at the other. The sphere itself is stationary and starts at room temperature; one pole is then super-cooled while the opposite pole is super-heated. (Edit: Assume the axis connecting the poles is horizontal.)
What effect does this have on air-flow around the sphere? Does it move? If so, in which direction?
I've just read Initiation Ceremony. Is this really where Bayesian probability begins? I don't claim to understand it, but I worked it out easily enough, just not mentally but with calc.exe, using my usual method of assuming a sample of 100. So there are 100 people, 75 W and 25 M; 75 x 0.75 = 56.25 VW and 25 x 0.5 = 12.5 VM, so our ratio is 12.5 to 56.25, i.e. odds of 2 to 9, an 18.2% chance. (Because only the Sith deal in incomprehensible verbal math like "two to nine, or a probability of two-elevenths". Percentages are IMHO way more intuitive.) I use a sample size of ...
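The sample-of-100 method is just Bayes' theorem in disguise; here is a minimal sketch using the numbers from the comment:

```python
# Initiation Ceremony puzzle: 75 of 100 people are women (W), 25 are
# men (M); 75% of the women and 50% of the men are virgins (V).
# Question: P(M | V)?

population = 100
women = 0.75 * population          # 75
men = 0.25 * population            # 25

virgin_women = 0.75 * women        # 56.25
virgin_men = 0.50 * men            # 12.5

# Posterior probability that a randomly chosen virgin is a man:
p_man_given_virgin = virgin_men / (virgin_men + virgin_women)

print(f"odds (M:W among virgins): {virgin_men}:{virgin_women}")
print(f"P(M | V) = {p_man_given_virgin:.4f}")   # 2/11 = 0.1818...
```

The ratio 12.5 : 56.25 reduces to 2 : 9, and odds of 2:9 correspond to a probability of 2/11, which is where the "two-elevenths" in the story comes from.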
Why can't we just make a CPU as large as a dump truck, that can store a thousand petabytes, then run an AI and try to evolve intelligence? I can't imagine that this is beyond the technology of 2015.
(Not that this would be a good idea, I'm just saying that it seems possible.)
Why can't we just make a CPU as large as a dump truck [...?]
Lots of reasons, some of which Vaniver and ShardPhoenix have already given, but one of the big ones is that CPUs dissipate a truly enormous amount of heat for their size. Your average laptop i7 consumes about thirty watts, essentially all of which goes to heat one way or another, and it's about a centimeter square (the chip you see on the motherboard is bigger, but a lot of that is connections and housing). Let's call that about the size of a penny. That's an overestimate, but as we'll see, it won't matter much.
Now, a quick Google tells me that a dump truck can hold about 20 cubic meters (= 20,000 liters), and that a liter holds about 2,000 closely packed pennies. So if we assume something with around the same packing and thermal efficiency, our dump-truck-sized CPU will be putting out about 30 x 2,000 x 20,000 = 1.2 gigawatts of heat, or a bit more than the combined peak output of the two nuclear reactors powering a Nimitz-class aircraft carrier.
This poses certain design issues.
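The back-of-envelope arithmetic above can be sketched as (same assumed numbers: ~30 W per penny-sized chip, ~2,000 pennies per liter, ~20,000 L of truck volume):

```python
# Rough thermal estimate for a dump-truck-sized CPU.
watts_per_chip = 30        # ~one laptop i7's worth of heat
chips_per_liter = 2_000    # closely packed penny-sized chips
truck_liters = 20_000      # ~20 cubic meters

total_watts = watts_per_chip * chips_per_liter * truck_liters
print(f"{total_watts / 1e9:.1f} GW")  # 1.2 GW
```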
You're assuming that someone, given a zillion dollars, could implement your plan, but if you don't even know where to begin implementing it yourself, what reason do you have to believe someone else would?
Put another way, if "I can't imagine we can't [X] given the technology of 2015" works when X is "evolve artificial intelligence", why wouldn't it work for any other X you care to imagine?
Great. More ridiculous propaganda along the lines of "People revived from the dead are evil/damaged/soulless, etc."
The Returned on A&E
No stupid questions thread?
What makes a person sexually submissive, sexually dominant, or a switch? Do people ever change d/s orientation?
Tsamina mina zangalewa!
(This time for Africa.)
I'm looking for critique, or advice on where to ask for some, for my short story Exploiting Quantum Immortality.
My comment elsewhere got downvoted, but to me the Outlander franchise looks somewhat like a cryonics story, only it sends the protagonist 200 years into her past (from the 1940's to the 1740's), instead of 200 years or so into "the future." She winds up in a different time, she doesn't know anyone, and she has to figure out quickly how the society works so that she can connect with people willing to accept her, as a matter of literal survival. It shows in a fictional way that you can make the necessary adaptations in this kind of situation, so why wouldn't this work in the future-traveling version?
On AI: are we sure we are not influenced by the meta-religious ideas of sci-fi writers who write about sufficiently advanced computers just "waking up into consciousness", i.e. creating a hard, almost soul-like barrier between conscious and not conscious, which carries the assumption that consciousness is a typically human-like feature? It is meta-religious in that it is based on the unique specialness of the human soul.
I mean I think the potential variation space of intelligent, conscious agents is very, very large and a randomly selected AI will not be hum...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.