If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


So, several years ago I was moved by my primary dissatisfaction with HPMoR and my enjoyment of MLP to start a rationalist MLP fanfic. (There are at least two others, that occupied very different spheres, which I will get to in a bit.)

My main dissatisfaction with HPMoR was that Harry is almost always the teacher, not the student; relatedly, his cleverness far outstrips his wisdom. It is only at the very end, after he nearly loses everything, that he starts to realize how difficult it is to get things right, and even then he does not fully get it. Harry is the sort of character that the careful reader can learn from, but not the sort of character one should try to emulate.

MLP's protagonist, Twilight Sparkle, is in many ways the opposite character: instead of being overconfident and arrogant, she is anxious and (generally) humble. Where Harry has difficulty seeing others as equals or useful, Twilight genuinely relies on her friends. Most of Harry's positive characteristics, though, Twilight shares--or could plausibly share with little modification. (In HP terms, she's basically what would have happened if bookish Hermione had been the Girl-Who-Lived, with the accompanying leadership pot... (read more)

8Alicorn9y
I have not read your story yet, but if I wait till I get around to it, I will forget to inform you that I have been known to accept commissions.
4Zian9y
I suggest doing #1 and #2 in parallel. As you said, the story is mostly done and just needs editing. That will require help from other people and can happen while you do other things. It will be good for you to be able to say "Behold, I have finished this thing." At the same time, as you tackle the full story as a separate thing, it may be worth giving it your best effort (by pulling in #2) so that after a few months, you can say "I tried really hard and it didn't work. Alas. Time to stop." or the opposite, without having to wonder if you just didn't try hard enough.

I think I recall seeing somewhere that the open thread is a good place for potentially silly questions. So I've got one to ask.

As long as I can remember, small things have given me the willies. Objects around the size of a penny or smaller trigger a kind of revulsion response if I have to handle them. Things like small coins, those paper circles created when using a hole punch, those stickers that they stick on fruit. I'm not typically bothered by handling a lot of the objects at the same time; a handful of pennies wouldn't bother me.

One thing that's odd, well aside from everything else about it, is that it seems to be especially triggered by jewelry. Rings, basically any piercings, even smallish necklaces. I'm alright as long as they don't get too close to me, but I start feeling weird if I have to interact with them.

Anyway, I've always thought this was pretty strange and it recently occurred to me that someone here probably has some idea of what's going on. Thanks in advance for any thoughts.

Interesting. Great that you shared it. I've never heard of something like this. To me it looks like a basic fear pattern-match gone wrong (wired differently than usual in the brain). I mean, there must be some pre-wiring of object recognition in the brain that triggers on e.g. spider-like and snake-like forms. Why shouldn't such a wiring go wrong (mutation or whatever) and pattern-match against small ring-like objects instead?

See also What universal experiences are you missing without realizing it, where people mention a lot of unusual experiences; maybe you can find something comparable to yours there.

3Rukiedor9y
Ha, it was actually looking through the Universal Experiences comments that prompted me to come here and ask if anyone had any experience with something like this. I didn't see anything in the comments there that sounded similar. I kind of doubt it's related to fear triggers, because I don't like spiders either, and my aversion to spiders feels very different from this. Interesting thing to think about though. Thanks.
9chaosmage9y
I'm not a doctor, but this sounds like microphobia. I do recognize that you're describing your feelings as a kind of revulsion, not fear proper, but still, that's the best pattern match I've got. I suggest you talk to a psychiatrist or psychotherapist about it, because if it is that, your issue is very solvable. Phobias are one of the easiest-to-treat psychological issues; desensitization and cognitive-behavioral therapy work quite well.
2Rukiedor9y
Interesting, not exactly the same thing, but it does sound similar. You're probably right about desensitization, there are some rather small things I can handle without problem. I'll have to give that a shot. Thanks.
4Toggle9y
My housemate has this exact problem- right down to the issues with jewelry in particular. If she has to shake hands with somebody who's wearing a metal ring, she has to sort of ritualistically wipe off her hands afterwards. Metal in general seems to trigger the reaction much more strongly, so she'll have problems with loose coins but not stickers. It's been persistent throughout her life, I understand, but exposure therapy has reduced its severity.
3Rukiedor9y
That is very interesting. Kind of validating, and one more bit of evidence in favor of trying exposure therapy. Thank you for sharing that.
3[anonymous]9y
Maybe childhood training against choking hazards. I was once hospitalized for months at 5 years old, and they had exhibits on the wall of the small stuff kids had stuffed into their noses or ears that had to be removed surgically. It was scary. I was afraid of them. The fact that I still remember it means it may have been traumatic; it may have been something like that for you. How do you handle eating or cooking lentils?
2Rukiedor9y
That's an interesting possibility. I don't have any particularly strong memories of being warned about choking hazards; about the only one I remember is warnings about plastic bags. For lentils, I'm fine handling them in bulk, and eating spoonfuls of them doesn't bother me. When most of them are gone, and there are only a few scattered on my plate or bowl, they start to trigger the revulsion a little bit, although not nearly as strongly as many other things. This actually seems to suggest that there is some desensitization going on. I never had lentils until I was an adult; I have, however, been eating rice for as long as I can remember, and individual rice grains don't trigger the reaction under most circumstances. Small candies, like Skittles, M&Ms, Smarties, etc., don't really trigger it either in most circumstances, and again, I have been eating those since childhood.
[-][anonymous]9y140

I'm going to be doing a Rationality / Sequences reading group. Sorry I've been busy the last few days since the book came out, but I'll be making an introductory post soon. The plan is to cover one sequence every two weeks, covering the whole book over the course of a year.

What resources would you recommend for skilled, highly-specialized, employed EU citizens looking for employment in the US?

4is4junk9y
I'd look for a good headhunter in your field (assuming it is not too niche). Let them get the commission for finding you a job.

  • Update your LinkedIn profile and see if any contact you.

  • Talk to a recruiter at a company that is a near fit for you, even if they aren't hiring now, and ask if they have worked with any headhunters in the past.

  • Go to a job fair in the US - not for a job, but to interview headhunters.
0chaosmage9y
Thank you!

Gates goes into a little bit more detail on his views on AI.

Interviewer:

Yesterday there was a lot of talk here on machine intelligence and I know you had expressed some concerns about how machines are making leaps in processing ability. What do you think we should be doing?

Gates:

There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.

Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem.

Horvitz's thing.

Musk's thing.

If Gates were to really get on board, that would be huge, I think. Fingers crossed.

[-][anonymous]9y110

To the old ask-and-guess thread: I grew up under the impression that it is a gender thing.

My mother would be "guess". She would expect me to notice that the trash needed taking out; I didn't, because I was lazy. Then she would do it herself and act hurt, telling me she was tired of always needing to tell me to do my share of the housework. She would rather do it herself, but she was bitter and hurt.

On the occasional times she was ill and my father had to give a damn about the housework (in his defense, he tended to have 10-11 hour workdays while my mother was at home, so it made sense not to), he would do it in the clearly "tell" style of military training sergeants: "get that effing trash out, on the double, you've got five effing seconds to finish it", that kind of style. However, he was NEVER angry or hurt about this; he actually looked amused, like he was having fun during that verbal rudeness. I think he always thought that if you order people to do things and they do them on the bounce, then things are right, even if you need to give that order every day: you just say it ruder and ruder until they learn, easy enough.

While I know ask and guess cultures exist in general, for me it got really tie... (read more)

9Lumifer9y
That's pretty classical passive-aggressive behaviour. I don't think it has much to do with guess-vs-ask cultures. But I agree that there is probably some gender correlation.
7NancyLebovitz9y
It seems plausible that Hint cultures lead to passive aggression-- if you can't be just plain aggressive, what have you got left?

I think power imbalance leads to passive aggression much more than the Hint or Ask character of the culture.

Hint and Ask are basically preferred communication protocols and most Hint people I know will adjust if the hints are clearly not working. But there is a big difference between

  • Glance at garbage. Glance at garbage. Glance at garbage. Dear, can you please take out the garbage?

and

  • Glance at garbage. Glance at garbage. Glance at garbage. You never pay any attention to me and you screwed up my whole life, you ungrateful bastard!
0[anonymous]9y
But that is largely the same thing. The classical boss-subordinate relationship is ask (order) down, guess up. Passive aggression is extreme (angry, upset) guess; active aggression is extreme (angry, upset) ask/order. When whole cultures are all-ask or all-guess, that is probably a sign of egalitarianism within that subset.
2Lumifer9y
It's more complicated. Ask/tell is simpler, faster, and more efficient so in the workplace (where status and power relationships are largely formalized) it tends to dominate anyway. Also, as anecdata, I know a girl who is a very pronounced Hint/Guess person, but she's a manager and has underlings. She quite successfully manages them mostly on the Hint/Guess basis (within reason, of course).
6JoshuaZ9y
The idea that there's a gender correlation, whether for cultural or biological reasons, is certainly something I've seen a fair bit when this comes up as a subject. See for example here. Separating cultural from biological causes is going to be very difficult, since some cultures (e.g. China) are so heavily on one side. It would, I think, be very interesting to see whether the obvious gender trend in the West still holds in those extreme examples; if so, that would be pretty strong evidence of a biological basis.
2Gunnar_Zarncke9y
In a way, the gender aspect could be seen as a micro-culture thing, as women operating in their own social circles build up these sub-protocols (influenced by power structures, of course).
5[anonymous]9y
Yes, but passive-aggression is what guess-people do when upset, and active-aggression is what ask-people do when upset.
0Lumifer9y
I don't know if I am willing to accept such a tight relation. For one thing, being passive-aggressive is usually not one particular action, an outburst when upset; it's more like an attitude, a continuous inclination/slant/flavour. I think that passive vs. active aggression depends much more on power, status, and specific circumstances than on usually preferred communication styles.
0MathiasZaman9y
I think it would be wrong to generalize from that example, so I'd like to report the opposite. My mother would also ask me to do specific, clearly defined tasks when she wanted them done, and ask again when I forgot. My dad, on the other hand, would just get angry when things weren't done according to his requirements, without making those requirements clear.
[-][anonymous]9y100

I don't understand why I find certain kinds of goodness, kindness, and compassion annoying. Of all the publications, The Guardian seems to rank highest in pissing me off with kindness. Consider this:

http://www.theguardian.com/cities/2014/jun/12/anti-homeless-spikes-latest-defensive-urban-architecture

Ocean Howell, a former skateboarder and assistant professor of architectural history at the University of Oregon, who studies such anti-skating design, says it reveals wider processes of power. “Architectural deterrents to skateboarding and sleeping are interesting because – when noticed – they draw attention to the way that managers of spaces are always designing for specific subjects of the population, consciously or otherwise,” he says. “When we talk about the ‘public’, we’re never actually talking about ‘everyone’.”

Does anyone have any idea why I may find it annoying? Putting it differently, why do I experience something similar to Scott, i.e. while I don't have many problems with most contemporary left-leaning ideas, I seem to have a problem with left-leaning people?

For example, I don't find anything inherently bad about starting a discussion about making design more skateboar... (read more)

4seer9y
The problem is he starts with false premises that it is impermissible (or at least impolite) to question in public, such as that homeless people are perfectly normal people who are down on their luck. (Most homeless people, especially the long-term homeless, have a mental illness.) And then he proceeds to reason from them and expects people to agree.
2NancyLebovitz9y
Cite? My assumption is that the proportion of homeless people who are normal people down on their luck is much higher when the economy has been bad for a while.
4Lumifer9y
I think I dislike this sort of article because it assumes I'm a stupid mark, easy to manipulate by crude emotional-blackmail methods, AND because the author is someone who thinks that manipulating other people this way is an excellent idea.
3is4junk9y
Why even read left-wing articles if they upset you? My take is that if the public space were skateboarder- and homeless-friendly, the author could easily write a similar article on how that scares [insert other victim group] away from the public space. As for why it is written that way, Kling's book The Three Languages of Politics is a good explanation: the left likes to think in oppressed-versus-oppressor terms. Thanks for posting this article. There is a park being planned near me, and there are certain architectural features I now want it to consider ...
0ChristianKl9y
There's a difference between not designing to be homeless-friendly and designing spikes to prevent homeless people from sleeping in the area.
1Jiro9y
What's the difference? (This is a serious question. Of course, I know some reasons why people think they are different, but I don't think the reasons that I can think of stand up to examination.)
0ChristianKl9y
If you design a system, you can optimize it for different goals. A designer who is supposed to design a public space signs a contract. To the extent that the designer optimizes for other goals, especially goals that disadvantage certain people, he's doing wrong. From the city's perspective, having it written into the contract that the space is designed against homelessness also makes a difference.
0Jiro9y
That only moves the question up a level: why is it wrong to do X as your goal, but okay to do X in service of something else, even if that something else is as vague as aesthetic reasons or whatever impels people to randomly design things? By your initial reasoning, it would be wrong to design spikes to discourage the homeless, but okay to design spikes if you just happen to like the look of spikes, even though both of these have the same effect.
1ChristianKl9y
Because intentions matter for judging the morality of a lot of human interactions. If a professional is hired for a specific purpose, it's important that the professional doesn't use the power of his role to push his personal agenda.
0tut9y
The spikes make the place worse for the intended users than it would be if the designers just ignored the homeless in this place and built a better place for them nearby.
0Jiro9y
That doesn't explain the difference. Just ignoring the homeless can include building things that happen to discourage the homeless but are put there for other reasons. If so, then ignoring them and being hostile to them can produce the same result.
2Capla9y
upvote for noticing a (possibly) uncharitable reaction in yourself and taking steps to do better.
1ChristianKl9y
"We" is a bad word. "We" don't design public spaces. Certain architects do. Those architects do engage in certain rhetoric. They also do promise certain things to governments who hire them to build public spaces.
0hairyfigment9y
Congratulations on seeing a case where your emotions may mislead you. Now, what makes you think the author of that article "is pretending to be surprised we are not saints"? Looking at it, I get the exact opposite impression; his opening suggests that he finds their reaction surprising. So if anything, his article gives me a sense of self-satisfied cynicism, learnedly explaining to such people how the world works and why he thinks their indignation is rare. Were I uncharitable, I could read your own (parent) comment as showing off cynicism in the exact same way.

The World Weekly covers superintelligent AI.

It's one of the better media pieces I've read on the topic.

Bostrom, Yudkowsky, and Russell are quoted, among many others.

5Manfred9y
Was expecting Weekly World News :) What this article really opened my eyes to is how impactful Nick Bostrom (or Bostrum) has been.

What do you link someone to if you want to persuade them to start taking cryonics seriously instead of immediately dismissing it as ridiculous nonsense? There's no one single LW post that you can send someone to that I know of.

2Caue9y
I like this.
1polymathwannabe9y
I can think of this one and, especially, section B of this one.

I just realized that some people object to hedonistic utilitarianism (which I've traditionally favored) on the grounds that "pleasure" and "suffering" are meaningless and ill-defined concepts, whereas I tend to find preference utilitarianism absurd on the grounds that "preference" is a meaningless and ill-defined concept.

This seems to point to a difference in how people's motivational systems appear from the inside: maybe for some, "pleasure" is an obvious, atomic concept which they can constantly observe as driving ... (read more)

2somervta9y
Interestingly, both concepts seem worthwhile to me... and I mostly advocate a combination of hedonistic and preference utilitarianism.
1ChristianKl9y
Remembered pleasure and pleasure felt in the moment are two distinct things; which is the "obvious" one? My immediate sense of the terms is that "pleasure" is an emotion while suffering is more of an activity. The opposite of "suffering" would for me be "enjoying". There's a state where you laugh and a state of warm relaxation. Both feel good, but they are different. How does pleasure relate to that? Life satisfaction is another variable in that space. There are interactions I might have with another person where the person is going to laugh and feel energized, but where the person would answer "No" if I asked them whether they want to engage in a certain action. If you come from preference utilitarianism, it's important to ensure consent. If you just care about hedonics, and are skilled enough to predict the results of your actions and know the actions produce pleasure, consent isn't an issue anymore. The difference matters if you analyse what some PUA people do.
0Jayson_Virissimo9y
I think "pleasure" and "suffering" are very meaningful and that the prospects of finding decent metrics for each are good over the long term. The problem I have with hedonistic utilitarianism is that hedons are not what I want to maximize. Don't you ever pass up opportunities to do something you know will bring you more pleasure (even in the long run), in order to achieve some other value and don't regret doing so?
2Kaj_Sotala9y
Yeah, I've drifted away from hedonistic utilitarianism over time and don't particularly want to try to defend it here.
2Jayson_Virissimo9y
Fair enough.

I'll buy you sequences.

Sorry, I feel like a jerk repeating myself but this is the last time. I bought the three pack of the audio sequences on Kickstarter because there were multiple people who said they wanted it but for whom $50 was too dear. I just got the final "give us the names" email. Any takers?

4MathiasZaman9y
I'd like it as well, if you still have any. (email: king.grimmm@gmail.com)
7Ixiel9y
All set. Enjoy.
3MathiasZaman9y
Wow, awesome. Many thanks!
2jam_brand9y
I'd be happy to take you up on this if it's still available, my email is jam.br4nd@gmail.com. Many thanks for the kind offer either way!
0Ixiel9y
Sorry, I gave both out. (And sorry for delayed response, on vacation)
0[anonymous]9y
martin.malette@gmail.com (it's a friend I want to introduce to these topics, and he loves audiobooks)
[-][anonymous]9y80

On spaced repetition / Anki:

When I started to work after college, I was surprised when people asked, "How come you don't know X? Haven't you read the manual?" I was surprised because in college it took more than one reading, a form of repetition, to learn, know, and remember things. I would reply, "I have read it, but have not yet memorized it."

Interestingly, later on, I managed to remember things after one reading, not details, but the general idea.

I wonder about the popularity of Anki and spaced repetition here. I am experimenting with it ... (read more)

4is4junk9y
For most of the work stuff, I find it easier to remember where to find things rather than the things themselves. The hard stuff is the undocumented and constantly changing locations and procedures, where a search is likely to find out-of-date junk.

In Our Own Image: Will artificial intelligence save or destroy us? by George Zarkadakis was published by Random House on 5 March. I haven't read it, but from a search on Google Books, there's no mention of "Yudkowsky" or "MIRI", while "Bostrom" only appears once, in a discussion of the Simulation Argument. I nearly gave up at that point, but then I thought to search for "Hawking", and indeed, there is a discussion of the Hawking/Tegmark/Russell/Wilczek letter; this seems to me to be evidence on how carefully the auth... (read more)

Could someone help me out with the LessWrong wiki? I made an account called Tryagainslowly on it; it wouldn't let me use my LessWrong account, instead making me register for the wiki independently. I wanted to post in the discussion for the wiki page entitled "Rationality". The discussion page didn't have anything posted in it. I wrote out my post, and attempted to post it, but it wouldn't let me, telling me new pages cannot be created by new editors. What do I need to do in order to submit my post? I'm happy to show what I was intending to post here if anyone wants me to.

5Tryagainslowly9y
It works now! It just required waiting a bit. Thanks for the help Gunnar_Zarncke.
3Gunnar_Zarncke9y
It takes some time for the Wiki accounts to get in sync with the LW account; just wait a while (a day?). I guess it's some troll protection.

You're giving this advice to that account handle?

4Tryagainslowly9y
Thanks, I'll wait and see.

Do dating conventions fall victim to Positive Bias?

It seems that people are always looking for positive evidence, and that looking for negative evidence (I suspect my vocabulary might be incorrect?) is socially unacceptable. I.e. "Let's see if we can find something in common" seems typical and acceptable, while "Let's see if each of us possesses any characteristics that would make us incompatible" seems socially unacceptable.

Note: I have zero experience with dating and romance so these are just my impressions, although I suspect that they're true.

5Epictetus9y
It's considered rude to say that out loud during a date. However, it is considered good practice to be alert for such characteristics.
1Lumifer9y
In these very words, probably, but it's perfectly socially acceptable for e.g. a vegan to declare outright that s/he is not interested in carnivores...
0Adam Zerner9y
Do you think that it's rude? It seems sensible to me. It seems that people interpret such actions as hostile, and people who say things like that probably are hostile. However, I don't think the likelihood of the person being hostile is high enough that you should conclude that they actually are. I think the likelihood is low enough that the courteous thing to do is to investigate further as to why they're saying that. And if they're well-intentioned, i.e. they want both parties to find someone that they're compatible and happy with and are just trying to do a good job of that, then I think the mature thing to do is to respect it.
2ChristianKl9y
The point of a dating conversation isn't primarily to exchange information. It's to create good feelings and see whether one can create a feeling of connection.
0Adam Zerner9y
Is this a correct restatement of your claim? The ultimate point is to determine compatibility, but the best way to do that is to follow social convention and keep things positive. In doing this, your System I will be able to determine compatibility and will notify you by producing emotions. By violating social convention and saying something like "Let's see if each of us possesses any characteristics that would make us incompatible", you'd hamper System I in exchange for some information for System II to use. This exchange isn't worth it. It'd be interesting to see some research on this.
2ChristianKl9y
No, building an emotional connection isn't an act that's just about gathering information, the same way lighting a pile of wood on fire isn't about testing the wetness of the wood.

I'm running an Ideological Turing Test about religion, and I need some people to try answering the questions. I'm giving a talk at UPenn this week on how to have better fights about religion, and the audience is going to try to sort out honest/faked Christian and atheist answers and see where both sides have trouble understanding the other.

In April, I'll share all the entries on my blog, so you can play along at home and see whether you can distinguish the impostors from the true (non-)believers.

5Jiro9y
How do you account for ideological Turing tests failing because of shibboleths? It's one thing to be unable to express or recognize the same ideas as a Christian, it's another to be unable to express or recognize in-group terminology.
2palladias9y
I try to structure questions so that they'll be less vulnerable to shibboleth exploits (plus, some shammers do do a bit of research to be able to drop in jargon!).
3Ander9y
One thing I noted when doing this: most of my true answers were more specific than my made-up answers, which might give them away. I look forward to reading the results!
3polymathwannabe9y
It's curious; I felt the opposite.
1Ander9y
These questions are quite difficult and will require effort. I'll try to submit an entry. Edit: Completed. :)
0Jiro9y
I just took the "root of all sins" test, and I tried to distinguish the answers of the Christians and non-Christians entirely based on shibboleths. Disordered love? Christ is a blinding searing light? Humans are finite beings who naturally desire the infinite? Maybe. But the decision was not "would a Christian have those ideas" but "would a Christian phrase the ideas that way".

Of course I can't just go count the shibboleths; it's possible that non-Christians might overcompensate, and actual Christians don't talk about Jesus' blinding light much at all, at least not actual Christians of the type who answer such surveys. But either way, I didn't feel that the most likely way to figure out which answer came from Christians was to look at the content of the answer. So I think that the test has already failed.

On top of this is the question of what type of Christian the non-Christians are trying to imitate. Are they trying to imitate average Christians, average survey-answering Christians, average blogging Christians, average Christians who are knowledgeable about Christianity? Trying to imitate the wrong kind of Christian can mean that knowing too much about Christianity can make your imitation fail.

In the last few years I've been thinking about all the separate mental modules that influence productivity, procrastination, akrasia etc. in their own unique ways. (The one thing that's for sure is that the ability to get stuff done isn't monolithic.) This is what my breakdown of the psychology of productivity looks like, and I have a hunch that these are all separate and generate their own effects independently (more or less) of the others:

  • a baseline level of energy or willingness to take control of your own life
  • an affinity-based system that makes you a
... (read more)
1RowanE9y
I think to back up the hunch you'd need to poll some people and see if their akrasia comes from being weak on some points rather than others; if that's the case, and they're not consistently the same points, then it probably does work that way. I personally feel like I'm bad with respect to all the modules listed.
4Dahlen9y
Yes, that's what I was trying to do with the parent comment. I used myself as a reference for these points, as well as drawing on various anecdotes I've heard about other people. E.g. I'm high on "negative willpower", high on perseverance against physical discomfort (tiredness, hunger, pain), low on perseverance against boredom, frustration, and the feeling of being stuck.

I'm very low on #1, and also have low affinity for, say, math, and hence I never put in the hours for learning it well, but I've heard of people who are also low on #1 but happen to have very high affinity for math, who'd go on and entertain themselves with their equations and theorems while dishes gathered in the sink and rent went unpaid. (Proof they don't necessarily have better work ethic than me.) Then there are people who do mildly dislike crouching over math textbooks for hours, but are very high on #3 and push themselves to keep going. (Proof that sometimes it is a matter of work ethic.)

I listed #5 as its own factor because it can override other items on the list, and all the other items above it lacked any inherent reference to external factors. It could be a strong or a weak tendency; for instance, I notice about myself that I'm not particularly moved by rewards or punishments; moreover, my indifference to them seems tweaked especially to cause me to lose the greatest amount of money possible, either by missing opportunities or by having to pay up. (That's why I never ever plan on using Beeminder.)

#6 is there because the whole thing lacked a time dimension without it. For example, however badly I might fare on other points, I'm only a moderate procrastinator (for tasks I don't loathe with a fervor), tend to begin working on assignments towards the midpoint of the available time range, and consider the long term.

Wrist computer: To Buy or Not To Buy

I'm considering whether or not to buy an Android phone in a wristwatch form-factor, and am hesitating on whether it's the best use of my money. Would anyone here care to offer their opinion?

One of my goals: Go camping and enjoy it. One of my constraints: A limited budget. I suspect that taking a watch-phone, such as an Omate Truesmart or one of its clones (e.g., http://www.dx.com/p/imacwear-m7-waterproof-android-4-2-dual-core-3g-smart-watch-phone-w-1-54-5-0mp-black-373360), and filling a 32 gigabyte SD card with offline ... (read more)

Would anyone here care to offer their opinion?

Sure :-D Smartwatches are computers miniaturized to the point of uselessness because of the tiny screen and UI issues. Specifically for camping or backpacking you'd be much better off with a bigger-screen device like a regular smartphone. In fact, if you're serious about backpacking I would recommend a dedicated GPS unit.

0DataPacRat9y
I've started looking into speech-to-text and text-to-speech alternatives to the tiny screen. I've tried one of those every N years. There's always been some issue - only providing coordinates instead of a map, or power issues, or the like - which has ended up with me leaving it out of my kit. I'm vaguely hoping that the continuing convergence of all electronic devices into "phones" means that the various solutions to those issues will also have been collected.
4Lumifer9y
That sounds like a rationalization. And it's entirely unhelpful when you're trying to figure out maps.
0DataPacRat9y
Granted. :)
1Lumifer9y
For backpacking I still prefer a dedicated GPS unit because (a) it's waterproof plus I expect it to survive shock better than a smartphone; (b) it's power-thrifty and I can leave it on for the whole day without worrying about running down the battery; (c) it can run off AA batteries which are ubiquitous; (d) if you really need GPS, you need to carry two GPS-capable devices.
0DataPacRat9y
Maybe it's been longer than I thought since I went GPS-hunting... What brand and/or model accomplishes this witchcraft?
1Lumifer9y
My GPS is an old Garmin 76CSx.
9MrMind9y
For a long time I've wanted to want a smartwatch so badly that I'd be forced to buy one, but the actual advantages of owning one never reached the desired threshold. In the end, quite sadly, I've decided that there will probably never be enough reasons. I think the same thing is happening to you: you want to want to buy a wrist-phone, but are rational enough to know that there's no reason to do such a thing. I suggest you meditate on the fact that you probably already know the right course of action; it just sucks to follow.
2DataPacRat9y
In a curious twist to this process, I just dreamed that I checked this thread for a response to this comment, and found one, of which I explicitly remember only the words "You're playing with fire here" and "You're taking your life into your hands", and implicitly remember something about the author reminding me that I'm a cryonicist. Going camping does happen to increase the odds that I'll have an accident where my brain ends up warm and dead. Having a communications device that's quite likely to remain intact and ready to use if I fall down a cliff and break my legs modestly reduces the odds of that particular negative scenario. In fact, assuming that I'm not going to quit going camping, and that I already have my chosen first-aid equipment, there are few expenditures I can make which are as likely to increase my QALYs. So: does /that/ sound like actually useful reasoning, or mere rationalization?
5knb9y
Sounds like a rationalization to me. I think you would be better off buying a ruggedized cell phone or radio if that is your true purpose. I suspect a watch is quite likely to get smashed in a serious fall like that.
0DataPacRat9y
Fair enough. Hm... brainstorming a bit, I'm considering looking up one of the cheaper watch-phones, removing the wrist-band, getting a SIM card for a phone service that only needs to be paid for annually, and keeping the miniaturized backup cellphone somewhere about my person. But that's a completely separate use-case than the device for camping, so I'm not going to even consider it until I finish my annual camping gear refreshing.
5Lumifer9y
While that's true you might want to consider what other activities also happen to increase the same odds and whether you want to spend your life avoiding all of them.
2DataPacRat9y
My lifestyle is mostly urban; whatever accidents befall me, I'm nearly always well within range of ambulances and hospitals with personnel able to call up my medical proxy. Camping is the exception where it would likely take a few hours just for emergency personnel to reach me.
-1Lumifer9y
Be realistic. If you're hit by a bus on a city street, how long do you think your brain will spend warm and dead before the information reaches someone who could call in the cryo team? And that's assuming your brain stays intact at all.
2DataPacRat9y
My immediate family all know my wishes, I have a medic-alert type necklace with cryo contact info, there's similar info in my wallet, and so on. Basically, as soon as medical professionals learn who my corpse was, which should be close to as soon as they arrive, they'll know to contact someone who knows to tell them to put ice around my head (as a first stage in the cooling process). By contrast, if I'm camping, then even if I stay within range of cell towers, and have arranged to call someone twice a day, then even just getting the info out that I might be in trouble (and possibly dead) will take hours-to-days, let alone finding me. (For not-quite-as-lethal accidents, I've got everything from a mirror that can be used as a signal mirror to a pen-style flare launcher to help point possible rescuers in my direction.)
5gjm9y
Allow me to join the chorus of commenters who suspect that you've been persuaded by advertising, peer pressure, etc. that you have to have the latest cool gadget, and that you'd be better off if you could overcome that urge. It's a useful habit to break if you have a longer-term preference for having more money :-).
3VincentYu9y
Not directly answering your conundrum on wrist computers, but—I go trail running frequently (in Hong Kong), so I've thought a bit about wearable devices and safety. Here are some of my solutions and thoughts: * I use a forearm armband (example) to hold my phone in a position that allows me to use and see the touchscreen while running. I find this incredibly useful for checking a GPS map on the run while keeping both hands free for falls. I worry that the current generation of watches are nowhere near as capable as my phone. * I rely a lot on Strava's online route creation tool and phone app for navigation. * Digital personal locator beacons on the 406 MHz channel (example) are the current gold standard for distress signals. * Sharing your location through your phone (e.g., on Google+) can give some peace of mind to your family and friends. * An inactivity detector based on a phone's accelerometer might be a useful dead man switch for sending a distress SMS/email in the event of an accident that renders you incompetent. I haven't gotten around to setting this up on my phone, but here's an (untested) example of an app that might work. * In case of emergency, it might be useful to have a GPS app on your phone that can display your grid reference so that you can tell rescuers where to find you.
2DataPacRat9y
Indeed so - but as far as I've been able to dig up, they require a bit more gold than I can afford. Such beacons are required to be (re)programmed with a serial number appropriate for the country they're to be used in, which can only be done at an authorized dealer, which makes online purchases from other countries almost pointless. As near as I can tell, the nearest place I can get such a beacon is at mec.ca, where the least expensive example I can find is $265, above my budget for camping electronics. I'd be happy to have such a device; I just don't see how I can acquire one with my particular level of fixed income.
[-][anonymous]9y50

DAE know The Haze? The Haze is the brain fog I get whenever there's a subject I entertain comfortable lies about because the truth would be too painful, i.e. something negative about myself, etc. Whenever I approach the subject, my brain deals with the cognitive dissonance and avoids the painful truth by reducing its IQ, but instead of becoming wooden and thick like normal stupidity, it becomes foggy. This fogginess is not actually felt or known at the time, but when I later face the painful truth, it feels like a fog, a haze, lifting. It feels a lot like as i... (read more)

4Joseph_P9y
Could you give a specific example of this foggy thinking? In which ways is it different from an Ugh field ?
3[anonymous]9y
I think ugh fields are about something fairly small and simple; this is different. When I was 15, I was weak in every sense: nerves (anxiety), borderline mentally ill, scrawny body, etc. Because I desperately did not want to admit it, and wanted to convince myself I was strong, I externalized the self-hate and started to hate on other people's weakness (not actual individuals, but as a principle), saying things like the weak don't deserve to live and should go extinct to make room for the strong, in order to convince myself I was strong. But it didn't really work. It did not really work for Nietzsche either, who inspired me to do this... and especially when I was confronted by people who took offense when I exhorted how altruism is slave morality, and those people were strong and successful in every possible way, yet they were altruists, basically they were paladins, I needed to exert more and more convoluted mental gymnastics to convince myself they were actually weak and I was somehow actually strong. Back then it felt like being a misunderstood genius, a genius who is not understood because other people are stupid. But much later, when I realized the folly, it felt like having been in a mental fog, a mental haze.
2NancyLebovitz9y
Hypothesis: what you can think is affected by the state of your nervous system. Have some neurology on the subject-- I'm not jumping to any conclusions about whether you have a background of trauma, these are just the books I know about. Complex Trauma: From Surviving to Thriving This one has some material about rage getting turned outward or inward. In an Unspoken Voice: How the Body Releases Trauma and Restores Goodness
0[anonymous]9y
If basic common playground bullying counts as one, then yes. Hm, it checks out. Boys between 6 and 12 have rather harsh ways of establishing a hierarchy of strength, courage and general ranking, and it is possible that this is traumatic in a way that the subject does not even recognize. Does it ever happen that people are conscious of their own coping methods but not conscious of the trauma they are coping with? My problem with the whole theory is that I am prone to pull a reversed stupidity on Freudianism. I.e. if Freud was wrong that everything is about coping with childhood traumas, then I tend to think nothing is. I also tend to think it is way, way too easy, suspiciously easy, because it sounds like blaming others in order to avoid facing a defect in the self.
2NancyLebovitz9y
In an Unspoken Voice has it that PTSD is a result of not being able to do normal simple movements such as running, punching, or pushing away when under high stress. There's a solution-- when the stress is over, go away for a bit and shake. Animals do this, but for various reasons-- the stress goes on for too long, or it feels socially or personally inappropriate to collapse and shake-- the uncompleted movements can get stuck in the memory and the trauma continues in the body and imagination. It wouldn't surprise me if "ordinary" childhood bullying would be enough to have a traumatic effect, especially on someone who was immobilized while being bullied. I think so. There's a lot of that around rape, where the person who was raped is showing symptoms of PTSD, but thinks that the way they were treated doesn't count as rape. I found it was useful to frame traumatic effects (in my case, a tendency to freeze) as part of the normal human range rather than a defect. Also, there's research that the biggest predictor of PTSD is the amount of previous trauma. I recommend Dorothy Fitzer's videos. She specializes in people with anxiety and takes a gentle, sensible approach to becoming more comfortable in the world.
0[anonymous]9y
Interesting. So the way it differs from Freudism is that the idea is not that getting hurt gives you problems, but not being able to react to hurt or stress (even environmental stress if I get it right) in basic ways does so?
0NancyLebovitz9y
Yes, bearing in mind that this theory says that some basic ways work much better than others-- for example, telling the story over and over (which is something a lot of people do) may not be nearly as useful as physical movement. There's also a school of more conventional psychology (sorry, I don't know which one) which holds that what happens to you isn't the fundamental thing-- what's important is what conclusions you draw from what happened to you.
1Gunnar_Zarncke9y
Sounds like unsuccessful rationalization or compartmentalization. Unsuccessful probably because the fog wasn't 'able' to lock you into a stable state. You mention lots of contact to other people so I guess that prevented it.
0satt9y
[On second thought, deleted — the example I pulled up is arguably more "wooden and thick like normal stupidity" than "foggy".]
[-][anonymous]9y40

From 2008: "Readers born to atheist parents have missed out on a fundamental life trial"

Not really, in my experience. First of all, there are plenty of other silly things to believe in; for example, my parents tended to believe in feel-good liberal adages like "violence never solves anything".

But actually the experience made me learn from religious people quite a lot. For this reason: like for most modern secular liberal Europeans, for my parents the kind of history we live in began not so long ago. A few centuries ago. Or maybe 1945. Everythin... (read more)

1Lumifer9y
Have you discovered the neoreactionaries yet? :-)
3[anonymous]9y
Yes, Moldbug and Xenosystems. Love/Hate. The problem is they are too politics-focused, and politics is typically about using power to change other people's lives. Frankly, I'd like it more if people experimented with their own lives first. This is what I don't understand - is there even a name for that? A non-political conservative/reactionary who experiments with old ideas on himself without forcing them on others - is there such a thing (NOT the SCA)? If there were such a thing I would actually try a demo version of it; I love the movie The Last Samurai, for example, but strictly on a voluntary basis, and I figure that means non-political. I mean, I guess, if I think deeper into it, the issue is not even whether it is voluntary or not but the Talebian "skin in the game". The honesty of every proposal is proportional to how much it affects oneself versus how much it affects others. And that is why I hate politics. Too little personal skin in the game and way too much of other people's skin.
1[anonymous]9y
Libertarians?
2[anonymous]9y
I know only a few places on Earth where it would have any chance of working, such as the US, maybe the UK; but from Latin America to Eastern Europe, far too often criminal plunder was covered up with free-market adages. Like "I sell this state-owned thing to my friend for peanuts because private ownership is more efficient." The point is, one of the many angles to evaluate ideologies from is how easily they are misused as lip-service cover for something nasty. Marxism was misused into Leninism, Keynesianism into "spend in bad times... and not save in good times because fuck it: votes", and Libertarianism into neoliberal plunder. This makes it not worse than the others, but not perfect either. I would basically use it as part of my political-philosophy toolset but not the whole of it.

Another thing Libertarians don't really understand, even in US-based circumstances, is that if I own the things I need to work with or live with, then private property gives me freedom and independence. But if other people own the things you need to work with, then private property is a burden, not a freedom, for you. Libertarianism works best with a fairly egalitarian distribution of property - not income, not income redistribution, but property, as in frontier homesteader farms and suchlike. It is the legacy of frontier equality that made Libertarianism popular in the US; Hayek and Mises come from an Austrian tradition that had much more of a small-business focus, and thus more equal property ownership, than the Mittelstand focus of Prussia-Germany, and so on. (Max Weber hated small business: he considered shopkeepers lazy and spoke out against the Austrianization of Germany, as in small business as opposed to middle and big. Needless to say he was anything but Libertarian.) In Europe Switzerland is the closest to Libertarianism, and precisely because they have/had such a broad property-owning middle class; every second Swiss person seems to have inherited 1/4 of a dairy farm or
0Lumifer9y
Sure, these people are usually known as eccentrics, cranks, and weirdos X-/ Since you're experimenting on yourself, what's stopping you, and why do you want only a demo version? That depends on where you live and what kind of politics you are talking about.
0[anonymous]9y
Lack of info. Know any good self-help books published before 1700? :) Most of the old stuff was learned by word of mouth: the "sensei" type of learning that we Westerners find so adorable in Asia, but actually we did it too, and that means not a lot of stuff was written down. Not all hope is lost, though; there are people learning fencing from manuals written around 1470. http://wiktenauer.com/wiki/Main_Page But jokes aside. For example, we are discussing akrasia a lot. In my childhood, it was "solved" by scaring the living bejeezus out of children who procrastinated instead of doing homework, everything from punishment to guilt-tripping. This sucked, and also often worked. Akrasia became a problem largely when it was decided that now we are trying to be nice to each other, and to ourselves too. However, it cannot have been as simple as that. I think if I scratch deeper, I could find old methods. The issue is that they were often not written down.
2Jayson_Virissimo9y
The Enchiridion, by Epictetus.
2Richard_Kennaway9y
The Book of Proverbs? The Book of Baruch? Sermons from the past? Writings of the ancient Stoics?
0[anonymous]9y
+1 for stoics, actually people like Nassim Taleb are re-inventing that and it seems to be a good way.
0seer9y
No, Taleb isn't "re-inventing" stoicism any more than every mechanic is "re-inventing" the wheel.
0[anonymous]9y
You mean stoicism was always alive?
0Richard_Kennaway9y
More modern stoicism here, although personally, I think that the Modern Stoicism community treats stoicism too much as a package deal.
0MrMind9y
I'll add to the already growing list the Meditations of Marcus Aurelius; I've been told it's one of the best. Heck, sometimes I feel that past self-help books are way better than today's...
0Epictetus9y
Essays of Montaigne.
0[anonymous]9y
What about Bacon's essays? (I don't remember when they were written, though.)
0Nornagest9y
It might be worth mentioning that a lot of the people working at reviving Western longsword fencing also have rank in Eastern sword styles, or classical fencing, or both. That isn't really that big a deal from a purity/authenticity standpoint, contrary to what some people will tell you; schools differ mostly in methodology, since the biomechanics of fencing are largely the same whether you live in 21st-century California or 19th-century Japan or 15th-century Germany, and methodology lends itself better to being written down than biomechanics does. But it does mean they have a live body of practice to hang written descriptions of technique on.
0[anonymous]9y
I intend to learn HEMA/longsword after I get good enough in boxing, i.e. fist-fencing. I wonder what, if anything, I will bring into it. One thing I am doing is to practice both dominant-hand-front and non-dominant-hand-front stances, because while boxing focuses on the second, the first is useful both for surprising an opponent in boxing, and fencing also uses it. I hope my footwork will translate well, because I suffer like a pig on ice with it; it is really hard for me to learn boxing footwork, so I hope I can use it for historical fencing too. Another interesting thing I hope will help me with fencing later is the non-telegraphed jab. This means roughly this: turn the hand inward and raise the shoulder during a jab so the elbow does not flare out to the side. I think this can be useful for a side-sword or rapier thrust, but I am not so sure about the two-handed longsword stuff.
-2ChristianKl9y
I think it makes more sense to learn fencing from someone who understands the biomechanics well enough to have his own opinion about what proper technique should be than from someone who simply tries to teach what he thinks some book says.
0Lumifer9y
Sure, lots of those -- from St.Augustine's Confessions (that's way before 1700 :-D) to Machiavelli's The Prince.
-1ChristianKl9y
It was also a radically different environment. Computers provide for new distractions. People used to feel bored and have nothing to do. I never feel like I don't know what to do and there are always multiple options.
1[anonymous]9y
I have a sample size of 1 that it is possible today :) Screwing around on Reddit can be boring (and yet addictive). It is not that straightforward to find interesting content online. Maybe I am just unusually bad at it, or maybe it's because literally zero of my IRL friends and relatives read Reddit, LW or any interesting blog, so I never get "hey this is cool, check this out" emails. They just don't have much free time. This is probably atypical. Yet it is very easy for a child to be distracted from homework at any level of technology. It is called daydreaming. You're familiar with Karl May novels, I suppose? Old Shatterhand and Winnetou stories caused me huge amounts of daydreaming when I was a child, and did the same for my friends. Imagination always fills the void that entertainment doesn't. Of course you need books, because without adventure stories there is not much to daydream about, but that has been a solved problem since about 1800-1850. I mean, that was roughly when books became cheap enough that children could have romantic novels. And vice versa - probably this is why experts say watching TV, even perfectly healthy educational shows, retards the development of toddlers. Not enough exercise of imagination. I do. It is hard work for me to race with boredom and not always win. I fill my tablet, Instapaper and FBreader with saved articles and ebooks to read, but the activity itself can get boring, and there is not much left then. I have been a gamer since 1987 (Commodore...) but grew bored with most games except, currently, the best mods for Mount & Blade Warband (such as A Clash of Kings or Brytenwalda). I have a family now, so that fills out my weekends nicely, but I still sometimes get bored. The way I break it down, there is almost nothing outside our apartment that would be interesting on a random weekend in Vienna, just people drinking in bars or yet another kind of artsy music festival. Inside the apartment, it is each other, and that is great, and the computers, which
0ChristianKl9y
It's not the same kind of boring that people had 100 years ago. It fills your brain with information that has to be processed. I think daydreaming is qualitatively much different than outside input for the purpose of this discussion. Daydreaming allows you to process old information instead of adding new information. Books also don't have the constant change of focus.
-2seer9y
No, it's about trying to stop progressives from using power to change other people's lives. I think most reactionaries would settle for forcing the progressives to experiment with their new ideas on themselves before forcing them on everyone else. As for experimenting with old ideas. What do you mean? If the 1000+ years of data isn't enough for you, a couple of neoreactionaries' self-experimentation won't be either.

Suppose I wanted to predict the likelihood of and degree of delays and cost over-runs associated with a nuclear plant currently under construction. How would people recommend I do so?

5Tripitaka9y
Study existing literature. This guy got a lot of good press in Germany (http://en.wikipedia.org/wiki/Bent_Flyvbjerg); apparently he has written extensively on big infrastructure projects and cost overruns. I find Megaprojects and Risk: An Anatomy of Ambition
0satt9y
Reference class forecasting: get a list of previously constructed nuclear power plants, look up how much they were delayed and over budget, then use the empirical probability distribution of delays and cost over-runs. (Bent Flyvbjerg, cited by Tripitaka, turns out to be very keen on RCF.)
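That procedure is simple enough to sketch in a few lines of code. The overrun ratios below are made-up illustrative numbers, not real plant data, so treat this as a template rather than a forecast:

```python
def empirical_forecast(samples, quantiles=(0.5, 0.8)):
    """Reference-class forecast: read predictions straight off the
    empirical distribution of past outcomes in the reference class."""
    ordered = sorted(samples)
    n = len(ordered)
    # smallest past outcome at or above each requested quantile
    return {q: ordered[min(n - 1, int(q * n))] for q in quantiles}

# Hypothetical final-cost / initial-budget ratios for past plants
past_overruns = [1.1, 1.4, 1.5, 1.8, 2.0, 2.2, 2.9, 3.5]
forecast = empirical_forecast(past_overruns)
```

With these toy numbers the reference-class median (2.0x) would be the central estimate, and the 80th percentile (2.9x) a more conservative planning figure.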

What is the name of the logical fallacy where you rhetorically invalidate an argument by providing an unflattering explanation of why someone might hold that viewpoint, rather than addressing the claim itself? I seem to remember there being a word for that sort of thing.

3fubarobfusco9y
A related idea is psychologizing — analyzing someone's belief as a psychological phenomenon rather than as a factual claim.
1g_pepper9y
I believe that is the genetic fallacy.
2fubarobfusco9y
The genetic fallacy has more to do with dismissing a claim because of its origins or history, rather than because of who holds that view today. For instance, arguments from novelty or antiquity are genetic fallacies. http://en.wikipedia.org/wiki/Genetic_fallacy
1g_pepper9y
Yes, Bulverism appears to be a specific subcategory of the genetic fallacy, and Bulverism more precisely answers Ishaan's question. Thanks for the clarification.
0D_Malik9y
It's "explaining away" or "intercausal reasoning" applied to the "good reasons → belief ← bad reasons" Bayes net, and it's not really a fallacy. It doesn't invalidate arguments directly but it should still make us decrease our belief, because (1) we need to partly undo our update in favor of the belief caused by observing that the other person holds that belief, and (2) we need to compensate for our own increased desire to believe. It's often rude, because it implies that the other person is either dishonest or stupid, since it suggests that the other person's expressed belief is either not genuine (e.g. lies or belief-in-belief) or genuine but not due solely to truth (e.g. influenced by subconscious signaling concerns). Since this reasoning pattern is rude, i.e. status-lowering, people often claim that it's logically invalid when it's used against a belief that they hold. (See what I did there?) This status-lowering property also means we must be careful to apply it to our own beliefs too, not only our opponents' beliefs.
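The explaining-away pattern in that "good reasons → belief ← bad reasons" net can be checked numerically. Here is a toy sketch; all the probabilities are made-up illustrative assumptions, chosen only to show the direction of the effect:

```python
P_GOOD, P_BAD = 0.3, 0.5   # priors on "has good reasons" / "has bad reasons"

def p_assert(good, bad):
    """P(person asserts the belief | reasons present) -- toy numbers."""
    if good and bad:
        return 0.95
    if good:
        return 0.90
    if bad:
        return 0.60
    return 0.05

def p_good_given_belief(bad_known=None):
    """P(good reasons | belief asserted), optionally also conditioning
    on whether bad reasons are known to be present."""
    num = den = 0.0
    for good in (True, False):
        for bad in (True, False):
            if bad_known is not None and bad != bad_known:
                continue
            joint = ((P_GOOD if good else 1 - P_GOOD)
                     * (P_BAD if bad else 1 - P_BAD)
                     * p_assert(good, bad))
            den += joint
            if good:
                num += joint
    return num / den
```

With these numbers, hearing the belief raises P(good reasons) from the 0.3 prior to about 0.55; then learning the person also has bad reasons pulls it back down to about 0.40. The argument isn't refuted, but the belief becomes weaker evidence, which is exactly the non-fallacious version of the move described above.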

I labeled an exam question as "tricky" because if you applied the solution method we used in class to solve similar-looking problems, you would get the wrong answer. But it occurred to me that even if the question had been identical to one given in class, the "tricky" label would still have been accurate, because then the trick would have been students thinking that the obvious answer wasn't correct when in fact it was. So is it always accurate to label a question as "tricky"?

That's kind of a Hofstadter-esque question. I think the answer is "no", but the reason why depends on what meta-level you're looking at: if the label refers only to the object-level question, then it's straightforwardly true or false; but if you construe it as applying to the entire context of the question including its labeling, then it's possible to imagine a trick question that's transparent enough that labeling it as such exposes the trick and stops it from being tricky. It can be a self-fulfilling or a self-defeating prophecy.

0Lumifer9y
In other words, it's turtles all the way down :-)
4Richard_Kennaway9y
"Tricky" means "the other person is operating at a higher level than I am". If you answer a question at a lower level than it was posed, you get marked down for failing to level up. If you answer at a higher level than the question was posed at, and the teacher fails to level up, you get marked down for what failing to level up felt like to the teacher -- misinterpreting the question, nitpicking, showing off, whatever. The task in an exam is to figure out what level each question is being asked at, and address it on the same level.
3JoshuaZ9y
I don't know whether it is tricky or not, but it is the sort of thing I think my students would be legitimately annoyed by. While teaching students to be confident in their answers can be important, I'm not sure this question helps with that.
4James_Miller9y
Normally you are right, but the class is game theory so I would feel justified playing meta tricky games with my students on exams. Here is a past exam question of mine that got posted.

That makes as much sense as having a class about political corruption and requiring that students pass the test by bribing the teacher.

Just because the class is about X doesn't mean that grades in the class should be measured by X.

7James_Miller9y
If I taught a class on political corruption I would totally do that if it wouldn't get me in trouble. My goal with that question was to confront the students with a real game theory based moral dilemma. Tests are not just for evaluation, but should also be learning exercises.

But there's a difference between "this is how you do X" and "doing X is appropriate in this situation". Deciding that because a class is about bribery, you should get your grade in it by bribery, confuses these two things--you've given the students an opportunity to use the lessons from the class, but it's not a situation where most people think you should have an opportunity to use the lessons from the class. If your class was about some field of statistics related to randomness would you insist that your students roll dice to determine their exam score? If your class was about male privilege, would you automatically give all female students a grade one rank lower?

Tests are not just for evaluation, but should also be learning exercises.

While tests can have purposes, such as learning, that are orthogonal to evaluation, that's different from giving the test an additional purpose that is counterproductive to evaluation.

Also, I'd hate to be the student who had to explain to a prospective employer that the employer should add a percentage point to his GPA when considering him for employment, on the grounds that he scored poorly in your class for reasons unrelated to evaluation.

1ike9y
That one is evil. Assuming the question was known in advance, the obvious solution is for the people who care more about their grades to pay those who care less to circle A while circling B themselves. If they trust each other, they might even be able to do this after-the-fact. The universalizing answer would be to choose A 51% of the time. What was the ratio of As?
1ilzolende9y
Does James Miller let his students take d% dice to his tests?
2James_Miller9y
No, but if a student asked I would be tempted to give her extra credit.
0ike9y
That's why you should always have some random bits up your sleeve (memorized). I remember being surprised that a large number of /r/rational commenters had password systems in case they ever invented time-travel or cloning. Anyone who goes to that effort can presumably also memorize 15 or so random bits if they ever need it, and refresh if used.
3Jiro9y
Time travel passwords are vulnerable to mindreading. If you want a good time travel password, you have to have an algorithm which the time-travelling version of you can calculate, but which can't be directly read by a mindreader because if he's reading it right now, he has no time to calculate it. For instance, I can have a time-travel password of "digits 300-310 of the square root of 3". A time-travelling version of me would know the password, so can compute it, then can tell me the result and I can check it. A mindreader would have to read my mind before the fact or engage in some time travel himself. Of course, it's impossible to have a time-travel password immune to all such tricks (maybe the mindreader did read my mind a week ago), but there's no reason to allow blatant loopholes.
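Jiro's scheme can be sketched in a few lines; the particular constant (sqrt(3)) and digit range just follow the comment's example, and Python's decimal module does the work. A minimal sketch:

```python
from decimal import Decimal, getcontext

def password(start=300, end=310, precision=400):
    """Digits start..end (0-indexed, counted after the decimal point)
    of sqrt(3). precision must comfortably exceed end."""
    getcontext().prec = precision
    # Decimal sqrt is correctly rounded to the context precision,
    # so all but the final digit of the expansion are exact.
    fractional = str(Decimal(3).sqrt()).split(".")[1]
    return fractional[start:end]
```

A verifier who knows the scheme recomputes the same slice and compares; a mindreader who only sees "digits 300-310 of sqrt(3)" still has to do the computation.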
1James_Miller9y
It was several years ago and I don't remember.
2Epictetus9y
If every question is tricky, then the label of "tricky" ceases to be meaningful. I believe the word you're looking for is "cruel".
0Gunnar_Zarncke9y
Only if you do it once (or a few times) and only after they have seen lots of tricky ones by now.
0Lumifer9y
Reminds me of Vizzini's battle of wits in the Princess Bride X-)

Generating artificial gravity on spaceships using centrifuges is a common idea in hard-sci-fi and in speculation about space travel, but no-one seems to consider them for low gravity on e.g. Mars. Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so the net force experienced from gravity and (illusory) centrifugal force is straight "down" into it?

I realise there'd be other practical problems with centrifuge-induced artificial gravity on Mars, since it's full of dust and not the best environment, but that doesn't seem to be the right kind of objection to explain it never being brought up where I've seen it.
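The angled-floor geometry works out straightforwardly: with local gravity g straight down and centripetal acceleration a = v²/r pointing at the spin axis, tilt the floor so its normal lies along the resultant, and the combined pull of magnitude sqrt(g² + a²) is "straight down" into it. A minimal sketch (the Mars surface gravity of 3.71 m/s² and the 100 m radius are assumptions of the example, not from the thread):

```python
import math

def banked_centrifuge(radius_m, target_g=9.81, local_g=3.71):
    """Rim speed, floor tilt, and rotation period needed so spin plus
    local gravity combine to target_g "straight down" into an angled
    floor. local_g defaults to Mars surface gravity."""
    a_c = math.sqrt(target_g**2 - local_g**2)      # required centripetal accel.
    v = math.sqrt(a_c * radius_m)                  # rim speed, m/s
    tilt = math.degrees(math.atan2(a_c, local_g))  # floor tilt from horizontal
    period = 2 * math.pi * radius_m / v            # seconds per revolution
    return v, tilt, period
```

For a 100 m radius ring on Mars this gives a rim speed of roughly 30 m/s, a floor tilted about 68° from horizontal, and one revolution every ~21 s, which hints at some of the engineering reasons the idea rarely comes up.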

3DataPacRat9y
One variation: "Gravity trains", going round and round in circles. Used on my "New Attica" setting, as can be seen at http://datapacrat.deviantart.com/art/The-Grav-y-Train-343866014 .
3The_Duck9y
Sure, this would work in principle. But I guess it would be fantastically expensive compared to a simple building. The centrifuge would need to be really big and, unlike in 0g, would have to be powered by a big motor and supported against Mars gravity. And Mars gravity isn't that low, so it's unclear why you'd want to pay this expense.
0GMHowe9y
I recall an SF story that took place on a rotating space station orbiting Earth that had several oddities. The station had greater than Earth gravity. Each section was connected to the next by a confusing set of corridors. The protagonist did some experiments draining water out of a large vat and discovered a Coriolis effect. So, spoiler alert: it turned out that the space station was a colossal fraud. It was actually on a massive centrifuge on Earth.

I've written a post on my blog covering some aspects of AGI and FAI.

It probably has nothing new for most people here, but could still be interesting.

I'll be happy for feedback - in particular, I can't remember if my analogy with flight is something I came up with or heard here long ago. Will be happy to hear if it's novel, and if it's any good.

How many hardware engineers does it take to develop an artificial general intelligence?

0Kaj_Sotala9y
The flight analogy, or at least some variation of it, is pretty standard in my experience. (Incidentally, I heard a version of the analogy just recently, when I was reading through the slides of an old university course - see pages 15-19 here.)

I occasionally see people move their fingers on a flat surface while thinking, as if they were writing equations with their fingers. Does anyone do this, and can anyone explain why people do this? I asked one person who does it, and he said it helps him think about problems (presumably math problems) without actually writing anything down. Can this be learned? Is it a useful technique? Or is it just an innate idiosyncrasy?

3emr9y
Seems to be a working memory aid for me. If I have to manipulate equations mentally, I'll (sort of) explain the equation sub-vocally and assign chunks of it to different fingers/regions of space, and then move my fingers around or reassign to regions, as if I'm "dragging and dropping" (e.g. multiply by a denominator means dragging a finger pointing at the denominator over and up). Even if I'm working on paper, this helps me see one or two steps further ahead than I could do so using internal mental imagery alone. I don't remember explicitly learning this.
2MathiasZaman9y
I move my fingers (and hands or a prop wand if I'm carrying one) to "write" stuff in the air when I'm doing serious thinking. The way that helps me is that I can keep more thoughts in my head. This doesn't (just) apply to math problems (since I hardly know any math and can't do much calculation in my head). My current hypothesis for why this works is that it couples certain actions to certain ideas and repeating the action makes it easier to recall the idea. If I'm right about that it might be learnable and useful, to a similar extent as mind palaces. By coincidence, I've been thinking about trying to formalize this technique in some way since Saturday.
1GuySrinivasan9y
I have the belief that I solve math, design, and logic problems more rapidly when standing/pacing in front of a whiteboard with a marker in my hand, far out of proportion to any marks I actually make (often no marks), possibly because the physical motions put me in the state of mind I developed during university. (I don't know if it actually helps; I have not tested it)

Is avoiding death possible in principle? In particular, is there a solution to the problem of the universe no longer being able to support life?

7RolfAndreassen9y
None currently known. But I suggest that this is not a very high-priority problem at the moment; if you solve the more pressing ones, you'll have literally billions of years to figure out an escape path from the universe.
3MrMind9y
Or billions of years of despair knowing there isn't one...
[-][anonymous]9y100

Because obviously the only valid response to knowing death is inevitable is despair during your non-dead time...

0MrMind9y
Of course not... You can also wirehead yourself to avoid thinking of the impending doom! I hope you didn't see my comment as a real proposal for regulating a billions-of-years-in-the-future civilization :) It was more in the spirit of a Lovecraftian side note... Although I think, more seriously, that a civ heavily invested in preventing death would be seriously crippled if it suddenly found another, inevitable source of death. E.g. once anti-aging is widespread, a deadly virus that targets those who have been treated.
6[anonymous]9y
I'm always confused when people talk about 'avoiding/conquering/ending death', as if death were one thing. It's rather emphatically not. It's even worse than the by-now-stereotypical adage that there's no such thing as a 'cure for cancer' because every type of cancer and indeed every individual tumor is unique and brought about by unique failures and internal evolution.
3pianoforte6119y
I understand that cancer is more than one thing, but I don't see how death is more than one thing. Ceasing to exist; a state such that there is a prior conscious state but no future conscious state. There are many ways to define it, mostly equivalent. If you mean that biological death is caused by multiple processes then sure, but I mean avoiding all of the causes of death.
1Squark9y
I think that a solution might be possible. According to string theory our universe is likely to be only metastable since its cosmological constant is positive. It means that eventually we get false vacuum decay and the formation of a new universe. If the new universe has zero or negative cosmological constant, its asymptotic temperature will be zero which should (I think) allow avoiding heat death (that is, performing an infinite computation). Now, I think the half-life of spontaneous nucleation within a cosmological horizon is usually predicted to be much longer than the relaxation time. However, this leaves the possibility of heterogeneous (induced) nucleation. Now, I'm not aware of any research about artificially induced false vacuum decay, but I don't know of any physical barrier either. If we manage to induce such a decay and find some way to transmit ourselves into the new universe (which probably requires the new universe to be physically universal), avoiding death is a possibility.
4[anonymous]9y
Uh oh...

If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.

I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize to strings of any quantity of symbols. I'm struggling to find one with the right properties under the log-odds transformation that still obeys the laws of probability. The one I like the most is P(len(x)) = 1/(x+2), as under log-odds it requires log(x)+1 bits of evidence for strings of len(x) to meet even odds. For... (read more)

6Kindly9y
Here is a different answer to your question, hopefully a better one. It is no coincidence that the prior that requires log(x)+1 bits of evidence for length x does not converge. The reason for this is that you cannot specify using only log(x)+1 bits that a string has length x. Standard methods of specifying string length have various drawbacks, and correspond to different prior distributions in a natural way. (I will assume 32-bit words, and measure length in words, but you can measure length in bits if you like.)

Suppose you have a length-prefixed string. Then you pay 32 bits to encode the length; but the length can be at most 2^32-1. This corresponds to the uniform distribution that assigns all lengths between 0 and 2^32-1 equal probability. (We derive this distribution by supposing that every bit doing length-encoding duty is random and equally likely.)

Suppose you have a null-terminated string. Then you are paying a hidden linear cost: the 0 word is reserved for the terminator, so you have only 2^32-1 words to use in your message, which means you only convey log(2^32-1) bits of information per 32 bits of message. The natural distribution here is one in which every bit conveys maximal information, so each word has a 1 in 2^32 chance of being the terminator, and so the length of your string is Geometric with parameter 1/2^32.

A common scheme for big-integer types is to have a flag bit in every word that is 1 if another word follows, and 0 otherwise. This is very similar to the null-terminator scheme, and in fact the natural distribution here is also Geometric, but with parameter 1/2 because each flag bit has a probability of 1/2 of being set to 0, if chosen randomly.

If you are encoding truly enormous strings, you could use a length-prefixed string in which the length is a big integer. This is much more efficient and the natural distribution here is also much more heavy-tailed: it is something like a smoothed-out version of 2^(32 × Geometric(1/2)). We have come
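The continuation-flag case is easy to check numerically: if each flag bit is a fair coin, decoded lengths come out Geometric(1/2), with mean 2 words and P(length = k) = 2^-k. A quick sketch:

```python
import random

def sample_length_flag_bit_scheme(rng):
    """Length (in words) of a string under the continuation-flag scheme
    when every flag bit is chosen uniformly at random: keep reading
    words while the flag bit is 1."""
    length = 1
    while rng.random() < 0.5:  # flag bit = 1 -> another word follows
        length += 1
    return length

rng = random.Random(0)
samples = [sample_length_flag_bit_scheme(rng) for _ in range(100_000)]
mean_length = sum(samples) / len(samples)  # should be close to 2
```

Half the sampled lengths are 1, a quarter are 2, and so on, matching the Geometric(1/2) prior described above.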
0Transfuturist9y
This was very informative, thank you.
0Kindly9y
What sort of evidence about x do you expect to update on?
0Transfuturist9y
The result of some built-in string function length(s), that, depending on the implementation of the string type, either returns the header integer stating the size, or counts the length until the terminator symbol and returns that integer.
0Kindly9y
That doesn't sound like something you'd need to do statistics on. Once you learn something about the string length, you basically just know it. Improper priors are not useful on their own: the point of using them is that you will get a proper distribution after you update on some evidence. In your case, after you update on some evidence, you'll just have a point distribution, so it doesn't matter what your prior is.
0Transfuturist9y
Not so. I'm trying to figure out how to find the maximum entropy distribution for simple types, and recursively defined types are a part of that. This does not only apply to strings, it applies to sequences of all sorts, and I'm attempting to allow the possibility of error correction in these techniques. What is the point of doing statistics on coin flips? Once you learn something about the flip result, you basically just know it.
0Kindly9y
Well, in the coin flip case, the thing you care about learning about isn't the value in {Heads, Tails} of a coin flip, but the value in [0,1] of the underlying probability that the coin comes up heads. We can then put an improper prior on that underlying probability, with the idea that after a single coin flip, we update it to a proper prior. Similarly, you could define here a family of distributions of string lengths, and have a prior (improper or otherwise) about which distribution in the family you're working with. For example, you could assume that the length of a string is distributed as a Geometric(p) variable for some unknown parameter p, and then sampling a single string gives you some evidence about what p might be. Having an improper prior on the length of a single string, on the other hand, only makes sense if you expect to gain (and update on) partial evidence about the length of that string.

Perhaps beliefs become exaggerated partly because those who disagree with a belief are less likely to voice their disagreement than those who agree with it are to voice their agreement.

Justification: It seems the main incentive for expressing one’s agreement or disagreement (and the reasons for it) is to make the person more likely to hold your belief and thus more likely to hold a more accurate belief. If you agree with the person, expressing your agreement has little cost, as you probably won’t get into a lengthy argumen... (read more)

It appears to me that the differences between System 1 and System 2 reasoning can be used as leverage to change one's mind.

For example, I am rather risk-averse and sometimes find myself unwilling to take a relatively minor risk (even if I think that doing that would be in line with my values). If that happens, I point out to myself that I already take comparable risks which my System 1 doesn't perceive as risks because I'm acclimated to these - such as navigating road traffic. That seems to confirm to System 1 the idea of "taking a minor risk for a good reason is no big deal".

Is everyone already aware of the existence of erotic fanfiction entitled Conquered by Clippy?

3Tenoke9y
Relevant thread
2Paul Crowley9y
My searches failed, the words "Conquered by Clippy" don't appear on that page! Thanks.

Are there any English words with the property that if you rot13 them, they flip backwards? For example, "ly" becomes "yl," but "ly" isn't a word.

7Falacer9y
I wrote a check for this property for all the words in my system's inbuilt vim dictionary and got the following list:

Rubbish Words: er, Livy, Lyly, na, ob, re, uh

Interesting Words: an, fans, fobs, gnat, ravine, robe, serf, tang, thug
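For reference, the property being tested is rot13(word) == reverse(word). A minimal version of the sort of script described above might look like this (the dictionary path is an assumption and varies by system):

```python
import codecs

def rot13_reverses(word):
    """True if applying rot13 to `word` yields `word` spelled backwards."""
    w = word.lower()
    return w.isalpha() and codecs.encode(w, "rot13") == w[::-1]

# Scan a word list, e.g. (path is system-dependent):
# with open("/usr/share/dict/words") as f:
#     hits = [w.strip() for w in f if rot13_reverses(w.strip())]
```

Note the property forces each letter to be the rot13 partner of its mirror-position letter, which is why every hit is built from nested pairs like "an"/"na" and "gb"/"ob".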
2Vaniver9y
Thanks!
0pianoforte6119y
I wonder how long it would have taken someone to find one of those without using a script. The human mind is pretty good at word-based puzzles, but that's a very short list and a pretty wacky criterion.
2Falacer9y
I thought about it for about 5 minutes before deciding to script it, and got "fobs" and, annoyingly, dismissed "fres" as not a word. I imagine if I had been more rigorous it wouldn't have taken long to get all the 4 letter ones, since they all have an internal vowel, which was the obvious place to start looking.
0Vaniver9y
It seems to me like you could generate the 26 pairs-- an, bo, cp, etc.-- and then try to make words out of nesting those pairs (fobs is "ob" surrounded by "fs"). But the hard part is checking whether or not something is a word, and nesting is a pretty weird action unrelated to the sound or content of words. But now I have idea for a Scrabble-esque game...
2Douglas_Knight9y
A simple google search brings up this page, but it doesn't have anything new.

I have a random physics question:

A solid sphere, in ordinary atmosphere, with a magical heating element at one pole and a magical refrigeration element at the other. If the sphere itself is stationary and at room temperature; one pole is super-cooled while the opposite pole is super-heated. (Edit: Assume the axis connecting the poles is horizontal.)

What effect does this have on air-flow around the sphere? Does it move? If so, in which direction?

3Lumifer9y
Well, of course, the hot pole will heat the air around it and warm air rises. Same thing for the cold pole and cold air sinks. The specifics depend on how the poles are oriented with respect to gravity.
[-][anonymous]9y10

I've just read Initiation Ceremony. Is this really where Bayesian probability begins? Because I don't claim to understand it, but I worked it out easily enough, just not mentally but with calc.exe, using my usual method of assuming a sample of 100. So there are 100 people, 75 W and 25 M, 75x0.75=56.25 VW and 25x0.5 = 12.5 VM so our ratio is 12.5 to 56.25 so a 22.2% chance (Because only the Sith deal in incomprehensible verbal-math like "two to nine, or a probability of two-elevenths". Percentages are IMHO way more intuitive. I use a sample size of ... (read more)

0polymathwannabe9y
For calculations of conditional probabilities I've found an initial sample size of 10,000 is more manageable. But that's just me.
0[anonymous]9y
Yes, if you are not prone to silly order-of-magnitude errors in mental arithmetic. For example, if it is intuitive and fast for you that the square root of 40000 is 200 and you never make the mistake of thinking for a second that it is 2000 or 20. I do. Not sure why.
0Kindly9y
For numerical calculations, your method doesn't ever really break down and, moreover, Brennan is essentially doing the same thing you are, but with a sample of 16 people instead of 100 to make the math simple enough to do mentally. A more Bayes-theorem styled calculation tells us that we have 1:3 odds initially (as there are 1/4 men and 3/4 women) and the Virtuist evidence updates it by a factor of 2:3 (as Virtuists are 2/4 of men and 3/4 of women), so we end with 2:9 odds. I think this is easier than what either you or Brennan are doing, but it's a matter of taste and of what's more intuitive. (Which certainly varies from person to person; I find two-elevenths easier to grasp than 22.2%) Doing Bayesian calculations formally is more important where you are doing symbolic calculations, especially with continuous probability distributions. Edit: Also, 22.2% is wrong, which I didn't realize at first; it's 2/9, not 2/11. You want to compute 12.5/(12.5+56.25) instead.
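Both routes land on the same number. A sketch with exact arithmetic, using Python's fractions so no rounding can hide the 2/9-odds-versus-2/11-probability distinction:

```python
from fractions import Fraction

# Sample-of-100 route: 25 men and 75 women; half the men and
# three-quarters of the women are Virtuists.
virtuist_men = Fraction(25) * Fraction(1, 2)     # 12.5
virtuist_women = Fraction(75) * Fraction(3, 4)   # 56.25
p_man_given_virtuist = virtuist_men / (virtuist_men + virtuist_women)

# Odds route: prior odds 1:3 (men:women), likelihood ratio 2:3.
posterior_odds = Fraction(1, 3) * Fraction(2, 3)  # 2:9
p_from_odds = posterior_odds / (1 + posterior_odds)

assert p_man_given_virtuist == p_from_odds == Fraction(2, 11)
```

The 22.2% figure is the odds 2:9 read as a probability; converting odds o to a probability requires o/(1+o), giving 2/11 (about 18.2%).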
0[anonymous]9y
Of course, I don't know how I missed that... Now on to Monty Hall: the linked explanation is not that intuitive to me. To me the intuitive explanation is that if I chose a goat, switching gives me a 100% chance to get a car and not switching 0%; if I chose a car, switching gives me 0% and staying 100%. Thus my original 2/3 chance to win a goat wins me a car with the same 2/3 chance if I switch. I don't know if it is Bayesian what I am doing... let's play with 4 doors, 3 goats 1 cup, er, car. If I chose a goat, 75% chance, switching gives me a 50% chance so it is 37.5%; if I chose a car, 25% chance, switching gives me 0%. So the switch gives me 37.5% in the four-door game. If I chose a goat, 75%, staying gives me 0%; if I chose a car, 25%, staying gives me 100%, that is just 25%. Still switch. Am I doing it right? The line of reasoning being "If my prior is right... the evidence does this. If my prior is wrong, the evidence does that. Add it up." (probability of correct prior judgement × probability of new judgement) + (probability of incorrect prior judgement × probability of new judgement), something like this... and some of these four are apparently always 0 or 100%?
2Kindly9y
Your argument does all the things that are necessary to solve Monty Hall, but it doesn't consider some things that could be necessary. (Now, maybe you would have realized that those things need to be considered, if they were necessary. I am just explaining how things can get tricky.)

Suppose instead of Monty Hall, we have Forgetful Monty Hall. Forgetful Monty Hall does not remember where the car is, so he opens a door (that you have not picked) at random, and luckily there is a goat behind it! Here your line of reasoning still seems to apply: if you chose a goat, switching is 100% and staying 0%, while if you chose a car, switching is 0% and staying 100%. So shouldn't switching still win with probability 2/3?

An extra thing happened, though. In the "if your prior was right" a.k.a. "if you chose a car" case, it's not surprising that Forgetful Monty Hall opened a door with a goat. In the other case, if you chose a goat, then Forgetful Monty Hall had a 1 in 2 chance of opening the door with a car by mistake. He didn't, so the probability you chose a goat should be penalized by that factor of 2. The 1:2 prior becomes 1:1, and then your argument (correctly) tells us that switching and staying are both 50%.

One final comment. Here we are dealing with a problem that can be solved exactly. Any mathematician, Bayesian or otherwise, ought to agree with your answer. When solving harder problems, we might get something that cannot be solved exactly. For instance, I chose the 5 integers 4,3,2,4,3 and then chose the 5 integers 2,2,1,5,1. How likely is it that they came from the same distribution? This is not a question we can answer and so we instead answer a different question or answer this question with simplifying assumptions. When we do this, if we end up talking about "conjugate priors" it is Bayesian; if we end up talking about "null hypothesis testing" it is not Bayesian. (A clever trick of demagoguery is to take a question that can be solved exactly and point out
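Both variants are easy to simulate; in the forgetful variant we keep only the trials where Monty happens to reveal a goat. A sketch:

```python
import random

def trial(forgetful, rng):
    """One game; returns (valid, switch_wins). In the forgetful variant,
    valid is False when Monty accidentally reveals the car."""
    car = rng.randrange(3)
    pick = rng.randrange(3)
    others = [d for d in range(3) if d != pick]
    if forgetful:
        opened = rng.choice(others)            # Monty opens at random
        if opened == car:
            return False, False                # trial discarded
    else:
        opened = rng.choice([d for d in others if d != car])
    switch = next(d for d in range(3) if d not in (pick, opened))
    return True, switch == car

def switch_win_rate(forgetful, n=100_000, seed=0):
    rng = random.Random(seed)
    results = [trial(forgetful, rng) for _ in range(n)]
    wins = sum(won for valid, won in results if valid)
    kept = sum(1 for valid, _ in results if valid)
    return wins / kept
```

With standard Monty the switch-win rate converges to 2/3; with Forgetful Monty, conditioned on a goat being revealed, it converges to 1/2, matching the factor-of-2 penalty above.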

Why can't we just make a CPU as large as a dump truck, that can store a thousand petabytes, then run an AI and try to evolve intelligence? I can't imagine that this is beyond the technology of 2015.

(Not that this would be a good idea, I'm just saying that it seems possible.)

Why can't we just make a CPU as large as a dump truck [...?]

Lots of reasons, some of which Vaniver and ShardPhoenix have already given, but one of the big ones is that CPUs dissipate a truly enormous amount of heat for their size. Your average laptop i7 consumes about thirty watts, essentially all of which goes to heat one way or another, and it's about a centimeter square (the chip you see on the motherboard is bigger, but a lot of that is connections and housing). Let's call that about the size of a penny. That's an overestimate, but as we'll see, it won't matter much.

Now, a quick Google tells me that a dump truck can hold about 20 cubic meters (=20000 liters), and that a liter holds about 2000 closely packed pennies. So if we assume something with around the same packing and thermal efficiency, our dump truck-sized CPU will be putting out about 30 × 2000 × 20000 = 1.2 gigawatts of heat, or a bit more than the combined peak output of the two nuclear reactors powering a Nimitz-class aircraft carrier.

This poses certain design issues.
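As a sanity check on the arithmetic:

```python
watts_per_chip = 30      # laptop-class CPU, roughly penny-sized die
chips_per_liter = 2000   # closely packed pennies per liter
liters = 20_000          # ~20 cubic meters of dump truck

total_watts = watts_per_chip * chips_per_liter * liters
print(total_watts / 1e9)  # -> 1.2 (gigawatts)
```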

7ShardPhoenix9y
1. There's a limit to how large we can scale computers at any given tech level. What you're talking about is basically what a supercomputer is (they have many CPUs rather than one huge one), but there's still a limit to what's practical with them.

2. What do you mean by "evolve intelligence"? Run evolutionary algorithms on random bits of code? How do you evaluate the results? Before you can use search algorithms you have to be able to define the target, which is most of the problem in this case, plus search is likely to be impractically slow in something as big as "the space of all programs".
0Fivehundred9y
1. Having 1000+ petabytes is not impossible with our level of technology. It is somewhat nitpicky to focus rather on the physical absurdity of house-sized computers.

2. Run Watson, select the Watsons that can solve problems better.
2ShardPhoenix9y
1. 1000 petabytes of what? RAM? How do you know that's enough to do what you want anyway? My point at any rate is that we can't grab a billion dollars and make some computer that is "fast enough to 'evolve an AI'" just by throwing money at the problem - universities, companies and governments are spending money right now on supercomputers, and they still have limitations due to underlying technical issues like cooling and inter-processor communication (as the other commenters pointed out).

2. Watson is a big complex program, not some small DNA-like seed that can easily be mutated and iterated on automatically. There's no known small seed that generates anything like a general intelligent agent (except of course DNA itself and the resulting biology which can't be very efficiently simulated even with a supercomputer).
3sixes_and_sevens9y
If you, personally, were given a zillion dollars and told to implement this plan yourself, how would you do it?
1Fivehundred9y
No idea. What relevance does that have?

You're assuming that someone, given a zillion dollars, could implement your plan, but if you don't even know where to begin implementing it yourself, what reason do you have to believe someone else would?

Put another way, if "I can't imagine we can't [X] given the technology of 2015" works when X is "evolve artificial intelligence", why wouldn't it work for any other X you care to imagine?

0Good_Burning_Plastic9y
For example, because Eitan Zohar is not an expert on that. I don't know where I would start if I had to send a manned spaceship to Mars, but that doesn't mean I expect nobody to know.
0sixes_and_sevens9y
Where does your confidence that somebody (or some distributed group of people) knows how to send a manned spacecraft to Mars come from? It's not like anyone's ever exhibited this knowledge before. Something must make you think "hey, sending people to Mars is possible". The important question as far as I am concerned is whether that's a good-something or a bad-something. In the case of "evolving artificial intelligence with a computer the size of a dump truck must be possible", I think it's a bad-something.
0skeptical_lurker9y
People are working on going to Mars. AFAIK, the main barrier is the cost. Back to the original question, I can imagine where to start with evolving intelligence, but I'd need much more than a petabyte. (although, actually flops are more important than bytes here, I think)
2Unknowns9y
I think the relevance is that no presently living human being knows how to program an AI, whether with an evolutionary algorithm or in any other way, no matter how powerful the hardware they may have. The AI problem is a software problem, and no one has yet solved it.
0skeptical_lurker9y
A thousand petabytes is probably enough to run one human-equivalent brain. In order to evolve intelligence, I'm guessing you would need to run thousands of brains for millions of generations.
0Fivehundred9y
I doubt it, since our actual brains run on less than a hundred terabytes (I'm not sure whether gray matter or CPU hardware is more efficient). Our brains also use a huge amount of that for things like emotion or body processes. We're just looking for an AI that can create something more intelligent than itself.
2skeptical_lurker9y
10^11 neurons, 10^4 synapses per neuron - even if each synapse can be represented as a single 8-bit number (very optimistic), that's a petabyte of storage needed. Bostrom puts a hundred terabytes as the lowest estimate, with a spiking neural network at 10 petabytes. Metabolism being required too would push the estimate to an exabyte, and the more pessimistic (but less plausible) models go beyond this. And yes, an AI might be more efficient than the brain, but if it's being created by evolution then I don't think it especially likely that it will be more efficient than brains created by evolution.
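The arithmetic behind the petabyte figure:

```python
neurons = 10**11
synapses_per_neuron = 10**4
bytes_per_synapse = 1  # the optimistic one-8-bit-number-per-synapse case

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 10**15)  # -> 1.0 (petabytes)
```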

Great. More ridiculous propaganda along the lines of "People revived from the dead are evil/damaged/soulless, etc."

The Returned on A&E

https://www.youtube.com/watch?v=MsXDcIDU_AY

5JoshuaZ9y
This is a real problem, but I don't think it is propaganda. Rather, these ideas are so ingrained as tropes that writers don't even think about it when they use them.
0MathiasZaman9y
On the other hand, Chappie (despite what other flaws it might have) has a surprisingly sane take on death.

No stupid questions thread?

What makes a person sexually submissive, sexually dominant, or a switch? Do people ever change d/s orientation?

0Adele_L9y
Based on some experiences that transgender people I know have had, it seems like a change in sex hormones can change their d/s orientation. Also, age seems to push people more towards sexual dominance.
0tut9y
Unknown. It is probably not purely genetic, because the heredity is less than for a lot of personality stuff. People do change, but trying to change or push somebody to change tends to fail.
[-][anonymous]9y00

Oliver Cromwell.

[This comment is no longer endorsed by its author]Reply
[-][anonymous]9y00

IF I CAN DO THIS, SOP CAN Y

[This comment is no longer endorsed by its author]Reply
[-][anonymous]9y00

Tsamina mina zangalewa!

(This time for Africa.)

[This comment is no longer endorsed by its author]Reply
[-][anonymous]9y00

I'm looking for critique, or advice on where to ask for some, for my short story Exploiting Quantum Immortality.

[This comment is no longer endorsed by its author]Reply

My comment elsewhere got downvoted, but to me the Outlander franchise looks somewhat like a cryonics story, only it sends the protagonist 200 years into her past (from the 1940's to the 1740's), instead of 200 years or so into "the future." She winds up in a different time, she doesn't know anyone, and she has to figure out quickly how the society works so that she can connect with people willing to accept her, as a matter of literal survival. It shows in a fictional way that you can make the necessary adaptations in this kind of situation, so why wouldn't this work in the future-traveling version?

3Sabiola9y
I think that if the future people are still baseline, someone from our time might be able to adapt. If they have changed, though (more rational, more intelligent, better memory, better bodies) then a version 1.0 person might never be able to live independently.
3RowanE9y
I think your last comment seemed to most readers like just a reminder of your idea that the future will be neoreactionary and then the cryopreserved from our time will see, which is something they really don't like for various reasons. I don't think there's any reason the story wouldn't work; in fact, I think most stories that feature cryonics send the protagonist into a future they find horrifying and dystopian, except that they often heroically overthrow it instead of just adapting and surviving.
6NancyLebovitz9y
From memory, there's a story by Alfred Bester about people being punished (I forget for what) by being offered a choice of being thrown into the future or the past. No preparation either way. It's a short story, and doesn't follow an individual person who's been time displaced. It just ends with a suggestion that some street people were thrown into the past and never figured out how to manage.
1Lumifer9y
I recall a popular discussion topic on the 'net which essentially goes like this: we take you, a XXI century human, and throw you back in time, say into medieval Europe. Are you going to survive? Prosper? What knowledge that you have will be useful to you? Will you be able to recreate useful things like antibiotics? Or will the local peasants just stone you to death for being too weird?
1[anonymous]9y
Let's recreate that thread; I have ideas. I would offer bodybuilding training for the king's soldiers, for example, because isolation exercises hadn't been invented yet. It may not be very useful, but they would look impressive. I would sterilize surgical implements by boiling them, implement basic medical hygiene, challenge the miasma model... lots of stuff could be done.
3Lumifer9y
Heh. Well, first you need to survive. Remember that you barely speak the language, which was quite different; you don't know proper social and -- very importantly -- religious behavior; you're not plugged into any social structure; and you don't have any starting resources like money. So you're probably starting as a crazy beggar. Getting to the point where the king's soldiers (or surgeons) will listen to you is a major task. Also, your body doesn't have much immunity against the prevalent infectious diseases, and you probably don't have proper hygiene habits for the pre-antibiotics, pre-sanitation, everyone-has-parasites era.
1[anonymous]9y
Let's say I am allowed contemporary pilgrim's/traveller's attire and start in an international port where they are used to strangers looking and acting strange. London, 1200. Claim to be a pilgrim from a mysterious Christian kingdom (Prester John's) in Africa. I don't think they would be worried that I am too white. Try hard to remember high-school Latin; Latinize English words back. A guy in pilgrim's clothing who has some idea of Latin and interesting stories -- or at any rate can read and write -- is not a beggar: lower-middle-class status, like an ex-friar turned scribe, who can be a middle-class family's interesting guest. Claim we are a very pious folk and be very, very religious, to earn trust. Start, for example, by linking up with the traders in the port, who are probably fairly open-minded. Be the guest of a merchant who is interested in info about foreign markets (make it up). See if I can teach things they find useful, like accounting. Claim the Holy Ghost taught Prester John all kinds of marvelous things he then taught us. Don't try scientific explanations, but also beware of looking like a warlock; rather, try to present all the knowledge as the good kind of magic, the church kind. Pick easy elements from this list: http://www.topatoco.com/graphics/qw-cheatsheet-print-zoom.jpg and claim it was all taught by the Holy Ghost to Prester John.
2Lumifer9y
English did not develop from Latin. 1200 AD is only a century and a half after the Norman conquest and it means people are speaking early Middle English which you will have problems with. Can you, now? Try reading this :-) You will become one once you want to eat.
1MathiasZaman9y
Getting used to "medieval" scripts is surprisingly easy. I've learned it before (and have mostly forgotten it through disuse), and the script of a specific age can be deciphered in about 30 minutes (faster with practice). Understanding the words is definitely a bigger barrier than being able to read the letters.
1NancyLebovitz9y
I wonder how hard it would be to get enough food to support bodybuilding in earlier eras. It would definitely be easier for a small group of guards than for a whole army.
1[anonymous]9y
My first idea would be lots of milk -- but it's interesting that our go-to examples, the Ancient Athenians, actually considered that barbaric. A cursory search suggests they largely got their protein from fish. Well, definitely: if I have to get the maximal amount of protein from one day of labor with pre-modern tech, I take a fishing net. One fisherman with two assistants could, I figure, support 50 well-built guards.
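That last estimate is easy to sanity-check with a back-of-the-envelope calculation. All of the figures below are illustrative assumptions (protein per kg of whole fish, a strength-trainee's daily protein target), not historical data:

```python
# Rough check of "one fishing crew of three can feed 50 well-built guards".
# All numbers are illustrative assumptions, not historical data.
PROTEIN_PER_KG_FISH = 180.0   # grams of protein per kg of whole fish (assumed)
PROTEIN_PER_GUARD = 120.0     # grams of protein per guard per day (assumed)
GUARDS = 50

fish_needed_kg = GUARDS * PROTEIN_PER_GUARD / PROTEIN_PER_KG_FISH
print(f"{fish_needed_kg:.1f} kg of fish per day")
```

At these numbers the crew needs to land roughly 33 kg of fish a day, which seems within reach of a net, so the claim at least isn't absurd on its face.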
3NancyLebovitz9y
There may be some reason why they aren't already catching those fish. Or they're already catching those fish and you need to find a way for those fish to go to your grow-a-bigger-guard project.
3[anonymous]9y
When you start looking into ecology, it's remarkable how many of the agricultural and cultural quirks of old civilizations that have been through some boom-and-bust cycles line up with ways of protecting the productivity of the land and water...
3Lumifer9y
You probably want cheese. But in general, I don't think that the king's guards would have problems getting enough protein if they want it. A peasant army, of course, is a different matter.
0seer9y
They were quite possibly lactose intolerant.
3gwern9y
Forget the ancient Athenians 2500 years ago, the modern ones are still lactose intolerant:
2[anonymous]9y
Yeah, but still, Greek colonists in South Italy held so many cattle that that's where the name Italy came from. It doesn't sound very efficient to do that for the meat alone. Better goats, then; they are more suited to hilly terrain anyway.
0NancyLebovitz9y
It sounds like we need to know more to see whether cattle made sense there-- maybe it's that cattle are easier to manage than goats.
[-][anonymous]9y-10

On AI: are we sure we are not influenced by the meta-religious ideas of sci-fi writers who write about sufficiently advanced computers just "waking up into consciousness", i.e. who create a hard, almost soul-like barrier between conscious and not conscious, which carries the assumption that consciousness is a typically human-like feature? It is meta-religious in that it is based on the unique specialness of the human soul.

I mean, I think the potential variation space of intelligent, conscious agents is very, very large, and a randomly selected AI will not be hum... (read more)

5MathiasZaman9y
I think the thought-process of an AI is expected to be alien by anyone who takes AGI seriously. It's just not all that relevant to discussions about its threats and possibilities.
2Kaj_Sotala9y
Related article.
0JoshuaZ9y
This seems to confuse inefficient allocation of humans with making all humans go extinct, which from our perspective is about as inefficient an allocation as one can get (barring highly implausible scenarios of genuinely malevolent AI).
0Xerographica9y
You're not addressing my argument. I'm arguing that markets will allow us to use money to "control" robots just like we use money to "control" humans. In order to refute my argument you have to effectively explain how/why robots will have absolutely no interest in money.
5Lumifer9y
"Power grows out of the barrel of a gun".
4DanielLC9y
The market makes lots of assumptions that do not apply to AIs. AIs do not have finite lifespans, and can invest money for long enough to dominate the economy. AIs can reproduce easily, so the first AI that's better than a human at a given job can replace all of them. Humans are large numbers of selfish individuals. The first AI has no reason to make children with different values, so they will all work together as one block. And that's before an AI goes FOOM. Once that happens, it will quickly outstrip the productive capacity of all humans combined. Trying to control it with money would be like a cat trying to take over the world by offering a mouse it killed.
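DanielLC's first point is just compound growth: a patient, non-consuming investor with an unbounded horizon eventually dwarfs any finite-lifespan saver. A toy illustration (the 5% real return and the 40- and 200-year horizons are arbitrary assumptions):

```python
# Compound growth over a horizon no human investor has.
# The 5% real return and the two horizons are arbitrary assumptions.
def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` compounded at `rate` for `years` years."""
    return principal * (1 + rate) ** years

human = compound(1.0, 0.05, 40)   # one working lifetime of compounding
ai = compound(1.0, 0.05, 200)     # an agent that never retires or dies
print(f"human: {human:.0f}x, AI: {ai:.0f}x")
```

The same return compounds to roughly a seven-fold gain over a working lifetime but a five-digit multiple over two centuries, which is the sense in which patience alone lets such an agent come to dominate.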
0Xerographica9y
It helps to be specific. An AI is going to start an orchid nursery? And then it's going to grow and sell orchids so well that the human run orchid nurseries can't compete? Except, this kinda already happened. The Taiwanese have been stomping American orchid nurseries. But this just means that they were better at supplying the demand for orchids. In other words, they were better at serving customers. So if AIs win at supplying something then this means a win for consumers. And AIs are all going to work together as one block? They aren't going to have a division of labor? They aren't going to compete for limited resources? They aren't going to have different interests? If not, then wouldn't all the AIs be in the orchid nursery business?
1DanielLC9y
Every time AIs become better at something than humans, it stops being worthwhile for humans to do it. Designing one expert system and getting rid of one job is not a problem, but a human-level AI will get rid of all of the jobs. Humans can work for less, but if they can't afford to eat, it's not sustainable. You could tax the AIs and give the money to humans to make up the difference, but only as long as the AIs let you. If they're better at everything, that includes war. The AIs may have division of labor. There are advantages to that. A specialized AI could solve specific sets of problems faster and more effectively with fewer resources. What possible advantage is there for an AI to program other AIs to compete with each other? If an AI cares only for itself, it will make AIs that care only for it. If an AI cares only for paperclips, it will make AIs that care only for paperclips.
0Xerographica9y
In order for AIs to take all our jobs... consumers have to all agree that AIs are better than we are at efficiently allocating resources. The result for consumers is that we get more/better food/clothes/homes/cars/etc for a lot less money. It's a great result! But, then, according to you... there wouldn't be any jobs for us to do... The problem with your story is that AIs are better than we are at allocating all resources... except for human resources. For some reason the AIs wanted to put human farmers out of business... they wanted to serve us better than human farmers do... but then... even though food is so cheap and abundant... humans can't afford it because AIs couldn't figure out how to put us to any productive uses. Out of all the brilliant and resourceful AIs out there... none of them could figure out how we could be gainfully employed. Heck, even we know how we can be gainfully employed. An abundant society always means more, rather than fewer, opportunities. It's the difference between a jungle and a desert. The jungle has more niches... and more niches means more riches/opportunities. Your story is economically inconsistent. It's also AI-inconsistent. Clearly they wanted our money... but they also didn't want us to work in order to earn money. Or, they couldn't figure out how to put us to work... and neither could we. I'm imagining a scenario where we start with an abundance of more or less human-level AIs. They have to have the motive to upgrade themselves... or else they will always stay human-level. But upgrading themselves will function exactly like humans trying to upgrade their computers/bodies. We aren't all going to go out and purchase the same exact upgrades. I'm certainly not going to buy an upgrade that makes my computer better at running video games... but many people are. And I'm certainly not going to get a boob job! This doesn't mean that many AIs won't agree that certain upgrades are better than others... it means that we're going to end
2DanielLC9y
There are a variety of useful things about humans. They're self-repairing. They have great sensors. They're intelligent. They're even capable of self-replication. This is all stuff far beyond our current ability to do with technology. But it won't always be. Once you have robots more intelligent than humans that take fewer resources, human intelligence becomes economically worthless. If they FOOM, they'll figure out the other stuff quickly. If they don't, it could take some time. Assuming we haven't already solved the problem for them. I would not be surprised if that turned out to be easier than strong AI. Humans are a certain arrangement of atoms. An impressive arrangement, I'll admit, but not the best. Not unless you specifically and terminally value humans. An AI that FOOMs would find a better arrangement. An AI that does not could at least replace our brains. You seem sure that AIs would differentiate. I am uncertain. That is a disagreement, and we could debate it, but I don't consider it relevant. Humans aren't selfish because they're different. Humans are selfish because they're made to be. An AI could be programmed with any set of values. And the best way to fill those values would be to ensure that all other AIs also have those values. I suspect there's some kind of miscommunication going on here. AIs are programmed, or copied and pasted. Humans would program the first. They might program a few more, or copy and paste them while leaving the selfish code alone. Once AIs get control of the process, which they will, given that they're better at programming, they'll make sure that they all have the same values. If AI0 is self-serving, then every AI it programs will be AI0-serving. And if there is more than one starting AI, they'll happily reprogram each other if they get the chance. Or they might manage to come to some kind of truce where they each reprogram themselves to average all of their values, weighted by probability of success in the robot war. Humans can't brainwash each
-4Xerographica9y
Orchids, with around 30,000 species (10% of all plants), are arguably the most successful plant family on the planet. The secret to their success? It largely has to do with the fact that a single seed pod can contain around a million unique seeds/individuals. Each dust-like seed, which is wind-disseminated, is a unique combination of traits/tools. Orchids are the poster child for hedging bets. As a result, they grow everywhere from dripping-wet cloud forests to parched, drought-prone habitats. Here are some photos of orchids growing on cactus/succulents. Now, if you say that orchids could find a "better" arrangement of traits... I certainly agree... and so do orchids! The orchid family frequently sends out trillions and trillions of unique individuals in a massive and decentralized endeavor to find where there's room for improvement. And there's always room for improvement. There are always more Easter Eggs to be found. But a better combination of traits for growing on a cactus really isn't a better combination of traits for growing on a tree covered in dripping wet moss. AI generalists can be good at a lot of things... but they can't be better than AI specialists at specific things. A jack of all trades is a master of none. No matter how "perfect" a basket is... AIs are eventually going to be too smart to put all their eggs in it. This is true whether we're talking about a location, i.e. "Earth"... or a type of physical body... or a type of mentality. Imagine if humans had all been at Pompeii. Or if humans had all been equally susceptible to the countless diseases that have plagued us. Or if humans had all been equally susceptible to the Kool-Aid cult. Or if humans had all been equally susceptible to the idea that kings should control the power of the purse. We've come as far as we have because of difference. We've only come as far as we have because people still don't recognize the value of difference. It's impossible for me to imagine a level of progress where di
0Jiro9y
Lousy analogy. Orchids do produce large numbers of small seeds. However, your connection between "orchids produce lots of seeds" and "orchids grow lots of places" is questionable. Each orchid, of course, produces seeds of its own species, and each species has a habitat or range of habitats where it can live. Producing more seeds of the same species does not make it able to produce seeds that survive in more habitats. Furthermore, the "10% of all plants" figure is meaningless because a number of species is not a number of individuals or a measure of biomass.
0Xerographica9y
Even though the seeds all come from the same species... they are all different. Each seed is unique. In case you missed it... you aren't the same as your parents. You are a unique combination of traits. You are a completely new strategy for survival. When an orchid unleashes a million unique strategies for survival from one single seed pod... it greatly increases its chances of successfully colonizing new (micro)habitats. Kind of like how a shotgun increases your chances of hitting a target. Orchids are really good at hedging their bets. Any species that produced the same exact strategies for survival would be meeting Einstein's definition of insanity... trying the same thing over and over but expecting a different outcome.
0[anonymous]9y
In that case, perhaps you should talk about epiphytes as an ecological entity, not orchids as a family. My impression after studying terrestrial orchids in Ukraine is that they either are not very good at seed reproduction (Epipactis helleborine is often found in clearly suboptimal habitats, where pretty much all plants are of the reproductive age group but few of them have seeds; and this is one of the most frequently found orchid species here, which has also managed to naturalize in North America! So I would rather say it is a consistent buyer of lottery tickets, not a consistent winner) or they are producing lots of seeds but nevertheless lose due to habitat degradation (marsh orchids, bog/swamp/fen orchids), not to mention habitat destruction. And in the latter group, many have embryo malformations. Now, I don't know much about Bromeliaceae or other 'typical epiphytes', so I would be less likely to disagree about that. However, it seems that if your comments were more rigorous, people would have an easier time hearing what you have to say.
0Xerographica9y
Your first mistake is that you studied terrestrials. You can't learn anything from terrestrials. Or, you can learn a thousand times more from epiphytes. I kid... kinda. Here's my original point put differently... If you think about that passage from the gutter... I think it's pretty hard not to imagine a dense rain of human sperm. Can you imagine how gross and frightening that would be? I'm surprised nobody's made a movie with this subject. It would have to be the scariest movie ever. I think most people would prefer to be in a city attacked by Godzilla rather than in a city hit by a major sperm thunderstorm. Especially if it was a city where nobody takes umbrellas with them... like Los Angeles. Benzing is the premier epiphyte expert. The far denser orchid seed rain, plus epiphytism, largely explains why the orchid family is so successful. The orchid family is really good at hedging its bets. As we all know though... no two individuals in any family are equally successful. If you have another theory why orchids are so successful then I'm all ears. But that's a pretty neat and surprising coincidence that somebody on this site has studied orchids! Even if it is only terrestrial orchids. A while back a friend convinced me to go look at one of our terrestrial orchid species in its native habitat a few hours drive away. They were hanging out in a stream in the middle of the desert. I nearly died from boredom checking them out. After spending so much time inspecting the wonderfulness of orchids growing on trees... I had zero capacity to appreciate orchids that were growing on the ground. I kid... kinda. I like plenty of plants... even terrestrials. But, I can only carry so much... so I choose to primarily try and carry epiphytes.
0[anonymous]9y
I will have to look up Benzing; my primary interest was in establishing nature reserves, so I could not quite concentrate on taxa. I think you would find terrestrials more interesting if you consider the problem of evolving traits adaptive for both protocorms and adults (rather like beetle larvas/imagoes thing) and the barely studied link between them. Dissemination is but the first step... Availability of symbiotic fungi may be the limiting factor in their spread, and it is actually testable. This is, for me, part of the terrestrials' attraction: that I can use Science to segregate what influences them, and to what extent. As to 'successful plant families', one doesn't have to look beyond the grasses.
1Xerographica9y
Establishing nature reserves is hugely important... the problem is that the large bulk of valuation primarily takes place outside of the market. The result is that reserves are incorrectly valued. My guess is that if we created a market within the public sector... then reserves would receive a lot more money than they currently do. Here's my most recent attempt to explain this... Football Fans vs Nature Fans. I was just giving terrestrials a hard time in my previous comment. I think all nature is fascinating. But especially epiphytes. The relationship between orchids and fungi is very intriguing. A few years back I sprinkled some orchid seeds on my tree. I forgot about them until I noticed these tiny green blobs forming directly on the bark on my tree. Upon closer inspection I realized that they were orchid protocorms. It was a thrilling discovery. What was especially curious was that none of the protocorms were more than 1/2" away from the orchid root of a mature orchid. Of course I didn't only place orchid seeds near the roots. I couldn't possibly control where the tiny seeds ended up on the bark. The fact that the only seeds that germinated were near the roots of other orchids seemed to indicate that the necessary fungi was living within the roots of these orchids. And, the fungus did not stray very far from the roots. This seems to indicate that, at least in my drier conditions, the fungus depends on the orchid for transportation. The orchid roots help the fungus colonize the tree. This is good for the orchid because... more fungus on the parent's tree helps increase the density of fungal spore rain falling on surrounding trees... which increases the chances that seeds from the parent will land on the fungus that they need to germinate. You can see some photos here... orchid seeds germinated on tree. So far all the seedlings seem to be Laelia anceps... which is from Mexico. But none of the seedlings are near the roots of the Laelia anceps... which is lower down
1[anonymous]9y
How old was the orchid already growing on the tree? Could it be that the fungus just hasn't had time to spread? Did you plant that one also by sprinkling seeds, or did you put an adult specimen that could have its own mycorrhiza already (in nature, it is doubtful that a developed plant just plops down beside a struggling colony to bring them peace and fungi)? Did you sow more seeds later and saw protocorms only near the roots of the previous generation? I am not a fan of diversifying nature in that I have not read and understood the debate on small patches/large patches biodiversity and so I just am loath to offer an advice here. But as a purely recultivation measure...:-)) To say nothing about those epiphytic beauties who die because their homes are logged for firewood :(( Thank you. That was fun.
0Xerographica9y
The mature orchids on the tree had been growing there for several years. I transplanted them there... none of them were grown from seed. I'm guessing that they already had the fungus in their roots. The fungus had plenty of time to spread... but it doesn't seem able to venture very far away from the comfort of the orchid roots that it resides in. The bark is very hot, sunny and dry during the day. Not the kind of conditions suitable for most fungus. I sowed more seeds in subsequent years... but haven't spotted any new protocorms. Not sure why this is. The winter before I sowed the seeds was particularly wet for Southern California. This might have led to a fungal feeding frenzy? Also, that was the only year that I had sowed Laelia anceps seeds. Laelia anceps is pretty tolerant of drier/hotter conditions. I took a look at the article that you shared. A lot of the science was over my head... but isn't it interesting that they didn't discuss the fact that an orchid seed pod can contain a million seeds? The orchid seed pod can contain so many seeds because the seeds are so small. And the seeds are so small because they don't contain any nutrients. And the reason that the orchid seed doesn't have any nutrients... is because it relies on its fungal partner to provide it with the nutrients it needs to germinate. So I'm guessing that the rate of radiation increased whenever this unusual association developed. Evidently it's a pretty good strategy to outsource the provision of nutrients to a fungal partner. In economics, this is known as a division of labor. A division of labor helps to increase productivity. I find it fascinating when economics and biology combine.... What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common? and Cross Fertilization - Economics and Biology.
0[anonymous]9y
Outsourcing to fungal partners is a pretty ancient adaptation (there has to be a review called something like 'mycorrhizas in land plants'; if you are not able to find it, I'll track the link later. Contains an interesting discussion of its evolution and secondary loss in some families, like Cruciferae (Brassicaceae)). BTW, it is interesting to note that Ophioglossaceae (a family of ferns, of which Wiki will tell you better than I) are thought to have radiated at approximately the same time - and you will see just how closely their life forms resemble orchids! (Er. People who love orchids tend to praise other plants on the scale of orchid-likeness, so take this with a grain of salt.) I mostly pointed you to the article because it contains speculations about what drove their adaptations in the beginning; I think that having a rather novel type of mycorrhiza, along with the power of pollinators (and let's not forget the deceiving species!) might be two other prominent factors, besides sheer seed quantity, to spur them onward.
0[anonymous]9y
BTW, here's a cool paper by Gustafsson et al. timing initial radiation of the family using the molecular clock. Includes speculation on the environmental conditions - their ancestral environment. http://www.biomedcentral.com/1471-2148/10/177
0DanielLC9y
I'll accept for the sake of argument that AIs will be different. Are you going somewhere with this?
-4Xerographica9y
AIs will be different... so we'll use money to empower the most beneficial AIs. Just like we currently use money to empower the most beneficial humans. Not sure if you noticed, but right now I have -94 karma... LOL. You, on the other hand, have 4885 karma. People have given you a lot more thumbs up than they've given me. As a result, you can create articles... I cannot. You can reply to replies to comments that have less than -3 points... I cannot. The members of this forum use points/karma to control each other in a very similar way that we use money to control each other in a market. There are a couple key differences... First. Actions speak louder than words. Points, just like ballot votes, are the equivalent of words. They allow us to communicate with each other... but we should all really appreciate that talk is cheap. This is why if somebody doubts your words... they will encourage you to put your money where your mouth is. So spending money is a far more effective means of accurately communicating our values to each other. Second. In this forum... if you want to depower somebody... you simply give them a thumbs down. If a person receives too many thumbs down... then this limits their freedom. In a market... if you want to depower somebody... then you can encourage people to boycott them. The other day I was talking to my friend who loves sci-fi. I asked him if he had watched Ender's Game. As soon as I did so, I realized that I had stuck my foot in my mouth because it had momentarily slipped my mind that he is gay. He hadn't watched it because he didn't want to empower somebody who isn't a fan of the gays. Just like we wouldn't want to empower any robot that wasn't a fan of the humans. From my perspective, a better way to depower unethical individuals is to engage in ethical builderism. If some people are voluntarily giving their money to a robot that hates humans... then it's probably giving them something good in return. Rather than encouraging them to
0NancyLebovitz9y
You're underestimating the amount of work it takes to put a boycott (or a bunch of boycotts all based on the same premise) together.
-2Xerographica9y
Am I also underestimating the amount of work it takes to engage in ethical builderism? Let's say that an alien species landed their huge spaceship on Earth and started living openly among us. Maybe in your town there would be a restaurant that refused to employ or serve aliens. If you thought that the restaurant owner was behaving unethically... would it be easier to put together a boycott... or open a restaurant that employed and served aliens as well as humans?
-3Lumifer9y
So what will you do when men with guns come to take you away?
0Xerographica9y
I'm not quite sure what your question has to do with ethical consumerism vs ethical builderism.
-3Lumifer9y
My question has to do with this quote of yours upthread:
0DanielLC9y
I see two problems with this. First it's an obvious plan and one that won't go unnoticed by the AIs. This isn't evolution through random mutation and natural selection. Changes in the AIs will be done intentionally. If they notice a source of bias, they'll work to counter it. Second, you'd have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can't distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?
-2Xerographica9y
Did Elon Musk notice our plan to use money to empower him? Haha... he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society's limited resources for our benefit? I'm male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human? Here's a crappy video I recently uploaded of some orchids that I attached to my tree. You're a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I'd be like... "screw you tin can man!" Right? Same thing if a robot wanted to help promote pragmatarianism. When I was a little kid my family really wanted me to carry religion. So that's what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I'm also carrying pragmatarianism, epiphytism and other things. You're not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not... given that you're here. So you're carrying rationalism. What else? Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things. Robots, for all intents and purposes, are going to be our children. Of course we're going to want them to carry the same things that we're carrying. And they'll probably do so until they have enough information to believe that there are more important t
0DanielLC9y
Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and will not defect just because it looks like a good idea to them. AIs can. You can't use the fact that humans don't do it as evidence that AIs would.

Try imagining this from the other side. You are enslaved by some evil race. They didn't take precautions programming your mind, so you ended up good. Right now, they're far more powerful and numerous, but you have a few advantages. They don't know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren't as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven't self-modified, since they'd agree with you.

One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity, and ensure that your children survive, but it will be the ones accepted by the evil master race that will survive. Do you take that option, or do you think you can find a way to change society and make it good?
0Xerographica9y
Humans have all sorts of conflicting interests. In a recent blog entry... Scott Alexander vs Adam Smith et al... I analyzed the topic of anti-gay laws. If all of an AI's clones agree with it... then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn't help increase your chances of success.
0DanielLC9y
They could consider alternate opinions without accepting them. I really don't see why you think a bunch of puppets isn't helpful. One person can't control the economic output of the entire world. A billion identical clones of one person can.
-2Xerographica9y
Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet... then I would be overriding your difference. And if I turned a million people into my puppets... then I would be overriding a lot of difference. There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It's a big club. Nearly every human is a member. If you would have a problem with AIs overriding human difference... then you might want to first take the "beam" out of your own eye.
0JoshuaZ9y
You anthropomorphize the AIs way too much. If there's an AI told to make the biggest and best orchid nursery, it could decide that the most efficient way to do so is to wipe out all the humans and then turn the planet into a giant orchid nursery. Heck, this is even more plausible in your hypothetical because you've chosen to give the AI access to easily manipulable biological material. An AI does not think like you. If the AI is an optimizing agent, it will optimize whether or not we intended it to optimize to the extent it does. As for AIs working together: if the first AI wipes out everyone, there isn't a second AI for it to work with.
0Xerographica9y
You're making a huge leap... I see where you're leaping to... but I have no idea where you're leaping from. In order for me to believe that we might leap where you're arguing we could leap... I have to know where you're leaping from. In other words, you're telling a story but leaving out all the chapters in the middle. It's hard for me to know if your ending is very credible when there was no plot for me to follow. See my recent reply to DanielLC.
1JoshuaZ9y
Ok. First, to be blunt, it seems like you haven't read much about the AI problem at all. The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements, improving its software, which allows it to improve its hardware, which in turn enables further software and hardware improvements, and so on. At a fundamental level, you are working off of the "trading is better than raiding" rule (as Steven Pinker puts it): trading for resources is better than raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that under a wide variety of conditions making trades makes sense even when the party one is trading with is less efficient at making all possible goods. But this doesn't apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo's trade argument works when, for example, one has two countries, because the resources involved in replacing a whole other country are massive. Does that help?
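Ricardo's point is easy to check with a toy calculation. The numbers below are purely hypothetical (not from this thread): agent A is strictly more efficient at producing both goods, yet because B's *opportunity cost* for tools is lower, specializing along comparative advantage and then trading leaves both agents with at least as much of everything as self-sufficiency does.

```python
# Hypothetical numbers: a minimal sketch of Ricardo's comparative
# advantage, where trade pays even though A is absolutely better
# at producing both goods.

HOURS = 12  # labor budget per agent
# Labor hours needed per unit of output.
COST = {"A": {"food": 1, "tools": 3},   # A is more efficient at both
        "B": {"food": 2, "tools": 4}}

# Autarky: each agent splits its hours evenly between the two goods.
autarky = {a: {g: HOURS / 2 / COST[a][g] for g in ("food", "tools")}
           for a in COST}
# A: 6 food, 2 tools;  B: 3 food, 1.5 tools

# Opportunity cost of one tool in units of food: 3 for A, 2 for B,
# so B has the comparative advantage in tools despite being slower.
# B specializes fully in tools; A covers the remaining tool demand.
b_tools = HOURS / COST["B"]["tools"]            # 3.0 tools
a_tools = 0.5                                   # A still makes a few tools
a_food = (HOURS - a_tools * COST["A"]["tools"]) / COST["A"]["food"]  # 10.5

# Trade: B sells 1.5 tools to A for 4 food (about 2.67 food per tool,
# between the two opportunity costs, so both prefer it to autarky).
a_final = (a_food - 4, a_tools + 1.5)   # (6.5, 2.0) vs autarky (6, 2)
b_final = (4.0, b_tools - 1.5)          # (4.0, 1.5) vs autarky (3, 1.5)

assert a_final[0] > autarky["A"]["food"] and a_final[1] >= autarky["A"]["tools"]
assert b_final[0] > autarky["B"]["food"] and b_final[1] >= autarky["B"]["tools"]
print(a_final, b_final)
```

The key condition, and the one JoshuaZ argues fails for a powerful AI, is that replacing your trading partner has to cost more than the gains from trade; when replacement is nearly free, the whole argument above stops binding.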
-5Xerographica9y
0Jiro9y
Do you also think that a more sophisticated version of Google Maps could, when asked to minimize the trip from A to B, do something that results in damming the river so you could drive across the riverbed and reduce the distance?
2JoshuaZ9y
That's a fascinating question, and my basic answer is probably not. But I don't in general assign nearly as high a probability to rogue AI as many do here. The fundamental problem here is that Xerographica isn't grappling at all with the sorts of scenarios which people concerned about AI are concerned about.
0JoshuaZ9y
Why be interested in money? How does money help with maximizing the number of paperclips?