
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: MrMind 19 October 2016 03:40:28PM *  -1 points [-]

I've got this weird fanfiction where LessWrong is a monastery/school of magic that was abandoned by its creator a long time ago but is still operating, and that has sometimes been attacked by a disgruntled student who was expelled but somehow learned necromancy and returned with an army of meat-puppets.
Now I'll have to incorporate that, due to some random magic accident, the monastery disappeared, but not the rooms inside it.

Comment author: WalterL 17 October 2016 07:44:21PM 7 points [-]

I'd suggest you prioritize your personal security. Once you have an income that doesn't take up much of your time, a place to live, a stable social circle, etc., then you can think about devoting your spare resources to causes.

The reason I'd make this suggestion is that personal security allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn't work out, the fallout can be considerable.

Comment author: DanArmak 13 October 2016 11:19:20PM 6 points [-]

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.

Comment author: WalterL 19 October 2016 09:21:03PM 6 points [-]

My life places me in a position to observe an uncommon number of people repenting and trying to change. As you might expect, humans being what we are, few accomplish their goal.

A fact that I've observed is that NONE of those who other themselves and blame the shard get it done. If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.

So, when you say "I have an addiction", I'm a bit concerned. A LW truism is that we don't have brains, we are brains. We aren't ghosts manning machines, we are machines.

I think it is some old "devil made me do it" stuff. The "other me" isn't real, so energy spent fighting him is wasted. Effort spent changing my behavior might bear fruit.

I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man. You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.

Comment author: Clarity 18 October 2016 12:01:37AM 3 points [-]

Sex and love addiction, sexual compulsions, insecure attachment, risky sexual behaviour, HOCD, HIVOCD

What if you lost the love of your life due to a sexual impulse? What if you recognised sexual impulsivity as a pattern of your behaviour, deeply deeply ingrained into your being, and that you want to overcome it? That’s me.

I chose the name Clarity because when I started to post, I was dipping in and out of psychoses and other really mentally unhealthy states. I would have moments of clarity, inspired by stuff I read in the Sequences and other LessWrong posts, and they would be like gulps of air saving me from drowning in really turbulent water. Now that I’m on some kind of boat, I don’t have to actively think about how to breathe.

Until now, again.

I haven’t posted a lot recently, mainly because I have been doing really, really well. My epic failures, I dare say, have given me a reputation here, and I talk about them freely. But, again, I have been doing well lately.

With an exception. Let me explain:

Since I already have a soldierly mindset due to some abuse in my childhood, I thought I could grow by joining the French Foreign Legion. I had decided against it in the past due to the risk of permanent injury, but considered it again. I decided not to this time because I figured I wouldn’t be able to meet, court and enjoy time with someone, fall in love, etc. – it’s unsuitable for married life (which correlates strongly with happiness), according to this link: https://www.cervens.net/legionbbs123/archive/index.php/t-53.html

Lately I am infatuated with someone. She seems to have the potential to meet my criteria for a good potential wife: communication skills, personality, responsibility, emotional honesty, attractiveness, matching sex drives, and value alignment. I just wish I had some good comebacks for when I’m out and about with an Asian girl and people make comments that make me feel self-conscious. She gives me a different feeling than that bewildered kind of pleasant feeling I would get when the ex-housemate I fell for used to open her small mouth really, really wide in amazement at something, haha. I get more of the nice chill longing of when I think of that cute little housemate listening to hip-hop.

I’ve been thinking about her strong feelings for veganism so I looked up some stuff about the case for veganism.

I decided to go milk-free after watching this: https://m.youtube.com/watch?v=UcN7SGGoCNI Wool-free after watching just 243 of this video: https://m.youtube.com/watch?v=siTvjWE2aVw

So another recent experience really stood out to me as a bad choice, by a similar rationale. I consider myself heteroflexible, or perhaps hetero but rather sexually fluid. On Sunday night I went to a gay sauna; I tossed up a bit between that and a brothel, but decided I preferred the idea of guys this time. I’m a bit anxious and unattached to guys physically, except if it’s porn (which I had watched before going). So I went into a dark room with two guys I later saw were ugly AF, and of course, like previous times, they gave me tonnes of props and validation as a good-looking guy. One guy said he was a cleaner when I asked what he does. The other had scaly, crusty balls. I didn’t stop, unfortunately. And now maybe that sore was herpes or genital warts, and if I got herpes, which is incurable, it might ostracise me from 4/5 of the beautiful women in the world (maybe just not the slutty ones who have that too, and who may just break my heart in time anyway).

Worst case scenario, I just get HIV. I mean, it’s a dark room, anything can happen: a grazing, a bite, a pin prick from some vexed crazy guy. No accountability. In the heat of the moment something could slip off too. And I’m not familiar with much more than the superficial statistics and lore around HIV transmission, like that oral sex could transmit HIV but it’s doubted to happen often – but as a medical researcher I know the quality of research must be judged on a case-by-case basis, and that one should never take an overview’s credibility for granted.

I reflected in the moment and realised I wasn't enjoying myself in the slightest. I think it’s some need for validation, or loneliness, or risk-taking, or a compulsion. Fuck me, autocorrect almost corrected that to compulsive homosexuality. Got to fix that too, or I will be outed.

I think I have HOCD, or something accounted for by these accounts:

I find each of them helpful and hope to revisit them.

http://blogs.psychcentral.com/sex-addiction/2013/03/when-straight-men-are-addicted-to-gay-sex/
http://www.sexaddictionscounseling.com/can-a-straight-man-be-addicted-to-gay-sex/
http://www.brainphysics.com/yourenotgay.php
https://www.google.com.au/amp/m.wikihow.com/Overcome-Sexual-Addiction%3famp=1?client=ms-android-optus-au

If I don't do it again (regardless of where, unless I find myself in a stable relationship with that person before, or within a week) by 1/1/2020, I'll give one of my close friends $141 as a prize to encourage me. If not, I’ll donate the same amount to a sex-, love- and/or romance-focused impulse-control group.

Masturbating alone is hedonically better and it’s safer anyway, what the fuck is wrong with me?

I have an addiction, but I have so much willpower and a track record of discipline. This is the last frontier. Never again.

Comment author: James_Miller 15 October 2016 07:18:08PM 5 points [-]

In ten years, what's the probability that a CRISPR-competent terrorist group could exterminate mankind? If this answer is >1%, the optimal consequentialist anti-terrorist policies should horrify a deontologist.

Comment author: chron 16 October 2016 08:10:29PM *  4 points [-]

Interestingly, no notable historical group has combined both the genocidal and suicidal urges.

Actually, such groups have existed; for example, the Khmer Rouge turned in on themselves after killing their enemies. Something similar happened with the movement led by Zhang Xianzhong, but to a much greater extent: they more-or-less depopulated the province of Sichuan, including killing themselves.

Comment author: turchin 16 October 2016 10:05:22AM *  3 points [-]

In the 20th century, most risks were created by superpowers. Should we include them in the list of potential agents?

Also, it seems that some risks are non-agential, as they result from the collective behavior of a group of agents: arms races, capitalism, resource depletion, overpopulation, etc.

Comment author: SithLord13 15 October 2016 11:25:12PM 4 points [-]

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Disregarding any discussion of legitimate climate concerns, isn't this a really bad decision? Isn't it better to be unblackmailable, to disincentivize blackmail?

Comment author: WalterL 20 October 2016 02:45:03AM 3 points [-]

Sorry, I didn't mean that to be what you took from it.

I used to be fat. (I still am, but not nearly to the same extent.) Like, Jabba fat. My parents got doctors to say that I had an eating disorder, and maybe I did.

Othering my appetite never helped me. "I have an eating disorder" focused my energy on something (my disorder) that didn't have a mind. It couldn't get tired, or bored... it didn't exist. It's like "fighting" cancer.

But that doesn't mean that what worked was thinking "I'm a glutton".

When you say "I am a dumb person", it isn't any closer to a thought you can act on. Kicking yourself when you are down feels good (or, at least, it did for me); it feels like "paying" for the behavior, but that's just thoughts. It doesn't actually change stuff.

I was shooting for more "I am a person who had unprotected sex with sketchy folks at place X". That feels, 'actionable', if you will, to me. Like, if the problem is a sex addiction, I dunno what the solution is. If the problem is being a dumb person, I dunno what the solution is. But if the problem is going to a place and doing stuff, there are a bunch of solutions.

1: Carry protection, everywhere. Put it in something that you carry everywhere (wallet, little thingy on your car keys, cell phone case, whatever). If you ever screw someone sketchy, make sure you take it out and use it. If they aren't willing, maybe that's a spur to reconsider?

2: Enlist the help of the dudes who run the place. Tell them if they see you there, you will give them ten thousand dollars, or however much money would sting. Ask them, as friends, to kick you out. Tell them you have leprosy. Whatever words you have to say to make sure you aren't welcome back there.

3: If this place is pay to play, then ration your funds. Each morning put exactly as much cash as you'll need that day in your wallet, and don't carry a credit card.

I don't know if any of these could work for you, but something similar might. A behavior that you don't want to repeat can always be made more inconvenient. That's what helped me out with eating too much. I hope that you can do a similar thing to get yourself a different habit.

Comment author: Lumifer 18 October 2016 09:29:16PM 3 points [-]

So your current value can be considered a value and none else?

That objection is not logical :-P

It's using your brain mechanics seeking for a higher power

Sorry, don't have those. Maybe somewhere in dusty off-line storage, but certainly not activated.

Because is it really that bad to value logic over all else?

That strikes me as an expression devoid of meaning. Logic is a tool. Tools can be useful or not so much, but tools are not values unto themselves, they just make it easier to reach actual goals.

Do tell, how The One True Value of logic led you to post word salad on LW?

Comment author: ChristianKl 18 October 2016 08:56:25PM 3 points [-]

What empirical evidence have you observed to back your belief that this technique is valuable?

Comment author: Lumifer 18 October 2016 08:33:29PM 3 points [-]

Thank you, I'm not looking for a religious conversion experience.

Nor am I likely to take blind leaps of faith on the say-so of internet strangers. Logic isn't a "value", anyway.

Comment author: Lumifer 18 October 2016 07:57:18PM 3 points [-]

I don't have a most important value.

Comment author: moridinamael 17 October 2016 09:49:54PM 3 points [-]

I am essentially imagining you to be similar to me about five years ago.

It sounds like you are not really excited about anything in your own life. You're probably more excited about far-future hypotheticals than about any project or prospect in your own immediate future. This is a problem because you are a primate who is psychologically deeply predisposed to be engaged with your environment and with other primates.

I used to have similar problems of motivation and engagement with reality. At some point I just sort of became exhausted with it all and started working on "insignificant" projects like writing a book, working on an app, and raising kids. It turns out that focusing on things that are fun and engaging to work on is better for my mental health than worrying about how badly I'm failing to live up to my imagined ideal of a perfectly rational agent living in a Big World.

If I find that I'm having to argue with myself that something is useful and I should do it, then I'm fighting my brain's deeply ingrained and fairly accurate Bullshit Detector Module. If I actually believe that a task is useful in the beliefs-as-constraints-for-anticipated-experience sense of "believe", then I'll just do it and not have any internal dialogue at all.

Comment author: James_Miller 17 October 2016 01:00:05PM *  3 points [-]

Yes, I agree. It shows children are trying to guess the teacher's password and are not doing math. Interestingly, when I asked my son this question he said you couldn't find the answer. When I asked how he knew that, he said he had seen other math problems where you don't have enough information to solve them.

Comment author: SithLord13 17 October 2016 12:51:29PM 3 points [-]

I think the issue here might be slightly different than posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. All their prior math problems have been solvable. They've had problems they couldn't solve, and then been shown it was a mistake on their part. Without good cause, why would they suddenly assume their priors are wrong, and not just that they're failing to grasp the problem? Given their priors and information, it's rational to expect that they missed something.

Comment author: kithpendragon 17 October 2016 12:51:35AM 3 points [-]

I wonder how my coworkers will do...

Comment author: gworley 16 October 2016 12:41:31AM 3 points [-]

Medium makes it a little hard to find the RSS feeds, but it's at:

https://medium.com/feed/map-and-territory

Comment author: CronoDAS 15 October 2016 09:49:23PM 3 points [-]

Is there an RSS feed for new posts?

Comment author: WhySpace 15 October 2016 06:42:05PM 3 points [-]

If the majority of minds with moral weight are the result of an intelligent mind's decision, then the link between complexity and frequency may be weak. Pain is a strong motivator for some things, even if it's bad at motivating creativity, so perhaps there would still be an incentive to create more pain. This is extremely speculative though.

The bigger worry would be that forces like Moloch and evolution may favor pain. Wild animals appear to have much more pain in their lives than pleasure. Even if the carrot were a more effective motivator than the stick for something, if pain were simpler and more robust, evolution would still favor it.

This would be especially important for things like Boltzmann brains. It seems unlikely to me that things like trees or insects can suffer, but if they can, we'd have a very hard time relating to minds so different from our own. With so little evidence, the choice of a good prior is crucial, so it would be useful to have a prior for the predominance of suffering over happiness.

Comment author: James_Miller 20 October 2016 04:00:23PM 2 points [-]

Megyn Kelly walked by me once. If she had handed me a knife and asked me to remove my own heart and give it to her, part of my brain would have felt obligated to comply.

Comment author: siIver 20 October 2016 01:41:10AM *  2 points [-]

This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

Comment author: Manfred 20 October 2016 01:04:15AM *  2 points [-]

Depends on information. If people retain memories, so that each person-moment follows from a previous one, then knowing only that I suddenly find myself in a room means I'm probably in room A. If people are memory-wiped at some interval, then this increases the probability I should assign to being in room B - probability of being in a specific room, given that your state of information is that you suddenly find yourself in a room, is proportional to the number of times "I have suddenly found myself in a room" is somebody's state of information.

The above is in fact true. So here's a fun puzzler for you: why is the following false?

"If you tell me the exact time, then my room is more likely B, because there are 1000 times more people in room B at that time. Since this holds for all times you could tell me, it is always true that my room is probably B, so I'm probably in room B."

Hint: Assuming that room B residents "live" 1,000,000 times longer than room A residents, how does their probability of being in room B look throughout their life, assuming they retain their memories?
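The distinction between sampling persons and sampling person-moments can be checked with a toy Monte Carlo. This is just an illustrative sketch with scaled-down numbers (assumed: 1000 people pass through room A, each staying 1 time-step, while 1 person stays in room B for 1,000,000 time-steps, preserving the puzzle's ratios):

```python
import random

random.seed(0)

# Each entry is (room, number of time-steps spent there).
people = [("A", 1)] * 1000 + [("B", 1_000_000)]

# Uniform over *persons*: with memories retained, each person has exactly
# one "I suddenly find myself in a room" experience, so room A dominates.
person_sample = random.choices([room for room, _ in people], k=100_000)
p_a = person_sample.count("A") / len(person_sample)

# Uniform over *person-moments*: with a memory wipe each step, every
# time-step is a fresh "I find myself in a room" experience, so each
# person is weighted by time spent, and room B dominates.
rooms = [room for room, _ in people]
stays = [stay for _, stay in people]
moment_sample = random.choices(rooms, weights=stays, k=100_000)
p_b = moment_sample.count("B") / len(moment_sample)

print(round(p_a, 3), round(p_b, 3))  # both come out near 1000/1001
```

The same state of information ("I find myself in a room") points to different rooms depending on which reference class — persons or person-moments — it is uniform over, which is exactly the ambiguity the puzzler exploits.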

Comment author: Gram_Stone 20 October 2016 12:22:21AM 2 points [-]

Here's a stab: If I understand you correctly, then every observer's experience is indistinguishable from every other's, so my credence in the proposition "I'm in room A" is 0.999 and my decision policy is "Bet that I'm in room A." If 100 trillion + 100 billion people choose room B, then 100 trillion will lose and 100 billion will win. If 100 trillion + 100 billion people choose room A, then 100 billion will lose and 100 trillion will win.

Comment author: gbear605 19 October 2016 10:35:19PM 2 points [-]

Not OP, but each single person could be in room A for 1/1,000,000 the time that they're in room B. The time doesn't run slower, but they're there less time, producing the same effect.

Comment author: Lumifer 19 October 2016 04:29:46PM 2 points [-]

"You must be able to teleport this far to enter" X-)

Comment author: gwern 19 October 2016 04:12:15PM 2 points [-]

Now I'll have to incorporate that, due to some random magic accident, the monastery disappeared, but not the rooms inside it.

The next hit Japanese LN & anime murder mystery series!

Comment author: turchin 19 October 2016 11:02:27AM 2 points [-]

The page http://lesswrong.com/r/discussion/new/ has been returning an error for me for 12 hours, but other pages are fine. Is it only happening to me?

error text: "You have encountered an error in the code that runs Less Wrong. The site maintainers have been informed and will get to it as soon as they can. In the unlikely event that you've bumped into this error before and think that no-one is paying attention, please report the error and how to reproduce it on http://code.google.com/p/lesswrong/issues/list"

If the error is localised you might still find awesome Less Wrong content in the Main article area or in the Discussion area.

Comment author: chron 19 October 2016 12:06:31AM *  2 points [-]

I don't think I knew that particular stat was an empirical fact, though I wasn't surprised by it. My view, generally, was that blacks in America earned less, had higher incarceration rates, etc. The causes interest me.

Well, the proximate cause of them having higher incarceration rates is them having higher crime rates. The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".

1) is true in at least some cases based on many, many experiences I've had.

How did this "racial bias" manifest itself? Them acting like they believed blacks were more likely to be criminals than whites? Or even willingness to shoot a black who was running at him and grabbing for his gun?

Comment author: chron 18 October 2016 06:38:41PM 2 points [-]

Yes, though mostly indirectly.

In particular, did you know about the different rates of murder committed by blacks and whites before posting the OC?

But I'm wavering. I still believe people are (1) biased based on race (2) this bias can be unconscious and (3) this unconscious bias' effect would be pronounced in a high stress, high consequence environment where someone needed to act quickly (like what police officers face when they are in close proximity to a suspect).

Do you have any evidence for this belief? If so, why haven't you presented it anywhere in this thread? Or does "bias" in this case mean that the cops understand the differences in murder rates?

Comment author: James_Miller 18 October 2016 02:51:04AM 2 points [-]

I didn't get it until I read your comment.

Comment author: CronoDAS 18 October 2016 02:33:12AM *  2 points [-]

Read the first sentence of the problem again.

Comment author: username2 17 October 2016 05:02:08PM *  2 points [-]

This is deserving of a much longer answer which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary: human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side-effect of the materials used. Now, in some cases things are intended to fail early for safety reasons, e.g. to have the shovel break before your bones do. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes.

Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little bit, if you accept the breaking point as a designed-in safety consideration. But it doesn't enlighten us about the hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is that I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires, because there isn't some consistent mathematical utility function underlying our responses. At best it just reveals how we've been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

Comment author: Lumifer 17 October 2016 02:24:31PM 2 points [-]

Reminds me of a slightly different problem:

You are a bus driver and you start with an empty bus. On the first stop 7 people got in. On the second stop 4 more people got in and two people left. On the third stop no one got in and one person left. On the fourth stop 5 people got in, 2 left. On the fifth stop one got in and two left. What is the colour of the driver's eyes?

Comment author: resuf 17 October 2016 09:58:15AM *  1 point [-]

Hey Gleb. I really like your insights on general EA marketing and the way you help people build a local EA community in the Facebook EA Marketing group.

When I first opened this video I was pleasantly surprised that you made such a modern, attractive video about effective giving; exactly what I hoped InIn would do. Unfortunately, again, you put your organisation, Intentional Insights (InIn), on the same level as GiveWell, ACE, TLYCS and GWWC.

Isn't this exactly what the EA community had a problem with?

a) Posting to the forums and EA sites with a much higher frequency than others, creating the impression that InIn was a bigger deal in the EA community than it really was, and b) using the EA brand despite wanting to target laypeople using listicles and clickbait articles.

You updated from b and started to drop the label and go for advocating "effective giving" only, which was great and could no longer taint the EA brand. However, this new video again puts Intentional Insights on the same level as much more rigorously researched organisations with an entirely different target group and an already established good reputation.

This video could have been great if it left out your organisation entirely. Now I don't really want it to get shared. I hope I don't sound too harsh when I say that, from this video, I get the impression that InIn wants to leech off the reputation of the most popular EA organisations.

Comment author: CellBioGuy 17 October 2016 05:57:49AM 2 points [-]

Timing of next post uncertain, three weeks of insane teaching and grant-writing and yeast-poking ahead.

Comment author: chron 16 October 2016 07:57:22PM 2 points [-]

Your example (2) (and arguably (3)) seems like a special case of (6), and it's not at all clear why you're singling out those two particular apocalyptic philosophies, as opposed to, say, ISIS-style Islamic apocalypticism.

Comment author: WhySpace 16 October 2016 04:52:51AM 0 points [-]

Awesome article! I do have a small piece of feedback to offer, though.

Interestingly, no notable historical group has combined both the genocidal and suicidal urges.

No historical group has combined both genocidal and suicidal actions, but that may be because of technological constraints. If we had had nukes widely available for millennia, how many groups do you think would have blown up their own cities?

Without sufficiently destructive technology, it takes a lot more time and effort to completely wipe out large groups of people. Usually some of them survive, and there's a bloody feud for the next 10 generations. It's rare to win sufficiently thoroughly that the group can then commit mass suicide without the culture they attempted genocide against coming back in a generation or two.

There have, of course, been plenty of groups willing to fight to the death. How many of them would have pressed a doomsday button if they could?

Comment author: turchin 15 October 2016 08:33:45PM 2 points [-]

I think that natural evolution of values is part of what it is to be human (and that is why I am against CEV). But here I mean some kind of disruptive revolution in values over a shorter time period, like 20 years. And I think it will not happen in 20 years, as humans have some kind of values inertia.

But on a longer time horizon, new technologies could help spread new "meme-values" quicker, and they will be like computer viruses for human brains, maybe disseminating through brain implants. It could be quick and catastrophic.

Comment author: skeptical_lurker 15 October 2016 01:08:49PM 1 point [-]

Time to Godwin myself:

1930's Germany: The problem with relativity is that it's developed by Jews. We need an ethnically pure physics.

2010's USA : The problem with AI is that it's developed by white men. We need an ethnically diverse compsci.

Comment author: turchin 14 October 2016 03:58:53PM 2 points [-]

The White House also released a PDF with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html

Some interesting lines:

Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

Comment author: Houshalter 20 October 2016 08:48:36PM 1 point [-]

Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant. That's like pointing to the opinions of sailors on global warming: because global warming is about oceans, sailors should be experts on that kind of thing.

I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.

Personally, my favorite name on there is Schmidhuber, who is very well known and, I think, has been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets, instead of the standard machine learning stuff. His opinions on AI risk are nuanced, though; I think he expects AIs to leave Earth and go into space, but he does accept most of the premises of AI risk.

Bostrom did a survey back in 2014 that found AI researchers think there is at least a 30% probability that AI will be "bad" or "extremely bad" for humanity. I imagine that opinion has changed since then as AI risk has become more well known. And it will only increase with time.

Lastly this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as it can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment at all, just an appeal to authority.

Comment author: philosophytorres 20 October 2016 08:06:02PM 1 point [-]

Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)

Comment author: hairyfigment 20 October 2016 05:24:51PM 1 point [-]

I hate this essay immediately, though a lot of that may be your false summary. (The link tries to make clear it is not about "high-status halos" in general.) Defending priests who raped children was not, I think, "primarily an upper-class phenomenon." Those who did the same for Roman Polanski were often rich, but for very different reasons.

Comment author: username2 20 October 2016 02:02:55PM *  0 points [-]

What you've expressed is the outlier, extremist view. Most AI researchers are of the opinion, if they have expressed a thought at all, that there is a long series of things that must happen in conjunction for a Skynet-like failure to occur. It is hardly obvious at all that AI should be a highly regulated research field, like, say, nuclear weapons research.

I highly suggest expanding your reading beyond the Yudkowsky, Bostrom et al. clique.

Comment author: username2 20 October 2016 12:26:33AM *  1 point [-]

Part of my motivation is motivated by hopes of greater social status.

That would be a bad reason to join the FFL. Only a small clique of people would assign higher social status to a legionnaire, current or former. Most, thanks to Hollywood, will think you a psychopath or maybe a fugitive on the run. At best your so-called friends and family will be left wondering "why France?" -- because in their minds the only justifiable reason to serve is patriotism, and how can you be a patriot for a nation that is not your own? You don't even get a French passport at the end unless you are wounded in battle.

So if it is social status seeking by people you already know, then the FFL is not for you. But if you think it's more than that, read on.

I would read some works by David Grossman, specifically "On Killing" and also "On Combat". Maybe start first with this classic essay by him, excerpted from "On Combat":

http://www.killology.com/sheep-wolves-and-sheepdogs

Ask yourself: are you a sheepdog? Not do you want to be a sheepdog, but are you? Is that how you define yourself? If so, the path of a warrior might have value for you. But don't take this question lightly. Talk to those whose opinion you trust and who know you well. Only tell them as many details as you need to, but get their opinion of you. You might recognize things you never realized in how others see you. Maybe reach out to some warrior forums and get some opinions from those who did serve.

If you do join the légion étrangère, it is really in your interests to go all-in. Make sure you excel at everything you do, top of the class. Then volunteer for the 2e REP parachute regiment, which is the special ops branch. Volunteer for as much advanced training as you can get. You'll then serve the remaining 3-4 years of your tour on assignments in various hot spots. Get to know everyone, but particularly focus on those that are the most professional. This will help your career whether you stay with the legion or not.

When you're out you'll have access to the alumni network of people that live not above but outside the rule of law. Those who understand the true nature and basis of governance and serve when it is necessary as its blunt instrument. Your reputation is what gates access to this network.

One thing to note: "soldier of fortune" is misnamed. You will not get rich being a merc. The pay is decent, and tax-free, but we're talking like maybe $100k/yr. You can do way better in tech or finance and without putting your life on the line. Only do this if it is a serious calling.

it's an 'x' (can't remember what they guessed) and an Asian together?" it's a really strange voice - they were two teenage guys. Not quite racist, but it could be in the future. I'm mixed ethnicity but appear either dark-skinned Latino or Arab I guess.

Nah man that's totally racist, and f-- them. I would have been in their faces causing trouble if that was at me or my girl. That goes far beyond "having a good comeback" territory.

Comment author: RomeoStevens 19 October 2016 10:56:13PM *  1 point [-]

Yeah, to rephrase: do we update based on subjective or objective measure of time?

There are two groups of brains, x1 and x2. x1 exists for a million years but only experiences 1000 years of subjective time. x2 exists for 1000 years but experiences a million years of subjective time.

If you don't know which of the groups you are in, you'll update differently depending on which rule you are following: if updating on objective time you'll update towards x1, if updating on subjective time you'll update towards x2. What meta-rule might we propose that would generate differences between x1 and x2? I can't imagine one.

Comment author: turchin 19 October 2016 09:52:17PM 1 point [-]

I don't understand how it could happen: do you mean that time in room B is a million times slower?

In response to comment by MrMind on Quantum Bayesianism
Comment author: TheAncientGeek 19 October 2016 05:52:58PM 1 point [-]

I'm not putting a 100%... sorry, 99.99999% weighting on RQM. But its very existence undermines EY's argument for MWI, because it suggests third alternatives to a number of alleged either/or dichotomies.

Comment author: TheAncientGeek 19 October 2016 04:48:39PM *  1 point [-]

those are perfectly coherent and sound for those who entertain them; we should though not call them "Clippy's, Elves' or Pebblesorters' morality", because words should be used in such a way as to maximize their usefulness in carving reality: since we cannot go outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term and instead use primality or other words.

So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain't no Mama cause she ain't my Mama!

Yudkowsky isn't being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.

And it's not like the issue isn't important, either... obviously the permissibility of imposing one's values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the reason that you are differently mothered, not unmothered.

Comment author: tut 19 October 2016 11:18:50AM 1 point [-]

I seem to have the same thing

Comment author: chron 19 October 2016 01:32:58AM 1 point [-]

The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".

It's not? How do you know?

Police bias seems like it could be directly related to crime rates (since it's the cops who do the arresting).

I'm not sure what you're trying to say here. Are you saying that police framing black suspects is responsible for the statistics showing blacks being over seven times more likely to commit murder than whites? Because that's the only way police bias could be the cause of the crime rates.

How did this "racial bias" manifest itself? Them acting like they believed blacks were more likely to be criminals than whites.

Judgements based only on race.

Well blacks are over seven times more likely to commit a murder than whites so you've failed to establish that the judgement was irrational.

I'm not arguing every white cop who shoots a black person is racist. Not even close.

Well, somehow every prominent example cited by the type of people arguing for police being "racially biased" turns out upon investigation to be similarly justified.

Comment author: morganism 18 October 2016 10:30:24PM 1 point [-]

AI•ON is an open community dedicated to advancing Artificial Intelligence by:

Drawing attention to important yet under-appreciated research problems.
Connecting researchers and encouraging open scientific collaboration.
Providing a learning environment for students looking to gain machine learning experience.

http://ai-on.org/

Comment author: ingive 18 October 2016 09:06:02PM *  1 point [-]

I've seen multiple people having an "awakening" type experience where they value logic over all else, understanding they know nothing and hunger for knowledge. They also want to spread it because it is too unbelievable. It's metaphorically a connection between the emotional and rational part of the brain. They also see how obvious their past behavior was for whatever flawed value they had.

What it seems to be is the next step for humanity - religion and money have been similar. This will spread.

There's also a temporary feeling of euphoria. Sense of peace. The awakening happens in an instant and is permanent. Thus it could be called that. Or a paradigm shift. Or catharsis. Enlightenment, whatever.

It's basically submitting your current core, most important value, to logic. Salvation. Gain the safety of logic over whatever you currently have.

Comment author: username2 18 October 2016 08:49:37PM 1 point [-]

I thought I could grow by joining the French Foreign Legion

I have opinions on this as well, having gotten very close to doing so myself... until I met the love of my life 6 months before my planned one-way ticket to Aubagne. I can advise if you are interested in contemplating this further.

I just wish I had some good comebacks for when a person is out and about with an Asian girl and people making comments that make me feel self-conscious.

Can you give an example? You called her out as Asian -- you mean like racist comments or something? What's your ethnicity?


Regarding the main point of your post, seek out a sex addiction group. I don't think LessWrong can help you here.

Also: get tested for STDs, ASAP. Especially before you see that girl you like again.

Comment author: Lumifer 18 October 2016 07:01:17PM 1 point [-]

/me rolls eyes

Comment author: Lumifer 18 October 2016 07:00:21PM 1 point [-]

I'm not sure, I have different forms of identification that state that they are different colors

Do you have a mirror?

Comment author: Brillyant 18 October 2016 05:10:23PM *  0 points [-]

So have you actually learned anything

Yes, though mostly indirectly. I've learned mostly from reading about neoreactionaries elsewhere. SSC, Moldbug, etc. I'm learning a lot. Very interesting. This discussion was the catalyst for my reading. So, thanks!

from these discussions

Yes, I've learned some directly from this discussion.

Mostly I've learned that people will get internet-hostile about certain topics. I was already aware of this, but my interaction in this discussion has re-cemented the fact in my mind. I've received a recent -37 karma lashing (to date). A lot of my downvoted comments were just simple sincere questions—I'd admitted my ignorance and was really seeking to understand these issues better. Maybe I'm just annoying to people who know more about these things and my questions seem obvious? I could understand people being annoyed... (Note: I kinda like a mixed karma bag. Nothing so negative so as to be perceived as any dumber than I am... but anything north of 90% on LW would worry me.)

I don't recall much of what I learned about this topic directly from this discussion. (Lumifer made some nice points that helped some things make more conceptual sense to me.) At any rate, what I actually learned about this topic from this discussion was a tiny percentage of what I've learned from reading elsewhere.

Also, it appears your account is very new—I'm hypothesizing this fact, along with my recent negative karma streak, along with the stories I've read about others' similar experiences when engaging in such topics...means you are the sock puppet of Eugine Nier. The Eugine Nier. That's super cool! Nice to meet you, Eugine. I've heard so much about you and I'm honored to... have drawn your ire. :)

in particular, are you willing to admit that the Hillary/Kaine analysis of the "implicit biases" of police officers you cited in the OC is wrong?

Yes. Yes, I think there is a bit more to this than I realized.

I still think HRC's point was refreshing in the context of the debate, and potentially useful. But I'm wavering. I still believe people are (1) biased based on race (2) this bias can be unconscious and (3) this unconscious bias' effect would be pronounced in a high stress, high consequence environment where someone needed to act quickly (like what police officers face when they are in close proximity to a suspect). I'm cynical enough about politics not to be excited that HRC's one liner will change anything. Or that she intended it as very much more than a rhetorical judo move in that debate...

Anyway, I'm still thinking and reading about all of this stuff. My current epistemic status is "Oh Shit I'm Soooo Ignorant While Shaking My Head"

A sincere thank you for the interaction.

Comment author: turchin 18 October 2016 12:25:33PM 1 point [-]

What about dissolved oxygen in water, which also supports large animals? Could it happen in underground oceans of icy moons?

Comment author: MrMind 18 October 2016 07:35:56AM 1 point [-]

I would suggest reading "The Subtle Art of Not Giving a F*ck". It's about how to properly choose our own values, how often we are distracted by bigger or impossible goals that exhaust our mental focus and only bring unhappiness, and which smaller, genuinely useful values bring much more happiness.
It seems to be a perfect fit for your situation. It personally saved my life, but as with anything in self-help, your mileage may vary.

Comment author: CellBioGuy 18 October 2016 12:41:03AM 1 point [-]

Dang. Some information I've been pointed to since publishing this suggests that there are multiple groups out there that consider it likely that photosynthesis was present very close to the root of the bacterial tree, and that large numbers of bacterial groups may have lost it rather than it going all around the tree by horizontal transfer. This would put photosynthesis as one of the rather earlier metabolic pathways on Earth.

I've also been pointed towards more modern evidence that oxygenic photosynthesis originated in the original bacterium that created both energy-producing and sulfur-producing photosynthesis, rather than the two coming together via horizontal transfer later.

Comment author: Manfred 17 October 2016 11:41:05PM 1 point [-]

This is a perfectly valid presentation. A better one would be to ditch the lists and just say "0.99 to MWI" or "probability 0.5 of MWI, 0.2 of consciousness causes collapse, the rest distributed among unknown unknowns."

Even better would be to assign a smaller probability to consciousness causes collapse :P

Comment author: chron 17 October 2016 11:31:04PM *  1 point [-]

My views on these issues are largely ignorant and I'm open to learning.

So have you actually learned anything from these discussions, in particular, are you willing to admit that the Hillary/Kaine analysis of the "implicit biases" of police officers you cited in the OC is wrong?

Comment author: hg00 17 October 2016 10:25:59PM *  1 point [-]

I'm familiar with lots of the things Eliezer Yudkowsky has said about AI. That doesn't mean I agree with them. Less Wrong has an unfortunate culture of not discussing topics once the Great Teacher has made a pronouncement.

Plus, I don't think philosophytorres' claim is obvious even if you accept Yudkowsky's arguments.

Fragility of value thesis. Getting a goal system 90% right does not give you 90% of the value, any more than correctly dialing 9 out of 10 digits of my phone number will connect you to somebody who’s 90% similar to Eliezer Yudkowsky. There are multiple dimensions for which eliminating that dimension of value would eliminate almost all value from the future. For example an alien species which shared almost all of human value except that their parameter setting for “boredom” was much lower, might devote most of their computational power to replaying a single peak, optimal experience over and over again with slightly different pixel colors (or the equivalent thereof). Friendly AI is more like a satisficing threshold than something where we’re trying to eke out successive 10% improvements. See: Yudkowsky (2009, 2011).

From here.

OK, so do my best friend's values constitute a 90% match? A 99.9% match? Do they pass the satisficing threshold?

Also, Eliezer's boredom-free scenario sounds like a pretty good outcome to me, all things considered. If an AGI modified me so I could no longer get bored, and then replayed a peak experience for me for millions of years, I'd consider that a positive singularity. Certainly not a "catastrophe" in the sense that an earthquake is a catastrophe. (Well, perhaps a catastrophe of opportunity cost, but basically every outcome is a catastrophe of opportunity cost on a long enough timescale, so that's not a very interesting objection.) The utility function is not up for grabs--I am the expert on my values, not the Great Teacher.

Here's the abstract from his 2011 paper:

A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome,” despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

It sounds to me like Eliezer's point is more about the complexity of values, not the need to prevent slight misalignment. In other words, Eliezer seems to argue here that a naively programmed definition of "positive value" constitutes a gross misalignment, NOT that a slight misalignment constitutes a catastrophic outcome.

Please think critically.

Comment author: Gleb_Tsipursky 17 October 2016 04:39:04PM 0 points [-]

Thanks for your good words about my insights on EA marketing, really appreciate it!

Regarding having InIn in the video, the goal is not to establish any sort of equivalence. In fact, it would be hard to compare the other organizations with each other as well. For instance, GiveWell has a huge budget and vastly more staff than any of the other organizations mentioned in the video. The goal is to give people information on various venues where they could get different types of information. For example, ACE is there for people who care about animal rights, and GWWC is there for people who want a community. InIn is there for people who want easy content to inform themselves about effective giving. This is why InIn is specifically discussed as a venue to get content, not recommendations on effective charities or things like that.

Also, please remember people's priors. This video is not aimed at EAs. The people who watch this video will not have any idea about the popularity of various organizations. InIn would get fine credit within the EA community if we had just produced the video without including InIn itself. The goal is to provide a broad audience with a variety of sources of information about effective giving. We included InIn because it provides some types of content - such as this video - that other orgs do not - as you say, they have a different target group :-)

Comment author: Manfred 17 October 2016 04:26:57PM 1 point [-]

On 1, I'm not sure - How an Algorithm Feels from Inside is the closest I can think of right now. But you might check out Good and Real, by Gary Drescher, which is basically what you're looking for, particular author aside.

On 2, a fairly common opinion is that AI risk will happen on a faster timescale and impact more people than global warming, but attribute this to anyone in particular at your own risk.

Comment author: qmotus 17 October 2016 03:23:21PM 1 point [-]

I'm having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I'd like to do: for a while I have considered doing a Ph.D in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) and hopefully a modest salary that I could donate to worthwhile causes.

But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and maybe a few other existential risks are worth focusing on (not even things that I still consider to be enormously important relative to some others). In principle I could focus on those, as well. I'm not intelligent enough to do serious work on Friendly AI, but I probably could transition, relatively quickly, to working on machine learning and in data science, with perhaps some opportunities to contribute and likely higher earnings.

The biggest problem, however, is that whenever I seem to be on track towards doing something useful and interesting, a monumental existential confusion kicks in and my productivity plummets. This is mostly related to thinking about life and death.

EY recently suggested that we should care about solving AGI alignment because of quantum immortality (or its cousins). This is a subject that has greatly troubled me for a long time. Thinking logically, big world immortality seems like an inescapable conclusion from some fairly basic assumptions. On the other hand, the whole idea feels completely absurd.

Having to take that seriously, even if I don't believe in it 100 percent, has made it difficult for me to find joy in the things that I do. Combining big world immortality with other usual ideas regarding existential risks and so on that are prevalent in the LW memespace sort of suggests that the most likely outcome I (or anybody else) can expect in the long run is surviving indefinitely as the only remaining human, or nearly certainly as the only remaining person among those that I currently know. Probably in an increasingly bad health as well.

It doesn't help that I've never been that interested in living for a very long time, like most transhumanists seem to be. Sure, I think aging and death are problems that we should eventually solve, and in principle I don't have anything against living for a significantly longer time than the average human lifespan, but it's not something that I've been very interested in actively seeking and if there's a significant risk that those very many years would not be very comfortable, then I quickly lose interest. So the theories that sort of make this whole death business seem like an illusion are difficult to me. And overall, the idea does make the mundane things that I do now seem even more meaningless. Obviously, this is taking its toll on my relationships with other people as well.

This has also led me to approach related topics a lot less rationally than I probably should. Because of this, I think both my estimate of the severity of the UFAI problem and of our ability to solve it have gone up, as has my estimate of the likelihood that we'll be able to beat aging in my lifetime - because those are things that seem to be necessary to escape the depressing conclusions I've pointed out.

I'm not good enough at fooling myself, though. As I said, my ability to concentrate on doing anything useful is very weak nowadays. It actually often feels easier to do something that I know is an outright waste of time but gives something to think about, like watching YouTube, playing video games or drinking beer.

I would appreciate any input. Given how seriously people here take things like the simulation argument, the singularity or MWI, existential confusion cannot be that uncommon. How do people usually deal with this kind of stuff?

Comment author: Lumifer 17 October 2016 02:31:10PM *  0 points [-]

I only need to know that the process used to construct it results in a friendly AI.

You are still facing the same problem. Given that you can't recognize friendliness, how will you create or choose a process which will build a FAI? Would you be able to answer "Will it be friendly?" by looking at the process?

the negative parts of human values are entirely eliminated.

That doesn't make much sense. What do you mean by "negative" and from which point of view? If from the point of view of the AI, that's just a trivial tautology. If from the point of view of (at least some) humans, this seems to be not so.

In general, do you treat morals/values as subjective or objective? If objective, the whole "if they knew more" part is entirely unnecessary: you're discovering empirical reality, not consulting with people on what do they like. And subjectivism here, of course, makes the whole idea of CEV meaningless.

Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".

Comment author: SithLord13 17 October 2016 12:34:57PM 1 point [-]

I think the best reason for him to raise that possibility is to give a clear analogy. Nukes are undoubtedly airgapped from the net, and there's no chance anyone with the capacity to penetrate would think otherwise. It's just an easy to grasp way for him to present it to the public.

Comment author: turchin 17 October 2016 11:22:18AM 1 point [-]

I read somewhere in his earlier writing that he hopes that AI will explain to us what qualia is. So it looks like he postponed the consciousness problem until the creation of (friendly) AI.

Comment author: MrMind 17 October 2016 10:39:54AM *  1 point [-]

Would this be an accurate summary of what you think the meta-ethics sequence says? I feel that you captured the important bits, but I also feel that we disagree on some aspects:

  • values that motivate actions (sets of concepts that agents care about) are two-place computations: one slot for the class of beings (and possibly other parameters locating them), the other for individual beings.

V(Elves, _ ) = Christmas spirit
V(Pebblesorters, _ ) = primality
V(Humans, _ ) = morality

If V(Humans, Alice) =/= V(Humans, _ ), that doesn't make morality subjective; it rather indicates that Alice is behaving immorally. V(Humans, _ ) (= morality) exists objectively insofar as it is a computation instantiated by a class of agents at some point in time, but it is not a property of the world independent of the existence of any agents calculating it. Morality is there because of evolution, and it happens to be a complicated and somewhat unexplored landscape, which means that it's also fragile and possibly no one has a hold of its entirety.
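The two-place V( class, individual ) above can be sketched as a function whose first slot selects which computation is being run, while the second slot merely locates an individual inside that class (a toy illustration; the function body below just restates the mapping from the comment, nothing more):

```python
# V is a two-place computation: V(species, individual). Fixing the first
# slot yields the one-place value function for that whole class of beings.
def V(species, individual=None):
    values = {
        "Elves": "christmas-spirit",
        "Pebblesorters": "primality",
        "Humans": "morality",
    }
    return values[species]

# V(Humans, _) is the same computation no matter which human is plugged in,
# which is the sense in which morality stays objective even if a particular
# Alice fails to instantiate it.
assert V("Humans", "Alice") == V("Humans") == "morality"
```

The design point is that disagreement between Alice and V(Humans, _ ) lives in the second slot, not the first: it never changes which computation "morality" names.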

Comment author: siIver 17 October 2016 09:42:40AM *  0 points [-]

Is there a place where Yudkowsky has talked about consciousness? I have found the Zombie Series, but that's not quite what I'm looking for. I'm more curious about how he thinks it works than about why Zombies don't work.

Also, is there a place where Yudkowsky has talked about Climate Change?

I've looked for both, but I couldn't find either.

Comment author: hg00 17 October 2016 09:03:50AM 0 points [-]

Good post!

While not all sociopaths are violent, a disproportionate number of criminals and dictators have (or very likely have) had the condition.

Luckily sociopaths tend to have poor impulse control.

It follows that some radical environmentalists in the future could attempt to use technology to cause human extinction, thereby “solving” the environmental crisis.

Reminds me of Derrick Jensen. He doesn't talk about human extinction, but he does talk about bringing down civilization.

Fortunately, this version of negative utilitarianism is not a position that many non-academics tend to hold, and even among academic philosophers it is not especially widespread.

For details see http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/

This is worrisome because recent research shows that even slight misalignments between our values and those motivating a superintelligence could have existentially catastrophic consequences.

Citation? This is commonly asserted by AI risk proponents, but I'm not sure I believe it. My best friend's values are slightly misaligned relative to my own, but if my best friend became superintelligent, that seems to me like it'd be a pretty good outcome.

Comment author: CellBioGuy 17 October 2016 05:53:02AM *  1 point [-]

Yes, but two of the hydrogens/electrons ripped from water in the process of photosynthesis effectively reduce one of the oxygens from the CO2, regenerating one water: 2 H2O + CO2 -> O2 + CH2O + H2O (multiply atom numbers by various factors to get the actual biomolecules that come out the other side of the various carbon fixation pathways). The two waters are still needed, since the oxygen production all happens via the photosystems splitting water rather than splitting CO2; the photosystems never touch the carbon directly. That water is produced no matter what you're using as your electron donor.
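As a quick sanity check of the stoichiometry in 2 H2O + CO2 -> O2 + CH2O + H2O, one can count atoms on each side (a minimal sketch; the element tallies are mine, not part of the original comment):

```python
# Verify that 2 H2O + CO2 -> O2 + CH2O + H2O balances atom-for-atom.
from collections import Counter

def atoms(terms):
    """Sum element counts over (multiplier, {element: count}) terms."""
    total = Counter()
    for mult, counts in terms:
        for elem, n in counts.items():
            total[elem] += mult * n
    return total

H2O = {"H": 2, "O": 1}
CO2 = {"C": 1, "O": 2}
O2 = {"O": 2}
CH2O = {"C": 1, "H": 2, "O": 1}

left = atoms([(2, H2O), (1, CO2)])
right = atoms([(1, O2), (1, CH2O), (1, H2O)])
assert left == right
print(dict(left))  # {'H': 4, 'O': 4, 'C': 1}
```

Both sides carry 4 H, 4 O, and 1 C, consistent with the point that the O2 comes from the split water while one water is regenerated on the carbon side.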

Comment author: chron 17 October 2016 04:24:11AM 1 point [-]

Stupid technical question. You write:

Oxygen producing photosynthesis, on the other hand, uses water itself as the chemical source of electrons, two H2O for every CO2.

Doesn't the standard equation have one H2O for every CO2?

Comment author: Romashka 16 October 2016 09:21:18PM *  1 point [-]

A recommendation from personal experience (n=1 or 2): translating (or proof-reading) articles for a journal specializing in a field close (but not very close) to your own gives you a more-or-less regular opportunity to read reviews of literature which you wouldn't have thought to survey on your own.

I find it cool. One day, I just browse the net, looking at whatever I look at; the next day, bacteria developing on industrial wastes come knocking. And the advantage of reading the text in my native tongue is that tiny decrease in cognitive power necessary to process the information (more than made up for by the effort of translation, but hey, practice).

Comment author: TheAncientGeek 16 October 2016 12:49:07PM *  1 point [-]

It is not a clear expression of something that can be seen to work

Version 1.

I am obligated to both do and not do any number of acts by any number of shouldness-equations

If that is the case, anything resembling objectivism is out of the window. If I am obligated to do X, and I do X, then my action is right. If I am obligated not to do X, and I do X, my action is wrong. If I am both obligated and not obligated to do X, then my action is somehow both right and wrong... that is, it has no definite moral status.

But that's not quite what you were saying.

Version 2.

There are lots of different kinds of morality, but I am only obligated by human morality.

That would work, but it's not what you mean. You are explicitly embracing...

Version 3.

There are lots of different kinds of morality, but I am only motivated by human morality

There's only one word of difference between that and version 2, which is the substitution of "motivated" for "obligated". As we saw under version 1, it's the existence of multiple conflicting obligations which stymies ethical objectivism. And motivation can't fix that problem, because it is a different thing to obligation. In fact it is orthogonal, because:

You can be motivated to do what you are not obligated to do. You can be obligated to do what you are not motivated to do. Or both. Or neither.

Because of that, version 3 implies version 1, and has the same problem.

Comment author: turchin 16 October 2016 10:01:50AM 1 point [-]

I think that the number of agents will also grow as technologies become more accessible to smaller organisations and even individuals. If a teenager could create a dangerous biovirus as easily as he is now able to write a computer virus to amuse his friends, we are certainly doomed.

Comment author: ignoranceprior 16 October 2016 04:35:10AM 1 point [-]

You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI
