Hell yes this is a problem! Hours worked and hourly pay are very much correlated (it costs a lot to get someone to work that 80th or 120th hour in a week) and part time jobs often don't come with health benefits. Many workers have the opposite problem - the local retail store won't increase their hours and they never hire anyone full time ever because then they would have to provide health insurance.
Sorry, I can't see why situation III is so bad. I generally like the fact that I enjoy spending time with my husband. On the other hand, I can almost imagine the 'and then the zombie apocalypse happened and she had to bash his head in with the frying pan' end of the story. What exactly did you have in mind?
Sex and love addiction, sexual compulsions, insecure attachment, risky sexual behaviour, HOCD, HIVOCD
What if you lost the love of your life due to a sexual impulse? What if you recognised sexual impulsivity as a pattern of your behaviour, deeply deeply ingrained into your being, and that you want to overcome it? That’s me.
I chose the name clarity because when I started to post, I was dipping in and out of psychoses and other really mentally unhealthy states. I would have moments of clarity, inspired by stuff I read in the Sequences and other LessWrong posts, and they would be like gulps of air saving me from drowning in really turbulent water. Now that I'm on some kind of boat, I don't have to actively think about how to breathe.
Until now, again.
I haven't posted a lot recently, mainly because I have been doing really, really well. My epic failures, I dare say, have given me a reputation here, and I talk about them freely. But, again, I have been doing well lately.
With an exception. Let me explain:
Since I already have a soldierly mindset, due to some abuse in my childhood, I thought I could grow by joining the French Foreign Legion. I had decided against it in the past due to the risk of permanent injury, but considered it again. I decided not to this time because I figured I wouldn't be able to meet, court and enjoy time with someone, fall in love, etc. – the Legion is unsuitable for married life (which correlates strongly with happiness), according to this link: https://www.cervens.net/legionbbs123/archive/index.php/t-53.html
Lately I am infatuated with someone. She seems to have the potential to meet my criteria for a good wife: communication skills, personality, responsibility, emotional honesty, attractiveness, matching sex drives, and value alignment. I just wish I had some good comebacks for when I'm out and about with an Asian girl and people make comments that make me feel self-conscious. She gives me a different feeling than that bewildered kind of pleasant feeling I would get when the ex-housemate I fell for used to open her small mouth really, really wide in amazement at something, haha. I get more of the nice chill of longing I feel when I think of that cute little housemate listening to hip-hop.
I’ve been thinking about her strong feelings for veganism so I looked up some stuff about the case for veganism.
I decided to go milk-free after watching this: https://m.youtube.com/watch?v=UcN7SGGoCNI And wool-free after watching just 2:43 of this video: https://m.youtube.com/watch?v=siTvjWE2aVw
So another recent experience really stood out to me as a bad choice, by a similar rationale. I consider myself heteroflexible, or perhaps hetero but rather sexually fluid. On Sunday night I went to a gay sauna (I tossed up a bit between that and a brothel, but decided I preferred the idea of guys this time). I'm a bit anxious about and unattached to guys physically, except if it's porn (which I had watched before going). So I went into a dark room with two guys I later saw were ugly AF, and of course, like previous times, they gave me tonnes of props and validation as a good-looking guy. One guy said he was a cleaner when I asked what he does. The other had scaly, crusty balls. I didn't stop, unfortunately. And now maybe that sore was herpes or genital warts, and if I got herpes, which is incurable, it might ostracise me from 4/5 of the beautiful women in the world (maybe just not the slutty ones who have it too, and who may just break my heart in time anyway).
Worst case scenario, I just get HIV. I mean, it's a dark room; anything can happen: a grazing, a bite, a pin prick from some vexed crazy guy. No accountability. In the heat of the moment something could slip off too. And I'm not familiar with much more than the superficial statistics around HIV transmission and lore, like that oral sex could transmit HIV but they doubt it often happens. As a medical researcher I know the quality of research must be judged on a case-by-case basis, never taking an overview's credibility for granted.
I reflected in the moment and realised I wasn't enjoying myself in the slightest. I think it's some need for validation, or loneliness, or risk-taking, or a compulsion. Fuck me, autocorrect almost corrected that to compulsive homosexuality. Got to fix that too, or I will be outed.
I think I have HOCD, or something accounted for by these accounts:
I find each of them helpful and hope to revisit them.
http://blogs.psychcentral.com/sex-addiction/2013/03/when-straight-men-are-addicted-to-gay-sex/
http://www.sexaddictionscounseling.com/can-a-straight-man-be-addicted-to-gay-sex/
http://www.brainphysics.com/yourenotgay.php
https://www.google.com.au/amp/m.wikihow.com/Overcome-Sexual-Addiction%3famp=1?client=ms-android-optus-au
If I don't do it again by 1/1/2020 (regardless of where, unless I find myself in a stable relationship with that person beforehand or within a week), I'll give one of my close friends $141 as a prize to encourage me. If not, I'll donate the same amount to a sex-, love- and/or romance-focused impulse-control group.
Masturbating alone is hedonically better, and it's safer anyway. What the fuck is wrong with me?
I have an addiction, but I have so much willpower and a track record of discipline. This is the last frontier. Never again.
I am essentially imagining you to be similar to me about five years ago.
It sounds like you are not really excited about anything in your own life. You're probably more excited about far-future hypotheticals than about any project or prospect in your own immediate future. This is a problem because you are a primate who is psychologically deeply predisposed to be engaged with your environment and with other primates.
I used to have similar problems of motivation and engagement with reality. At some point I just sort of became exhausted with it all and started working on "insignificant" projects like writing a book, working on an app, and raising kids. It turns out that focusing on things that are fun and engaging to work on is better for my mental health than worrying about how badly I'm failing to live up to my imagined ideal of a perfectly rational agent living in a Big World.
If I find that I'm having to argue with myself that something is useful and I should do it, then I'm fighting my brain's deeply ingrained and fairly accurate Bullshit Detector Module. If I actually believe that a task is useful in the beliefs-as-constraints-for-anticipated-experience sense of "believe", then I'll just do it and not have any internal dialogue at all.
Hey Gleb. I really like your insights on general EA marketing and the way you help people build a local EA community in the Facebook EA Marketing group.
When I first opened this video I was pleasantly surprised that you had made such a modern, attractive video about effective giving; exactly what I hoped InIn would do. Unfortunately, you again put your organisation, Intentional Insights (InIn), on the same level as GiveWell, ACE, TLYCS and GWWC.
Isn't this exactly what the EA community had a problem with?
a) Posting to the forums and EA sites with a much higher frequency than others, creating the impression that InIn was a bigger deal in the EA community than it really was, and b) using the EA brand despite wanting to target laypeople using listicles and clickbait articles.
You updated from b and started to drop the label and go for advocating "effective giving" only, which was great and could no longer taint the EA brand. However, this new video again puts Intentional Insights on the same level as much more rigorously researched organisations with an entirely different target group and an already established good reputation.
This video could have been great if it left out your organisation entirely. Now I don't really want it to get shared. I hope I don't sound too harsh when I say that, from this video, I get the impression that InIn wants to leech off the reputation of the most popular EA organisations.
Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.
Disregarding any discussion of legitimate climate concerns, isn't this a really bad decision? Isn't it better to be unblackmailable, to disincentivize blackmail?
I think we discussed this previously on LW. In general the argument isn't convincing in his case.
Gilead made $20 billion with a drug that cures one virus. If a pharma company thought this approach had a 10% chance of working to cure all viruses, spending $100 million or more would be very attractive under the current incentive scheme.
I think it's well within the realm of possibility that it could happen a lot sooner than that. 20 years is a long time: 20 years ago the very first crude neural nets were just getting started, and it was only in the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.
I recall notable researchers like Hinton making predictions that "X will take 5 years" and it being accomplished within 5 months. Go is a good example: even a year ago, I think many experts thought it would be beaten within 10 years, but not many thought it would be beaten by 2016. In 2010 machine vision was so primitive that it was a joke how far AI had to come.

In 2015 the best machine vision systems exceeded humans by a significant amount at object recognition.
Google recently announced a neural net chip that is 7 years ahead of Moore's law. Granted, that's only in terms of power consumption, and it only runs already-trained models. But it is nevertheless an example of the kind of sudden leap forward in ability that's possible. Before that, Google started using farms of GPUs that are hundreds of times larger than what university researchers have access to.
That's just hardware though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms. Tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution can develop human brains by just some stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advancement made. Like recently, Google's synthetic gradient paper or hypernetworks.
I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements being made don't generalize to other tasks. That is starting to change, though, as I mentioned in my comment above: DeepMind is focusing on playing games like StarCraft, which requires more focus on planning, recurrence, and reinforcement learning. There is also more focus now on natural language processing, which involves a lot of general-intelligence features.
Lots of other problems with it too. Why is there any last-universal-common-ancestor in this scenario? You would want to drop a full ecosystem with millions of different organisms, each with different FEC shards of data. If you can deliver some bacteria to a virgin planet, you can deliver multiple kinds of bacteria, not just one. Yet, genetics finds that there's a LUCA (not that much of LUCA survives in current genomes).
What are the reasons?
For example, there were 4,636 murders committed by white people and 5,620 murders committed by black people in 2015 (source). On a per-capita basis this makes the by-white murder rate about 2.2 per 100,000 and the by-black murder rate about 16.2 per 100,000.
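The per-capita arithmetic can be sketched in a few lines. Note that the population figures below are back-calculated from the quoted rates, not taken from the comment itself, so treat them as illustrative assumptions:

```python
def per_capita_rate(count, population, per=100_000):
    """Events per `per` people."""
    return count * per / population

# Populations implied by the quoted rates (assumptions, roughly 2015 US figures):
white_pop = 210_000_000
black_pop = 34_700_000

white_rate = per_capita_rate(4636, white_pop)   # ~2.2 per 100,000
black_rate = per_capita_rate(5620, black_pop)   # ~16.2 per 100,000
```

The point of the per-100,000 normalization is that raw counts from groups of very different sizes are not directly comparable.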
Depends on in what way you're having trouble with it. If you need to interact with lots of people in whatever context, I find that taking an initial tone of mildly self-deprecating humor helps smooth things out. If you're the first one to mock yourself, it releases any tension that might be in the air. But then, you should let go of the self-deprecation before it starts to suggest actual low self-confidence.
It can also be good to formulate a pithy explanation for why you don't have the skill, so that you can casually explain the situation without bogging people down. "There weren't any swimming pools near where I grew up." Something short and simple, even if it leaves out important biographical details.
In the vast majority of cases, people are too involved in their own business to even think about you. If I see an adult swimming really badly, I just assume that nobody ever taught them to swim, which is a completely value-neutral assessment, and then continue on with whatever I was thinking about. I recently took a handful of jiu-jitsu lessons and was obviously as useless as a newborn kitten, but I don't really need to offer any kind of expository explanation for this lack of skill, because "just started learning" is a fully self-contained explanation.
In the hypothetical scenario in which there was something to find in Antarctica in the first place: given the thorough scraping the continent has gotten for 20+ megayears from kilometers-deep glaciers, you can't expect to find much at all. The areas not covered by glaciers are generally mountains, which erode; their modern exposed surfaces would have been quite deep underground at the time.
The sorts of things you could actually expect to find would be more along the lines of missing coal seams, long rods of long-ago-oxidized steel poking vertically through multiple strata into areas that would have held petroleum deposits at the time, really deep coal seams turned to ash in situ by underground gasification, hydrothermal features that concentrate copper and silver ore capped by weird craters that obliterate where the highest concentrations would have been with a big pile of copper-depleted gravel nearby. Perhaps odd isotope ratios in a very narrow sediment band if nuclear reactions were ever explored. The ecological effects you would expect on the continent are kind of overshadowed in the ocean sediment record by the worldwide climate event that the PETM represents (6C temperature spike, deep ocean hypoxia, phytoplankton death and repopulation).
It's worth noting that there are probably particular clades that are predisposed to being smart. There's a fascinating book by Dr. Herculano-Houzel ("The Human Advantage") detailing work over the last decade examining brain structure across the mammals. She and her group found something fascinating: neural scaling laws differ from clade to clade. Mammals in general follow a scaling law under which a brain 10x as large has only about 4x as many neurons, because the neurons on average increase in volume (partially due to longer connecting fibers). Primates break this, though: all primate neurons are about the same size, which is remarkably small, the same size as those of a mammal of around 10 grams in mass. A large primate brain is therefore MUCH more powerful than a generic mammal brain of the same mass. Their work since that book came out indicates that birds also break that scaling law and have marvellously efficient brains: all bird neurons are approximately the same size, like primates', but what's more, that size is about one-sixth that of primate neurons. It is an interesting question whether this would also have applied to dinosaurs, their close relatives, who were nonetheless not under the same crazy selective pressure for low weight.
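The two scaling regimes can be sketched numerically. This is a toy illustration only: the generic-mammal exponent is back-derived from the "10x the mass, 4x the neurons" figure quoted above, not Herculano-Houzel's actual fitted values:

```python
import math

def generic_mammal_neuron_factor(mass_factor):
    """Generic mammal rule: 10x the brain mass gives ~4x the neurons,
    i.e. neuron count scales as mass ** log10(4) (roughly mass^0.6)."""
    return mass_factor ** math.log10(4)

def primate_neuron_factor(mass_factor):
    """Primate rule: neuron size stays roughly constant, so neuron
    count scales about linearly with brain mass."""
    return mass_factor

# A brain 100x as massive: ~16x the neurons for a generic mammal,
# but ~100x for a primate.
```

This is why a large primate brain packs so many more neurons than a generic mammal brain of the same mass: the gap widens as a power law as brains get bigger.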
The most striking problem with this paper is how easy all of the tests of viability they used are to game. There are a bunch of simple tests you can do to check for viability, and it's fairly common for non-viable tissue to produce decent-looking results on at least a couple, if you do enough. (A couple of weeks ago, I was reading a paper by Fahy which described the presence of this effect in tissue slices.)
It may be worth pointing out that they only cooled the hearts to -3 C, as well.
Thank you James Lamine, Vaniver, and Trike Apps.
I also wanted to quote something Vaniver has said, but that was unfortunately downvoted below the visibility threshold at the time:
I've pushed for doing things the right way, even if it takes longer, rather than quicker attempts that are less likely to work.
The development of Native Americans has been stunted and they simply exist within the controlled conditions imposed by the new civilization now. They aren't all dead, but they can't actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in. Very limited control over our own affairs in desolate parts of the universe with the addition of welfare payments to give us some sort of quality of life.
You misunderstood my point.
The Europeans did not "proceed with a controlled extermination of the population". Yet, what happened to that population?
You don't need to start with a deliberate decision to exterminate in order to end up with almost none of the original population. Sometimes you just need to not care much.
I'm surprised to find such rhetoric on this site. There is an image now popularized by certain political activists and ideologically-driven cartoons, which depict the colonization of the Americas as a mockery of the D-Day landing, with peaceful Natives standing on the shore and smiling, while gun-toting Europeans jump out of the ships and start shooting at them. That image is even more false than the racist depictions in the late 19th century glorifying the westward expansion of the USA while vilifying the natives.
The truth is much more complicated than that.
If you look at the big picture, there was no conquest in America like the Mongol invasion. There wasn't even a concentrated "every newcomer versus every native" war. The diverse European nations fought among themselves a lot; the Natives also fought among themselves a lot, both before and after the arrival of the Europeans. Europeans allied themselves with Natives at least as often as they fought against them. Even the history of unquestionably ruthless conquistadors like Cortez didn't feature an army of Europeans setting out to exterminate a specific ethnicity: he had only a few hundred Europeans with him, and tens of thousands of Native allies.

If you look at the whole history from the beginning, there was no concentrated military invasion with the intent to conquer a continent. Everything happened over a relatively long period of time. Settlements coexisted peacefully with the natives on multiple occasions and traded with each other, and when conflict developed between them it was no different from conflict at any other place on the planet: conflict develops sooner or later, in the New World just as in the Old. Although there certainly were acts of injustice, the bigger picture is that there was no central "us vs. them", not in any stronger form than the wars the European powers fought among themselves. The Natives had the disadvantage of disease, as other commenters have already stated, but also of smaller numbers, of less advanced societal structures (the civilizations of the Old World needed a lot of time between living in tribes and developing forms of government sufficient to lead nations of millions), and of inferior technology. The term "out-competed" is much more fitting than "exterminated", which is a very biased and politically loaded word.
You cannot compare the colonization of the Americas to the scenario when a starfleet arrives to the planet and proceeds with a controlled extermination of the population.
designing technology is a special case of prediction
It's possible to be very good at prediction but still rather bad at design. Suppose you have a black box that does physics simulations with perfect accuracy. Then you can predict exactly what will happen if you build any given thing, by asking the black box. But it won't, of itself, give you ideas about what things to ask it about, or understanding of why it produces the results it does beyond "that's how the physics works out".
(To be good at design you do, I think, need to be pretty good at prediction.)
In the speed section, you might want to add examples of parallel learning: parallelizing the learning of robot-arm manipulation, or parallel playing of Atari games, both of which are (much) faster in terms of wall-clock time and can also be more sample- and resource-efficient (A3C can actually be more sample-efficient than DQN with multiple independent agents, and it doesn't need to waste a great deal of RAM and computation on experience replay).
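A toy sketch of the parallel-rollout idea behind A3C-style training. This is not a real RL implementation; the workers below are stand-ins, just showing how independent actors generate decorrelated batches with no shared replay buffer:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def rollout(seed, steps=5):
    """Stand-in for one actor playing its own copy of the environment.
    Each worker has its own RNG, so the batches are decorrelated."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(steps)]  # fake transitions

def collect_parallel(n_workers=4):
    """Run n_workers independent rollouts concurrently."""
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        return list(ex.map(rollout, range(n_workers)))

batches = collect_parallel()
# In A3C-style training, each batch would feed a shared model's gradient
# update immediately, instead of being stored in an experience replay buffer.
```

The decorrelation across workers is what lets these methods drop the replay buffer that DQN needs, which is where the RAM and computation savings come from.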
This.
Also, schlep alert: this might be the densest regulatory thicket outside of healthcare, with huge variation in standards at (at least?) the state/province level. In my little environment of 13 million Ontarians, a recent arbitrary change of the teacher/child ratio allegedly drove a good many daycares out of business.
Also, parents are insane (source: am parent).
Seconding resuf's comments: both that this is a pretty good, professional looking video, but also that it's another instance of you seeming to listen to some of the exact-letter-of-the-request when people ask you to stop or do things differently, without understanding the underlying reasons why people are upset.
And that this is especially important if your goal is to be a public-facing outreach organization.
This is similar to Pournelle's Iron Law of Bureaucracy.
Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:
First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.
Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.
The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.
Sorry, I didn't mean that to be what you took from it.
I used to be fat. (I still am, but not nearly to the same extent.) Like, Jabba fat. My parents got doctors to say that I had an eating disorder, and maybe I did.
Othering my appetite never helped me. Saying "I have an eating disorder" focused my energy on something (my disorder) that didn't have a mind. It couldn't get tired, or bored... it didn't exist. It's like "fighting" cancer.
But that doesn't mean that what worked was thinking "I'm a glutton".
When you say "I am a dumb person", it isn't any closer to a thought you can act on. Kicking yourself when you are down feels good (or, at least, it did for me); it feels like "paying" for the behavior, but those are just thoughts. It doesn't actually change stuff.
I was shooting for more "I am a person who had unprotected sex with sketchy folks at place X". That feels "actionable", if you will, to me. Like, if the problem is a sex addiction, I dunno what the solution is. If the problem is being a dumb person, I dunno what the solution is. But if the problem is going to a place and doing stuff, there are a bunch of solutions.
1: Carry protection, everywhere. Put it in something that you carry everywhere (wallet, little thingy on your car keys, cell phone case, whatever). If you ever screw someone sketchy, make sure you take it out and use it. If they aren't willing, maybe that's a spur to reconsider?
2: Enlist the help of the dudes who run the place. Tell them if they see you there, you will give them ten thousand dollars, or however much money would sting. Ask them, as friends, to kick you out. Tell them you have leprosy. Whatever words you have to say to make sure you aren't welcome back there.
3: If this place is pay to play, then ration your funds. Each morning put exactly as much cash as you'll need that day in your wallet, and don't carry a credit card.
I don't know if any of these could work for you, but something similar might. A behavior that you don't want to repeat can always be made more inconvenient. That's what helped me out with eating too much. I hope that you can do a similar thing to get yourself a different habit.
So your current value can be considered a value and none else?
That objection is not logical :-P
It's using your brain mechanics seeking for a higher power
Sorry, don't have those. Maybe somewhere in dusty off-line storage, but certainly not activated.
Because is it really that bad to value logic over all else?
That strikes me as an expression devoid of meaning. Logic is a tool. Tools can be useful or not so much, but tools are not values unto themselves, they just make it easier to reach actual goals.
Do tell, how The One True Value of logic led you to post word salad on LW?
I wonder how my coworkers will do...
EDIT (2016.10.21): In case anybody is interested, the results with my coworkers are...
6 variations of "I don't know" with one outright "You didn't give me any information about the shepherd... he could be any age"
4 numeric answers ranging from 5 to 35
1 got distracted and never answered the question
I've got a party to attend tomorrow, we'll see if they do better.
Interestingly, no notable historical group has combined both the genocidal and suicidal urges.
Actually, such groups have existed; for example, the Khmer Rouge turned in on themselves after killing their enemies. Something similar happened with the movement led by Zhang Xianzhong, only to a much greater extent: they more or less depopulated the province of Sichuan, including killing themselves.
In the 20th century most risks were created by superpowers. Should we include them in the list of potential agents?
Also, it seems that some risks are non-agential, as they result from the collective behavior of groups of agents: arms races, capitalism, resource depletion, overpopulation, etc.
"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia
The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number", but the crux is in the word "good", which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.
https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong
Want free wifi when staying at a hotel? Ask for it. Of course! Duh. Seems so obvious now that I think about it.
Some possible arguments against charity. Personally I think it is normal to donate around 1 per cent of income in the form of charity support.
- Some can't survive on less, or have other obligations that look like charity (child support).
- We would have less incentive to earn more.
- It would hurt our economy, as it is consumer-driven. We must buy iPhones.
- I do many useful things intended to help other people, but I need pleasures to sustain my commitment, so I spend money on myself.
- I pay taxes, and that is like charity.
- I know better how to spend money on my own needs.
- Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
- If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
- If you overdonate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
- Donating 100 per cent will make you look crazy in the eyes of some people, and their will to donate will diminish.
- If you spend more on yourself, you can ask for a higher salary and as a result earn more and donate more. Only a homeless and jobless person could donate 100 per cent.
I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.
There are some methods that may work on NN based approaches. For instance my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short term ones. Or even AIs that don't have goals at all and just make predictions. E.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.
These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.
I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's hard to say what they would do after that; they might just tweak their own RAM to set reward = +Inf, and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.
These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.
The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.
With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.
And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.
Estimated cost of tax evasion per year to the Federal gov is 450B.
Can I ask you to examine the apparent assumption here, that the $450B is all loss? Have you considered the possibility that the people who evaded the tax put the money to good use? Or that the government would not have put that money to good use if it had collected it?
The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.
So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".
I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.
I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.
Worse, the argument can then be made that this idea that an AI will interpret goals so literally without modelling a human mind constitutes an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.
Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:
"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."
Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pearce)
As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.
Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies without subjective experience, but that does seem implied.
This seems as useful as telling depressed people to stop being depressed. Fear of embarrassment is one of the strongest drives humans have. Probably appearing to be a fool in the ancestral environment led to fewer mates or less status. It's not something you can just voluntarily turn off or push through easily.
The best strategy, I think, would be to work around it. Convince your brain that it's not embarrassing. Or that no one cares. Or pretend no one is watching. Or do it around supportive friends.
persufflation
That was a mild pain to google, so I'm leaving what I dug up here so others don't have to duplicate the effort.
Persufflation is perfusion with gaseous oxygen. Perfusion is when fluid going to an organ passes through the lymphatic system or blood vessels to get there.
If I'm reading this correctly, there's no thermodynamic reason to pump the organ full of oxygen gas, but only a biological one. Cells need less oxygen when they're on ice for an organ transplant, but they still consume O2. If this isn't being delivered via blood flow, another source is needed.
I take it that the persufflation is to help with recovering kidneys from liquid nitrogen temperatures, and not in getting there without damage?
EQ is NOT the whole story. As I just noted above in another comment, there is amazing work on brain architecture coming out of the lab of Dr. Suzana Herculano-Houzel, a scientist studying neural structure across the vertebrates. I recommend her book, "The Human Advantage" and all the papers to have come out of her lab recently.
Three important things:
1 - Neural scaling laws differ from clade to clade. In a generic mammal, a brain 10x as large has only 4x as many neurons, so there are diminishing returns to brain mass, probably due to the need to maintain long connecting fibers. Primates break this relationship - all primate brains are roughly equally densely packed, and indeed are as densely packed as the brain of a very small generic mammal. Something changed in primate embryonic development upwards of 50 million years ago, predisposing large primates to have much larger numbers of neurons. (Practical example: it turns out the cerebrum of an elephant is roughly equivalent to that of a chimp, and the largest whales probably correspond to early Homo erectus.)
2 - Humans are actually incredibly generic primates. All of the pieces of our brains fall right on the primate trend lines in terms of size and cell number - our cerebrum is not oversized, it's just that the cerebrum grows faster than other parts with increasing brain size across all the primates. We just happen to have the largest neuron number. Humans also fall right on the body-size-to-encephalization-quotient trendline of the primates; only three primates fall off it - chimps, gorillas, and orangutans, whose brains are much smaller than you'd expect for their body sizes. She hypothesizes, for very sound reasons explored in their papers and her book, that this was due to energy constraints, because brain tissue is energetically expensive, and that humans were able to get back onto the generic primate trendline - with brains as big as you'd expect for a primate of our body mass - once we started cooking and could support the energy requirements of brain tissue.
3 - Birds are another clade that breaks the usual brain scaling laws. Like primates, their neurons do not get bigger with increasing brain size, and bird neurons are roughly 1/6 the size of primate neurons. Thus it turns out that corvids and parrots are packing neuron counts equivalent to many monkeys', which their EQ would never suggest.
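The scaling claim in point 1 can be made concrete with a little arithmetic (the exponents here are back-derived from the "10x mass, 4x neurons" figure in the comment, not taken from Herculano-Houzel's actual fits): if a 10x increase in brain mass yields only 4x the neurons, the generic-mammal exponent is log(4)/log(10) ≈ 0.6, while constant neuron density means primates scale roughly linearly.

```python
import math

def neurons(mass_ratio, exponent):
    """Neuron-count multiplier for a brain `mass_ratio` times larger,
    under power-law scaling with the given exponent."""
    return mass_ratio ** exponent

# Generic mammal: 10x the mass -> 4x the neurons, so the exponent is
# log(4) / log(10) ~= 0.602.
generic_exponent = math.log(4) / math.log(10)

# Primates: neuron density stays constant, so neurons scale ~linearly.
primate_exponent = 1.0

# For a brain 100x as massive:
generic_gain = neurons(100, generic_exponent)  # ~16x the neurons
primate_gain = neurons(100, primate_exponent)  # 100x the neurons
```

This is why a primate brain of a given mass packs far more neurons than a generic mammal brain of the same mass, and why raw brain size (and hence EQ) is a poor proxy for neuron count across clades.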
Missing link: should point to Science Alert.
Article says the Chinese State Food and Drug Administration (CFDA or SFDA) conducted an internal review of the drugs currently pending approval, and found that in more than 80% of cases:
the data failed to meet analysis requirements, were incomplete, or totally non-existent. [Also], many clinical trial outcomes were written before the trials had actually taken place. [...] The report found that pretty much everyone involved was guilty of some kind of malpractice of fraud. [...] even third party independent investigators tasked with inspecting clinical trial facilities are mentioned in the report as being "accomplices in data fabrication due to cut-throat competition and economic motivation".
There's no matching news item on the SFDA site; it probably doesn't have an official version in English. The article linked relies on this and that.
Compare and contrast with Scott Alexander's idea of making the American FDA regulate less. Two ends of a spectrum? Different cultures and markets leading to different outcomes? Similar situations but better hidden in the American case?
OTOH it's plausible they don't have much compelling evidence mainly because they were resource-constrained. I'm still not expecting this to go anywhere, though.
Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.
Everything is heritable:
- "Genome-wide association study of antisocial personality disorder", Rautiainen et al 2016 (GWAS hits on crime)
- "The Causal Effects of Education on Health, Mortality, Cognition, Well-being, and Income in the UK Biobank", Davies et al 2016
- "Shared genetic aetiology of puberty timing between sexes and with health-related outcomes", Day et al 2015 (Most correlations are bad, as predicted by life cycle theory.)
- "Genomic analyses for age at menarche identify 389 independent signals and indicate BMI-independent effects of puberty timing on cancer susceptibility", Day et al 2016b
- "Evidence that low socioeconomic position accentuates genetic susceptibility to obesity", Tyrrell et al 2016
Politics/religion:
- "'Superbug' scourge spreads as U.S. fails to track rising human toll" (The weakness of US public health statistics on the spread of antibiotic resistance.)
- "The Iron Law Of Evaluation And Other Metallic Rules", Rossi 1987
- "The Terrorism Delusion: America's Overwrought Response to September 11", Mueller & Stewart 2012
- "The Disappeared: How the fatwa changed a writer's life"
- Malcolm X's life of crime
AI:
- "WaveNet: A Generative Model for Raw Audio"
- "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning", Zhu et al 2016 (video)
- "Deep Neural Networks for YouTube Recommendations", Covington et al 2016
- "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network", Ledig et al 2016
- "Hyper Networks", Ha et al 2016 (blog)
- "Generative Visual Manipulation on the Natural Image Manifold", Zhu et al 2016b
- "Challenges for Brain Emulation: Why Is It So Difficult?", Cattell & Parker 2012
- NN architectures depicted graphically
Statistics/meta-science/mathematics:
- "Saving Science: Science isn't self-correcting, it's self-destructing. To save the enterprise, scientists must come out of the lab and into the real world."
- "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes", Ord et al 2008
- "Predicting Experimental Results: Who Knows What?", DellaVigna & Pope 2016
- "The Solution of the n-body Problem", Diacu 1996
- "If you went outside and lay down on your back with your mouth open, how long would you have to wait until a bird pooped in it?"
- /r/estimation
Psychology/biology:
- "Morphometricity as a measure of the neuroanatomical signature of a trait", Sabuncu et al 2016 (Heritability/variance component estimation generalized to brain volume/thickness: demonstrates that brain structure can predict a large fraction of variance in Alzheimer's & aging (~1), IQ (0.95), etc, and so those traits have causal relationships (of some sort) with brain volume/thickness. While the causal relationships may not turn out to be interesting (we already knew brain volumes and thicknesses are catastrophically affected by aging and Alzheimer's), it does at least imply that as brain imaging datasets get larger, they will get ever better at predicting whether a subject has Alzheimer's or how intelligent a person is. Hopefully we'll see variance components taken seriously outside of genetics. If power analysis tells you whether you have enough light to find the needles in the haystack, variance components can tell you whether there are even any needles to look for.)
- "Treatment of Psychopathy: A Review of Empirical Findings", Harris & Rice 2006
- "How to Raise a Genius: Lessons from a 45-Year Study of Super-smart Children"
- "Does Reading a Single Passage of Literary Fiction Really Improve Theory of Mind? An Attempt at Replication", Panero et al 2016
- "Failing Your Goals with Beeminder"
- "Evidence That Computer Science Grades Are Not Bimodal", Patitsas et al 2016
- "Thomas Jefferson Defends America With a Moose"
- "Syphilis in Renaissance Europe: rapid evolution of an introduced sexually transmitted disease?", Knell 2004
- "How to confuse a moral compass: Survey 'magic trick' causes attitude reversal"
- "Melatonin Treatment Effects on Adolescent Students' Sleep Timing and Sleepiness in a Placebo-Controlled Crossover Study", Eckerberg et al 2012
Technology:
- "Capacity-approaching DNA storage", Erlich & Zielinski 2016 (If DNA storage gets real-world usage, it might help accelerate the DNA synthesis cost-curve, and we could get whole genome synthesis years before I project!)
- "Breakthrough silicon scanning discovers backdoor in military chip", Skorobogatov & Woods 2012
- "Fully Countering Trusting Trust through Diverse Double-Compiling", Wheeler 2009
- "Turning 8-Bit Sprites into Printable 3D Models"
- "Magic: the Gathering is Turing Complete"
Economics:
- "Do Immigrants Import Their Economic Destiny? How migration shapes the prosperity of countries"
- "When It Rains It Pours: The Long-run Economic Impacts of Salt Iodization in the United States", Adhvaryu et al 2016
- "Signaling and Productivity in the Private Financial Returns to Schooling", Bingley et al 2015 (As I've mentioned before, even if you aren't all that interested in heritability or genetic correlations, twins and family studies are still vital for causal inference in economics/medicine/sociology because they control for so many things.)
- "China's Gold Rush in the Hills of Appalachia: Buyers in Hong Kong and Beijing are paying top dollar for wild American ginseng, fueling a digging frenzy that could decimate the revered root for good"
- "Good Policy or Good Luck? Country growth performance and temporary shocks", Easterly et al 1993
- Experience curve effects
- "Ramit Sethi and Patrick McKenzie on Getting Your First Consulting Client"
- "Lehman Brothers, We Heard You Were Dead"
Philosophy:
- "Logical Induction", Garrabrant et al 2016
- "Not By Empathy Alone"
- "The Wisest Steel Man"
Fiction:
Ted Chiang:
This article is an example of looking at the world pragmatically, and acknowledging an actual truth. Kudos to the writers.
It reminds me of the scene at the start of Bad Boys II, where the drug kingpin has a giant pile of paper cash, and rats are nesting in it.
Kingpin: "This is a STUPID problem to have." ... Kingpin: "But it IS a problem. Hire exterminators."
Similarly, politics getting in the way of transforming the world with its irksome interest in transforming the world is exactly the sort of thing that clear eyed futurists need to figure on.
When pushed on why he is out interviewing people, Anthony Magnabosco responds with, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of the truth, but his root goal is to change minds. If obtaining the truth were his primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising, with questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.
Honesty and effectiveness appear to be conflicting goals in street epistemology.
I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.
Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.
I also checked the post after that to see if the edit still went through, and it didn't. In other words, my edit did not get saved.
Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.
I also enjoyed the linked Politics Is Upstream of Science, which went in-depth on the state interventions in science talked about in the beginning of this piece.
I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking
“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.
Many refugees may never be able to return home, and that reality needs to be recognized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.
From City of Thorns:
The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.
Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.
Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)
The Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.
Can someone here come up with any sort of realistic value system an alien civilisation might have that would result in it not destroying the human race, or at least not permanently stunting our continued development, should it become aware of us?
Not being bored. Living systems (and presumably more so for living systems that include intelligence) show more complex behavior than dead systems.
The part about healthcare is USA-specific, but the relationship between total hours and total pay is nonlinear at other places, too.
In Slovakia, healthcare is set up so that everyone pays a fixed fraction of their income, and then everyone receives exactly the same healthcare regardless of how much they paid. So it shouldn't have any impact on the hourly rate.
Yet it is difficult to find part-time work on the market. When I tried it, I had to work for 50% of my previous salary just to reduce the work to 4 days a week, and the employer still believed they were doing me a favor. (After a few weeks I decided that getting 50% of the money for 80% of the time is not a smart deal, so I quit.)
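The arithmetic behind that decision is worth spelling out (using the figures from the comment itself): 50% of the pay for 80% of the hours means the effective hourly rate drops to 62.5% of the original.

```python
# Effective hourly rate when moving to part-time on the terms described:
# 50% of the salary for 80% of the hours (4 days instead of 5).
pay_fraction = 0.50
hours_fraction = 0.80

hourly_rate_fraction = pay_fraction / hours_fraction  # 0.625

# The hourly rate falls to 62.5% of the full-time rate, i.e. a 37.5%
# pay cut per hour actually worked - the hidden cost of going part-time.
```

So the part-time offer wasn't just "less money for less work"; it was a steep cut in the price of each hour worked.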
I believe the problem is signalling. Almost everyone is okay with working full-time; especially men. (Women can use having small kids as an excuse for a part-time job, but that also dramatically reduces their hourly rate, which is an important part of the pay gap.) If you are a man unwilling to work full-time, it makes you weird.
So it's not like the employer literally needs you there 5 days a week. It's simply a decision not to hire a weirdo when there are non-weird candidates available. If you differ from the majority by not being willing to work 5 days a week, 8 hours a day, who knows what else is weird about you? Why take the unnecessary risk? Also, well-paid employees are supposed to pretend they love their job; and by asking for a part-time job you show too clearly that you actually care about something else more.
Thus, I sometimes had jobs where I was able to spend up to 50% of my working time just browsing websites from the company computer. But no comparably well paid option where I could officially work 4 days a week, or 6 hours a day, and then simply go home.
(I was also trying to get home office, so that instead of browsing the web I could do something useful. But the companies where the employees spend much time online are usually on some level aware of what is happening, so they don't allow home office. As long as everyone must stay in the building the whole day, the management can keep pretending that people are actually working.)
I believe that if, for example, 50% of people working in some profession demanded part-time work, this problem would mostly disappear. Then wanting to work part-time would simply be normal. But that's a coordination problem, and I don't even know how many people would actually be interested in working part-time if it were a legitimate option (with the same hourly rate).
The Europeans did not "proceed with a controlled extermination of the population". Yet, what happened to that population?
They still exist... so they were not exterminated? They did not carry out purposeful extermination, and in fact the indigenous people were not exterminated. So what exactly are you arguing?
The only thing that was truly devastating to indigenous populations was smallpox exposure, and that was an accident. There were also lots of internal wars, famine, civilization collapse, etc. But most of that was triggered by the 30+% die-off from the smallpox plague.
The fact that Europeans outnumber indigenous people 100:1 in North America (less so in Central and South America) isn't the result of some purposeful master plan of the European colonialists. It's just the inevitable outcome of a number of historical accidents with compounding effects.
If we developed practical interstellar travel, and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit into our values to consider exterminating them as the primary choice. And how did we develop our values like this? I guess at least in some part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even those which got extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (unless in rare circumstances, when they were a pest or a dangerous disease vector).
Unless the extraterrestrial species are the only macroscopic life-form on their planet, it's likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.
Should probably have been posted in the open thread (not meant as a reproach)
The premise this article starts with is wrong. The argument goes that AIs can't take over the world, because they can't predict things much better than humans can. Or, conversely, that they will be able to take over because they can predict much better than humans.
Well, so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope for AI is that it will be able to design technology much better than humans. Humans didn't evolve to be engineers or computer programmers; it's really just an accident that we are capable of it. Humans have such a hard time designing complex systems, keeping track of so many different things in our heads, etc. Already these jobs are restricted to unusually intelligent people.
I think there are many possible optimizations to the mind that would improve these kinds of tasks. There are rare humans who are very good at them, showing that human brains aren't anywhere near the peak. An AI optimized for them will be able to design technologies we can't even dream of. We could theoretically make nanotechnology today, but there are so many interacting parts and complexities that humans are just unable to manage it. The internet has so much buggy software running it. It could probably be pwned in a weekend by a sufficiently powerful programming AI.
And the same is perhaps true of designing better AI algorithms: an AI optimized for AI research would be much better at it than humans.
Along the lines of my earlier GCTA, I've written a Wikipedia article on genetic correlations.
what's the most annoying part of your life/job?
Pain. Moderate but constant pain from old sports injuries makes me: spend money on pain meds and counter irritants, work longer hours because the pain is distracting and reduces my productivity, limit physical activity and travel, deviate from an optimal exercise routine, fall into a black hole of grumpiness occasionally.
how much would you pay for a solution?
If by "solution" you mean an easy, one-time, guaranteed fix: $10,000
You are literally asking me to solve the FAI problem right here and now.
No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.
You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want.
Learning what humans want is pretty easy. However it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.
From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.
Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative. Since I don't expect to find myself in a privileged position, I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.
Later you say that CEV will average values. I don't have average values.
If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.
I see no evidence to believe this is true and lots of evidence to believe this is false.
You are essentially saying that religious people are idiots, and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.
I don't find it convincing. Even though it's long, I don't recognize any of the examples as being 'Ra'-like, and I can't think of any examples of 'Ra' in my own experience. The name 'Ra' is also not that great: unlike some of the other reifications going around, such as Yvain's 'Moloch', which at least have some intuitive connection with their concept, 'Ra' seems pretty much arbitrary.
EDIT: Obormot and saturn2 on IRC note that 'Ra' seems in her telling to slightly overlap with the whole complacent-elite meritocracy going on in the Ivy League & Wall Street, of the Twilight of the Elites type.
most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Because that statement is simply false. Researchers do deal with real world problems and datasets. There is a huge overlap between research and practice. There is little or no overlap between AI risk/safety research, and current machine learning research. The only connection I can think of, is that people familiar with reinforcement learning might have a better understanding of AI motivation.
Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
I didn't say there wasn't dissent. I said it wasn't an outlier view, and seems to be the majority opinion.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Look I'm sorry if I came across as overly hostile. I certainly welcome any debate and discussion on this issue. If you have anything to say feel free to say it. But your above comment didn't really add anything. There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.