
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: turchin 11 October 2016 02:03:47PM *  3 points [-]

Some possible arguments against charity. Personally, I think it is normal to donate around 1 per cent of one's income to charity.

  1. Some people can't survive on less, or have other obligations that look like charity (e.g. child support).
  2. We would have less incentive to earn more.
  3. It would hurt our economy, as it is consumer-driven. We must buy iPhones.
  4. I do many useful things intended to help other people, but I need pleasures to renew my commitment, so I spend money on myself.
  5. I pay taxes, and that is like charity.
  6. I know better how to spend money on my own needs.
  7. Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
  8. If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
  9. If you over-donate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
  10. Donating 100 per cent will make you look crazy in the eyes of some, and their will to donate will diminish.
  11. If you spend more on yourself, you can ask for a higher salary and as a result earn more and donate more. Only a homeless and jobless person could donate 100 per cent.
Comment author: Houshalter 10 October 2016 08:07:29PM *  4 points [-]

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work on NN-based approaches. For instance, my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short-term ones. Or even AIs that don't have goals at all and just make predictions, e.g. predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that; e.g. they might just tweak their own RAM to set reward = +Inf and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.

Comment author: skeptical_lurker 10 October 2016 06:21:41PM 4 points [-]

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: ChristianKl 10 October 2016 02:02:25PM 4 points [-]

Get employed by Google.

Comment author: turchin 10 October 2016 12:46:06PM 3 points [-]

There are five times more members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9800) than in the group "Existential risks" (1880). What should we conclude from this?

Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.

Comment author: waveman 07 October 2016 09:51:03PM 3 points [-]

Estimated cost of tax evasion per year to the Federal gov is 450B.

Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?

Comment author: waveman 07 October 2016 11:33:13AM 4 points [-]

A related concept is "inferential distance" - people can only move one step at a time from what they know.

Also typical mind fallacy.

Comment author: gjm 06 October 2016 06:42:39PM -1 points [-]

The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.

So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".

I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.

Comment author: Lumifer 05 October 2016 09:00:47PM 3 points [-]

You asked why is "the incidence of police encounters with blacks elevated". This is a direct answer.

If you want to know the reasons for different crime rates, this is going to get long and complicated.

Comment author: James_Miller 04 October 2016 11:30:04PM 4 points [-]

I'm extremely interested in the last three of these, especially the Fermi paradox one. Great essays.

Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that an AI which interprets goals this literally, without modelling a human mind, would be an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pierce):

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pierce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he thinks that autistic people are zombies with no subjective experience, but that also does seem to be implied.

Comment author: Houshalter 03 October 2016 07:26:35PM *  4 points [-]

This seems as useful as telling depressed people to stop being depressed. Fear of embarrassment is one of the strongest drives humans have. Probably appearing to be a fool in the ancestral environment led to fewer mates or less status. It's not something you can just voluntarily turn off or push through easily.

The best strategy, I think, would be to work around it. Convince your brain that it's not embarrassing. Or that no one cares. Or pretend no one is watching. Or do it around supportive friends.

Comment author: WhySpace 03 October 2016 03:42:46PM *  4 points [-]

persufflation

That was a mild pain to google, so I'm leaving what I dug up here so others don't have to duplicate the effort.

Persufflation is perfusion with gaseous oxygen. Perfusion is when fluid going to an organ passes through the lymphatic system or blood vessels to get there.

If I'm reading this correctly, there's no thermodynamic reason to pump the organ full of oxygen gas, but only a biological one. Cells need less oxygen when they're on ice for an organ transplant, but they still consume O2. If this isn't being delivered via blood flow, another source is needed.

I take it that the persufflation is to help with recovering kidneys from liquid nitrogen temperatures, and not in getting there without damage?

Comment author: username2 03 October 2016 12:08:16PM 4 points [-]

How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.

Comment author: CellBioGuy 02 October 2016 08:14:09PM *  4 points [-]

EQ is NOT the whole story. As I just noted above in another comment, there is amazing work on brain architecture coming out of the lab of Dr. Suzana Herculano-Houzel, a scientist studying neural structure across the vertebrates. I recommend her book, "The Human Advantage" and all the papers to have come out of her lab recently.

Three important things:

1 - Neural scaling laws differ from clade to clade. In a generic mammal, a brain 10x as large has only 4x as many neurons, so there are diminishing returns to brain mass, probably due to the need to maintain long connecting fibers. Primates break this relationship - all primate brains are roughly equally densely packed, and indeed are as densely packed as the brain of a very small generic mammal. Something changed in primate embryonic development upwards of 50 million years ago, predisposing large primates to have much larger numbers of neurons. (Practical example: it turns out the cerebrum of an elephant is roughly equivalent to that of a chimp, and the largest whales probably correspond to early Homo erectus.)

2 - Humans are actually incredibly generic primates. All of the pieces of our brains fall right on the primate trend lines in terms of size and cell number - our cerebrum is not oversized, it's just that the cerebrum grows faster than other parts with increasing brain size across all the primates. We just happen to have the largest neuron number. And also, humans fall right on the body size to encephalization quotient trendline of all the primates, with only 3 primates falling off the trendline - chimps, gorillas, and orangutans are below the trendline with brains much smaller than you'd expect for their body sizes. She hypothesizes, for very sound reasons explored in their papers and her book, that this was due to energy constraints because brain tissue is energetically expensive, and that humans were able to get back onto the generic primate trendline and have brains as big as you'd expect for a primate of our body mass once we started cooking and could support the energy requirements of brain tissue.

3 - Birds are another clade that breaks the usual brain scaling laws. Their neurons do not get bigger with increasing brain size, much like primates, except that their neurons are ~6x smaller than primate neurons. Thus, it turns out that corvids and parrots are packing brains equivalent to those of many monkeys, something their EQ would never suggest.

Comment author: DanArmak 02 October 2016 07:43:42AM *  4 points [-]

Missing link: should point to Science Alert.

The article says the Chinese State Food and Drug Administration (CFDA or SFDA) conducted an internal review of the drugs currently pending approval, and found that in more than 80% of cases:

the data failed to meet analysis requirements, were incomplete, or totally non-existent. [Also], many clinical trial outcomes were written before the trials had actually taken place. [...] The report found that pretty much everyone involved was guilty of some kind of malpractice of fraud. [...] even third party independent investigators tasked with inspecting clinical trial facilities are mentioned in the report as being "accomplices in data fabrication due to cut-throat competition and economic motivation".

There's no matching news item on the SFDA site; it probably doesn't have an official version in English. The article linked relies on this and that.

Compare and contrast with Scott Alexander's idea of making the American FDA regulate less. Two ends of a spectrum? Different cultures and markets leading to different outcomes? Similar situations but better hidden in the American case?

Comment author: Fluttershy 02 October 2016 01:09:38AM 4 points [-]

OTOH it's plausible they don't have much compelling evidence mainly because they were resource-constrained. I'm still not expecting this to go anywhere, though.

Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.

Comment author: gwern 01 October 2016 05:03:00PM 4 points [-]

Everything is heritable:

Politics/religion:

AI:

Statistics/meta-science/mathematics:

Psychology/biology:

Technology:

Economics:

Philosophy:

Fiction:

Comment author: WalterL 29 September 2016 02:42:11PM 4 points [-]

This article is an example of looking at the world pragmatically, and acknowledging an actual truth. Kudos to the writers.

It reminds me of the scene at the start of Bad Boyz 2, where the drug kingpin has a giant pile of paper cash, and rats are nesting in it.

Kingpin: "This is a STUPID problem to have." ... Kingpin: "But it IS a problem. Hire exterminators."

Similarly, politics getting in the way of transforming the world, with its own irksome interest in transforming the world, is exactly the sort of thing that clear-eyed futurists need to figure on.

Comment author: RainbowSpacedancer 29 September 2016 11:45:43AM 4 points [-]

When pushed on why he is out interviewing people, Anthony Magnabosco responds with, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of truth, but his root goal is to change minds. If obtaining the truth were your primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising. Questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.

Honesty and effectiveness appear to be conflicting goals in street epistemology.

In response to Linkposts now live!
Comment author: VipulNaik 28 September 2016 10:56:22PM *  4 points [-]

I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.

Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.

I also checked the post after that to see if the edit still went through, and it didn't. In other words, my edit did not get saved.

Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.

Comment author: iceman 28 September 2016 09:49:14PM 4 points [-]

I also enjoyed the linked Politics Is Upstream of Science, which went in-depth on the state interventions in science talked about in the beginning of this piece.

Comment author: Houshalter 28 September 2016 05:33:31PM 4 points [-]

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be recognized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos JĂłvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)

The Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

Comment author: vallinder 27 September 2016 05:28:28PM 4 points [-]

I don't think it's fair to say that "nobody understood induction in any kind of rigorous way until about 1968." The linked paper argues that Solomonoff prediction does not justify Occam's razor, but rather that it gives us a specific inductive assumption. And such inductive assumptions had previously been rigorously studied by Carnap among others.

But even if we grant that assumption, I don't see why we should find it surprising that science made progress without having a rigorous understanding of induction. In general, successfully engaging in some activity doesn't require having a rigorous understanding of that activity, and making inductive inferences is something that comes very naturally to human beings.

Moreover, it seems that algorithmic information theory has (at best) had extremely limited impact on actual scientific practice in the decades since the field was born. So even if it does constitute the first rigorous understanding of induction, the lesson seems to be that scientific progress does not require such an understanding.

Comment author: username2 27 September 2016 09:04:33AM 3 points [-]

tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event

Is that actually true? I've lived through many US presidential eras, including multiple ones defined by "change." Nothing of consequence really changed. Why should this be any different? (Rhetorical question, please don't reply as the answer would be off-topic.)

Consider the possibility that if you want to be effective in your life goals (the point of rationality, no?) then you need to do so from a framework outside the bounds of political thought. Advanced rationalists may use political action as a tool, but not in the search for truth that we care about here. Political commentary has little relevance to the work that we do.

Comment author: MattG2 20 September 2016 03:52:15PM *  4 points [-]

Let's say I have a set of students, and a set of learning materials for an upcoming test. My goal is to run an experiment to see which learning materials are correlated with better scores on the test via multiple linear regression. I'm also going to make the simplifying assumption that the effects of the learning materials are independent.

I'm looking for an experimental protocol with the following conditions:

  1. I want to be able to give each student as many learning materials as possible. I don't want a simple RCT, but a factorial experiment where students get many materials and the statistics tease out the individual effects via linear regression.

  2. I have a prior about which learning materials will do better; I'd like to use this prior by initially distributing those materials to more students.

  3. (Bonus) Students are constantly entering this class; I'd love to be able to do some multi-armed bandit thing where, as I get more data, I continually update this prior.

I've looked at most of the links from https://en.wikipedia.org/wiki/Optimal_design but they mostly show the mathematical formulation of each method, not a clear explanation of the conditions under which you'd use it.
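For concreteness, here's a rough sketch of the kind of protocol I have in mind (not something I trust yet: the priors, noise level, baseline score, and the simulated "true" effects are all made-up placeholders, and it leans heavily on the independence assumption above):

```python
# A minimal sketch, not a vetted protocol: Bayesian linear regression over the
# material effects (assumed independent), with Thompson sampling to pick which
# materials each incoming student gets. Every number below is invented.
import numpy as np

rng = np.random.default_rng(0)
n_materials = 5
baseline = 50.0          # assumed mean score with no materials
noise_var = 25.0         # assumed variance of test-score noise

# Prior on the effect of each material (point 2: encode your hunches here).
prior_mean = np.array([2.0, 1.0, 0.5, 0.0, 0.0])
prior_precision = np.diag(np.full(n_materials, 0.25))   # 1 / prior variance of 4

# Posterior starts at the prior and is updated as scores come in (point 3).
post_precision = prior_precision.copy()
post_precision_mean = prior_precision @ prior_mean

def assign_materials(k=3):
    """Thompson sampling: sample an effect vector from the current posterior
    and give the student the k materials with the largest sampled effects."""
    cov = np.linalg.inv(post_precision)
    mean = cov @ post_precision_mean
    sampled = rng.multivariate_normal(mean, cov)
    return np.argsort(sampled)[::-1][:k]

def update(x, score):
    """Standard conjugate update for Bayesian linear regression with known
    noise variance; x is the 0/1 vector of materials the student received."""
    global post_precision, post_precision_mean
    post_precision = post_precision + np.outer(x, x) / noise_var
    post_precision_mean = post_precision_mean + x * (score - baseline) / noise_var

# Simulate a stream of incoming students (point 3).
true_effects = np.array([3.0, 0.5, 1.0, -1.0, 0.0])   # unknown in reality
for _ in range(200):
    chosen = assign_materials()
    x = np.zeros(n_materials); x[chosen] = 1.0
    score = baseline + x @ true_effects + rng.normal(0, np.sqrt(noise_var))
    update(x, score)

post_mean = np.linalg.solve(post_precision, post_precision_mean)
print("posterior mean effect of each material:", np.round(post_mean, 2))
```

If this is roughly sane, the Thompson-sampling step would cover the bandit part of point 3, and the prior means cover point 2; but I'd welcome pointers to a more principled factorial design.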

Thanks!

Comment author: Gleb_Tsipursky 20 September 2016 12:46:22PM -2 points [-]

I agree that it does produce disassociation, but I don't think, for me, it's about disassociating from emotions. It's a disassociation from an identity label. It helps keep my identity small in a way that speaks to my System 1 well.

In response to Seven Apocalypses
Comment author: James_Miller 20 September 2016 03:52:21AM *  4 points [-]

"A Disneyland with no children" apocalypse where optimization competition eliminates any pleasure we get from life.

A hell apocalypse where a large number of sentient lifeforms are condemned to very long-term suffering, possibly in a computer simulation.

Comment author: Gleb_Tsipursky 20 September 2016 01:59:22AM -2 points [-]

Weird works for me, and I actually associate positive value with weirdness. But of course your mileage may vary. Any term that works to indicate distance from an identity label viscerally to one's System 1 will do, as Gram_Stone pointed out.

Comment author: gwern 19 September 2016 06:22:44PM 4 points [-]

Still, my question remains - is there real data out there to support the contention that P(elite career|IQ) has a local max and then decreases for higher IQ?

No. As I point out in my comment there, the evidence is strongly the other way: TIP/SMPY. To the extent that measures like wealth hit diminishing returns or even fall (eg Zagorsky), it has as much to do with personal choices & values as ability: the physicist who could make money on Wall Street but chooses to continue studying particles, the person who chooses to become an influential but poor writer, etc. (There are many coins of the realm, and greenbacks are but one.)

Comment author: James_Miller 16 September 2016 02:40:54AM 3 points [-]

No, I think Adams assigns a higher probability to Trump winning than most people do. I think Adams accepted that this theory on Trump would cost him money.

Comment author: DataPacRat 14 September 2016 05:53:50PM 4 points [-]

not the right place to start

Who says that's where I'm starting? :)

I already have my short-term physical supplies, including water, food, camping gear, and AA-battery-powerable handheld ham radio. I also have a highly-portable solar panel capable of keeping my phone, and the offline copy of Wikipedia I keep on its SD Card, functioning regardless of the power grid; and I have enough battery-backup stuff at home to run my laptop long enough to copy the latest Wikipedia dump (and whatever emergency-survival ebooks I've collected by then) onto that SD card.

Comment author: James_Miller 14 September 2016 03:20:51PM 3 points [-]

I agree with your first paragraph, but Adams has described how his Trump writing has decimated his ability to earn money as a public speaker because people who hire such speakers want to avoid controversy. Adams appearing on the podcast of an obscure college professor was an act of altruism.

Comment author: niceguyanon 14 September 2016 01:32:28PM 4 points [-]

Unless you are wealthy, being NEET is generally not a good thing IMO, because you will feel crappy about being low-status, and you will lack resources. I'm not sure what your definition of doing nothing is, but reasonable ones include eating at nice restaurants, expensive video games, gym memberships, courting mates, concerts, clothes, etc. Doing nothing costs a fortune.

Comment author: Furcas 13 September 2016 03:03:27PM *  4 points [-]

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment author: ChristianKl 13 September 2016 10:01:06AM 4 points [-]

I don't see much value in having a recent copy of Wikipedia or Project Gutenberg on my computer. In both cases the availability of the information is secured by other parties. It's more valuable to make sure that I store information that isn't protected by other people.

Comment author: NancyLebovitz 13 September 2016 12:41:12AM 4 points [-]

Relax when trying to remember something instead of making an effort.

Comment author: gjm 12 September 2016 10:45:36PM -1 points [-]

You are repeatedly telling me I've said things I actually haven't, telling me I think things I actually don't, telling me I don't know things I actually do, etc., etc. You have not yet succeeded in communicating any new insights to me; we may of course disagree about why that is.

Bored now. Bye.

Comment author: DataPacRat 12 September 2016 10:09:14PM 4 points [-]

Time to rebuild a library

My 5-terabyte hard drive went poof this morning, and silly me hadn't bought data-recovery insurance. Fortunately, I still have other copies of all my important data, and it'll just take a while to download everything else I'd been collecting.

Which brings up the question: What info do you feel it's important to have offline copies of, gathered from the whole gosh-dang internet? A recent copy of Wikipedia and the Project Gutenberg DVD are the obvious starting places... which other info do you think pays the rent of its storage space?

Comment author: Houshalter 12 September 2016 06:16:50PM 4 points [-]

Unfortunately it might also be an area where epistemic and instrumental rationality clash. In fact, most of the world does not have freedom of speech in the same way the US does - if one advocated HBD in, say, Germany, could one be thrown in prison in the same way people are imprisoned for saying 'seig heil'?

There is a difference between advocating something and merely believing it. But I'm mostly skeptical of the people that put "strongly disagree" on that question. As opposed to "disagree" or "neutral". The fact that it's so correlated with political ideology is more evidence that it's just political bias.

If I lived 200 years ago, I wouldn't go around advocating atheism. But I might have believed it privately, and I would be more skeptical of the openmindedness of people that say they "strongly oppose the evils of atheism".

The study I am thinking of did account for this.

I really don't know. When I researched this it seems like the effects are pretty hard to estimate. Different models give very different results. A recentish study using more modern climate models shows that the effects would be catastrophic and last for multiple years:

https://en.wikipedia.org/wiki/Nuclear_winter#2007_study_on_global_nuclear_war

the products of a nuclear explosion have very short half-lives - the worst would be over within an hour. Not only do we not have enough bombs to contaminate the world, but ground zero would be habitable again after a few months.

Those first few months are the problem though. The crops and livestock die or absorb the radioactive isotopes. The people too if they don't happen to have a fallout shelter handy.

Also the nuclear bombs themselves aren't the only concern. You would have to deal with all the waste left in the cities they destroy. Nuclear power plants would melt down with no one to contain them. Vast amounts of chemical waste would leak from abandoned chemical plants and waste storage. Oil would leak and pollute the oceans with no cleanup.

I don't know how to estimate the damage from this. But it should be at least as bad as or worse than major industrial accidents of the past, like Bhopal, Deepwater Horizon, or Chernobyl, except all happening at once and with no one left to organize any kind of response.

while I think a nuclear war between allmost all countries is unlikly, its still a lot more likly then 90% of humanity killed by environmental or political collapse.

I think you are underestimating the secondary effects. I imagine a complete destruction of the global economy. There isn't enough food to go around and lots of countries are starving. This would lead to more war and chaos.

A few thousand years ago the civilizations of the Mediterranean all collapsed almost at once. It's now speculated to have been the result of a serious drought and bad weather. The states that couldn't feed their populations were overthrown, and their hungry populations went to war with neighboring countries for food, until nothing of the old order remained. It was a serious setback for humanity.

If that happened in the modern world, technological civilization might end and never be restarted. The modern world depends on hugely complex infrastructure and tons of different industries and inputs. If we lose that, it would be very difficult to rebuild. We've already extracted most of the easy to get to minerals and fossil fuels. Much farmland has been degraded from overuse and depends on inputs of fertilizer, irrigation systems, and of course modern machinery which would be difficult to replace.

Comment author: Viliam 12 September 2016 04:09:33PM 4 points [-]

As a person who has read 100% of the Sequences, I would also prefer that a shorter version existed. But, as far as I know, it doesn't exist yet. Someone would have to make it. Someone other than Eliezer, because this is not at the top of his priority list.

Would I be losing anything if I didn't need to be convinced, I just want to know the pointers?

You would probably be more likely to forget them. In general, a longer text requires you to spend more time focusing on the idea. If someone converted the Sequences into a 20-page PowerPoint presentation, a week later you probably wouldn't remember anything.

I realize that what I wrote here conflicts with my desire to have a shorter version of the Sequences, and... I don't know. Perhaps the shorter version should use other techniques for easier memorization, e.g. funny pictures.

Comment author: moridinamael 12 September 2016 03:35:48PM 4 points [-]

First, you should probably read the documents we refer to as the Sequences before you try to "correct" us.

Second,

A lot of things have you confused the territory being the map.

For example, that you exist, is a map.

That there is a being there, creature of some kind, it's a map.

That you have a brain.

Every. Single. Word. Is A Map.

We all know this.

What is the territory?

Become silent of all thoughts, without using thoughts to manipulate or lie, neither using thoughts not to manipulate or lie.

You seem to be referring to meditative states. A lot of us do this, for various reasons. It really has little to do with rationality or arationality. Quieting down and dissociating from one's thoughts certainly helps with clear thinking.

You think you are in control, thus the flow of life doesn't flow effortlessly. :)

We mostly don't believe in free will.

But it's fine to let go, and be present in this moment, where there, you are, the territory, which is arational.

There will be no reason for reasoning or understanding, it is arational.

Just because you're in a meditative state of thoughtlessness doesn't mean that you're doing anything beyond engaging with yet another set of maps. You're just engaging with them nonverbally and intuitively.

It is always the case, whether you think about it or not. I can welcome you in to see for yourself, there's a lot of beauty to be had.

Again, lots of us meditate, and we're all about beauty. Not sure where you're getting this perspective.

Please don't be dogmatic. Try and see for yourself the possible truth which is right before your eyes, the possible truth that you do not exist, that you, and the possibility that everything else is a fiction. The fiction of the mind.

But you will still be to function, to be able to go to AI conferences and talk about the latest improvements, or talk decision theory or whatever else you have going on in your life. Because the belief that you will lose these things, by becoming more aware, is a trick of the ego. It's highly improbable.

So go ahead, and see for yourself. Likely though you need to work on yourself, there's nothing which is more important than the machine which does not come with an instruction manual. That is you. What you think is you. What I mean is the practical you.

I see buried in here a sales pitch for engaging in some kind of meditative or mindfulness practice. I admit that the foundational documents of Less Wrong don't explicitly advocate for taking up meditation, but it's a popular community topic.

Comment author: gjm 12 September 2016 12:48:35PM -1 points [-]

It looks like the analysis didn't suppress responses that gave something other than 50 as an answer to the question about a coin flip. It probably should.

Comment author: Soothsilver 12 September 2016 12:09:43PM 4 points [-]

Being around here has made me think that I know everything interesting about the world, and has suppressed my excitement about and joy in many minor things I could do. I also feel like my sense of wonder has diminished. As I write this, I am a little unhappy, and in a period of depression, but I had similar feelings, if less intense, even before this period.

I was wondering whether you have any advice on how to restore this; or even better, how to "forget" as much rationality and transhumanism as possible (if not actually forgetting, then at least "to think and feel as I did before I read the Sequences")?

Comment author: Luke_A_Somers 11 September 2016 05:44:42PM 4 points [-]

Is there a thread for the calibration question analysis? I have some questions and comments about that, more than this.

Comment author: skeptical_lurker 11 September 2016 01:42:04PM 3 points [-]

This might make some sense if DNNs were being used to further our understanding of theoretical physics, but afaik they're not. They're being used to classify cat pics. Since when do you use polynomial Hamiltonians to recognise cats?

These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones

No finite DNN can approximate sin(x) over the entire real line, unless you cheat by having a sin(x) activation function.
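As a quick numerical illustration (this assumes scikit-learn is available, and the exact numbers depend on the architecture and seed): a small tanh network fits sin(x) fine on a bounded interval but flattens out when asked to extrapolate far outside it.

```python
# Illustration: a finite composition of saturating units fits sin(x) on the
# training interval but cannot keep oscillating far outside it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-2 * np.pi, 2 * np.pi, size=(2000, 1))
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_inside = np.linspace(-2 * np.pi, 2 * np.pi, 200).reshape(-1, 1)
x_outside = np.linspace(20 * np.pi, 22 * np.pi, 200).reshape(-1, 1)

err_inside = np.abs(net.predict(x_inside) - np.sin(x_inside).ravel()).mean()
err_outside = np.abs(net.predict(x_outside) - np.sin(x_outside).ravel()).mean()
print(f"mean error inside training range: {err_inside:.3f}")   # small
print(f"mean error far outside the range: {err_outside:.3f}")  # roughly what guessing a constant would give
```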

Comment author: Manfred 10 September 2016 11:48:43PM 4 points [-]

I'd blame the MIT press release organ for being clickbait, but the paper isn't much better. It's almost entirely flash with very little substance. This is not to say there's no math - the math just doesn't much apply to the real world. For example, the idea that deep neural networks work well because they recreate the hierarchical generative process for the data is a common misconception.

And then from this starting point you want to start speculating?

Comment author: John_Maxwell_IV 10 September 2016 01:46:48PM 4 points [-]

Thanks for the analysis!

The median amount donated to bugs rights charities is listed as $157.5. That implies that half of survey respondents donated >$150 to bugs rights charities. Obviously this is kind of implausible. I assume the real number who donated to bugs rights charities is 4 people, since the donations sum to $1083.0 and the average amount donated is $270.75. This also goes for the other donation-related questions--just something to keep in mind.

Comment author: morganism 05 September 2016 11:27:23PM 3 points [-]

Academic Publishing without Journals

By setting up journals on a bitcoin-style blockchain, you could reward reviewers and citations. SciCred!

just a stub to think about

https://hack.ether.camp/#/idea/academic-publishing-without-journals

Comment author: fubarobfusco 05 September 2016 05:49:02PM 2 points [-]

It's not as if LW has a problem of too much material these days.

Comment author: Houshalter 05 September 2016 09:15:44AM 4 points [-]

I wrote a thing that turned out to be too long for a comment: The Doomsday Argument is even Worse than Thought

Comment author: buybuydandavis 04 September 2016 08:03:25PM 4 points [-]

“Why does anything exist at all?”

I lose no sleep over this. I think people who do are just confused by language.

I'd say that if you examine your concept of "why", you find it presupposes existence.

Comment author: Elo 02 September 2016 07:17:32AM -2 points [-]

Tried listening.

3 minutes: most scientists are wrong.

doubt the rest is worth it.

Comment author: Dagon 30 August 2016 02:01:02PM 4 points [-]

You can also point out the contradiction that they don't seem to be in a hurry to take the obvious first step of killing themselves, proving that they see at least one human life as a net positive. Then talk about everyone else they don't want to kill or prevent from being born.

Be aware, though, that this isn't truth-seeking. It's debate for the fun of it.

Comment author: gwern 28 August 2016 08:02:10PM 4 points [-]
Comment author: WalterL 26 August 2016 04:40:35PM 1 point [-]

Aw come on guys. Negative karma for literally pointing out a news site? What does that even mean?

Comment author: philh 26 August 2016 11:51:54AM 3 points [-]

I feel it's important to note that he was talking about writing styles, not philosophy.

Comment author: Elo 25 August 2016 11:00:45PM -2 points [-]

think like machines rather than humans

01101000 01100001 01101000 01100001 01101000 01100001 01101000 01100001

Comment author: ThisSpaceAvailable 21 August 2016 01:31:02AM *  4 points [-]

I suppose this might be a better place to ask than trying to resurrect a previous thread:

What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in the data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don't necessarily expect to get those exact answers, but it would be good to have some data, presented in a manner that is at least partially resistant to cherry-picking and massaging.) Basically, what sort of evidence E does Signal have to offer, such that I should update towards it being effective, given that both E and "E has been selected by Signal, and Signal has an interest in choosing E to be as flattering, rather than as informative, as possible" are true?

Also, the last I heard, there was a deposit requirement. What's the refund policy on that?

Comment author: gwern 19 August 2016 08:40:42PM 4 points [-]

You would, at the very least, be in violation of several acts regarding approval of GMOs: https://www.loc.gov/law/help/restrictions-on-gmos/usa.php https://en.wikipedia.org/wiki/Regulation_of_the_release_of_genetically_modified_organisms#United_States Specifically, you'd be violating FDA requirements by releasing '“new animal drugs” (NADs)' without approval. Depending on whether mosquitoes are considered plant pests, it looks like you'd also be violating Department of Agriculture laws. I assume you'd probably also be violating a number of EPA laws but didn't see anything specifically about that.

Comment author: James_Miller 19 August 2016 03:32:33PM 4 points [-]

For you I suggest something that also advances your career, so that you can devote more time to the project. If the answer to this isn't clear, I suggest talking to your professors and asking what they suggest. Another approach is to become a literal superhero: assemble a group of scientists who on their own could eradicate mosquitoes, and just do it. Don't wait for official approval.

Comment author: gjm 17 August 2016 11:37:16PM -2 points [-]

I thought I remembered seeing it linked some years back from a friend's blog. The friend in question has moved his blog a couple of times, though, and after looking through all the atheism-related stuff in its current incarnation I didn't find the link. It's also always possible, of course, that I'm misremembering.

Comment author: ChristianKl 12 August 2016 08:15:57PM 4 points [-]

I thought that to most LW'ers the weak version of "Calories in, Calories out" was uncontroversial.

EY likes to say that "mass in, mass out" works even better for predicting changes in weight.

Comment author: bbleeker 12 August 2016 06:45:59PM 3 points [-]

More like 1/100,000, and then when they thaw you, you'll be brain-damaged and have to live in an institution forever. They don't really know how to do this yet. How far along are they now? Have they frozen and thawed a mouse yet, and did it behave the same as before? I won't let them freeze me earlier than that, because there's essentially no chance I'll even be able to walk and talk, let alone be someone present-me would recognize as 'me'.

Comment author: The_Jaded_One 11 August 2016 09:27:01PM 4 points [-]

This is more something you would do for a laugh than something that is intended as a serious recruitment strategy. There is a disclaimer at the top of the post.

Cryo has bad signalling value - signals weird + selfish. It's hard to overcome this but I am open to suggestions.

Comment author: Soothsilver 10 August 2016 05:17:21PM 4 points [-]

We're consequentialists here, so I get all the credit for it even if it wasn't much effort, right?

^^

Comment author: jimrandomh 09 August 2016 07:34:06PM 4 points [-]

You have noticed things happening that don't match your model of how you think the world (and nutrition in particular) should work. Rather than defy the data, maybe you could come up with a different model that better explains the observations?

Comment author: Lumifer 09 August 2016 04:50:17PM 4 points [-]

What are "allowable" variables and what makes one "allowable"?

I'm aiming for something like "once you know income (and other allowable variables) then race should not affect the decision beyond that".

That's the same thing: if S (say, race) does not provide any useful information after controlling for X (say, income), then your classifier is going to "naturally" ignore it. If it doesn't, there is still useful information in S even after taking X into account.

This is all basic statistics; I still don't understand why there's a need to make certain variables (like race) special.
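As a toy illustration of that point (numpy only, all numbers invented): if the outcome depends only on income, a regression that is given both income and race puts a coefficient of roughly zero on race.

```python
# Toy check: S is correlated with X, but the outcome depends on X alone.
# The fitted coefficient on S then comes out near zero, i.e. the model
# "naturally" ignores S once X is included.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.integers(0, 2, size=n)                      # protected attribute
x = 50 + 20 * s + rng.normal(0, 10, size=n)         # income, correlated with s
y = 0.3 * x + rng.normal(0, 5, size=n)              # outcome depends only on x

design = np.column_stack([np.ones(n), x, s])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("coefficients [intercept, x, s]:", np.round(coef, 3))  # s's coefficient ~ 0
```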

Comment author: James_Miller 08 August 2016 01:46:11AM *  4 points [-]

True if gene drive is like antibiotics, but is it? Every day we wait, 1,200 people die of malaria because of the delay; that is a price worth paying if, but only if, you get some significant benefit from waiting. Another big "unknown unknown" is what other viruses mosquitoes will put in us if we don't quickly eliminate them.

Comment author: James_Miller 07 August 2016 06:06:16PM 4 points [-]

Yes, this does reduce the benefit of getting Trump to support mosquito eradication.

Comment author: Dagon 05 August 2016 06:02:26PM 2 points [-]

I think there's a fundamental goal conflict between "fairness" and precision. If the socially-unpopular feature is in fact predictive, then you either explicitly want a less-predictive algorithm, or you end up using other features that correlate with S strongly enough that you might as well just use S.

If you want to ensure a given distribution of S independent of classification, then include that in your prediction goals: have your cost function include a homogeneity penalty. Note that you're now pretty seriously tipping the scales against what you previously thought your classifier was predicting. It is better and simpler to design and test the classifier in a straightforward way, but not use it as the sole decision criterion.
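A rough sketch of what "include that in your prediction goals" could look like, with invented data and an arbitrary penalty weight: logistic regression trained on log-loss plus a squared penalty on the gap in average predicted score between the two groups defined by S.

```python
# Sketch of a homogeneity ("demographic parity" style) penalty added to the
# training loss. All data and the penalty weight lam are made-up placeholders.
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
X = rng.normal(size=(n, d))
s = (X[:, 0] + rng.normal(0, 0.5, n) > 0).astype(float)   # group correlates with feature 0
y = (X @ np.array([1.5, 1.0, -0.5]) + rng.normal(0, 1, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam, lr = 2.0, 0.1            # fairness penalty weight and learning rate (arbitrary)
mask1, mask0 = s == 1, s == 0
for _ in range(500):
    p = sigmoid(X @ w)
    grad_ll = X.T @ (p - y) / n                          # gradient of mean log-loss
    gap = p[mask1].mean() - p[mask0].mean()              # group gap; penalty is lam * gap**2
    dp = p * (1 - p)
    dgap = (X[mask1].T @ dp[mask1] / mask1.sum()
            - X[mask0].T @ dp[mask0] / mask0.sum())
    w -= lr * (grad_ll + lam * 2 * gap * dgap)

p = sigmoid(X @ w)
print("group gap in mean predicted score:", round(p[mask1].mean() - p[mask0].mean(), 3))
```

The larger you make lam, the smaller the gap and the worse the raw predictive accuracy, which is exactly the trade-off described above.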

Redlining (or more generally, deciding who gets credit) is a great example of this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages. The problem is not that the risk classifiers are wrong; the problem is that correct risk assessments lead to unpleasant loan distributions. And the sane solution is to explicitly subsidize the risks you want to encourage for social reasons, not to lie about the risk by throwing away data.

Comment author: Panorama 03 August 2016 10:37:36AM 4 points [-]

Medical benefits of dental floss unproven

The federal government has recommended flossing since 1979, first in a surgeon general's report and later in the Dietary Guidelines for Americans issued every five years. The guidelines must be based on scientific evidence, under the law.

Last year, the Associated Press asked the departments of Health and Human Services and Agriculture for their evidence, and followed up with written requests under the Freedom of Information Act.

When the federal government issued its latest dietary guidelines this year, the flossing recommendation had been removed, without notice. In a letter to the AP, the government acknowledged the effectiveness of flossing had never been researched, as required.

The AP looked at the most rigorous research conducted over the past decade, focusing on 25 studies that generally compared the use of a toothbrush with the combination of toothbrushes and floss. The findings? The evidence for flossing is "weak, very unreliable," of "very low" quality, and carries "a moderate to large potential for bias."

Comment author: ChristianKl 03 August 2016 10:31:06AM 4 points [-]

Last week I had a discussion with a person who believed that, because a science fiction film said that dolphins use 30% of their brain, dolphins indeed use 30% of their brain, and therefore more of it than humans with their 10%.

It felt a bit painful, but it seems like the epistemic hygiene of some people in our society is very poor. Various producers of TV shows might have more responsibility not to make up facts than they believe they have.

Comment author: Lumifer 02 August 2016 05:04:07PM 4 points [-]

There are other ways to prevent global warming. Plan C is creating artificial nuclear winter by volcanic explossion or starting large scale forest fires with nukes.

Goes straight into the "Shit LW people say" bucket.

Comment author: gjm 02 August 2016 10:54:12AM -1 points [-]

If the code is available in a form that enables people to build it, that seems likely to reduce sales considerably whatever the licence. (In any case, I don't think CC-ness of the licence is the relevant feature.)

If the source code is available then nagging, begging and crippling are easily removed. (Unless the crippling is a matter of omission and the uncrippling bits are paid for -- but that's just one variety of freemium.)

Your first suggestion, a good plugin API, seems like the way to go. moridinamael, what advantages do you see to open source over a plugin API?

Other possible options:

  • Divide the app into two parts. One is open-source and is the part that would be extended by plugins. One is closed-source and has most of the secret sauce in it. Someone buying the app gets the binaries for both parts and the source for the extensible part. Of course this is only any good if you can find a way to split the app up that doesn't kill its efficiency or break its architecture.
    • The open-source extensible part might be minimal (just enough to support plugins -- this ends up looking a lot like the "plugin API" option, I think) or maximal (so that the only closed-source part is an "engine" that does some clever thing you are hoping other people can't duplicate) or in between.
  • Have part of the app run not on the user's computer or mobile device but on servers under your control. Charge for access to those servers.
  • Just make it open source and do something entirely different to satisfy your capitalist rent-seeking exploitative desires :-).
Comment author: PhilGoetz 01 August 2016 07:15:57PM *  2 points [-]

From the International Craniofacial Institute's web page on cleft palate.

What they say:

Statistics reassure us that having a child with a cleft does not mean you’ll have other children with the same condition. In fact, your chances only increase by 2 to 5 percent compared to couples with no cleft-affected children.

What they mean:

The chances that your next child will have cleft palate increases from 0.15% to about 4%. Your odds ratio multiplier is 25.
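(Spelling out the multiplier, taking those figures at face value: 0.15% × 25 = 3.75%, i.e. the "about 4%". The absolute increase sounds small, but the relative risk goes up roughly 25-fold.)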

Comment author: SquirrelInHell 01 August 2016 10:44:30AM 4 points [-]

Let me give some feedback about your writing style, which I find consistently cryptic. You tend to describe your thoughts starting in the middle and giving the context later, or skipping it altogether. E.g. the first sentence reads

I find myself more and more interested in how the concept of "systematized winning" can be applied to a large group of people who have one thing in common, and that not even time, but - in my own very personal case - ...

Until this point, a context like "biology research" etc. does not appear anywhere, and a "large group of people who have one thing in common" could be all people who like ice cream. It is of course possible to decipher what you mean, but by writing in reverse order you make it unnecessarily hard.

~~~

Possibly, a part of the problems you are describing could be solved by storing all the raw data that is collected during research, not just conclusions. In some cases, the amount of data might pose technological problems, but humanity's capacity to store information cheaply is increasing very quickly. So we can just let the future generations analyse the data by themselves, if they care to do so.

Comment author: TheAncientGeek 01 August 2016 10:43:39AM 4 points [-]

Potential sentience has got to count, or it would be OK to kill sleeping people.
