

Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, in which I go over a paper from this summer that did essentially what my "Space and Time Part II" calculations did a few months earlier (placing our point in the order of star and planet formation, and showing we are not early but right around when you would expect to find the average biosphere), but extended the analysis to types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive and towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: ChristianKl 03 October 2016 08:42:52PM 11 points [-]
Comment author: WhySpace 27 September 2016 02:06:25AM *  10 points [-]

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: James_Miller 10 October 2016 01:59:55PM 10 points [-]

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Comment author: ChristianKl 02 October 2016 04:37:21PM *  8 points [-]

The article misses the point. It doesn't talk about the significance of the story.

A better headline might be "The Chinese government decided that it's in their interest to be public about data fabrication by Chinese scientists."

Given that this comes right after the Chinese government decides that it makes sense to reduce red meat consumption in China, it's a sign of progress and good Chinese leadership.

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 9 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple of sentences of new info, without the fluff. The title of the link serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since readers there already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: gjm 11 October 2016 03:10:30PM -1 points [-]

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

Comment author: username2 05 October 2016 06:16:23PM *  8 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

None of this is important of course because, as is usual for politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and sensitive analysis of the actual evidence would be giving aid and comfort to the hated enemy.

Comment author: MrMind 19 October 2016 03:40:28PM *  -1 points [-]

I have this weird fanfiction where LessWrong is a monastery/school of magic that was abandoned by its creator a long time ago but is still operating, and that is sometimes attacked by a disgruntled student who was expelled, but has somehow learned to do necromancy and has returned with an army of meat-puppets.
Now I'll have to incorporate that, due to some random magic accident, the monastery disappeared, but not the rooms inside it.

Comment author: WalterL 17 October 2016 07:44:21PM 7 points [-]

I'd suggest you prioritize your personal security. Once you have an income that doesn't take up much of your time, a place to live, a stable social circle, etc...then you can think about devoting your spare resources to causes.

The reason I'd make this suggestion is that personal liberty allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn't work out the fallout can be considerable.

Comment author: DanArmak 13 October 2016 11:19:20PM 6 points [-]

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful and successful in life as Einstein, and prefer "normal", not-too-much-above-average kids.

Comment author: Lightwave 12 October 2016 04:48:07PM 5 points [-]
Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Comment author: gjm 06 October 2016 06:17:54PM -1 points [-]

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that was generally too expensive longer ago and that has become practical as a result of advances in hardware. (That's not quite fair. There's been plenty of progress in the design and training of these NNs, as a result of having fast enough hardware for them to be worth experimenting with.)

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 7 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least more than any other easy to implement change.) I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers or even old timers coming back to the site, you want them to see the best links first.
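For reference, LessWrong's codebase is a fork of Reddit's, so its /hot sort presumably follows Reddit's published hot-ranking formula. The sketch below is my reconstruction of that formula, not code pulled from the site; the epoch constant and the 45000-second divisor come from the open-sourced Reddit ranking code.

```python
import math
from datetime import datetime, timezone

# Reddit's site epoch, used as the zero point for the time bonus.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups, downs, date):
    # Reddit-style "hot" rank: net score matters logarithmically,
    # while newer posts get a steady time bonus.
    s = ups - downs
    order = math.log10(max(abs(s), 1))
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (date - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)

# A day-old post with 100 net upvotes vs a fresh post with 10:
old = hot(100, 0, datetime(2016, 9, 27, tzinfo=timezone.utc))
new = hot(10, 0, datetime(2016, 9, 28, tzinfo=timezone.utc))
print(new > old)  # the fresh post ranks higher
```

The design choice worth noting: dividing age by 45000 seconds means 12.5 hours of freshness outweighs a 10x difference in net score, which is exactly the "show newcomers the good recent stuff" behavior being suggested.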

Comment author: Clarity 28 September 2016 03:20:27AM 7 points [-]
Comment author: CellBioGuy 25 September 2016 06:18:03AM *  7 points [-]

Astrobiology bloggery got interrupted by a SEVERE bout of a sleep disorder, developing systems to measure metabolic states of single yeast cells in order to freaking graduate soonish, and having a bit of a life for a while.

Astrobiology bloggery resumes within 1 week, with my blog moved from thegreatatuin.blogspot.com to thegreatatuin.wordpress.com, blogger being completely unusable when it comes to inserting graphs and the like. Dear gods I'm excited, the last year has seen a massive explosion in origin of life research and study of certain outer solar system bodies. To the point that I'm pretty sure the metabolism of the last universal common ancestor has been figured out and the origin of the ribosome (and therefore protein-coding genetics) as well.

Advice on running personal wordpress account welcomed.

Comment author: Manfred 21 September 2016 03:16:32AM *  7 points [-]

This is only true for simple systems - with more complications you can indeed sometimes deduce causal structure!

Suppose you have three variables: Utopamine concentration, smiling, and reported happiness. And further suppose that there is an independent noise source for each of these variables - causal nodes that we put in as a catch-all for fluctuations and external forcings that are hard to model.

If Utopamine is the root cause of both smiling and reported happiness, then the variation in happiness will be independent of the variation in smiling, conditional on the variation in Utopamine. But conditional on the variation in smiling, the variation in utopamine and reported happiness will still be correlated!

The AI can now narrow the causal structure down to 2 possibilities, and perhaps it can even figure out the right one if there's some time lag in the response and it assumes that causation goes forward in time.
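The conditional-independence argument above can be checked numerically. The sketch below simulates the structure described (utopamine causing both smiling and reported happiness, each with its own independent noise) and tests independence via partial correlation; the linear-Gaussian model and all variable names are illustrative assumptions, not anything specified in the comment.

```python
import math
import random

random.seed(0)
N = 100_000

# Simulate the hypothesized causal structure: utopamine (U) is the
# root cause of both smiling (S) and reported happiness (H), each
# with its own independent noise source.
U = [random.gauss(0, 1) for _ in range(N)]
S = [u + random.gauss(0, 1) for u in U]
H = [u + random.gauss(0, 1) for u in U]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / len(x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / len(y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    # Correlation of x and y after conditioning on (regressing out) z.
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# S and H become (nearly) independent once we condition on U...
print(round(partial_corr(S, H, U), 2))  # ≈ 0
# ...but U and H stay correlated even after conditioning on S.
print(round(partial_corr(U, H, S), 2))  # ≈ 0.58 (theoretically 1/√3 here)
```

Conditioning on the common cause screens off the correlation between its two effects, while conditioning on one effect does not screen off the cause from the other effect; that asymmetry is what lets an observer rule out some candidate causal structures from purely observational data.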

Comment author: WalterL 19 October 2016 09:21:03PM 6 points [-]

My life places me in a position to observe an uncommon number of people repenting and trying to change. As you might expect, humans being what we are, few accomplish their goal.

A fact that I've observed is that NONE of those who other themselves and blame the shard get it done. If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.

So, when you say "I have an addiction", I'm a bit concerned. A LW truism is that we don't have brains, we are brains. We aren't ghosts manning machines, we are machines.

I think it is some old "devil made me do it", stuff. The "other me" isn't real, so energy spent fighting him is wasted. Effort spent changing my behavior might bear fruit.

I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man. You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist ? I just read "Amy Schumer offers you a look into your soul" and I really liked it but I don't have enough time to read all posts on the blog.

Comment author: ChristianKl 07 October 2016 03:33:10PM 6 points [-]

Because the IRS isn't popular, and it's not a good move for a politician to speak in favor of the IRS and advocate increasing its funding.

Comment author: Lumifer 06 October 2016 05:07:42PM *  5 points [-]

What is the best source for this in your view?

The raw data is plentiful -- look at any standardized test scores (e.g. SAT) by race. For a full-blown argument in favor see e.g. this (I can't check the link at the moment, it might be that you need to go to the Wayback Machine to access it). For a more, um, mainstream discussion see Charles Murray's The Bell Curve. Wikipedia has more links you could pursue.

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

My view is that history is important and that outcomes are path-dependent. Slavery and segregation are crucial parts of the history of American blacks.

open to learning

Your social circles might have a strong reaction to you coming to anything other than the approved conclusions...

Comment author: ChristianKl 05 October 2016 09:00:42PM 6 points [-]

Our biosphere's junk DNA

Junk DNA generally doesn't survive that long in evolutionary timescales because there's nothing that prevents mutations. It seems a bad information storage system.
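The point can be illustrated with a toy simulation: a neutral ("junk") sequence under no selection accumulates substitutions until any message written into it decays toward chance-level similarity. The mutation rate, sequence length, and generation count below are arbitrary illustrative choices, not real genomic parameters.

```python
import random

random.seed(1)
BASES = "ACGT"

def mutate(seq, rate):
    # With no selection, each site independently flips to one of the
    # three other bases with probability `rate` per generation.
    return "".join(
        random.choice(BASES.replace(c, "")) if random.random() < rate else c
        for c in seq
    )

# A 1000-base "message" stored in unconstrained (junk) DNA.
message = "".join(random.choice(BASES) for _ in range(1000))

seq = message
rate = 1e-3          # illustrative per-site, per-generation mutation rate
generations = 3000   # illustrative timescale

for _ in range(generations):
    seq = mutate(seq, rate)

identity = sum(a == b for a, b in zip(message, seq)) / len(message)
print(f"{identity:.0%} of sites still match the original")
```

With these numbers, identity falls close to the 25% expected between two random DNA sequences, i.e. essentially none of the original message survives; only sequences under purifying selection retain information over such timescales.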

Comment author: DanArmak 02 October 2016 07:38:28AM *  6 points [-]

I can't figure out how to edit the post description to include a summary paragraph. Help?

... Now the actual link is gone and I can't edit it back in! It's supposed to point here. Mods/admins, can you help? Here is a screenshot of what I see.

Comment author: CellBioGuy 01 October 2016 11:41:19PM *  6 points [-]

Worth noting:

Possibly indicating that the end of the last glaciation rather than new invention drove the more or less simultaneous large-scale agricultural transitions that occurred all across the old and new world ~10k years ago.

Comment author: CellBioGuy 01 October 2016 11:32:37PM *  6 points [-]

My favorite crazy unlikely idea about that is that the Paleocene-Eocene Thermal Maximum 50 megayears ago - a 200k year pulse of high CO2 levels and temperatures in which the CO2 was added over a timescale of less than 10k years (potentially much less) and had an isotopic composition consistent with having been liberated from biogenic deposits - could theoretically be explained by all the coal and oil deposits of Antarctica being burned followed by some positive feedbacks kicking in.

(Most land of Antarctica never having been investigated geologically in any detail at all due to being under kilometers of ice) (And Antarctica at that time being completely unglaciated and relatively temperate despite being where it is now by then) (And subsequent glaciation having scraped most of the surface clean of anything that was on it at the time)

We have an advantage in that we evolved in the tropics - you can take a tropical animal and keep it warm near the poles by wrapping it in clothes. It's much more difficult to take a cold-adapted polar animal and keep it alive in the tropics...

Comment author: hg00 28 September 2016 01:43:41AM *  6 points [-]

My understanding is that a USA programmer would start at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

A pessimistic starting salary for a competent US computer programmer is $60K and senior ones can clear $200K. $100K is a typical starting salary for a computer science student who just graduated from a top university (also the median nationwide salary).

In the US market, foreigners come work as computer programmers by getting H1B visas. The stereotypical H1B visa programmer is from India, speaks mostly intelligible English with a heavy accent, gets hired by a company that wants to save money by replacing their expensive American programmers, and exists under the thumb of their employer (if they lose their job, their visa is jeopardized). I think that the average H1B makes less money than the average American coder. It sounds to me like you'd be a significantly more attractive hire than a typical H1B--you're fluent in English, and you've made contributions to Scheme?

The cost of living in the US is much higher than the Philippines. Raising a family in Silicon Valley is notoriously expensive. Especially if you want your kids to go to a "good school" where they won't be bullied. I don't know what metro has the best job availability/cost of living/school quality tradeoff. It will probably be one of the cities that's referred to as a "startup hub", perhaps Seattle or Austin. If your wife is willing to homeschool, you don't have to worry about school quality.

You can dip your toes in Option 1 without taking a big risk. Just start applying to US software companies. They'll interview you via Skype at first, and if you seem good, the best companies will be willing to pay for your flight to the US to meet the team. To save time you probably want to line up several US interviews for a single visit so you can cut down on the number of flights. Here are some characteristics to look for in companies to apply to:

  • The company has a process in place for hiring foreigners.

  • The company is looking for developers with your skill set.

  • The company's developer team is "clued in". Contributing to Scheme is going to be a big positive signal to the right employer. You can do things like read the company engineering blog, use BuiltWith, look up the employees on LinkedIn to figure out if the company seems clued in. Almost all companies funded by Y Combinator are clued in. If your interviewer's response to seeing Scheme on your resume is "What is Scheme?", then you're interviewing at the wrong company and you'll be offered a higher salary elsewhere.

  • The company is profitable but not sexy. For example, selling software to small enterprises. (You probably don't want to work for a business that sells software to large enterprises, as these firms are generally not "clued in". See above.) Getting a job at a sexy consumer product company like Google or Facebook is difficult because those are the companies that everyone is applying to. You can interview at those companies for fun, as the last places you look at. And you don't want to apply for a startup that's not yet profitable because then you're risking your wife and kids on an unproven business. I'm not going to tell you how to find these companies--if you use the same methods everyone else uses to find companies to apply to, you'll be applying to the same places everyone else is.

Of course you'll be sending out lots of resumes because you don't have connections. Maybe experiment with writing an email cover letter very much like the post you wrote here, including the word "fucking". I've participated in hiring software developers before, and my experience is that attempts at formal cover letters inevitably come across as stuffy and inauthentic. Catch the interviewer's interest with an interesting email subject line+first few sentences and tell a good story.

Actually you might have some connections--consider reaching out to companies that are affiliated with the rationalist community, posting to the Scheme mailing list if that's considered an acceptable thing to do, etc.

Consider donating some $ to MIRI if my advice ends up proving useful.

Comment author: Elo 27 September 2016 02:36:16AM -2 points [-]

yes

Comment author: username2 22 September 2016 12:20:52AM *  6 points [-]

Have you ever taken Adderall? I greatly suspect you have not.

People who fight chronic akrasia because of various degrees of ADHD and related mental disorders have a different response to stimulants than "normal" individuals. For me, Adderall puts me into cool, calm, clear focus. The kind of productive mode of being that most people get into by drinking a cup of coffee (except coffee makes me jittery and unfocused). Being on Adderall is just... "normal." Indeed the first time I tried it I thought the dose was too low because I didn't feel a thing... until 8 hours later when I realized I was still cranking away good code and able to focus instead of my normal bouts of mid-day akrasia. I could probably count on my hands the number of times I had a full day of highly focused work without feeling stress or burn-out afterwards... now it's the new normal :)

For such people low-dose amphetamines don't provide any high, nor are they accompanied by some sort of berserker productivity binge like popular media displays. In the correct dosages they also don't seem to come with any addiction or withdrawal -- I go off of it without any problems, other than reverting to the normal, vicious cycles of distraction and akrasia. (This isn't just anecdotal data -- the incidence rate of Adderall addiction among those following the prescribed plan is lost in the background noise of people who are abusing the drug in these trials.)

Honestly, see a psychiatrist that specializes in these things and talk to them about your inability to focus, your history of trouble in completing complex, long tasks, how this is affecting your career and personal growth goals, etc. Be honest about your shortcomings, and chances are they will work with you to find a treatment plan that truly helps you. You're not manipulating anybody.

Seriously, ADHD is a real mental disorder. Your first step should be to recognize it as such, and accept the fact that you might actually have a real medical condition that needs treatment. You're not manipulating the system, you're exactly the kind of person the system is trying to help! Prescription drugs are for more than just people who hear voices...

Comment author: WhySpace 22 September 2016 12:10:47AM 6 points [-]

Truth is not what you want it to be;

it is what it is,

and you must bend to its power or live a lie.

- Miyamoto Musashi

Comment author: Clarity 18 October 2016 12:01:37AM 3 points [-]

Sex and love addiction, sexual compulsions, insecure attachment, risky sexual behaviour, HOCD, HIVOCD

What if you lost the love of your life due to a sexual impulse? What if you recognised sexual impulsivity as a pattern of your behaviour, deeply deeply ingrained into your being, and that you want to overcome it? That’s me.

I chose the name clarity because when I started to post, I was dipping in and out of psychoses and other really mentally unhealthy states. I would have moments of clarity, inspired by stuff I read in the sequences and other LessWrong posts, and they would be like gulps of air saving me from drowning in really turbulent water. Now that I’m on some kind of boat, I don’t have to actively think about how to breathe.

Until now, again.

I haven’t posted a lot recently. Mainly because I have been doing really, really well. My epic failures, I dare say, have given me a reputation here, and I talk about them freely. But, again, I have been doing well lately.

With an exception. Let me explain:

Since I already have a soldiery mindset due to some abuse from my childhood I thought I could grow by joining the French Foreign Legion. I had decided not to in the past due to risk of permanent injury but considered it again. I decided not to this time because I figured I wouldn’t be able to meet, court and enjoy time with someone, fall in love etc. – it’s unsuitable for married life (which correlates strongly with happiness), according to this link: https://www.cervens.net/legionbbs123/archive/index.php/t-53.html

Lately I am infatuated with someone. She seems to have the potential to meet my criteria for a good potential wife: communication skills, personality, responsibility, emotional honesty, attractiveness, matching sex drives, and value alignment. I just wish I had some good comebacks for when I’m out and about with an Asian girl and people make comments that make me feel self-conscious. She gives me a different feeling than that bewildered kind of pleasant feeling I would get when the ex-housemate I fell for used to open her small mouth really really wide in amazement at something, haha. I get more of the nice chill of longing when I think of that cute little housemate listening to hip-hop.

I’ve been thinking about her strong feelings for veganism so I looked up some stuff about the case for veganism.

I decided to go milk free after watching this: https://m.youtube.com/watch?v=UcN7SGGoCNI Wool free after watching just 243 of this video. https://m.youtube.com/watch?v=siTvjWE2aVw

So another recent experience really stood out to me as a bad choice, by a similar rationale. I consider myself heteroflexible, or perhaps hetero but rather sexually fluid. On Sunday night I went to a gay sauna, tossed up a bit between that and a brothel, but decided I preferred the idea of guys this time. I’m a bit anxious and unattached to guys physically, except if it’s porn (which I had watched before going). So I went into a dark room with two guys I later saw were ugly AF, and of course, like previous times, they gave me tonnes of props and validation as a good looking guy. One guy said he was a cleaner when I asked what he does. The other had scaly crusty balls. I didn’t stop, unfortunately. And now maybe that sore was herpes or genital warts, and if I got herpes, which is incurable, then it might ostracise me from 4/5 of the beautiful women in the world (maybe just not the slutty ones who have that too, and may just break my heart in time anyway).

Worst case scenario, I just get HIV. I mean it’s a dark room, anything can happen: a grazing, a bite, etc., a pin prick from some vexed crazy guy. No accountability. In the heat of the moment something could slip off too. And I’m not familiar with much more than the superficial statistics and lore around HIV transmission, like that oral sex could transmit HIV but they doubt it often happens – but as a medical researcher I know the quality of research must be judged on a case-by-case basis, and never to take an overview’s credibility for granted.

I reflected in the moment and realised I wasn't enjoying myself in the slightest. I think it’s some need for validation, or loneliness or risk taking or a compulsion. Fuck me autocorrect almost corrected to compulsive homosexuality. Got to fix that too, or I will be outed.

I think I have HOCD, or something accounted for by these accounts:

I find each of them helpful and hope to revisit them.

http://blogs.psychcentral.com/sex-addiction/2013/03/when-straight-men-are-addicted-to-gay-sex/ http://www.sexaddictionscounseling.com/can-a-straight-man-be-addicted-to-gay-sex/ http://www.brainphysics.com/yourenotgay.php https://www.google.com.au/amp/m.wikihow.com/Overcome-Sexual-Addiction%3famp=1?client=ms-android-optus-au

If I don't do it again (regardless of where, unless I find myself in a stable relationship with that person before or within a week) by 2020, I'll give one of my close friends $141 as a prize to encourage me. 1/1/2020. If not, I’ll donate the same amount to a sex, love and/or romance focussed impulse control related group.

Masturbating alone is hedonically better and it’s safer anyway, what the fuck is wrong with me?

I have an addiction, but I have so much willpower and a track record of discipline. This is the last frontier. Never again.

Comment author: James_Miller 15 October 2016 07:18:08PM 5 points [-]

In ten years, what's the probability that a CRISPR-competent terrorist group could exterminate mankind? If the answer is >1%, the optimal consequentialist anti-terrorist policies should horrify a deontologist.

Comment author: SithLord13 11 October 2016 06:50:06PM 5 points [-]

Could chewing gum serve as a suitable replacement for you?

Comment author: ChristianKl 10 October 2016 12:53:15PM 5 points [-]

Nothing. I don't think facebook membership counts are a good measurement.

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI will be created by Google, and that it will happen in next 5 years, what should we do?

Comment author: ChristianKl 08 October 2016 08:59:58PM 5 points [-]

I think we discussed this previously on LW. In general the argument isn't convincing in his case.

Gilead made $20 billion with a drug that cures one virus. If a pharma company thought that his approach had a 10% chance of working to cure all viruses, spending $100 million or more would be very interesting for traditional pharma companies under the current incentive scheme.

Comment author: CarlShulman 07 October 2016 12:19:07AM 5 points [-]

Primates and eukaryotes would be good.

Comment author: Houshalter 06 October 2016 06:06:13PM *  5 points [-]

I think it's well within the realm of possibility it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started. It was only the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.

I recall notable researchers like Hinton making predictions that "X will take 5 years" and it being accomplished within 5 months. Go is a good example. Even a year ago, I think many experts thought it would be beaten in 10 years, but not many thought it would be beaten by 2016. In 2010 machine vision was so primitive it was a joke at how far AI has to come:

[embedded image]

In 2015 the best machine vision systems exceeded humans by a significant amount at object recognition.

Google recently announced a neural net chip that is 7 years ahead of Moore's law. Granted only in terms of power consumption, and it only runs already trained models. But nevertheless it is an example of the kind of sudden leap forward in ability. Before that Google started using farms of GPUs that are hundreds of times larger than what university researchers have access to.

That's just hardware though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms. Tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution can develop human brains by just some stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advancement made. Like recently, Google's synthetic gradient paper or hypernetworks.

I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements stop generalizing to other tasks. That is starting to change, as I mentioned in my comment above. DeepMind is focusing on playing games like StarCraft, which requires more attention to planning, recurrence, and reinforcement learning. There is also more focus now on natural language processing, which involves many general-intelligence features.

Comment author: gwern 05 October 2016 09:19:10PM *  5 points [-]

Lots of other problems with it too. Why is there any last-universal-common-ancestor in this scenario? You would want to drop a full ecosystem with millions of different organisms, each with different FEC shards of data. If you can deliver some bacteria to a virgin planet, you can deliver multiple kinds of bacteria, not just one. Yet, genetics finds that there's a LUCA (not that much of LUCA survives in current genomes).

Comment author: Lumifer 05 October 2016 08:15:57PM 4 points [-]

What are the reasons?

For example, there were 4,636 murders committed by white people and 5,620 murders committed by black people in 2015 (source). On a per-capita basis this makes the by-white murder rate about 2.2 per 100,000 and the by-black murder rate about 16.2 per 100,000.
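The per-capita arithmetic above is straightforward to check. Note that the group populations used below are assumptions back-derived from the quoted rates, not figures taken from the source:

```python
def rate_per_100k(count, population):
    # Per-capita rate expressed per 100,000 people
    return count / population * 100_000

# Populations here are illustrative assumptions implied by the quoted rates
print(round(rate_per_100k(4636, 210_700_000), 1))  # ~2.2
print(round(rate_per_100k(5620, 34_700_000), 1))   # ~16.2
```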

Comment author: Lumifer 05 October 2016 06:02:19PM 2 points [-]

Boo politics discussion during the pre-election madness.

Comment author: moridinamael 03 October 2016 02:14:56PM 5 points [-]

Depends on in what way you're having trouble with it. If you need to interact with lots of people in whatever context, I find that taking an initial tone of mildly self-deprecating humor helps smooth things out. If you're the first one to mock yourself, it releases any tension that might be in the air. But then, you should let go of the self-deprecation before it starts to suggest actual low self-confidence.

It can also be good to formulate a pithy explanation for why you don't have the skill, so that you can casually explain the situation without bogging people down. "There weren't any swimming pools near where I grew up." Something short and simple, even if it leaves out important biographical details.

In the vast majority of cases, people are too involved in their own business to even think about you. If I see an adult swimming really badly, I just assume that nobody ever taught them to swim, which is a completely value-neutral assessment, and then continue on with whatever I was thinking about. I recently took a handful of jiu-jitsu lessons and was obviously as useless as a newborn kitten, but I don't really need to offer any kind of expository explanation for this lack of skill, because "just started learning" is a fully self-contained explanation.

Comment author: CellBioGuy 02 October 2016 08:05:25PM *  5 points [-]

In the hypothetical scenario in which there was something to find in Antarctica in the first place, given the thorough scraping the continent has gotten for 20+ megayears by kilometers-deep glaciers you can't expect to find much at all. The areas not covered by glaciers are generally mountains which erode - their modern exposed surfaces would have been quite deep underground at the time.

The sorts of things you could actually expect to find would be more along the lines of: missing coal seams; long rods of long-ago-oxidized steel poking vertically through multiple strata into areas that would have held petroleum deposits at the time; really deep coal seams turned to ash in situ by underground gasification; hydrothermal features that concentrate copper and silver ore, capped by weird craters that obliterate where the highest concentrations would have been, with a big pile of copper-depleted gravel nearby. Perhaps odd isotope ratios in a very narrow sediment band, if nuclear reactions were ever explored. The ecological effects you would expect on the continent are kind of overshadowed in the ocean sediment record by the worldwide climate event that the PETM represents (a 6 °C temperature spike, deep-ocean hypoxia, phytoplankton die-off and repopulation).

It's worth noting that there are probably particular clades predisposed to being smart. There's a fascinating book out by Dr. Herculano-Houzel ("The Human Advantage") detailing recent work over the last decade examining brain structure across the mammals. She and her group found something fascinating: neural scaling laws differ from clade to clade. Mammals in general follow a scaling law where a brain 10x as large has only 4x as many neurons, because the neurons on average increase in volume (partially due to longer connecting fibers). Primates break this, though: all primate neurons are about the same size, and remarkably small at that, the same size as those of a mammal of roughly 10 grams in mass. A large primate brain is MUCH more powerful than a generic mammal brain of the same mass. Their recent work since that book came out indicates that birds also break that scaling law and have marvelously efficient brains: all bird neurons are approximately the same size, like the primates', but moreover that size is about one-sixth that of primate neurons. It is an interesting question whether this would also have applied to dinosaurs, their close relatives, who nonetheless were not under the same intense selective pressure for low weight.

Comment author: Fluttershy 02 October 2016 12:40:42AM 5 points [-]

The most striking problem with this paper is how easy all of the tests of viability they used are to game. There are a bunch of simple tests you can do to check for viability, and it's fairly common for non-viable tissue to produce decent-looking results on at least a couple of them, if you run enough tests. (A couple of weeks ago, I was reading a paper by Fahy which described the presence of this effect in tissue slices.)

It may be worth pointing out that they only cooled the hearts to −3 °C, as well.

Comment author: Elo 30 September 2016 12:50:31AM -2 points [-]

cat weight might be relevant, cat current age, cat body shape (fat/skinny), description of cat's response to catnip,

In response to Linkposts now live!
Comment author: Gram_Stone 28 September 2016 04:13:17PM 5 points [-]

Thank you James Lamine, Vaniver, and Trike Apps.

I also wanted to quote something Vaniver has said, but that was unfortunately downvoted below the visibility threshold at the time:

I've pushed for doing things the right way, even if it takes longer, rather than quicker attempts that are less likely to work.

Comment author: Alejandro1 26 September 2016 09:37:33PM 5 points [-]

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

Comment author: Houshalter 26 September 2016 05:08:24PM 5 points [-]

"Base rate" is statistics jargon. I would ask something like "Which disease is more common?" And then if they still don't understand, you can explain that it's probably the disease that is most common, without explaining Bayes' rule.
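The point about base rates can be made concrete with Bayes' rule. The prevalences and test accuracies below are made-up numbers for illustration:

```python
def posterior(prior, sensitivity, false_positive_rate):
    # P(disease | positive test) via Bayes' rule
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A rare disease (1 in 10,000) vs a common one (1 in 100),
# tested with the same 90%-sensitive, 5%-false-positive test
print(f"{posterior(0.0001, 0.9, 0.05):.3f}")  # 0.002
print(f"{posterior(0.01, 0.9, 0.05):.3f}")    # 0.154
```

Even with an identical test result, the more common disease remains far more probable, which is what the simpler "which disease is more common?" question is getting at.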

Comment author: 9eB1 26 September 2016 03:06:19PM 5 points [-]

I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.

  1. It's free.

  2. It has videos for every exercise.

  3. It is a clear and complete program that actually allows for progression (the Convict Conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.

  4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.

The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.

Comment author: Elo 21 September 2016 11:28:58PM -2 points [-]

I have updated the list of common human goals.
http://lesswrong.com/r/discussion/lw/mnz/list_of_common_human_goals/

social looked like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy?

and now looks like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy? Do you seek opportunities to have soul-to-soul experiences with other people? Authentic connection?

From feedback from someone who felt it wasn't covered and had a strong goal of authentic connection.

http://bearlamp.com.au/list-of-common-human-goals/

Comment author: chron 16 October 2016 08:10:29PM *  4 points [-]

Interestingly, no notable historical group has combined both the genocidal and suicidal urges.

Actually such groups existed; for example, the Khmer Rouge turned in on themselves after killing their enemies. Something similar happened with the movement led by Zhang Xianzhong, only to a much greater extent, i.e., they more-or-less depopulated the province of Sichuan, including killing themselves.

Comment author: turchin 16 October 2016 10:05:22AM *  3 points [-]

In the 20th century, most risks were created by superpowers. Should we include them in the list of potential agents?

Also, it seems that some risks are non-agential, as they result from the collective behavior of groups of agents: arms races, capitalism, resource depletion, overpopulation, etc.

Comment author: SithLord13 15 October 2016 11:25:12PM 4 points [-]

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Disregarding any discussion of legitimate climate concerns, isn't this a really bad reason? Isn't it better to be unblackmailable, to disincentivize blackmail?

Comment author: ChristianKl 13 October 2016 01:10:09PM 2 points [-]

tl;dr Obama doesn't really know what he's talking about, but tries to use talking points to make sense of the new project.

Comment author: username2 11 October 2016 08:45:03PM 3 points [-]

"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia

The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number", but the crux is in the word "good", which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.

Comment author: niceguyanon 11 October 2016 05:32:28PM 4 points [-]

https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong

Want free wifi when staying at a hotel? Ask for it. Of course! Duh. Seems so obvious now that I think about it.

Comment author: turchin 11 October 2016 02:03:47PM *  3 points [-]

Some possible arguments against donating to charity. Personally I think that it is normal to donate around 1 per cent of income to charity.

  1. Some can't survive on less, or have other obligations that look like charity (child support).
  2. We would have less incentive to earn more.
  3. It would hurt our economy, as it is consumer-driven. We must buy iPhones.
  4. I do many useful things intended to help other people, but I need pleasures to sustain my commitment, so I spend money on myself.
  5. I pay taxes, and that is like charity.
  6. I know better how to spend money on my own needs.
  7. Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
  8. If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
  9. If you overdonate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
  10. Donating 100 per cent will make you look crazy in the eyes of some, and their will to donate will diminish.
  11. If you spend more on yourself, you can ask for a higher salary, and as a result earn more and donate more. Only a homeless and jobless person could donate 100 per cent.
Comment author: Houshalter 10 October 2016 08:07:29PM *  4 points [-]

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work on NN based approaches. For instance my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short term ones. Or even AIs that don't have goals at all and just make predictions. E.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that. E.g., they might just tweak their own RAM to set reward = +Inf and then do nothing else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.

Comment author: skeptical_lurker 10 October 2016 06:21:41PM 4 points [-]

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: ChristianKl 10 October 2016 02:02:25PM 4 points [-]

Get employed by Google.

Comment author: turchin 10 October 2016 12:46:06PM 3 points [-]

There are five times as many members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9,800) as in the group "Existential risks" (1,880). What should we conclude from this?

Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.

Comment author: waveman 07 October 2016 09:51:03PM 3 points [-]

Estimated cost of tax evasion per year to the Federal gov is 450B.

Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?

Comment author: waveman 07 October 2016 11:33:13AM 4 points [-]

A related concept is "inferential distance" - people can only move one step at a time from what they know.

Also typical mind fallacy.

Comment author: gjm 06 October 2016 06:42:39PM -1 points [-]

The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.

So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".

I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.

Comment author: Lumifer 05 October 2016 09:00:47PM 3 points [-]

You asked why is "the incidence of police encounters with blacks elevated". This is a direct answer.

If you want to know the reasons for different crime rates, this is going to get long and complicated.

Comment author: James_Miller 04 October 2016 11:30:04PM 4 points [-]

I'm extremely interested in the last three of these especially the Fermi paradox one. Great essays.

Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea of an AI interpreting goals so literally, without modelling a human mind, amounts to an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pearce):

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as "autistic intelligence". I don't think he thinks that autistic people are zombies lacking subjective experience, but that does seem to be implied.

Comment author: Houshalter 03 October 2016 07:26:35PM *  4 points [-]

This seems as useful as telling depressed people to stop being depressed. Fear of embarrassment is one of the strongest drives humans have. Probably appearing to be a fool in the ancestral environment led to fewer mates or less status. It's not something you can just voluntarily turn off or push through easily.

The best strategy, I think, would be to work around it. Convince your brain that it's not embarrassing. Or that no one cares. Or pretend no one is watching. Or do it around supportive friends.

Comment author: WhySpace 03 October 2016 03:42:46PM *  4 points [-]

persufflation

That was a mild pain to google, so I'm leaving what I dug up here so others don't have to duplicate the effort.

Persufflation is perfusion with gaseous oxygen. Perfusion is when fluid going to an organ passes through the lymphatic system or blood vessels to get there.

If I'm reading this correctly, there's no thermodynamic reason to pump the organ full of oxygen gas, but only a biological one. Cells need less oxygen when they're on ice for an organ transplant, but they still consume O2. If this isn't being delivered via blood flow, another source is needed.

I take it that the persufflation is to help with recovering kidneys from liquid nitrogen temperatures, and not in getting there without damage?

Comment author: username2 03 October 2016 12:08:16PM 4 points [-]

How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.

Comment author: CellBioGuy 02 October 2016 08:14:09PM *  4 points [-]

EQ is NOT the whole story. As I just noted above in another comment, there is amazing work on brain architecture coming out of the lab of Dr. Suzana Herculano-Houzel, a scientist studying neural structure across the vertebrates. I recommend her book, "The Human Advantage" and all the papers to have come out of her lab recently.

Three important things:

1 - Neural scaling laws differ from clade to clade. In a generic mammal, a brain 10x as large has only 4x as many neurons, so there are diminishing returns to brain mass, probably due to the need to maintain long connecting fibers. Primates break this relationship: all primate brains are roughly equally densely packed, and indeed are as densely packed as a generic mammal brain from a very small mammal. Something changed in primate embryonic development upwards of 50 megayears ago, predisposing large primates to have much larger numbers of neurons. (Practical example: it turns out the cerebrum of an elephant is roughly equivalent to that of a chimp, and the largest whales probably correspond to early Homo erectus.)

2 - Humans are actually incredibly generic primates. All of the pieces of our brains fall right on the primate trend lines in terms of size and cell number - our cerebrum is not oversized, it's just that the cerebrum grows faster than other parts with increasing brain size across all the primates. We just happen to have the largest neuron number. Also, humans fall right on the body-size-to-encephalization-quotient trendline of the primates, with only three primates falling off it: chimps, gorillas, and orangutans are below the trendline, with brains much smaller than you'd expect for their body sizes. She hypothesizes, for very sound reasons explored in their papers and her book, that this is due to energy constraints, because brain tissue is energetically expensive, and that humans were able to get back onto the generic primate trendline and have brains as big as you'd expect for a primate of our body mass once we started cooking and could support the energy requirements of that tissue.

3 - Birds are another clade that breaks the usual brain scaling laws. Their neurons do not get bigger with increasing brain size, much like primates', except that bird neurons are roughly one-sixth the size of primate neurons. Thus, it turns out that corvids and parrots are packing brains equivalent to those of many monkeys, something their EQ would never suggest.
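The contrast between the two scaling regimes in points 1 and 3 can be sketched numerically. The generic-mammal exponent below is inferred from the quoted "10x the mass, 4x the neurons" figure, as an illustration rather than a fitted value:

```python
import math

def neurons_generic_mammal(mass_ratio):
    # Generic mammal: 10x brain mass -> only 4x neurons,
    # i.e. neuron count scales as mass^(log10 4) ≈ mass^0.602
    return mass_ratio ** math.log10(4)

def neurons_primate(mass_ratio):
    # Primate (and bird): neuron size is constant, so neuron
    # count scales linearly with brain mass
    return mass_ratio

# For a brain 100x as massive as a baseline brain:
print(round(neurons_generic_mammal(100), 1))  # 16.0x the neurons
print(round(neurons_primate(100), 1))         # 100.0x the neurons
```

The gap widens with size, which is why a large primate brain ends up so much more neuron-dense than a generic mammal brain of the same mass.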

Comment author: DanArmak 02 October 2016 07:43:42AM *  4 points [-]

Missing link: should point to Science Alert.

Article says the Chinese State Food and Drug Administration (CFDA or SFDA) conducted an internal review of the drugs currently pending approval, and found out that in more than 80%:

the data failed to meet analysis requirements, were incomplete, or totally non-existent. [Also], many clinical trial outcomes were written before the trials had actually taken place. [...] The report found that pretty much everyone involved was guilty of some kind of malpractice or fraud. [...] even third party independent investigators tasked with inspecting clinical trial facilities are mentioned in the report as being "accomplices in data fabrication due to cut-throat competition and economic motivation".

There's no matching news item on the SFDA site; it probably doesn't have an official version in English. The article linked relies on this and that.

Compare and contrast with Scott Alexander's idea of making the American FDA regulate less. Two ends of a spectrum? Different cultures and markets leading to different outcomes? Similar situations but better hidden in the American case?

Comment author: Fluttershy 02 October 2016 01:09:38AM 4 points [-]

OTOH it's plausible they don't have much compelling evidence mainly because they were resource-constrained. I'm still not expecting this to go anywhere, though.

Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.

Comment author: gwern 01 October 2016 05:03:00PM 4 points [-]

Everything is heritable:

Politics/religion:

AI:

Statistics/meta-science/mathematics:

Psychology/biology:

Technology:

Economics:

Philosophy:

Fiction:

Comment author: WalterL 29 September 2016 02:42:11PM 4 points [-]

This article is an example of looking at the world pragmatically, and acknowledging an actual truth. Kudos to the writers.

It reminds me of the scene at the start of Bad Boys II, where the drug kingpin has a giant pile of paper cash, and rats are nesting in it.

Kingpin: "This is a STUPID problem to have." ... Kingpin: "But it IS a problem. Hire exterminators."

Similarly, politics getting in the way of transforming the world, with its irksome interest in how the world gets transformed, is exactly the sort of thing that clear-eyed futurists need to figure on.

Comment author: RainbowSpacedancer 29 September 2016 11:45:43AM 4 points [-]

When pushed on why he is out interviewing people, Anthony Magnabosco responds with, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of the truth, but his root goal is to change minds. If obtaining the truth were your primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising. Questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.

Honesty and effectiveness appear to be conflicting goals in street epistemology.

In response to Linkposts now live!
Comment author: VipulNaik 28 September 2016 10:56:22PM *  4 points [-]

I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.

Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.

I also checked the post after that to see if the edit still went through, and it didn't. In other words, my edit did not get saved.

Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.

Comment author: iceman 28 September 2016 09:49:14PM 4 points [-]

I also enjoyed the linked Politics Is Upstream of Science, which went in-depth on the state interventions in science talked about in the beginning of this piece.

View more: Prev | Next