
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, in which I go over a paper from this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our point in star and planet order (showing that we are not early, and are right around when you would expect to find the average biosphere), but extended it to types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: WhySpace 27 September 2016 02:06:25AM *  10 points [-]

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: James_Miller 10 October 2016 01:59:55PM 10 points [-]

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: ChristianKl 02 October 2016 04:37:21PM *  8 points [-]

The article misses the point. It doesn't talk about the significance of the story.

A better headline might be "The Chinese government decided that it's in their interest to be public about data fabrication by Chinese scientists."

Given that this comes right after the Chinese government decides that it makes sense to reduce red meat consumption in China, it's a sign of progress and good Chinese leadership.

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 9 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple sentences of new info, without the fluff. The title of the link serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since readers there already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: gwern 19 September 2016 02:40:47AM 9 points [-]

My reading of the behavioral genetics literature is that high intelligence being driven by rare autism variants is looking unlikely. DeFries-Fulker extremes analyses like "Thinking positively: The genetics of high intelligence", Shakeshaft et al 2015 aren't consistent with the (relatively) high end being due to rare variants (but are consistent with the low end being due to rare variants) and current attempts to find rare variants enriched in the very high IQ with large effect sizes have turned up nothing: "A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence", Spain et al 2015. There is also an autism heritability observed in the GCTAs/LD score regression using only common SNPs (>=1% population frequency), along with a positive autism/intelligence genetic correlation, which undermines that idea.

My speculation at this point, based on all the genetic correlations with intelligence which have piled up and the current trend in brain imaging studies finding brain volume/thickness, global connectivity, white-matter integrity, and connection speed to be the best predictors of intelligence, is that Spearman's law of diminishing returns is due to intelligence reflecting a bottleneck between all the regions of the brain communicating to solve problems: as global communication becomes closer to optimal due to better health & development, individual specialized brain regions start to become the bottleneck to higher performance, shrinking the g factor.

Comment author: gjm 11 October 2016 03:10:30PM 0 points [-]

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Comment author: username2 05 October 2016 06:16:23PM *  7 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

None of this is important of course because, as is usual for politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and sensible analysis of the actual evidence would be giving aid and comfort to the hated enemy.

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 7 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least more than any other easy to implement change.) I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers or even old timers coming back to the site, you want them to see the best links first.

Comment author: CellBioGuy 25 September 2016 06:18:03AM *  7 points [-]

Astrobiology bloggery got interrupted by a SEVERE bout of a sleep disorder, developing systems to measure metabolic states of single yeast cells in order to freaking graduate soonish, and having a bit of a life for a while.

Astrobiology bloggery resumes within 1 week, with my blog moved from thegreatatuin.blogspot.com to thegreatatuin.wordpress.com, since Blogger is completely unusable when it comes to inserting graphs and the like. Dear gods I'm excited; the last year has seen a massive explosion in origin-of-life research and the study of certain outer solar system bodies, to the point that I'm pretty sure the metabolism of the last universal common ancestor has been figured out, and the origin of the ribosome (and therefore protein-coding genetics) as well.

Advice on running a personal WordPress account is welcomed.

Comment author: Manfred 21 September 2016 03:16:32AM *  7 points [-]

This is only true for simple systems - with more complications you can indeed sometimes deduce causal structure!

Suppose you have three variables: Utopamine concentration, smiling, and reported happiness. And further suppose that there is an independent noise source for each of these variables - causal nodes that we put in as a catch-all for fluctuations and external forcings that are hard to model.

If Utopamine is the root cause of both smiling and reported happiness, then the variation in happiness will be independent of the variation in smiling, conditional on the variation in Utopamine. But conditional on the variation in smiling, the variation in utopamine and reported happiness will still be correlated!

The AI can now narrow down the causal structure to two possibilities, and perhaps it can even figure out the right one if there's some time lag in the response and it assumes that causation goes forward in time.
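The conditional-independence argument above can be checked numerically. A minimal sketch, using only the standard library; the coefficients and variable names are made up for illustration, assuming Utopamine is the root cause of both smiling and reported happiness:

```python
import math
import random

random.seed(0)
n = 50_000

# Hypothetical causal model: utopamine -> smiling, utopamine -> happiness,
# each variable with its own independent noise source.
U = [random.gauss(0, 1) for _ in range(n)]
S = [0.8 * u + random.gauss(0, 1) for u in U]
H = [0.7 * u + random.gauss(0, 1) for u in U]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def residual(y, z):
    # Regress y on z by simple least squares and return the residuals.
    mz, my = sum(z) / len(z), sum(y) / len(y)
    beta = sum((a - mz) * (b - my) for a, b in zip(z, y)) / sum((a - mz) ** 2 for a in z)
    return [b - my - beta * (a - mz) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    # Correlation of x and y after removing the (linear) influence of z.
    return corr(residual(x, z), residual(y, z))

# Conditional on utopamine, smiling and happiness become ~independent:
print(partial_corr(S, H, U))   # near zero
# Conditional on smiling, utopamine and happiness stay clearly correlated:
print(partial_corr(U, H, S))   # remains clearly positive
```

Only the true root cause screens off the other two variables, which is what lets the observer discard the "smiling causes everything" structure from observational data alone.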

Comment author: gwern 19 September 2016 06:20:19PM *  7 points [-]

von Neumann was noted as being social and extraverted long before he began his lobbying and politicking, and was never described as a second Dirac, so I don't think he was simply acting out of expediency. If high intelligence enabled faking extraversion & social skills, which are useful in almost all contexts*, we would see a noted personality correlation with intelligence and increasing with intelligence, which we don't - extraversion is largely independent of IQ, it's Openness in the Big Five which correlates. High-functioning autistic people are also not noted for easily acquiring psychopath-level skills in imitating & manipulating without feeling.

* see for example the correlation of increasing extraversion with increasing lifetime income in the Terman semi-high IQ sample

Comment author: buybuydandavis 19 September 2016 03:51:50AM 6 points [-]

If your lunatic sensor didn't go off reading this, you should get it adjusted.

A funny comment at LW.

Even lunatics can be right.

Gwern said

The assumption here is that both the general population and elite professions are described by a normal distribution (N(100,15) and N(125,6.5), respectively)

Is it? I didn't see that assumption stated. Problem is, they didn't explicitly specify where they got their distributions. At least I don't see it.

Looking again at some of their conclusions in the preceding paragraph, it does look like they're assuming Gaussians based on the mean and SD of a small sample, then projecting that out to the tails. Clearly malpractice.

They don't come out and say it, but the "This means that" below shows that they are extrapolating to the tails.

This means that 95% of people in intellectually elite professions have IQs between 112 and 138, and 99.98% have IQs between 99 and 151.

Funny that an article talking about how hard it is to be smart can be so dumb.

Still, my question remains - is there real data out there to support the contention that P(elite career|IQ) has a local max and then decreases for higher IQ?
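For what it's worth, the quoted tail figures do fall straight out of gwern's inferred N(125, 6.5): "112 to 138" is just the mean plus or minus 2 SD, and "99 to 151" is plus or minus 4 SD. A quick stdlib sketch (the distribution parameters are the assumption here, not data):

```python
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

mu, sd = 125, 6.5  # the assumed elite-profession distribution

# "95% between 112 and 138" is mu +/- 2 sd:
p2 = norm_cdf(138, mu, sd) - norm_cdf(112, mu, sd)   # ~95.4%
# "99.98% between 99 and 151" is mu +/- 4 sd:
p4 = norm_cdf(151, mu, sd) - norm_cdf(99, mu, sd)    # ~99.99%
print(p2, p4)
```

Note the second figure actually comes out to about 99.99%, not 99.98%, which is consistent with the complaint that they are mechanically extrapolating a fitted Gaussian into tails where they have no data.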

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer offers you a look into your soul" and really liked it, but I don't have enough time to read all the posts on the blog.

Comment author: ChristianKl 07 October 2016 03:33:10PM 6 points [-]

Because the IRS isn't popular and it's not a good move for a politician to speak in favor of the IRS and advocate increase of IRS funding.

Comment author: gjm 06 October 2016 06:17:54PM -1 points [-]

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that was generally too expensive longer ago and that has become practical as a result of advances in hardware. (That's not quite fair. There's been plenty of progress in the design and training of these NNs, as a result of having fast enough hardware for them to be worth experimenting with.)

Comment author: ChristianKl 05 October 2016 09:00:42PM 6 points [-]

Our biosphere's junk DNA

Junk DNA generally doesn't survive that long on evolutionary timescales because there's nothing that prevents mutations from accumulating. It seems like a bad information storage system.
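A back-of-the-envelope sketch of why an unselected "message" in DNA degrades; the mutation rate, generation time, and message length are all assumed round numbers for illustration:

```python
from math import exp

MU_PER_BASE = 1e-8   # assumed mutations per base per generation
GEN_YEARS = 20       # assumed years per generation
MESSAGE_BP = 1000    # length of a hypothetical unselected message, in base pairs

def intact_fraction(years):
    """Probability that no base of the message has mutated after `years`,
    assuming no selection (Poisson approximation)."""
    generations = years / GEN_YEARS
    return exp(-MU_PER_BASE * generations * MESSAGE_BP)

for myr in (1, 10, 100):
    print(f"{myr:>3} Myr: {intact_fraction(myr * 1e6):.2e}")
```

Even under these mild assumptions, a kilobase message with no selective pressure is mostly intact after a million years but effectively guaranteed to be scrambled after a hundred million, which is the point about evolutionary timescales.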

Comment author: DanArmak 02 October 2016 07:38:28AM *  6 points [-]

I can't figure out how to edit the post description to include a summary paragraph. Help?

... Now the actual link is gone and I can't edit it back in! It's supposed to point here. Mods/admins, can you help? Here is a screenshot of what I see.

Comment author: CellBioGuy 01 October 2016 11:32:37PM *  6 points [-]

My favorite crazy unlikely idea about that is that the Paleocene-Eocene Thermal Maximum 50 megayears ago - a 200k year pulse of high CO2 levels and temperatures in which the CO2 was added over a timescale of less than 10k years (potentially much less) and had an isotopic composition consistent with having been liberated from biogenic deposits - could theoretically be explained by all the coal and oil deposits of Antarctica being burned followed by some positive feedbacks kicking in.

(Most land of Antarctica never having been investigated geologically in any detail at all due to being under kilometers of ice) (And Antarctica at that time being completely unglaciated and relatively temperate despite being where it is now by then) (And subsequent glaciation having scraped most of the surface clean of anything that was on it at the time)

We have an advantage in that we evolved in the tropics - you can take a tropical animal and keep it warm near the poles by wrapping it in clothes. It's much more difficult to take a cold-adapted polar animal and keep it alive in the tropics...

Comment author: hg00 28 September 2016 01:43:41AM *  6 points [-]

My understanding is that a USA programmer would start at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

A pessimistic starting salary for a competent US computer programmer is $60K and senior ones can clear $200K. $100K is a typical starting salary for a computer science student who just graduated from a top university (also the median nationwide salary).

In the US market, foreigners come work as computer programmers by getting H1B visas. The stereotypical H1B visa programmer is from India, speaks mostly intelligible English with a heavy accent, gets hired by a company that wants to save money by replacing their expensive American programmers, and exists under the thumb of their employer (if they lose their job, their visa is jeopardized). I think that the average H1B makes less money than the average American coder. It sounds to me like you'd be a significantly more attractive hire than a typical H1B--you're fluent in English, and you've made contributions to Scheme?

The cost of living in the US is much higher than the Philippines. Raising a family in Silicon Valley is notoriously expensive. Especially if you want your kids to go to a "good school" where they won't be bullied. I don't know what metro has the best job availability/cost of living/school quality tradeoff. It will probably be one of the cities that's referred to as a "startup hub", perhaps Seattle or Austin. If your wife is willing to homeschool, you don't have to worry about school quality.

You can dip your toes in Option 1 without taking a big risk. Just start applying to US software companies. They'll interview you via Skype at first, and if you seem good, the best companies will be willing to pay for your flight to the US to meet the team. To save time you probably want to line up several US interviews for a single visit so you can cut down on the number of flights. Here are some characteristics to look for in companies to apply to:

  • The company has a process in place for hiring foreigners.

  • The company is looking for developers with your skill set.

  • The company's developer team is "clued in". Contributing to Scheme is going to be a big positive signal to the right employer. You can do things like read the company engineering blog, use BuiltWith, look up the employees on LinkedIn to figure out if the company seems clued in. Almost all companies funded by Y Combinator are clued in. If your interviewer's response to seeing Scheme on your resume is "What is Scheme?", then you're interviewing at the wrong company and you'll be offered a higher salary elsewhere.

  • The company is profitable but not sexy. For example, selling software to small enterprises. (You probably don't want to work for a business that sells software to large enterprises, as these firms are generally not "clued in". See above.) Getting a job at a sexy consumer product company like Google or Facebook is difficult because those are the companies that everyone is applying to. You can interview at those companies for fun, as the last places you look at. And you don't want to apply for a startup that's not yet profitable because then you're risking your wife and kids on an unproven business. I'm not going to tell you how to find these companies--if you use the same methods everyone else uses to find companies to apply to, you'll be applying to the same places everyone else is.

Of course you'll be sending out lots of resumes because you don't have connections. Maybe experiment with writing an email cover letter very much like the post you wrote here, including the word "fucking". I've participated in hiring software developers before, and my experience is that attempts at formal cover letters inevitably come across as stuffy and inauthentic. Catch the interviewer's interest with an interesting email subject line+first few sentences and tell a good story.

Actually you might have some connections--consider reaching out to companies that are affiliated with the rationalist community, posting to the Scheme mailing list if that's considered an acceptable thing to do, etc.

Consider donating some $ to MIRI if my advice ends up proving useful.

Comment author: Elo 27 September 2016 02:36:16AM -2 points [-]

yes

Comment author: username2 22 September 2016 12:20:52AM *  6 points [-]

Have you ever taken Adderall? I greatly suspect you have not.

People who fight chronic akrasia because of various degrees of ADHD and related mental disorders have a different response to stimulants than "normal" individuals. For me, Adderall puts me into cool, calm, clear focus. The kind of productive mode of being that most people get into by drinking a cup of coffee (except coffee makes me jittery and unfocused). Being on Adderall is just... "normal." Indeed the first time I tried it I thought the dose was too low because I didn't feel a thing... until 8 hours later when I realized I was still cranking away good code and able to focus instead of my normal bouts of mid-day akrasia. I could probably count on my hands the number of times I had a full day of highly focused work without feeling stress or burn-out afterwards... now it's the new normal :)

For such people low-dose amphetamines don't provide any high, nor are they accompanied by some sort of berserker productivity binge like popular media displays. In the correct dosages they also don't seem to come with any addiction or withdrawal -- I go off of it without any problems, other than reverting to the normal, vicious cycles of distraction and akrasia. (This isn't just anecdotal data -- the incidence rate of Adderall addiction among those following the prescribed plan is lost in the background noise of people who are abusing in these trials.)

Honestly, see a psychiatrist that specializes in these things and talk to them about your inability to focus, your history of trouble in completing complex, long tasks, how this is affecting your career and personal growth goals, etc. Be honest about your shortcomings, and chances are they will work with you to find a treatment plan that truly helps you. You're not manipulating anybody.

Seriously, ADHD is a real mental disorder. Your first step should be to recognize it as such, and accept the fact that you might actually have a real medical condition that needs treatment. You're not manipulating the system, you're exactly the kind of person the system is trying to help! Prescription drugs are for more than just people who hear voices...

Comment author: WhySpace 22 September 2016 12:10:47AM 6 points [-]

Truth is not what you want it to be;

it is what it is,

and you must bend to its power or live a lie.

- Miyamoto Musashi

Comment author: Douglas_Knight 19 September 2016 06:34:47PM *  6 points [-]

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this would mainly be noise, if he correctly reported the results, which he doesn't.

Second, there are some careful studies of high IQ (SMPY etc) by taking the well designed SAT test, which doesn't have a very high ceiling for adults and giving it to children below the age of 13. By giving the test to representative samples, they can well characterize the threshold for the top 3%. Using self-selected samples, they think that they can characterize up to 1/10,000. In any event, within the 3% they find increasing SAT score predicts increasing probability of accomplishments of all kinds, in direct contradiction of these claims.

Comment author: James_Miller 19 September 2016 04:30:35AM 6 points [-]

My reading of the behavioral genetics literature is that high intelligence being driven by rare autism variants is looking unlikely.

I haven't looked at this literature, but people with autism and very high IQs might be able to fake being neurotypical. As Steve Hsu told me, we don't know if von Neumann had a normal personality because he certainly had the intelligence to fake being normal if he felt this suited his interests.

Comment author: Dagon 19 September 2016 02:17:43AM 4 points [-]

Yikes. If your lunatic sensor didn't go off reading this, you should get it adjusted.

From a theoretical standpoint, democratic meritocracies should evolve five IQ defined 'castes', The Leaders, The Advisors, The Followers, The Clueless and The Excluded.

If that doesn't bother you, notice that this guy is putting a lot of weight on really simplistic statistics about the edge cases (the half-percent or less of the population which is very smart and/or is "successful in" one of his preferred "intellectually elite professions"). Oh, I see gwern has already commented on this.

Basically, this is a lovely irony of a presumed-high-IQ author jumping to a pretty ridiculous conclusion because he's not willing/able to try to dissolve his questions and do the hard work to be rigorous in his research.

Comment author: DanArmak 13 October 2016 11:19:20PM 4 points [-]

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older, women who are less comfortable talking to computers to be better suited dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.

Comment author: ChristianKl 10 October 2016 12:53:15PM 5 points [-]

Nothing. I don't think facebook membership counts are a good measurement.

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI will be created by Google, and that it will happen in next 5 years, what should we do?

Comment author: ChristianKl 08 October 2016 08:59:58PM 5 points [-]

I think we discussed this previously on LW. In general the argument isn't convincing in his case.

Gilead made $20 billion with a drug that cures one virus. If a pharma company thought that his approach had a 10% chance of working to cure all viruses, spending $100 million or more would be very interesting for traditional pharma companies under the current incentive scheme.

Comment author: CarlShulman 07 October 2016 12:19:07AM 5 points [-]

Primates and eukaryotes would be good.

Comment author: Houshalter 06 October 2016 06:06:13PM *  5 points [-]

I think it's well within the realm of possibility it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started. It was only the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.

I recall notable researchers like Hinton making predictions that "X will take 5 years" and it being accomplished within 5 months. Go is a good example. Even a year ago, I think many experts thought it would be beaten in 10 years, but not many thought it would be beaten by 2016. In 2010 machine vision was so primitive that it was a joke how far AI had to come.


In 2015 the best machine vision systems exceeded humans by a significant amount at object recognition.

Google recently announced a neural net chip that is 7 years ahead of Moore's law. Granted only in terms of power consumption, and it only runs already trained models. But nevertheless it is an example of the kind of sudden leap forward in ability. Before that Google started using farms of GPUs that are hundreds of times larger than what university researchers have access to.

That's just hardware though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms. Tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution can develop human brains by just some stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advancement made. Like recently, Google's synthetic gradient paper or hypernetworks.

I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well studied benchmarks like imagnet. Machine vision is very interesting of course. But at some point the improvements they are making don't generalize to other tasks. But that is starting to change, as I mentioned in my above comment. Deepmind is focusing on playing games like starcraft. This requires more focus on planning, recurrency, and reinforcement learning. There is more focus now on natural language processing, which also involves a lot of general intelligence features.

Comment author: Lumifer 06 October 2016 05:07:42PM *  4 points [-]

What is the best source for this in your view?

The raw data is plentiful -- look at any standardized test scores (e.g. SAT) by race. For a full-blown argument in favor see e.g. this (I can't check the link at the moment, it might be that you need to go to the Wayback Machine to access it). For a more, um, mainstream discussion see Charles Murray's The Bell Curve. Wikipedia has more links you could pursue.

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

My view is that history is important and that outcomes are path-dependent. Slavery and segregation are crucial parts of the history of American blacks.

open to learning

Your social circles might have a strong reaction to you coming to anything other than the approved conclusions...

Comment author: gwern 05 October 2016 09:19:10PM *  5 points [-]

Lots of other problems with it too. Why is there any last-universal-common-ancestor in this scenario? You would want to drop a full ecosystem with millions of different organisms, each with different FEC (forward error correction) shards of data. If you can deliver some bacteria to a virgin planet, you can deliver multiple kinds of bacteria, not just one. Yet, genetics finds that there's a LUCA (not that much of LUCA survives in current genomes).

Comment author: Lumifer 05 October 2016 06:02:19PM 2 points [-]

Boo politics discussion during the pre-election madness.

Comment author: moridinamael 03 October 2016 02:14:56PM 5 points [-]

Depends on in what way you're having trouble with it. If you need to interact with lots of people in whatever context, I find that taking an initial tone of mildly self-deprecating humor helps smooth things out. If you're the first one to mock yourself, it releases any tension that might be in the air. But then, you should let go of the self-deprecation before it starts to suggest actual low self-confidence.

It can also be good to formulate a pithy explanation for why you don't have the skill, so that you can casually explain the situation without bogging people down. "There weren't any swimming pools near where I grew up." Something short and simple, even if it leaves out important biographical details.

In the vast majority of cases, people are too involved in their own business to even think about you. If I see an adult swimming really badly, I just assume that nobody ever taught them to swim, which is a completely value-neutral assessment, and then continue on with whatever I was thinking about. I recently took a handful of jiu-jitsu lessons and was obviously as useless as a newborn kitten, but I don't really need to offer any kind of expository explanation for this lack of skill, because "just started learning" is a fully self-contained explanation.

Comment author: CellBioGuy 02 October 2016 08:05:25PM *  5 points [-]

In the hypothetical scenario in which there was something to find in Antarctica in the first place, given the thorough scraping the continent has gotten for 20+ megayears by kilometers-deep glaciers you can't expect to find much at all. The areas not covered by glaciers are generally mountains which erode - their modern exposed surfaces would have been quite deep underground at the time.

The sorts of things you could actually expect to find would be more along the lines of missing coal seams, long rods of long-ago-oxidized steel poking vertically through multiple strata into areas that would have held petroleum deposits at the time, really deep coal seams turned to ash in situ by underground gasification, hydrothermal features that concentrate copper and silver ore capped by weird craters that obliterate where the highest concentrations would have been with a big pile of copper-depleted gravel nearby. Perhaps odd isotope ratios in a very narrow sediment band if nuclear reactions were ever explored. The ecological effects you would expect on the continent are kind of overshadowed in the ocean sediment record by the worldwide climate event that the PETM represents (6C temperature spike, deep ocean hypoxia, phytoplankton death and repopulation).

It's worth noting that there are probably particular clades that are predisposed to being smart. There's a fascinating book out by Dr. Herculano-Houzel ("The Human Advantage") detailing recent work over the last decade examining brain structure across the mammals. She and her group found something fascinating: neural scaling laws differ from clade to clade. Mammals in general follow a scaling law in which a brain 10x as large has only about 4x as many neurons, because the average neuron increases in volume (partially due to longer connecting fibers). Primates break this, though - all primate neurons are about the same size, which is remarkably small, the same size as those of a mammal about 10 grams in mass. A large primate brain is MUCH more powerful than a generic mammal brain of the same mass. Their recent work since that book came out indicates that birds also break that scaling law and have marvelously efficient brains - all bird neurons are approximately the same size, like the primates', but what's more, that size is about one-sixth that of primate neurons. It's an interesting question whether this would also have applied to dinosaurs, their close relatives, who were nonetheless not under crazy selective pressure for low weight.
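The "10x the mass, only 4x the neurons" rule amounts to a power law N ∝ M^k with k = log10(4) ≈ 0.6, while "neurons stay the same size" corresponds to k = 1. A quick sketch of what that implies (the exponents come from the ratios in the comment; the 100x brain comparison is an arbitrary example):

```python
import math

# Generic mammal rule from the comment: a 10x larger brain has only ~4x
# as many neurons, i.e. N scales as M^k with k = log10(4).
k_mammal = math.log10(4)   # ~0.602
k_primate = 1.0            # primate neurons stay the same size

def neuron_ratio(mass_ratio, k):
    """How many times more neurons a brain mass_ratio times larger has."""
    return mass_ratio ** k

# A brain 100x larger: generic mammals get ~16x the neurons,
# while primates get the full 100x.
print(round(neuron_ratio(100, k_mammal)), round(neuron_ratio(100, k_primate)))  # 16 100
```

This makes the comment's point concrete: the gap between clades widens rapidly with brain size.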

Comment author: CellBioGuy 01 October 2016 11:41:19PM *  5 points [-]

Worth noting:

Possibly indicating that the end of the last glaciation, rather than new invention, drove the more or less simultaneous large-scale agricultural transitions that occurred all across the Old and New World ~10k years ago.

Comment author: Elo 30 September 2016 12:50:31AM -2 points [-]

cat weight might be relevant, cat current age, cat body shape (fat/skinny), description of cat's response to catnip,

In response to Linkposts now live!
Comment author: Gram_Stone 28 September 2016 04:13:17PM 5 points [-]

Thank you James Lamine, Vaniver, and Trike Apps.

I also wanted to quote something Vaniver has said, but that was unfortunately downvoted below the visibility threshold at the time:

I've pushed for doing things the right way, even if it takes longer, rather than quicker attempts that are less likely to work.

Comment author: Alejandro1 26 September 2016 09:37:33PM 5 points [-]

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

Comment author: Houshalter 26 September 2016 05:08:24PM 5 points [-]

"Base rate" is statistics jargon. I would ask something like "which disease is more common?" And then if they still don't understand, you can explain that its probably the disease that is most common, without explaining Bayes rule.

Comment author: 9eB1 26 September 2016 03:06:19PM 5 points [-]

I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.

  1. It's free.

  2. It has videos for every exercise.

  3. It is a clear and complete program that actually allows for progression (the convict conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.

  4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.

The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.

Comment author: Elo 21 September 2016 11:28:58PM -2 points [-]

have updated the list of common human goals.
http://lesswrong.com/r/discussion/lw/mnz/list_of_common_human_goals/

social looked like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy?

and now looks like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy? Do you seek opportunities to have soul-to-soul experiences with other people? Authentic connection?

From feedback from someone who felt it wasn't covered and had a strong goal of authentic connection.

http://bearlamp.com.au/list-of-common-human-goals/

In response to Against Amazement
Comment author: moridinamael 20 September 2016 08:07:28PM *  5 points [-]

There are other emotional reactions which should register as confusion but don't.

Imagine a smart person who sees asphalt being deposited to pave a road. "How disgusting," they think. "Surely our civilization can think of something better than this." They spend a few minutes ruminating on various solutions for road construction and maintenance that would obviously be better than asphalt and then get distracted and never think about it again.

They thus manage to never realize that asphalt is a fantastic solution to this problem, that stacks of PhDs have been written on asphalt chemistry and thermal processes, that it's a highly optimized, cheap, self-healing material, that it's the most economical solution by leaps and bounds. All they noticed was disgust based purely on error and ignorance.

Any thought of the form "That's stupid, I can easily see a better way" should qualify as confusion.

Comment author: gwern 20 September 2016 07:36:41PM 5 points [-]

You did read the rest of the article right, perhaps looked at the bibliography with over a dozen references?

Checkmate atheists.

(More seriously, you should've posted that to the cognitive science stack where there might actually be someone who knows something about IQ or gifted & talented education.)

Comment author: Throawey 20 September 2016 04:54:25AM *  5 points [-]

For a while now, I have been working on a potentially impactful project. The main limiting factor is my own personal productivity- a great deal of the risk is frontloaded in a lengthy development phase. Extrapolating the development duration based on progress so far does not yield wonderful results. It appears I should still be able to finish it in a not-absurd timespan, it will just be slower than ideal.

I've always tried to improve my productivity, and I've made great progress compared to ten or even five years ago, but at this point I've picked most of the standard low hanging fruit. I've already fiddled with some extremely easy and safe kinda-nootropics - melatonin, occasional caffeine pills - but not things like modafinil or amphetamines, or some of the less studied options.

And while thinking about this today, I decided to just run some numbers on amphetamines. Based on my current best estimates of market realities and the potential success and failure cases of the project, assuming amphetamines could improve my productivity by 30% on average, the expected value of taking amphetamines for the duration of development comes out to...

...a few hundred human lives.

And, in the best-reasonable case scenario, a lot more than that. This wasn't really unexpected, but it's surprisingly the first time I actually did the math.

So I imagine the God of Dumb Trolley Problems sits me down for a thought experiment and explains: "In a few years, there will be a building full of 250 people. A bomb will go off and kill all of them. You have two choices." The god leans in for dramatic effect. "Either you can do nothing, and let all of them die... or..." It lowers its head just enough for shadows to cast over its features... "You take this low, safe dose of Adderall for a few years, and the bomb magically gets defused."

This is not a difficult ethical problem. Even taking into account potential side effects, even assuming the amphetamines were obtained illegally and so carried legal liability, this is not a difficult ethical problem. When I look at this, I feel like the answer of what I should do is blindingly obvious.

And yet I have a strong visceral response of "okay yeah sure but no." I assume part of this is fairly extreme risk aversion to the idea of getting anything like amphetamines outside of a prescription. Legal trouble would be pretty disastrous, even if unlikely. And part of me is spooked about doing something like this without expert oversight.

But why not just try to get an actual prescription? For this, or some other advantageous semi-nootropic, at least. Once again, I just get a gross feeling about the idea of trying to manipulate the system. How about if I just explain the situation in full, with zero manipulation, to a sympathetic doctor? The response from my gut feels like a blank "... no."

So basically, I feel stuck. Part of me wants to recognize the risk aversion as excessive, and suggests I should at least take whatever steps I can safely. The other part is saying "but that is doing something waaaay out of the ordinary and maybe there's a reason for that that you haven't properly considered."

I am not even sure what I want to ask with this post. I guess if you've got any ideas or insights, I'd like to hear them.

Comment author: turchin 19 September 2016 11:23:14PM 5 points [-]

These two lines seem to me contradictory. It is not clear to me whether I should upload you or preserve your brain.

  • I don't understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven't solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.

  • I understand that all choices contain risk. However, I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a physical brain. I also suspect that there will be certain advantages to be one of the first minds turned into software, and certain disadvantages. In order to try to gain those advantages, and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:

Comment author: moridinamael 19 September 2016 02:12:28AM 4 points [-]

One interpretation I've seen is that ~130 is about as high as a human brain can get while still using basically the same architecture as an IQ 100 brain. The further beyond that you get, the more you're using significantly different systems. These differences may tend to be autism-related, such that the higher IQ comes at the expense of impairments.

Comment author: ChristianKl 13 October 2016 01:10:09PM 2 points [-]

tl;dr Obama doesn't really know what he's talking about but tries to use talking points to make sense of the new project.

Comment author: username2 11 October 2016 08:45:03PM 3 points [-]

"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia

The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number" but the crux is in the word "good" which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.

Comment author: SithLord13 11 October 2016 06:50:06PM 4 points [-]

Could chewing gum serve as a suitable replacement for you?

Comment author: Houshalter 10 October 2016 08:07:29PM *  4 points [-]

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work on NN based approaches. For instance my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short term ones. Or even AIs that don't have goals at all and just make predictions. E.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that - e.g. whether they would just tweak their own RAM to set reward = +Inf and then do nothing else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: ChristianKl 10 October 2016 02:02:25PM 4 points [-]

Get employed by Google.

Comment author: turchin 10 October 2016 12:46:06PM 3 points [-]

There are five times as many members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9,800) as in the group "Existential risks" (1,880). What should we conclude from this?

Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.

Comment author: waveman 07 October 2016 09:51:03PM 3 points [-]

Estimated cost of tax evasion per year to the Federal gov is 450B.

Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?

Comment author: Lumifer 05 October 2016 09:00:47PM 3 points [-]

You asked why is "the incidence of police encounters with blacks elevated". This is a direct answer.

If you want to know the reasons for different crime rates, this is going to get long and complicated.

Comment author: Lumifer 05 October 2016 08:15:57PM 3 points [-]

What are the reasons?

For example, there were 4,636 murders committed by white people and 5,620 murders committed by black people in 2015 (source). On a per-capita basis, that works out to a by-white murder rate of about 2.2 per 100,000 and a by-black murder rate of about 16.2 per 100,000.
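The per-capita arithmetic can be checked directly; the population figures below are rough 2015 US estimates assumed for illustration, not taken from the comment:

```python
# Rough 2015 US population estimates (assumed for illustration)
white_pop = 211_000_000
black_pop = 34_700_000

# Murder counts cited in the comment
white_murders = 4_636
black_murders = 5_620

# Rate per 100,000 population
white_rate = white_murders / white_pop * 100_000
black_rate = black_murders / black_pop * 100_000

print(round(white_rate, 1), round(black_rate, 1))  # 2.2 16.2
```

The cited rates only follow given population denominators of roughly these sizes, which is worth keeping explicit when quoting per-capita figures.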

Comment author: James_Miller 04 October 2016 11:30:04PM 4 points [-]

I'm extremely interested in the last three of these especially the Fermi paradox one. Great essays.

Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea that an AI will interpret goals so literally without modelling a human mind constitutes an "autistic AI" and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pearce)

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies with no subjective experience, but that does seem implied.

Comment author: Houshalter 03 October 2016 07:26:35PM *  4 points [-]

This seems as useful as telling depressed people to stop being depressed. Fear of embarrassment is one of the strongest drives humans have. Probably appearing to be a fool in the ancestral environment led to fewer mates or less status. It's not something you can just voluntarily turn off or push through easily.

The best strategy, I think, would be to work around it. Convince your brain that it's not embarrassing. Or that no one cares. Or pretend no one is watching. Or do it around supportive friends.

Comment author: username2 03 October 2016 12:08:16PM 4 points [-]

How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.

Comment author: DanArmak 02 October 2016 07:43:42AM *  4 points [-]

Missing link: should point to Science Alert.

Article says the Chinese State Food and Drug Administration (CFDA or SFDA) conducted an internal review of the drugs currently pending approval, and found that in more than 80%:

the data failed to meet analysis requirements, were incomplete, or totally non-existent. [Also], many clinical trial outcomes were written before the trials had actually taken place. [...] The report found that pretty much everyone involved was guilty of some kind of malpractice or fraud. [...] even third party independent investigators tasked with inspecting clinical trial facilities are mentioned in the report as being "accomplices in data fabrication due to cut-throat competition and economic motivation".

There's no matching news item on the SFDA site; it probably doesn't have an official version in English. The article linked relies on this and that.

Compare and contrast with Scott Alexander's idea of making the American FDA regulate less. Two ends of a spectrum? Different cultures and markets leading to different outcomes? Similar situations but better hidden in the American case?

Comment author: Fluttershy 02 October 2016 12:40:42AM 4 points [-]

The most striking problem with this paper is how easy all of the tests of viability they used are to game. There are a bunch of simple tests you can do to check for viability, and it's fairly common for non-viable tissue to produce decent-looking results on at least a couple, if you do enough. (A couple of weeks ago, I was reading a paper by Fahy which described the presence of this effect in tissue slices.)

It may be worth pointing out that they only cooled the hearts to -3 C, as well.

Comment author: gwern 01 October 2016 05:03:00PM 4 points [-]

Everything is heritable:

Politics/religion:

AI:

Statistics/meta-science/mathematics:

Psychology/biology:

Technology:

Economics:

Philosophy:

Fiction:

Comment author: RainbowSpacedancer 29 September 2016 11:45:43AM 4 points [-]

When pushed on why he is out interviewing people, Anthony Magnabosco responds, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of truth, but his root goal is to change minds. If obtaining the truth were his primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising. Questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.

Honesty and effectiveness appear to be conflicting goals in street epistemology.

In response to Linkposts now live!
Comment author: VipulNaik 28 September 2016 10:56:22PM *  4 points [-]

I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.

Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.

I also checked the post after that to see if the edit still went through, and it didn't. In other words, my edit did not get saved.

Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.

Comment author: Houshalter 28 September 2016 05:33:31PM 4 points [-]

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be realized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)

Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

Comment author: vallinder 27 September 2016 05:28:28PM 4 points [-]

I don't think it's fair to say that "nobody understood induction in any kind of rigorous way until about 1968." The linked paper argues that Solomonoff prediction does not justify Occam's razor, but rather that it gives us a specific inductive assumption. And such inductive assumptions had previously been rigorously studied by Carnap among others.

But even if we grant that assumption, I don't see why we should find it surprising that science made progress without having a rigorous understanding of induction. In general, successfully engaging in some activity doesn't require having a rigorous understanding of that activity, and making inductive inferences is something that comes very natural to human beings.

Moreover, it seems that algorithmic information theory has (at best) had extremely limited impact on actual scientific practice in the decades since the field was born. So even if it does constitute the first rigorous understanding of induction, the lesson seems to be that scientific progress does not require such an understanding.

Comment author: username2 27 September 2016 09:04:33AM 3 points [-]

tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event

Is that actually true? I've lived through many US presidential eras, including multiple ones defined by "change." Nothing of consequence really changed. Why should this be any different? (Rhetorical question, please don't reply as the answer would be off-topic.)

Consider the possibility that if you want to be effective in your life goals (the point of rationality, no?) then you need to do so from a framework outside the bounds of political thought. Advanced rationalists may use political action as a tool, but not for the search of truth as we care about here. Political commentary has little relevance to the work that we do.

Comment author: MattG2 20 September 2016 03:52:15PM *  4 points [-]

Let's say I have a set of students, and a set of learning materials for an upcoming test. My goal is to run an experiment to see which learning materials are correlated with better scores on the test via multiple linear regression. I'm also going to make the simplifying assumption that the effects of the learning materials are independent.

I'm looking for an experimental protocol with the following conditions:

  1. I want to be able to give each student as many learning materials as possible. I don't want a simple RCT, but a factorial experiment where students get many materials and the statistics tease out the individual effects via linear regression.

  2. I have a prior about which learning materials will do better; I'd like to use this prior by initially distributing those materials to more students.

  3. (Bonus) Students are constantly entering this class. I'd love to be able to do some multi-armed bandit thingy where, as I get more data, I continually update this prior.

I've looked at most of the links going from https://en.wikipedia.org/wiki/Optimal_design but they mostly show the mathematical interpretation of each method, not a clear explanation of in which conditions you'd use that method.

Thanks!

Comment author: Gleb_Tsipursky 20 September 2016 12:46:22PM -2 points [-]

I agree that it does produce disassociation, but I don't think, for me, it's about disassociating from emotions. It's a disassociation from an identity label. It helps keep my identity small in a way that speaks to my System 1 well.

In response to Seven Apocalypses
Comment author: James_Miller 20 September 2016 03:52:21AM *  4 points [-]

"A Disneyland with no children" apocalypse where optimization competition eliminates any pleasure we get from life.

A hell apocalypse where a large number of sentient lifeforms are condemned to very long term suffering, possibly in a computer simulation.

View more: Next