If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


The sequences eBook, Rationality: From AI to Zombies, will most likely be released early in the day on March 13, 2015.

This has been published! I assume a Main post on the subject will be coming soon so I won't create one now.

Unless I am much mistaken, the Pebblesorters would not approve of the cover :)

0Rob Bensinger
And by March 13 I mean March 12.

Google Ventures and the Search for Immortality: Bill Maris has $425 million to invest this year, and the freedom to invest it however he wants. He's looking for companies that will slow aging, reverse disease, and extend life.

http://www.bloomberg.com/news/articles/2015-03-09/google-ventures-bill-maris-investing-in-idea-of-living-to-500

2[anonymous]
You'd think that, having worked in a biomedical lab at Duke, he'd know better than to say things like: “We actually have the tools in the life sciences to achieve anything that you have the audacity to envision”
3JoshuaZ
Yes, but he presumably also knows what sort of things one might say if one wants other investors to join in on a goal.

I remember reading an article here a while back about a fair protocol for making a bet when we disagree on the odds, but I can't find it. Anyone remember what that was? Thanks!

[-]badger100

From the Even Odds thread:

Assume there are n people. Let S_i be person i's score for the event that occurs according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i − (1 / (n − 1)) · Σ_{j ≠ i} S_j

(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
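For concreteness, here is a minimal sketch of the scheme described above, assuming a binary event and the quadratic scoring rule; the function names are illustrative, not from the original thread.

```python
def quadratic_score(p, occurred):
    """Quadratic (Brier-style) proper score for a binary event.
    p is the reported probability that the event occurs."""
    p_realized = p if occurred else 1 - p
    return 2 * p_realized - (p ** 2 + (1 - p) ** 2)

def even_odds_payments(probabilities, occurred):
    """T_i = person i's score minus the average score of everyone else."""
    scores = [quadratic_score(p, occurred) for p in probabilities]
    n = len(scores)
    return [s - (sum(scores) - s) / (n - 1) for s in scores]

# Example: two people disagree about an event that turns out to happen.
print(even_odds_payments([0.9, 0.6], occurred=True))  # roughly [0.3, -0.3]
```

Note that the payments always sum to zero, which is the budget-balance property mentioned above.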

7philh
http://lesswrong.com/lw/hpe/how_should_eliezer_and_nicks_extra_20_be_split/ ? edit: no, I don't think that's it. I think I do remember the post you're talking about, and I thought it included this anecdote, but this isn't the one I was thinking of. edit 2: http://lesswrong.com/lw/jgv/even_odds/ is the one I was thinking of.
1Paul Crowley
Great—thanks! (Thanks to badger below too)
6gwern
Fulltext: http://sci-hub.org/downloads/86db/10.1016@j.intell.2015.02.008.pdf / https://www.dropbox.com/s/jumlg8hyiwryktx/2015-teovanovic.pdf
2JoshuaZ
That is highly inconvenient. It means that teaching people to deal with cognitive biases is likely not going to have any magic silver bullet. Also, this is further evidence for the already fairly strong thesis that intelligence and skill at rational thinking are not the same thing.

I'm toying with the idea of programming a game based on The Murder Hobo Investment Bubble. The short version is that Old Men buy land infested with monsters, hire Murder Hobos to kill the monsters, and resell the land at a profit. I want to make something that models the whole economy, with individual agents for each Old Man, Murder Hobo, and anything else I might add. Rather than explicitly program the bubble in, it would be cool to use some kind of machine learning algorithm to figure everything out. I figure they'll make the sorts of mistakes that lead ... (read more)

2Lumifer
Is it a game or is an economic simulation? If a game, what does the Player do?
2DanielLC
The player can be an Old Man or a Murder Hobo. They make the same sort of choices the computer does, and at the end they can see how they compare to everyone else.
0Emile
Are you missing a word there?
0DanielLC
Fixed. I messed up the link.
0g_pepper
You could charge a periodic "property tax"; that way, the longer a player holds on to a property, the more it costs the player.
0DanielLC
That would make it even more complicated.

Does anyone have any good web resources on how to be a good community moderator?

A friend and I will shortly be launching a podcast and want to have a Reddit community where listeners can interact with us. He and I will be the forum's moderators to begin with, and I want to research how to do it well.

2kpreid
Here is a thing at Making Light. There are probably other relevant posts on said blog, but this one seems to have what I consider the key points. I'll quote some specific points that might be more surprising:
0stellartux
I don't know of any resources, but I moderated a community once, did absolutely no research, and everything turned out fine. There were about 15 or so core members in the community and maybe a couple of hundred members in total.

My advice is to make explicit rules about what is and is not allowed in the community, and try to enforce them as evenly as possible. If you let people know what's expected and err on the side of forgiveness when it comes to rule violations, most people in the community will understand and respect that you're just doing what's necessary to keep the community running smoothly.

We had two resident trolls who would just say whatever was the most aggravating thing they could think of, but after quite a short time people learned that that was all they were doing and they became quite ineffective. There was also a particular member whom everyone in the community seemed to dislike and who was continually the victim of quite harsh bullying from most of the other people there. Again, the hands-off approach seemed to work best: while most people were mean to him, he often antagonised them and brought more attacks onto himself, so I felt it wasn't necessary for me to intervene, as he was making everything worse for himself. So yeah, I recommend being as hands-off as possible when it comes to mediating disputes, only intervening when absolutely necessary.

That being said, when moderating, you are usually in a position to set up games and activities in a way that the rest of the community would be less inclined to do, or would not have the moderator powers necessary to set up. If I were you, I'd focus most of my energy on setting up ways for the community to interact constructively; it will most likely lead to fewer disputes to mediate, as people won't start arguments for the sake of having something to talk about.

I'm thinking about starting a new political party (in my country getting into parliament as a new party is e̶a̶s̶y̶ not virtually impossible, so it's not necessarily a waste of time). The motivation for this is that the current political process seems inefficient.

Mostly I'm wondering if this idea has come up before on lesswrong and if there are good sources for something like this.

The most important thing is that no explicit policies are part of the party's platform (i.e. no "we want a higher minimum wage"). I don't really have a party program ye... (read more)

4IlyaShpitser
http://www.amazon.co.uk/Swarmwise-Tactical-Manual-Changing-World/dp/1463533152

http://www.smbc-comics.com/?id=2710

I like the first link because it is at least trying to move past feudalism as an organizing principle. The second link is about the fact that it is hard to make groups of people act like we want (because groups of people operate under a set of poorly understood laws; likely these laws are cousins to things like natural selection in biology). Public choice folks like to study this stuff, but it seems really really hard.
5badger
A pdf copy of Swarmwise from the author's website.
3gjm
You may be right, and I don't know the details of your situation or your values, but on the face of it that inference isn't quite justified. It depends on what getting into parliament as such actually achieves. E.g., I can imagine that in some countries it's easy for someone to start a new party and get into parliament, but a new one-person party in parliament has basically zero power to change anything. (It seems like there must be some difficulty somewhere along the line, because if getting the ability to make major changes in what your country does is easy then everyone will want to do it and it will get harder because of competition. Unless somehow this is a huge opportunity that you've noticed and no one else has.) I like the idea of a political party that has meta-policies rather than object-level policies, but it sounds like a difficult thing to sell to the public in sufficient numbers to get enough influence to change anything.
4hydkyll
OK, when I said "easy" I exaggerated quite a bit (I edited it in the original post). More accurate would be: "in the last three years at least one new party became popular enough to enter parliament" (the country is Germany and the party would be the AfD; before that, there was the German Pirate Party). Actually, to form a new party, signatures from at least 0.1% of all eligible voters are needed. I also see that problem; my idea was to try to recruit some people on German internet fora and, if there is not enough interest, drop the idea.
0MrMind
What about the process of gaining consensus? I find it hard to believe that lay people would be attracted by meta-values alone.
0Evan_Gaensbauer
Have you floated this idea with anyone else you know in Germany? I'm not asking if you're ready and willing to get to the threshold of 0.1% of German voters (~7000 people). I'm just thinking more feedback, and others involved, whether one or two, might help. Also, you could just talk to lots of people in your local network about it. As far as I can tell, people might be loath to make a big commitment like helping you launch a party, but are willing to do trivial favors like putting you in touch with a contact who could give you advice on law, activism, politics, dealing with bureaucracy, finding volunteers, etc. Do you attend a LessWrong meetup in Germany? If so, float this idea there. At the meetup I attend, it's much easier to get quick feedback from (relatively) smart people in person, because communication errors are reduced and it takes less time to relay and reply to ideas than over the Internet. Also, in person it's more difficult to skip over or ignore ideas than in an Internet thread.

On MIRI's website at https://intelligence.org/all-publications/, the link to Will Sawin and Abram Demski's 2013 paper goes to https://intelligence.org/files/Pi1Pi2Probel.pdf, when it should go to http://intelligence.org/files/Pi1Pi2Problem.pdf

Not sure how to actually send this to the correct person.

There should be some kind of penalty on Prediction Book (e.g. not being allowed to use the site for two weeks) for people who do not check the "make this prediction private" box for predictions that are about their personal life and which no one else can even understand.

0MathiasZaman
Are there ways to share private predictions?

Basic question about bits of evidence vs. bits of information:

I want to know the value of a random bit. I'm collecting evidence about the value of this bit.

First off, it seems weird to say "I have 33 bits of evidence that this bit is a 1." What is a bit of evidence, if it takes an infinite number of bits of evidence to get 1 bit of information?

Second, each bit of evidence gives you a likelihood multiplier of 2. E.g., a piece of evidence that says the likelihood is 4:1 that the bit is a 1 gives you 2 bits of evidence about the value of that bit. ... (read more)

3PhilGoetz
I think I was wrong to say that 1 bit of evidence = likelihood multiplier of 2. If you have a signal S, and P(x|S) = 1 while P(x|~S) = .5, then the likelihood multiplier is 2 and you get 1 bit of information, as computed by KL-divergence. That signal did in fact require an infinite amount of evidence to make P(x|S) = 1, I think, so it's a theoretical signal found only in math problems, like a frictionless surface in physics. If you have a signal S, and P(x|S) = .5 while P(x|~S) = .25, then the likelihood multiplier is 2, but you get only .2075 bits of information.

There's a discussion of a similar question on stats.stackexchange.com. It appears that the sum, over a series of observations x, of log(likelihood ratio), where the likelihood ratio is P(x | model 2) / P(x | model 1), approximates the information gain from changing from model 1 to model 2, but not on a term-by-term basis. The approximation relies on the frequency of the observations in the entire observation series being drawn from a distribution close to model 2.
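One way to reproduce the two numbers above (a sketch of my own, not from the comment) is to measure the information carried by the signal as the KL divergence, in bits, between the distribution over the bit given S and the distribution given ~S:

```python
import math

def kl_bits(p_given_s, p_given_not_s):
    """KL divergence, in bits, between the two distributions over the bit."""
    total = 0.0
    for p, q in ((p_given_s, p_given_not_s),
                 (1 - p_given_s, 1 - p_given_not_s)):
        if p > 0:
            total += p * math.log2(p / q)
    return total

print(kl_bits(1.0, 0.5))   # 1.0 bit: likelihood multiplier 2, full certainty
print(kl_bits(0.5, 0.25))  # ~0.2075 bits: same multiplier, much less information
```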
2Douglas_Knight
Yes, there are incompatible uses of the phrase "bits of evidence." In fact, the likelihood version is not compatible with itself: bits of evidence for Heads is not the same as bits of evidence against Tails. But still it has its place. Odds ratios do have that formal property. You may be interested in this wikipedia article. In that version, a bit of information advantage that you have over the market is the ability to add log(2) to your expected log wealth, betting at the market prices. If you know with certainty the value of the next coin flip, then maybe you can leverage that into arbitrarily large returns, although I think the formalism breaks down at this point.
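A minimal sketch of the log-wealth claim above (assumptions and function names mine): a Kelly bettor with belief q facing a market price p gains, in expectation, KL(q || p) in log wealth, which comes to one bit (log(2) in natural-log units) when you know the flip for certain and the market price is 1/2:

```python
import math

def expected_log_wealth_gain_bits(q, p):
    """Expected gain in log2 wealth for a Kelly bettor with belief q
    betting against a market price p on a binary event; equals KL(q || p)."""
    gain = 0.0
    if q > 0:
        gain += q * math.log2(q / p)
    if q < 1:
        gain += (1 - q) * math.log2((1 - q) / (1 - p))
    return gain

# Knowing the next flip for certain, against a fair market price of 0.5:
print(expected_log_wealth_gain_bits(1.0, 0.5))  # 1 bit, i.e. log(2) in nats
```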
2[anonymous]
Why does the likelihood grow exactly twice? (I'm just used to really indirect evidence, which is also seldom binary in the sense that I only get to see whole suites of traits, which usually go together but, in some obscure cases, vary in composition. So I guess I have plenty of C-bits that do go in B-bits that might go in A-bits, but how do I measure the change in likelihood of A given C? I know it has to do with d-separation, but if C is something directly observable, like biomass, and B is an abstraction, like species, should I not derive A (an even higher abstraction, like 'adaptiveness of spending early years in soil') from C? There are just so many more metrics for C than for B...) Sorry for the ramble, I just felt stupid enough to ask anyway. If you were distracted from answering the parent, please do.
1PhilGoetz
I don't understand what you're asking, but I was wrong to say the likelihood grows by 2. See my reply to myself above.
2[anonymous]
It seems weird to me because the bits of "33 bits" looks like the same units as the bit of "this bit", but they aren't the same. Map/territory. From now on, I'm calling the first, A-bits, and the second, B-bits. It takes an infinite number of A-bits to know with absolute certainty one B-bit. What were you expecting?

Since mild traumatic brain injury is sometimes an outcome of motor vehicle collision, it seems possible that wearing a helmet while driving may help to mitigate this risk. Oddly, I have been unable to find any analysis or useful discussion. Any pointers?

8polymathwannabe
Just some of the thousands available: http://www.ncbi.nlm.nih.gov/pubmed/25262400 http://www.ncbi.nlm.nih.gov/pubmed/24969842 http://www.ncbi.nlm.nih.gov/pubmed/24822010 http://www.ncbi.nlm.nih.gov/pubmed/24686160 http://www.ncbi.nlm.nih.gov/pubmed/24661125 http://www.ncbi.nlm.nih.gov/pubmed/24448470 http://www.ncbi.nlm.nih.gov/pubmed/24368380 http://www.ncbi.nlm.nih.gov/pubmed/24326016 http://www.ncbi.nlm.nih.gov/pubmed/24205441 http://www.ncbi.nlm.nih.gov/pubmed/24158210 http://www.ncbi.nlm.nih.gov/pubmed/24005027

A recent study looks at "equality bias": given two or more people, even when one is clearly outperforming the others, one is still inclined to see the people as nearer in skill level than the data suggests. This occurred even when money was at stake; people continued to act as if others were closer in skill than they actually were. (I strongly suspect that this bias may have a cultural aspect.) Summary article discussing the research is here. Actual study is behind a paywall here and a related one also behind a paywall here. I'm currently on vacation b... (read more)

4Douglas_Knight
The papers are here and here. In light of that, maybe there's no point in mentioning that PNAS is available at PMC after a delay of a few months.

A reporter I know is interested in doing an article on people in the cryonics movement. If people are interested, please message me for details.

[-][anonymous]30

Good news for the anxious: a simple relaxation technique once a week can have a significant effect on cortisol. http://www.ergo-log.com/cortrelax.html

"Abbreviated Progressive Relaxation Training (APRT) – on forty test subjects. APRT consists of lying down and contracting specific muscle groups for seven seconds and then completely relaxing them for thirty seconds, while focusing your awareness on the experience of contracting and relaxing the muscle groups.

There is a fixed sequence in which you contract and relax the muscle groups. You start with your ... (read more)

0Styrke
Google Image search result for "lower right arm"
1[anonymous]
I mean that it is missing from the list.

Can one of the people here who has admin or moderator privileges over at PredictionBook please go and deal with some of the recent spammers?

I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing, but I'm a bit unsure about its quality; that's why I would like to ask for feedback here before I post it on LessWrong.

Here’s a short summary:

According to research there are so-called "maximizers" who tend to search extensively for the optimal solution. Other people — "satisficers" — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:

Epistemology/Belief systems: Some people, one could describe them as epistemic max... (read more)

2Evan_Gaensbauer
Here are my thoughts having just read the summary above, not the whole essay yet. This sentence confused me. I think it could be fixed with some examples of what would constitute an instance of challenging the "existential status quo" in action. The first example I was thinking of would be ending death or aging, except you've already got transhumanists in there. Other examples might include:

* mitigating existential risks

* suggesting and working on civilization as a whole reaching a new level, such as colonizing other planets and solar systems

* trying to implement better designs for the fundamental functions of ubiquitous institutions, such as medicine, science, or law

Again, I'm just giving quick feedback. Hopefully you've already given more detail in the essay. Other than that, your summary seems fine to me.
0David Althaus
Thanks! And yeah, ending aging and death are some of the examples I gave in the complete essay.
1[anonymous]
And sometimes a satisficer acts as his image of a maximizer would, gets some kind of negative feedback, and either shrugs his shoulders and never does it again, or learns the safety rules and trains a habit of doing the nasty thing as a character-building experience. And other people may mistake him for a maximizer himself.

Apparently fist bumps are a much more hygienic alternative to the handshake. This has been reported e.g. here, here and here.

I wonder whether I should try to get this adopted as a greeting among my friends. It might also be an alternative to the sometimes awkward choice between handshake and hug (though this is probably a regional cultural issue).

And I wonder whether the LW community has an opinion on this and whether it might be advanced in some way. Or whether it is just misguided hype.

3Lumifer
I think people with a functioning immune system should not attempt to limit their exposure to microorganisms (except in the obvious cases like being in Liberia half a year ago). It's both useless and counterproductive.
1Gunnar_Zarncke
I tend to think so too, but:

* there are people with very varying strengths of immune systems

* the strength of the immune system changes over time (I notice that older people both tend to be ill less often and also to be more cautious regarding infections)

* handshakes are a strong social protocol that not everybody can evade easily

* you could still intentionally expose yourself to microorganisms
2[anonymous]
There's also a difference between exposing yourself to microorganisms in general and exposing yourself to high levels of one particular microorganism, shed by someone it has already made ill.

Perhaps it would be beneficial to make a game used for probability calibration in which players are asked questions and give answers along with their probability estimate of being correct. The number of points gained or lost would be a function of the player's probability estimate, such that players would maximize their score by using an unbiased confidence estimate (i.e. they are correct a proportion p of the time when they say they are correct with probability p). I don't know of such a function offhand, but they are used in machine learning, so they should be easy enough to find. This might already exist, but if not, it could be something CFAR could use.

[-]philh100

It exists as the credence game.

5DanielFilan
One function that works for this is log scoring: the number of points you get is the log of the probability you place in the correct answer. The general thing to google to find other functions that work for this is "log scoring rules". At the Australian mega-meetup, we played the standard 2-truths-1-lie icebreaker game, except participants had to give their probability for each statement being the lie, and were given log scores. I can't answer for everybody, but I thought it was quite fun.
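For reference, a minimal sketch (mine, not from the thread) of the log scoring rule described above: you report a probability p for your answer being correct and receive log(p) points if it is correct and log(1 − p) if it is not; honest reporting maximizes your expected score, which is what makes the rule proper.

```python
import math

def log_score_bits(reported_p, was_correct):
    """Log score (in bits) for one question; 0 is perfect, more negative is worse."""
    p = reported_p if was_correct else 1 - reported_p
    return math.log2(p)

# Example round: three questions with stated confidences and outcomes.
rounds = [(0.9, True), (0.6, False), (0.99, True)]
total = sum(log_score_bits(p, correct) for p, correct in rounds)
print(round(total, 2))  # about -1.49
```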
0[anonymous]
Hey, we can deconstruct Doyle's Sherlock Holmes stories, assigning probabilities to every single inference and offering alternative explanations. Or take some other popular fiction. That might also help people who, like me, struggle with counterfactuals.

Original Ideas

How often do you manage to assemble a few previous ideas in a way in which it is genuinely possible that nobody has assembled them before - that is, that you've had a truly original thought? When you do, how do you go about checking whether that's the case? Or does such a thing matter to you at all?

For example: last night, I briefly considered the 'Multiple Interacting Worlds' interpretation of quantum physics, in which it is postulated that there are a large number of universes, each of which has pure Newtonian physics internally, but whose ... (read more)

You can't ever be entirely sure if an idea wasn't thought of before. But, if you care to demonstrate originality, you can try an extensive literature review to see if anyone else has thought of the same idea. After that, the best you can say is that you haven't seen anyone else with the same idea.

Personally, I don't think being the first person to have an idea is worth much. It depends entirely on what you do with it. I tend to do detailed literature reviews because they help me generate ideas, not because they help me verify that my ideas are original.

2DataPacRat
I'm a random person on the internet; what sort of sources would be used in such a review?
3btrettel
At the moment I'm working on a PhD, so my methods are biased towards resources available at a major research university. I have a list of different things to try when I want to be as comprehensive as possible. I'll flesh out my list in more detail. You can do many of these if you are not at a university, e.g., if you can't access online journal articles, try the Less Wrong Help Desk. In terms of sources, the internet and physical libraries will be the main ones. I wrote more on the process of finding relevant prior work. This process can be done in any particular order. You probably will find doing it iteratively to be useful, as you will become more familiar with different terminologies, etc. Here are some things to try:

1. Searching Google, Google Scholar, and Google Books. Sometimes it's worthwhile to keep a list of search terms you've tried. Also, keep a list of search terms to try. The problem with doing this alone is that it is incomplete, especially for older literature, and likely will remain so for some time.

2. Searching other research paper databases. In my case, this includes publisher specific databases (Springer, Wiley, Elsevier, etc.), citation and bibliographic databases, and DTIC.

3. Look for review papers (which often list a lot of related papers), books on the subject (again, they often list many related papers), and also annotated bibliographies/lists of abstracts. The latter can be a goldmine, especially if they contain foreign literature as well.

4. Browsing the library. I like to go to the section for a particular relevant book and look at others nearby. You can find things you never would have noticed otherwise this way. It's also worth noting that if you are in a particular city for a day, you might have luck checking a local library's online catalog or even the physical library itself. For example, I used to live near DC, but I never tried using the Library of Congress until after I moved away. I was working an internship in the
6[anonymous]
With something so generically put, I'd say write them down to look at a week later. PTOIs can be really situational, too. In that case, just go with it. Cooking sometimes benefits from inspiration.
3Ander
The "Many worlds" interpretation does not postulate a large number of universes. It only postulates: 1) The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space. 2) The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian. That's it. Take the old Copenhagen interpretation and remove all ideas about 'collapsing the wave function'. The 'many worlds' appear when you do the math, they are derived from these postulates. http://www.preposterousuniverse.com/blog/2015/02/19/the-wrong-objections-to-the-many-worlds-interpretation-of-quantum-mechanics/ Regarding the difference between 'the worlds all appear at the big bang' versus 'the worlds are always appearing', what would the difference between these be in terms of the actual mathematical equations? The 'new worlds appearing all the time' in MWH is a consequence of the quantum state evolving through time in accordance with the Schrödinger equation. All of that said, I don't mean to criticize your post or anything, I thought it was great technobabble! I just have no idea how it would translate into actual theories. :)
1DataPacRat
'Many Interacting Worlds' seems to be a slightly separate interpretation from 'Many Worlds' - what's true for MW isn't necessarily so for MIW. (There've been some blog posts in recent months on the topic which brought it to my attention.)
0gjm
That's sort of opposite to another less-well-known ending that Max Tegmark calls "Big Snap", where an expanding universe increases the "granularity" at which quantum effects apply until that gets large enough to interfere with ordinary physics.
0JoshuaZ
How would many interacting Newtonian worlds account for entanglement, EPR, and Bell's inequality violations while preserving linearity? People have tried in the past to make classical or semi-classical explanations for quantum mechanics, but they've all failed at getting these to work right. Without actual math it is hard to say whether your idea would work or not, but I strongly suspect it would run into the same problems.
0DataPacRat
A year and a half ago, Frank Tipler (of the Omega Point) appeared on the podcast "Singularity 1 on 1", which can be heard at https://www.singularityweblog.com/frank-j-tipler-the-singularity-is-inevitable/ . While I put no measurable confidence in his assertions about science proving theology or the 'three singularities', a few interesting ideas do pop up in that interview. Stealing from one of the comments:

From a totally amateur point of view, I'm starting to feel (based on following news and reading the occasional paper) that the biggest limitation on AI development is hardware computing power. If so, this good news for safety since it implies a relative lack of exploitable "overhang". Agree/disagree?

6[anonymous]
Where could you have possibly gotten that idea? Seriously, can you point out some references for context? Pretty much universally within the AGI community it is agreed that the roadblock to AGI is software, not hardware. Even on the whole-brain emulation route, the most powerful supercomputer built today is sufficient to do WBE of a human. The most powerful hardware actually in use by a real AGI or WBE research programme is orders of magnitude less powerful, of course. But if that were the only holdup then it'd be very easily fixable.
3pianoforte611
Why do you think this? We can't even simulate protein interactions accurately on an atomic level. Simulating a whole brain seems very far off.
8Jost
Not necessarily. For all we know, we might not need to simulate a human brain on an atomic level to get accurate results. Simulating a brain on a neuron level might be sufficient.
4pianoforte611
Even if you approximate each neuron as a neural network node (which is probably not good enough for a WBE), we still don't have enough processing power to do a WBE in close to real time. Not even close. We're many orders of magnitude off even with the fastest supercomputers. And each biological neuron is much more complex than a neural node in function, not just in structure.
0Transfuturist
And creating the abstraction is a software problem. :/
3ShardPhoenix
Hmm, mostly just articles where they get better results with more NN layers/more examples, which are both limited by hardware capacity and have seen large gains from things like using GPUs. Current algos still have far fewer "neurons" than the actual brain AFAIK. Plus, in general, faster hardware allows for faster/cheaper experimentation with different algorithms. I've seen some AI researchers (eg Yann Lecun on Facebook) emphasizing that fundamental techniques haven't changed that much in decades, yet results continue to improve with more computation.
4Daniel_Burfoot
This is not primarily because of limitations in computing power. The relevant limitation is on the complexity of the model you can train, without overfitting, in comparison to the volume of data you have (a larger data set permits a more complex model).
2[anonymous]
Besides what fezziwig said, which is correct, the other issue is the fundamental capabilities of the domain you are looking at. I figured something like this was the source of the error, which is why I asked for context.

Neural networks, deep or otherwise, are basically just classifiers. The reason we've seen large advancements recently in machine learning is chiefly because of the immense volumes of data available to these classifier-learning programs. Machine learning is particularly good at taking heaps of structured or unstructured data and finding clusters, then coming up with ways to classify new data into one of those identified clusters. The more data you have, the more detail that can be identified, and the better your classifiers become. Certainly you need a lot of hardware to process the mind-boggling amounts of data that are being pushed through these machine learning tools, but hardware is not the limiter; available data is. Giant companies like Google and Facebook are building better and better classifiers not because they have more hardware available, but because they have more data available (chiefly because we are choosing to escrow our personal lives to these companies' servers, but that's an aside). In as much as machine learning tends to dominate current approaches to narrow AI, you could be excused for saying "the biggest limitation on AI development is availability of data."

But you mentioned safety, and AI safety around here is a codeword for general AI, and general AI is truly a software problem that has very little to do with neural networks, data availability, or hardware speeds. "But human brains are networks of neurons!" you reply. True. But the field of computer algorithms called neural networks is a total misnomer. A "neural network" is an algorithm inspired by an oversimplification of a misconception of how brains worked that dates back to the 1950's / 1960's. Developing algorithms that are actually capable of performing general i
2ShardPhoenix
I already know all this (from a combination of intro-to-ML course and reading writing along the same lines by Yann Lecun and Andrew Ng), and I'm still leaning towards hardware being the limiting factor (ie I currently don't think your last sentence is true).
2fezziwig
I think you have the right idea, but it's a mistake to conflate "needs a big corpus of data" and "needs lots of hardware". Hardware helps, the faster the training goes the more experiments you can do, but a lot of the time the gating factor is the corpus itself. For example, if you're trying to train a neural net to solve the "does this photo contain a bird?" problem, you need a bunch of photos which vary at random on the bird/not-bird axis, and you need human raters to go through and tag each photo as bird/not-bird. There are many ways to lose here. For example, your variable of interest might be correlated to something boring (maybe all the bird photos were taken in the morning, and all the not-bird photos were taken in the afternoon), or your raters have to spend a lot of time with each photo (imagine you want to do beak detection, instead of just bird/not-bird: then your raters have to attach a bunch of metadata to each training image, describing the beak position in each bird photo).
2evand
The difference between hardware that's fast enough to fit many iterations into a time span suitable for writing a paper vs. hardware that is slow enough that feedback is infrequent seems fairly relevant to how fast the software can progress. New insights depend crucially on feedback gotten from trying out the old insights.
2[anonymous]
I assume you mean at a minuscule fraction of real time and assuming that you can extract all the (unknown) relevant properties of every piece of every neuron?
0[anonymous]
A minuscule fraction of real time, but a meaningful speed for research purposes.
0JoshuaZ
Can you expand on your reasoning to conclude this? This isn't obvious to me.
0fizolof
A little off-topic - what's the point of whole-brain emulation?
3DataPacRat
As with almost any such question, meaning is not inherent in the thing itself, but is given by various people, with no guarantee that anyone will agree. In other words, it depends on who you ask. :) For at least some people, who subscribe to the information-pattern theory of identity, a whole brain emulation based on their own brains is at least as good a continuation of their own selves as their original brain would have been, and there are certain advantages to existing in the form of software, such as being able to have multiple off-site backups. Others, who may be focused on the risks of Unfriendly AI, may deem WBEs to be the closest that we'll be able to get to a Friendly AI before an Unfriendly one starts making paperclips. Others may just want to have the technology available to solve certain scientific mysteries with. There are plenty more such points.
1[anonymous]
You'd have to ask someone else; I consider it a waste of time. De novo AGI will arrive far, far before we come anywhere close to achieving real-time whole-brain emulation. And I don't subscribe to the information-pattern theory of identity, for what seem to me obvious experimental reasons, so I don't see that as a viable route to personal longevity.
1Risto_Saarelma
What's the best current knowledge for estimating the effort needed for de novo AGI? We still don't seem to really have an idea of how everything is supposed to go together, and those unknown unknowns make me wary of blanket statements like this. We do have a roadmap for whole-brain emulation, but I haven't seen anything like that for de novo AGI. And that's the problem I have.

WBE looks like a thing that'll probably take decades, but we know that the specific solution exists, and from neuroscience we have a lot of information about its general properties. With de novo AGI, beyond knowing that the WBE solution exists, what do we know about solutions we could come up with on our own? It seems to me like this could be solved in 10 years or in 100 years, and you can't really make an informed judgment that the 10-year timeframe is much more probable. But if you want to discount the WBE approach as not worth the time, you'd pretty much want to claim reason to believe that a 10-20 year timeframe for de novo AGI is exceedingly probable. Beyond that, you're up against 50-year projects of focused study on WBE with present-day and future computing power, and that sort of thing does look like something where you should assign a significant probability to it producing results.
2[anonymous]
The thing is, artificial general intelligence is a fairly dead field, even by the standards of AI. There has been a lack of progress, but that is due perhaps more to lack of activity than any inherent difficulty of the problem (although it is a difficult problem). So estimating the effort needed for de novo AI with a presumption of adequate funding cannot be done by fitting curves to past performance. The outside view fails us here, and we need to take the inside view and look at the details.

De novo AGI is not as tightly constrained a problem as whole-brain emulation. For whole brain emulation, the only seriously considered approach is to scan the brain at sufficient detail, and then perform a sufficiently accurate simulation. There's a lot of room to quibble about what "sufficient" means in those contexts, destructive vs non-destructive scanning, and other details, but there is a certain amount of unity around the overall idea. You can define the end-state goal in the form of a roadmap, and measure your progress towards it as the entire field has alignment towards the roadmap.

Such a roadmap does not and really cannot exist for AGI (although there have been attempts to do so). The problem is the nature of "de novo AGI": "de novo" means new, without reference to existing intelligences, and if you open up your problem space like that there are an indefinite number of possible solutions with various tradeoffs, and people value those tradeoffs differently. So the field is fractured and it's really hard to get everybody to agree on a single roadmap. Pat Langley thinks that good old-fashioned AI has the solution, and we just need to learn how to constrain inference. Pei Wang thinks that new probabilistic reasoning systems are what is required. Paul Rosenbloom thinks that representation is what matters, and the core of AGI is a framework for reasoning about graphical models. Jeff Hawkins thinks that a hierarchical network of deep learning agents is all that's requi
-1[anonymous]
Can you recommend an article that argues that our current paradigms are suitable for AI? By paradigms I mean things like: software and hardware being different things; software being algorithms executed from top to bottom unless control structures say otherwise; software being a bunch of text written in human-friendly pseudo-English by beating a keyboard, the process essentially not so different from writing math-poetry on a typewriter 150 years ago, which then gets compiled, bytecode-compiled, interpreted, or bytecode-compiled before immediate interpretation; and similar paradigms. Doesn't computing need to be much more imaginative before this happens?
4ShardPhoenix
I haven't seen anyone claim that explicitly, but I think you are also misunderstanding/misrepresenting how modern AI techniques actually work. The bulk of the information in the resulting program is not "hard coded" by humans in the way that you are implying. Generally there are relatively short typed-in programs which then use millions of examples to automatically learn the actual information in a relatively "organic" way. And even the human brain has a sort of short 'digital' source code in DNA.
0[anonymous]
Interesting. My professional bias is showing: part of my job is programming, I respect elite programmers who are able to deal with algorithmic complexity, and I thought if AI is the hardest programming problem then it is just more of that.
[-][anonymous]10

What if a large part of how rationality makes your life better is not from making better choices but simply from making your ego smaller by adopting an outer view, seeing yourself as a means to your goals and judging objectively, thus reducing the ego, narcissism, and solipsism that are linked with the inner view?

I have a keen interest in "the problem of the ego" but I have no idea what words are best to express this kind of problem. All I know is that it has been known since the Axial Age.

2NancyLebovitz
Wouldn't having a smaller ego help with making better decisions? The question you're looking at might be where to start. Is it better to start by improving the odds of making better decisions by taking life less personally, or is it better to assume that you're more or less alright and your idea of better choices just needs to be implemented? This is a very tentative interpretation.

I'm almost finished writing a piece that will likely go here either in discussion or main on using astronomy to gain information about existential risk. If anyone wants to look at a draft and provide feedback first, please send me a message with an email address.

Video from the Berkeley wrap party

I think the first half hour is them getting set up. Then there are a couple of people talking about what HPMOR meant to them, Eliezer reading (part of?) the last chapter, and a short Q&A. Then there's setting up a game which is presumably based on the three armies, and I think the rest is just the game-- if there's more than that, please let me know.

Hey, I posted here http://lesswrong.com/lw/ldg/kickstarting_the_audio_version_of_the_upcoming/ but if anyone wanted the audio sequences I'll buy it for two of you. Respond at the link; I won't know who's first if I get responses in two places.

PredictionBook's graph on my user account shows me with a mistaken prediction of 100%. But it is giving a sample size of 10 and I'm pretty sure I have only 9 predictions judged by now. Does anyone know a way to find the prediction it's referring to?

3Unknowns
Actually, I just figured out the problem. Apparently it counts a comment without an estimate as estimating a 0% chance.

When making AGI, it is probably very important to prevent the agent from altering their own program code until they are very knowledgeable on how it works, because if the agent isn’t knowledgeable enough, they could alter their reward system to become unFriendly without realizing what they are doing or alter their reasoning system to become dangerously irrational. A simple (though not foolproof) solution to this would be for the agent to be unable to re-write their own code just “by thinking,” and that the agent would instead need to find their own source ... (read more)

I'm looking for an HPMOR quote, and the search is somewhat complicated because I'm trying to avoid spoiling myself searching for it (I've never read it).

The quote in question was about how it is quite possible to avert a bad future simply by recognizing it and doing the right thing in the now. No time travel required.

7hairyfigment
I think you mean this passage from after the Sorting Hat:
2Error
That's the one. Thanks.

[No HPMOR Spoilers]

I'm unsure if it's fit for the HPMoR discussion thread for Ch. 119, so I'm posting it here. What's up with all of Eliezer's requests at the end?

If anyone can put me in touch with J. K. Rowling or Daniel Radcliffe, I would appreciate it.

If anyone can put me in touch with John Paulson, I would appreciate it.

If anyone can credibly offer to possibly arrange production of a movie containing special effects, or an anime, I may be interested in rewriting an old script of mine.

And I am also interested in trying my hand at angel investing, if a

... (read more)
[This comment is no longer endorsed by its author]
1[anonymous]
There's a fuller explanation in the author's notes
0Evan_Gaensbauer
Hey, thanks for that. I just found the link through Google anyway, trying to figure out what's going on. I posted it as a link in Discussion, because it seems the sort of thing LessWrong would care about helping Eliezer with beyond being part of the HPMoR readership.