Comment author: Arenamontanus 21 October 2014 11:53:43PM 0 points [-]

It is pretty cute. I did a few Matlab runs with power-law distributed hazards, and the effect holds up well: http://aleph.se/andart2/uncategorized/anthropic-negatives/

Comment author: itaibn0 21 October 2014 11:52:27PM 0 points [-]

I don't think guided training is generally the right way to disabuse an AIXI agent of misconceptions we think it might get. What training amounts to is having the agent's memory begin with some carefully constructed string s0. All this does is change the agent's prior from some P based on Kolmogorov complexity to the prior P'(s) = P(s0 + s | s0) (here + is concatenation). If what you're really doing is changing the agent's prior to what you want, you should do that with self-awareness and no artificial restriction. In certain circumstances guided training might be the right method, but the general approach should be to think about what prior we want and hard-code it as effectively as possible. Taken to the natural extreme this amounts to making an AI that works on completely different principles than AIXI.
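
(A toy illustration of that identity -- my own sketch, not part of the parent comment. With a finite hypothesis class, "training" on s0 is exactly conditioning the prior on the prefix s0:)

    # Toy model: environments are biased coins emitting bits. "Training" the
    # agent on a string s0 is the same as conditioning its prior on s0.
    hypotheses = {0.1: 1/3, 0.5: 1/3, 0.9: 1/3}   # P(next bit = 1) -> prior weight

    def likelihood(bias, s):
        p = 1.0
        for bit in s:
            p *= bias if bit == 1 else 1 - bias
        return p

    def condition(prior, s0):
        # P'(h) = P(h) * P(s0 | h) / P(s0): the "trained" prior
        joint = {h: w * likelihood(h, s0) for h, w in prior.items()}
        z = sum(joint.values())
        return {h: w / z for h, w in joint.items()}

    s0 = (1, 1, 1, 0, 1)              # the carefully constructed training string
    print(condition(hypotheses, s0))  # weight shifts toward the 0.9-bias coin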

Comment author: fubarobfusco 21 October 2014 11:52:02PM 0 points [-]

"Religion" means too many different things. To a sociologist, religion is not just a creed, it's a social behavior; it's something people do, not only something they believe. People get together and do various things together, which they explain in various terms — a Zen Buddhist meditation session doesn't look very much like a High Church service, except that both involve a lot of people in a hall together.

Comment author: KatjaGrace 21 October 2014 11:47:59PM *  0 points [-]

One way you could frame a large disagreement in it is as whether there are likely to be simple insights that can 'put the last puzzle piece' into a system (as Bostrom suggests), or massively improve a system in one go, rather than improving it via a lot of smaller insights. This seems like a thing where we should be able to get heaps of evidence from our past experiences with insights and technological progress.

Comment author: DavidLS 21 October 2014 11:32:38PM *  0 points [-]

"Did you kill yourself at any point during the last 24 hours?" is not likely to produce anything useful at all.

I see. Right now the system doesn't have any defined questions. I believe that suitable questions will be found, so I'm focusing on the areas I have a solid background in.

If a project is unsafe in a literal way, shipping the product to consumers (or offering it for sale) is of course illegal. However, when considering a sous vide cooker in the past I have always worried about the dangers of potentially eating undercooked food (e.g. diarrhea, nausea, and light-headedness), which was how I took your meaning previously: "Product is safe for use, but accidental use might lead to undesirable outcomes". As I mentioned in our discussion here, this project is not intended to be a replacement for the FDA.

shipping a product to random people and asking them "Is it useful?" ... is not likely to produce anything useful

I agree that "is it useful" is not a particularly useful question to ask, but I don't see any harm in supporting it. If you are looking for a better question, "80% of users used the product twice a week or more three months after receiving it" sounds like information that would personally help me make a buying decision. (Have you used the product today?)

So perhaps frequency of use might be a better question? I wasn't haggling over what questions to ask because it was your example.

never mind a proper scientific study

I think rigor in data collection and data processing is what makes something scientific. For example, you could do a rigorous study on "do you think the word turtle is funny?".

Comment author: KatjaGrace 21 October 2014 11:13:54PM 0 points [-]

The recalcitrance for making networks and organizations in general more efficient is high. A vast amount of effort is going into overcoming this recalcitrance, and the result is an annual improvement of humanity's total capacity by perhaps no more than a couple of percent.

This is the first time we have close to a quantitative estimate for recalcitrance: a couple of percent a year for something as big as human society, and 'a vast amount' of effort. Do you think this recalcitrance is high relative to that of other systems under consideration? Also, what efforts do you think count as 'overcoming this recalcitrance'?

Comment author: KatjaGrace 21 October 2014 11:11:14PM *  0 points [-]

Bostrom says that a fast takeoff would leave scant time for human deliberation (p64). While it certainly leaves much less time to deliberate than the other scenarios do, I can imagine two days allowing substantially more deliberation fifty years hence than they would now. Information aggregation and deliberation technologies could arguably be improved a lot. Automatic but non-agent systems might also be very good by the time human-level agents are feasible.

Comment author: NancyLebovitz 21 October 2014 11:03:24PM 0 points [-]

http://www.vox.com/2014/5/20/5732208/the-green-lantern-theory-of-the-presidency-explained

Summary: The President doesn't have all that much freedom to control the government. The office was designed that way.

Comment author: KatjaGrace 21 October 2014 10:59:52PM 0 points [-]

Crossover involves the project contributing more to itself than the outside world does. Note that this is not implied by even the quickest growth. A massive project might still mostly engage in trade with the rest of society, benefiting largely from their innovations, and contributing its own to wider progress. The picture above shows an AI project growing to rival the whole world in capability, at which point it might naturally contribute a lot to its own growth, relative to what the world contributes. However, the world can also grow. If we are talking about a particular AI project, then other AI projects are a natural part of the rest of the world, and might be experiencing fast growth at the same time. And if fast-growing AI projects were improving every labor-based part of the economy, then outside growth should also increase. We will consider this sort of thing more in a future chapter about whether single projects take power, but I think it is worth asking whether reaching 'crossover' should be the default expectation.

Comment author: DanielLC 21 October 2014 10:55:40PM *  0 points [-]

Based on the phrase "change which charities I donate to" I had assumed he or she was already donating to multiple charities, presumably including action in subsaharan africa.

The money being donated to charities that are not in Sub-Saharan Africa would be better donated to charities that are. Even if that were not the case, that would just mean that the money that is donated to charities that are in Sub-Saharan Africa would be better donated to charities that are not. The money from a single donor isn't enough to change which continent you should donate to.

Also can you explain the "magnitude" thing?

An order of magnitude is a power of ten.

I'm not sure I follow your definition of "effectiveness".

Here's an example of what I mean.

The Seeing Eye trains dogs to help mitigate the effects of blindness for about $50,000 each. The Fred Hollows Foundation performs cataract surgeries to cure blindness for about $25 each. It's not generally clear how to compare how much good two different charities do, but it is pretty obvious that a cataract surgery does more good than a guide dog, and for 2,000 times less. Thus, the Fred Hollows Foundation is more than three orders of magnitude more cost-effective than The Seeing Eye. Even if The Seeing Eye was tax-free and the Fred Hollows Foundation was taxed at 99.9%, it would still be worthwhile to donate to the Fred Hollows Foundation.
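
(To spell the arithmetic out, using the rough figures above:)

    import math

    guide_dog = 50_000   # approximate cost of one trained guide dog
    cataract = 25        # approximate cost of one cataract surgery

    ratio = guide_dog / cataract
    print(ratio, math.log10(ratio))   # 2000.0, ~3.3 orders of magnitude

    # A 99.9% tax multiplies the effective cost of a surgery by 1000:
    print(cataract / 0.001)           # 25000.0 -- still below 50000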

Comment author: KatjaGrace 21 October 2014 10:55:09PM *  0 points [-]

What empirical evidence could you look at to better predict the future winner of the Foom Debate? (for those who looked at it above)

Comment author: KatjaGrace 21 October 2014 10:53:41PM 0 points [-]

Has anything else had ambiguous-maybe-low recalcitrance and high optimization power applied in the past? What happened?

Comment author: E_Ransom 21 October 2014 10:43:29PM 0 points [-]

What parallels exist between AI programming and pedagogy?

Today, I had to teach my part-timer how to delete books from our inventory. This is a two-phase process: delete the book from our inventory records, then delete the book from our interlibrary loan records. My PTer is an older woman not at all versed in computers, so to teach her, I first demonstrated the necessary steps, then asked her to do it while I guided her, then asked her to do it alone. She understood the central steps and began to delete books at a reasonable rate.

A few minutes in, she hit the back button one too many times and came upon a screen that was unfamiliar to her. The screen had buttons leading back to the interface she needed to use. They were very clearly labeled. But she could not understand the information in the labels, either because she had shut down all "receiving" from the direction of the screen in a panic or because she did not want to try for fear of "messing the computer up."

Helping her with this made me think of the problems AI programmers have. They cannot tear levers from their mind and give that set of inferences to an AI wholesale. They cannot say "the AI will KNOW that, if it hits back once too many times, to just hit the button that says 'Delete Holdings.' After all, its job is to delete holdings so it knows that the 'Delete Holdings' interface is the one it needs." Just like my PTer, in order to make that inference, the AI must be able to receive information about this new surrounding, process that information, and infer from it how to obtain its goal (i.e. getting back to 'Delete Holdings').

What sort of lessons and parallels could be drawn from AI programming that would be useful in pedagogy? I will admit I am ignorant of AI theory and practice save what I have picked up from the Sequences. But the overlap seems worth exploring. Indeed, I suspect others have explored it before me. The Sequences are certainly didactic. I also wonder if teaching (especially teaching those who are technologically illiterate) would be a useful experience for those planning to work in AI programming and ethics.

Comment author: NancyLebovitz 21 October 2014 10:32:48PM 0 points [-]

See also Second Life.

When more computer resources are available, I expect to see games where part of the fun for players is remodeling the environment.

Comment author: DanArmak 21 October 2014 10:28:22PM 0 points [-]

Today almost everyone chooses to invest in PvP instead of PvE. Not just society but arguably the human brain is wired to engage in status games, often antagonistic and violent ones. Saying "let them play PvP" is basically saying "let everything stay the same".

Comment author: Nornagest 21 October 2014 10:27:24PM *  1 point [-]

There are PvE elements early in some Minecraft game types, but once they're overcome, or if you pick a game type that disables them, the major challenge becomes building things that are impressive to you or to other players. If I had to classify that as anything in this typology it'd be PvP, but I actually think it's reflecting something orthogonal to it, more along the lines of the game vs. toy distinction. (Game: Doom. Toy: SimCity.)

One thing Minecraft does do to stretch its PvE content is procedural generation, elsewhere associated with the Roguelike genre and its relatives (Diablo, Torchlight, etc.)

Comment author: Vladimir_Nesov 21 October 2014 09:52:30PM *  1 point [-]

If Moral Parliament can make deals, it could as well decide on a single goal to be followed thereafter, at which point moral uncertainty is resolved (at least formally). For this to be a good idea, the resulting goal has to be sensitive to facts discovered in the future. This should also hold for other deals, so it seems to me that unconditional redistribution of resources is not the kind of deal that a Moral Parliament should make. Some unconditional redistributions of resources are better than others, but even better are conditional deals that say where the resources will go depending on what is discovered in the future. And while resources could be wasted, so that at a future point you won't be able to direct as much in a new direction, seats in the Moral Parliament can't be.

Comment author: Vulture 21 October 2014 09:51:49PM 1 point [-]

I think there are already plenty of ways of distinguishing extraverts from introverts.

Comment author: Liso 21 October 2014 09:47:42PM 1 point [-]

Lemma 1: Superintelligence could be slow. (Imagine for example an IQ test between Earth and Mars where the delay between question and answer is about half an hour. Or imagine a big clever tortoise which could understand only one sentence per hour but could then solve the Riemann hypothesis.)

Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)

The next theorem is obvious :)

Comment author: Jiro 21 October 2014 09:28:52PM 0 points [-]

Not having a reason is a simplification that does not hold up:

What Chesterton actually said is that he wants to know something's use, and if you read the whole quote it's clear from context that he really does mean what one would consider a use in the ordinary sense. Incompetence and apathy don't count.

"Not having a reason" is a summary; summaries by necessity gloss over details.

Comment author: Lumifer 21 October 2014 09:28:29PM 3 points [-]

You don't run out of PvE content in games where players produce the content. The major contemporary example is Minecraft.

Another example relevant to this post is real life.

Comment author: Princess_Stargirl 21 October 2014 09:26:01PM 0 points [-]

One million! That is a lot of words.

The following link has the word counts for a bunch of well-known novels and series. No single book mentioned in the article is even close to a million words. http://electricliterature.com/infographic-word-counts-of-famous-books/

Notably, "Les Misérables" and "War and Peace" are at approximately 531,000 and 563,000 words (the lengths of these works vary significantly by translation; W&P can be up to around 590K).

Comment author: Liso 21 October 2014 09:21:32PM 1 point [-]

This is similar to the question about a 10-times-quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".

One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Putting twice as much money into education is more likely to give 2N graduates after X years than N graduates after X/2 years.

Some parts of science acceleration have to wait years for new scientists. And twice as many scientists doesn't mean twice as many discoveries. Etc.

But also 1.5x more discoveries could bring 10x bigger profit!

We cannot assume only linear dependencies in such complex problems.

Comment author: Nornagest 21 October 2014 09:16:07PM *  0 points [-]

Puzzles, minigames, and survival-oriented content are also PvE in a general sense, but are not about fighting.

Similarly, I've encountered reputation and social conflict systems that are PvP in a general sense without an announcer bombastically proclaiming "HEADSHOT" at any point. It's a lot harder to get these figured out right, though, and most games don't bother.

Comment author: cousin_it 21 October 2014 09:15:11PM *  1 point [-]

I don't want to accept that the best possible future must contain many unhappy people, because status contests have many losers and few winners. There has to be a better way, like procedurally generating PvE content. Of course if some people like PvP, let them play PvP.

Comment author: KatjaGrace 21 October 2014 09:10:36PM 1 point [-]

So, when the AI is turned on, there could be a hardware overhang of 1, 10, or 100 right within the computer it is on.

I didn't follow where this came from.

Also, when you say 'hardware overhang', do you mean the speedup available by buying more hardware at some fixed rate? Or could you elaborate - it seems a bit different from the usage I'm familiar with.

Comment author: Stuart_Armstrong 21 October 2014 09:10:13PM 0 points [-]

Ok, I don't like gnomes making current decisions based on their future values. Let's make it simpler: the gnomes have a utility function linear in the money owned by person X. Person X will be the person who appears in their (the gnome's) room, or, if no-one appeared, some other entity irrelevant to the experiment.

So now the gnomes have subjectively indistinguishable utility functions, and know they will reach the same decision upon seeing "their" human. What should this decision be?

If they advise "buy the ticket for price $x", then they expect to lose $x with probability 1/4 (heads world, they see a human), lose/gain nothing with probability 1/4 (heads world, they don't see a human), and gain $1-x with probability 1/2 (tails world). So this gives an expected gain of 1/2-(3/4)x, which is zero for x=$2/3.

So this seems to confirm your point.

"Not so fast!" shouts a voice in the back of my head. That second head-world gnome, the one who never sees a human, is a strange one. If this model is vulnerable, it's there.

So let's do without gnomes for a second. The incubator always creates two people, but in the heads world, the second person can never gain (nor lose) anything, no matter what they agree to: any deal is nullified. This seems to be a gnome setup without the gnomes. If everyone is an average utilitarian, then they will behave exactly as the total utilitarians would (since population is equal anyway) and buy the ticket for x<$2/3. So this setup has changed the outcome for average utilitarians. If it's the same as the gnome setup (and it seems to be) then the gnome setup is interfering with the decisions in cases we know about. The fact that the number of gnomes is fixed is the likely cause.

I'll think more about it, and post tomorrow. Incidentally, one reason for the selfish = average-utilitarian equivalence is that I sometimes model selfish as the average between a total utilitarian incubator and an anti-incubator (where the two copies hate each other in the tails world). 50%-50% on total utilitarian vs hatred seems to be a good model of selfishness, and gives the x<$1/2 answer.

Comment author: ChristianKl 21 October 2014 09:09:25PM 1 point [-]

I don't see that PvE is somehow less war than PvP. Both are about fighting.

Fighting isn't the only kind of interaction between humans. When I'm dancing Salsa I'm not fighting with my dance partner or fighting the environment.

Comment author: KatjaGrace 21 October 2014 09:07:41PM 0 points [-]

If the initial system does not find a hardware overhang, it seems unclear to me that a 1000x less expensive system necessarily will. For any system which doesn't have a hardware overhang, there is another system 1000x less efficient that also doesn't.

Comment author: KatjaGrace 21 October 2014 09:05:04PM 0 points [-]

If there is already a "hardware overhang" when key algorithms are created, then perhaps a great deal of recursive self-improvement can occur rapidly within existing computer systems.

Do you mean that if a hardware overhang is large enough, the AI could scale up quickly to the crossover, and so engage in substantial recursive self-improvement? If the hardware overhang is not that large, I'm not sure how it would help with recursive self-improvement.

Comment author: KatjaGrace 21 October 2014 09:01:16PM 0 points [-]

This would probably conclude that superintelligence will explode, because, looking only at more and more complex organisms, the computational power of evolution has decreased dramatically owing to larger generation times and smaller population sizes, yet the rate of intelligence increase has probably been increasing.

It seems hard to know how other parameters have changed though, such as the selection for intelligence.

Comment author: singularitard 21 October 2014 09:00:27PM 0 points [-]

I didn't say that, the top-level commenter did. I wish their evaluations of all charities were at least as detailed as that.

Comment author: KatjaGrace 21 October 2014 08:59:39PM 0 points [-]

In order to model intelligence explosion, we need to be able to measure intelligence.

Describe a computer's power as <Memory, FLOPS>. What is the relative intelligence of these 3 computers?

<M, S> <M, 2S> <2M, S>

Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.

Comment author: elharo 21 October 2014 08:54:44PM 0 points [-]

if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best.

--Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

Comment author: RomeoStevens 21 October 2014 08:50:56PM 1 point [-]

Someone replied, asking why anyone should care about the minutia of lifeless, non-agenty forces? How could anyone expend so much of their mental efforts on such trivia when there are these complex, elaborate status games one can play instead? Feints and countermoves and gambits and evasions, with hidden score-keeping and persistent reputation effects… and that’s just the first layer! The subtle ballet of interaction is difficult even to watch, and when you get billions of dancers interacting it can be the most exhilarating experience of all.

Can these people start wearing tags so I can stop interacting with them?

Comment author: okay 21 October 2014 08:49:03PM *  0 points [-]

It'd just model a world where if the machine it sees in the mirror turns off, it can no longer influence what happens.

When the function it uses to model the world becomes detailed enough, it can predict only being able to do certain things if some objects in the world survive, like the program running on that computer over there.

Comment author: Manfred 21 October 2014 08:48:09PM *  0 points [-]

Needless to say that all the bold statements I'm about to make are based on an "inside view". [...]

Spare us :P Not only are Stuart's advantages not really that big, but it's worthwhile to discuss things here. Something something title of this subreddit.

The consensus view on LW seems to be that much of the SSA vs. SIA debate is confused and due to discussing probabilities detached from decision problems of agents with specific utility functions.

Hm, this makes me sad, because it means I've been unsuccessful. I've been trying to hammer on the fact that an agent's probability assignments are determined by the information it has. Since SSA and SIA describe pieces of information ("being in different worlds are mutually exclusive and exhaustive events" and "being different people are mutually exclusive and exhaustive events"), quite naturally they lead to assigning different probabilities. If you specify what information your agent is supposed to have, this will answer the question of what probability distribution to use.

Comment author: Metus 21 October 2014 08:47:18PM 0 points [-]

Finding out where to donate is exhausting.

There are a couple of organisations affiliated with LW, or organisations inspired by the same memespace. Even a remotely exhaustive list would include CFAR, MIRI, FHI, GiveWell. Which ones did I forget? Further, there are more traditional, gigantic organisations like the various organs of the UN or the Catholic Church. Finally, there are organisations like Wikipedia or the Linux Foundation. In this jungle, how should I find out where to donate my personal marginal monetary unit?

I posit that I should not. In no possible way am I qualified to judge that, but I know just enough economics to claim that a mild amount of diversification should be better on aggregate than any kind of monoculture. GiveWell does some of this work of evaluating charities, but if everyone donated to GiveWell's recommendations instead of to other charities, I am sure those other causes would suffer quite a bit. Or is GiveWell intended as the universal charity, so that I should just not worry about where my money exactly will go, except for the eventual internet outrage?

The dream is a one-click solution: this is how much money I am willing to give; have an organisation take it and distribute it optimally relative to some chosen measure. Is GiveWell this?

Comment author: ChristianKl 21 October 2014 08:44:14PM 0 points [-]

They don't publish very long write-ups, it's more like a checklist of their particular criteria.

I do think the analysis of GiveDirectly is fairly long (http://www.givewell.org/international/top-charities/give-directly). If you think that the recommendation of GiveDirectly is a mistake based on naive assumptions, it makes sense to read the article.

Comment author: Stuart_Armstrong 21 October 2014 08:36:58PM 0 points [-]

Thanks for engaging with my paper ^_^ I will think about your post and construct a more detailed answer.

The consensus view on LW seems to be that much of the SSA vs. SIA debate is confused and due to discussing probabilities detached from decision problems of agents with specific utility functions.

Really? That's my view, but I didn't know it had spread!

Comment author: JQuinton 21 October 2014 08:34:14PM 1 point [-]

I don't think PvE is necessarily scientific knowledge. It's more like experience (to expound on the analogy further). While we're currently in one environment -- Earth -- it might be possible for us to explore other environments in the future. But, as the analogy proposes, it would take an enormous amount of man-hours/manpower to actually reach this new content.

Comment author: wadavis 21 October 2014 08:20:48PM 0 points [-]

Playing devil's advocate by arguing that some things are without reason, and that those are exceptions to the rule, is a fairly weak straw man.

Not having a reason is a simplification that does not hold up: incompetence, apathy, out-of-date thinking, "because grey was the factory default colour palette" (credit to fubarobfusco) are all reasons. It is a mark of expertise in your field to recognize these reasonless reasons.

Seriously, this happens all the time! Why did that guy driving beside me swerve wildly: is he nodding off, texting, or are there children playing around that blind corner? Why did this specification call for an impossible-to-source part: because the drafter is using European software with European part libraries in North America, or because the design has a tight tolerance and the minor differences between parts matter?

Comment author: Metus 21 October 2014 08:12:37PM 0 points [-]

I wouldn't be opposed to having some adverts on LessWrong paying for site maintenance and donating to MIRI, FHI, CFAR, GiveWell. Any major one I forgot?

This might be a great browser plugin: install it and have it display ads either on all (charity-supporting) sites, paying out to those sites or to a specific charity, or only on specific sites. Kind of like a reverse adblock.

Comment author: Lumifer 21 October 2014 08:12:11PM *  -1 points [-]

which I believe is what you are asking

No, not really. Recall the setting -- I am about to produce a sous vide circulator and am interested in (1) whether people find that product useful; and (2) whether the product is safe. I see nothing in your post which indicates how the process of answering my questions will work.

By the way, shipping a product to random people and asking them "Is it useful?" and "Did you kill yourself at any point during the last 24 hours?" is not likely to produce anything useful at all, never mind a proper scientific study.

Comment author: Kaj_Sotala 21 October 2014 07:59:02PM 2 points [-]

Is there a good reason to go through a publisher these days? At least assuming that you're not certain you'll get a big publisher who's really enthusiastic about marketing you?

Yes, if you manage to find a publisher they'll get your book in bookstores and maybe do some marketing for you if you're lucky, but as others in the thread have indicated, getting through the process and into print may take years - and unless you manage to get a big publisher who's really invested in your book, the amount of extra publicity you're likely to get that way will be quite limited.

Instead you could put your books up on Amazon as Kindle and CreateSpace versions: one author reports that on a per-unit basis, he makes three times more money from a directly published ebook priced at $2.99 than he would from a $7.99 paperback sold through a publisher, and almost as much as he would from a $25 hardcover. When you also take into account the fact that it's a lot easier to get people to buy a $3 book than a $25 or even an $8 book, his total income will be much higher. As a bonus, he gets to keep full rights to his work and can do whatever he wants with it. Also, the books can be on sale for the whole time that one would otherwise have spent looking for a publisher.
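
(A rough per-unit sanity check of that claim. The royalty rates below are my assumptions for illustration, not figures from the author's report:)

    ebook_price = 2.99
    ebook_royalty = 0.70              # assumed self-publishing royalty rate
    paperback_price = 7.99
    wholesale_fraction = 0.5          # assume wholesale is ~50% of retail
    trad_royalty = 0.25               # "25% royalties on the wholesale price"

    ebook_income = ebook_price * ebook_royalty                               # ~$2.09
    paperback_income = paperback_price * wholesale_fraction * trad_royalty   # ~$1.00
    print(ebook_income / paperback_income)   # ~2x here; plausible rates give 2-3x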

One fun blog post speculates that the first book to earn its author a billion dollars will be a self-published ebook. Of course you're not very likely to earn a billion dollars, but the same principles apply for why it's useful to publish one's work that way in general:

Prediction #1: The first B-book will be an e-book.

The reason is that you can’t have great sales without great distribution. There are roughly a billion computers on the planet connected to the internet and all of them can read e-books in numerous formats using free software. There are roughly four billion mobile devices, and most of those will soon be able to read e-books.

The sales channel for e-books is growing rapidly and has global reach. That’s why the first B-book will be in e-format. [...]

Prediction #2: The first B-book will be self-published.

Self-publishing is the best way to get the royalty rate high enough and the retail price low enough to make the B-book a reality.

The fact is that most publishers aren’t going to price your e-book at $2.99 or $3.99. They’ll want it at $9.99 or $12.99, which is probably too high for the market. And they’ll pay you only 25% royalties on the wholesale price, which is too low. If you want an aggressively priced e-book and a high royalty rate, you’ll almost certainly need to publish it yourself.

I feel like if you want money, you should go for self-publishing. If you're more interested in getting a lot of readers, you should again go for self-publishing. Of course the most likely outcome for any book is that you won't get much of either, but at least self-publishing gives you better odds than a traditional publisher. (Again, with a possible exception for the case where you get a big publisher to put up a massive marketing campaign for you.)

Comment author: Stuart_Armstrong 21 October 2014 07:49:11PM 1 point [-]

When I read this, my first reaction was "I have to show this comment to Anders" ^_^

Comment author: Arenamontanus 21 October 2014 07:44:00PM 3 points [-]

Neat. The minimal example would be if each risk had 50% chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is 1/3 chance to get neither risk). If the chance of no disaster happening is N/(N+2), then the correlation will be -1/(N+1).
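
(A quick Monte Carlo check of that formula -- a sketch, not Anders' Matlab code. I assume each risk occurs independently with probability 1/(N+1), which makes the conditional chance of no disaster N/(N+2):)

    import random

    def shadow_correlation(N, trials=200_000):
        q = 1 / (N + 1)          # each risk's unconditional probability
        xs, ys = [], []
        while len(xs) < trials:
            a = random.random() < q
            b = random.random() < q
            if a and b:
                continue         # both risks hit: no observers, never seen
            xs.append(a)
            ys.append(b)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        var = lambda vs, m: sum((v - m) ** 2 for v in vs) / n
        return cov / (var(xs, mx) * var(ys, my)) ** 0.5

    for N in (1, 2, 4):
        print(N, round(shadow_correlation(N), 3), -1 / (N + 1))  # should match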

It is interesting to note that many insurance copula methods are used to model size-dependent correlations, but these nearly always take the form of stronger positive correlations in the tail. This suggests - unsurprisingly - that insurance does not encounter much anthropic risk.

Comment author: Gunnar_Zarncke 21 October 2014 07:42:01PM *  0 points [-]

I'm not sure that science 'itself' (i.e. without cultural aspects shared with religion) "reliably gives its users comparative advantage". The advantage for the individual is quite small - if not negative in some cases. It is only by society embracing science that society at large gains a large advantage.

Now that we have science, we individuals may find that 'doing' science is to our individual disadvantage and abstain from it (free-rider-wise).

If on the other hand you see science as a set of cultural rules and customs - and your university example points in that direction - then science already has lots in common with religion. Why not build on that?

Comment author: Stuart_Armstrong 21 October 2014 07:34:25PM 0 points [-]

Similar in that one quadrant is empty, otherwise a distinct effect.

Comment author: DavidLS 21 October 2014 07:34:01PM *  0 points [-]

The link I gave to the data collection webapp describes the data collection in more depth, which I believe is what you are asking about between 6 and 7.

From that url:

Core function:

  • Every day an SMS/email is sent to participants with a securely generated one-time URL (see the sketch after this list).
  • The participant visits this URL and is greeted with a list of questions to answer.

Potential changes to this story:

  • If the URL is not used within 16 hours, it expires forever.
  • If a participant does not enter data for more than 3 days, they are automatically removed from the study.
  • If a participant feels that they need to be removed from the study, they may do so at any time. They will be prompted to provide details on their reasons for doing so. These reasons will be communicated to the study organizer.
  • The study organizer may halt the study for logistical or ethical reasons at any time.
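
A minimal sketch of what the one-time-URL mechanism above could look like (my illustration only; the actual implementation may differ, and example-study.org is a hypothetical domain):

    import secrets
    import time

    TOKEN_TTL = 16 * 3600      # seconds: "expires forever" after 16 hours
    _tokens = {}               # token -> (participant_id, issued_at); use a DB in practice

    def issue_daily_url(participant_id):
        token = secrets.token_urlsafe(32)   # unguessable, single-use token
        _tokens[token] = (participant_id, time.time())
        return "https://example-study.org/answer/" + token

    def redeem(token):
        entry = _tokens.pop(token, None)    # pop makes the URL one-time
        if entry is None:
            return None                     # unknown or already used
        participant_id, issued_at = entry
        if time.time() - issued_at > TOKEN_TTL:
            return None                     # expired
        return participant_id
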
Comment author: Stuart_Armstrong 21 October 2014 07:33:55PM 0 points [-]

The pandemic/recession example is almost certainly wrong; it was just an illustration of the concept.

Comment author: pan 21 October 2014 07:25:42PM 0 points [-]

In an old article by Eliezer we're asked what we would tell Archimedes through a chronophone. I've found this idea to actually be pretty instructive if I instead ask what I would tell myself through a chronophone if I could call back only a few years.

The reason the chronophone idea is useful is that it forces you to speak in terms of 'cognitive policies', since if you use anything relevant to your own time period it will be translated into something relevant to the time period you're calling. In this way, if I think about what I would tell my former self, I ask: 1) what mistakes did I make when I was younger? 2) what sort of cognitive policies or strategies would have allowed me to avoid those mistakes? and finally 3) am I applying the analogue of those strategies in my life today?

Comment author: solipsist 21 October 2014 07:18:46PM 0 points [-]

Does anyone know of a compelling fictional narrative motivating the CHSH inequality or another quantum game?

I'm looking for something like:

Earth's waging a two-front war on opposite ends of the galaxy. The aliens will attack each front with either Star Destroyers or Nebula Cleansers, randomly and independently. The generals of the eastern and western fronts must coordinate their defense plans, or humanity will fall.

There are two battle plans: Alpha and Bravo. If the aliens at either front attack with Star Destroyers, generals must both choose the same battle plan. If, however, both fronts are attacked with Nebula Cleansers, the generals must choose opposite battle plans.

The emergency battle plans from Earth have been sent at 99% of the speed of light in opposite directions to the two fronts, hundreds of light years away. The plans will arrive with mere days to spare -- there will be no time for the generals to coordinate with each other. If the two fronts' battle plans are classical, what are the best odds of the generals coordinating and saving Earth? What if the plans contain quantumly entangled particles?

(only, hopefully, with deeper characters, etc.)
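
(For anyone who wants the numbers behind such a story: this is the CHSH game. A sketch of the classical and quantum winning probabilities, 75% vs. about 85.4%, using a standard set of measurement angles:)

    import math
    from itertools import product

    # Each general maps what he sees (0 = Star Destroyers, 1 = Nebula
    # Cleansers) to a plan (0 = Alpha, 1 = Bravo). They win if the plans
    # match, unless both fronts see Nebula Cleansers (then they must differ).
    def wins(x, y, a, b):
        return (a ^ b) == (x & y)

    best_classical = max(
        sum(wins(x, y, f[x], g[y]) for x, y in product((0, 1), repeat=2)) / 4
        for f in product((0, 1), repeat=2)
        for g in product((0, 1), repeat=2)
    )
    print(best_classical)   # 0.75

    # With a shared entangled pair measured at these standard CHSH angles,
    # each of the four cases is won with probability cos^2(pi/8) ~ 0.854.
    a = [0, math.pi / 4]
    b = [math.pi / 8, -math.pi / 8]
    def p_win(x, y):
        p_same = math.cos(a[x] - b[y]) ** 2
        return 1 - p_same if (x & y) else p_same

    print(sum(p_win(x, y) for x, y in product((0, 1), repeat=2)) / 4)  # ~0.8536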

Comment author: Jiro 21 October 2014 07:15:41PM 0 points [-]

"Intentionally low" barriers have this way of expanding when the people who put the barriers in place either find they don't work to keep people away, or stand to benefit from making the barrier stronger.

Also, you're still forcing your decision on people who are poor enough that they can't afford to get across the barrier easily. (Whether that happens, of course, depends on the exact barrier used.)

Comment author: gwern 21 October 2014 07:15:09PM 3 points [-]

The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.

So something like the plot of asteroid impact sizes vs time in "The Anthropic Shadow" where the upper-right corner is empty?

Comment author: gwern 21 October 2014 07:13:59PM 1 point [-]

Sure. You can supply food and water and other forms of palliative care (if you don't have food and water, Ebola might kill you but thirst & starvation definitely will kill you), and resources can be used to enforce quarantines like posting watchmen to sealed-up houses or soldiers to bottlenecks.

Comment author: Brillyant 21 October 2014 07:09:42PM 1 point [-]

In my view, it's always so hard to tell what was truly "botched". Further, how can we know what level of influence the President has in cases where something actually was botched? Regardless of what someone's politics are, the federal gov't and all the agencies that are somehow intertwined with it are huge, and I'm not sure to what extent one man's competence or incompetence has much to do with the apparent gaffes that show up in the media.

Obamacare is a strange example of Obama's incompetence, I think. I mean, they tried to roll out a hotly controversial brand-new program in a nation of 300+ million people. It seems very likely in my view such a rollout would be loudly criticized for its flaws no matter how well it went. And it's so early... might be a huge success or a big failure... no clue.

Ebola is another one I'm not sure about—how can we know what good looks like? And how can we tie that to Obama? Something like Ebola dominates the news cycle for x days/weeks, and it seems to become evidence that things were botched; evidence that the guy in charge blew it. I mean, reasonably, what could Barack Obama do about the spread of the Ebola virus? He listens to his expert advisers and makes a decision. Then some huge chain of command takes over, with possible weak links and mistakes poised to happen from the President down to the doctors and researchers on the front lines. Certainly possible it's the President's fault, but it seems unlikely.

Anyway, it seems to happen to both Reds and Blues. Make it political and try to tear down the other guy's heroes and leaders.

Comment author: chaosmage 21 October 2014 07:05:52PM 0 points [-]

Not a good analogy. Something that works and reliably gives its users comparative advantage (such as science) shouldn't need a mechanism to keep alive an "essential message". Institutions to teach it, and to keep it clean, yes: but those are universities, not religions.

And universities, once established, also tend to be extremely durable. They just haven't been around for thousands of years yet. But in the time they have existed, many more newly-founded religions than universities have died.

Comment author: Jiro 21 October 2014 07:05:28PM *  0 points [-]

I can't actually think of a situation where cctv footage could be abused to convict an innocent man.

A couple of ideas come to mind immediately:

-- Just like reading all your email is likely to turn up something that sounds bad, tracking all your movements is likely, just by chance, to turn up something that looks suspicious; you may have been seen near a known drug dealers' den, or bordello, or you often visit a person who has been convicted of a crime, or you have been seen near children's playgrounds too much.

-- Use of the CCTV footage to catch you in a lie--bearing in mind that everyday human life involves telling necessary lies every so often. This can make you look really bad--oh, no, he lied to his wife about where he was, maybe he had an affair. He lied in his political speech--who knows what he was doing in back alleys back then?

-- Using the CCTV to capture images of something that would be embarrassing in public. Of course, you would have to make a mistake to show something private in public, but CCTV has the effect of greatly expanding the effect of such mistakes. Imagine someone caught on camera in bondage gear, or kissing a member of the same sex (or just cheating on their spouse). Or wearing a symbol of a sports team that is accused of being racist.

-- Taking a CCTV image out of context

Of course, you're being too narrow by asking for a conviction; these can be used to damage someone without convicting him of anything. Driving a politician out of office or blackmailing someone is not convicting him, after all.

Comment author: TheAncientGeek 21 October 2014 07:04:08PM *  0 points [-]

Would resources have been of use in fighting pandemics before modern medicine?

Comment author: Froolow 21 October 2014 06:57:28PM 0 points [-]

I agree with everything you've said, but I would point out that I already allow myself to be tracked by Google, so the true cost is only the difference between the 'badness' of Google and Microsoft.

Comment author: Froolow 21 October 2014 06:54:51PM *  0 points [-]

Don't worry about the tone; opportunity cost is that hinterland where it is too complicated to explain to someone who doesn't get it in one sentence, but too fundamental not to talk about, so it is very difficult to judge tone when you're not sure whether you can assume familiarity with economic concepts.

It sounds to me like we basically agree - the cost of switching search engines is ten minutes (assumption) and this pays off about 50 cents a day forever (assumption). This makes cutting off the analysis at one year arbitrary, which I agree with. You also have to compare the effort you put into searching with anything else you could do with that time (even if you would have been doing those searches 'naturally'), for the purpose of correctly calculating opportunity cost.

I think we disagree on the final step - for this to be ineffective you need to be able to find an activity which is a better use of my time than conducting those daily searches. Since my primary contribution to charitable causes is from my salary, and I use Google a lot in my job (I would be fired if I didn't do internet searches because I would be totally ineffective), I can't think what else I should be doing - what is a better use of my time than doing those searches? Assume we're only interested in maximising my total charitable giving.

Comment author: singularitard 21 October 2014 06:52:25PM *  0 points [-]

Rationalism is a toolset with which to approach problems, not a belief system. By my perception, at least.

Comment author: singularitard 21 October 2014 06:49:34PM 0 points [-]

Odd, I usually have the opposite problem and lose a day somewhere.

Comment author: singularitard 21 October 2014 06:48:14PM 0 points [-]

Based on the phrase "change which charities I donate to" I had assumed he or she was already donating to multiple charities, presumably including action in subsaharan africa.

Also can you explain the "magnitude" thing? I'm not sure I follow your definition of "effectiveness".

Comment author: singularitard 21 October 2014 06:45:14PM 0 points [-]

They don't publish very long write-ups, it's more like a checklist of their particular criteria.

Comment author: singularitard 21 October 2014 06:44:26PM 0 points [-]

You might be right, but it is definitely something that requires research instead of just taking their word for it.

Comment author: singularitard 21 October 2014 06:43:30PM 0 points [-]

Amnesty, UNICEF, and the Bill and Melinda Gates Foundation, as far as mainstream charities go. I believe they all have specific Canadian divisions if you are worried about tax reasons.

Some others you might check out are Canadian Centre for Policy Alternatives, Canada Without Poverty, Equiterre, Canadian Council For International Cooperation, Tides Canada, CoDevelopment. I had a longer list but misplaced it.

I also strongly suggest you research each charity on your own instead of depending on whether or not a ranking website tells you it is good.

Comment author: othercriteria 21 October 2014 06:21:41PM 0 points [-]

To pick a frequentist algorithm is to pick a prior with a set of hypotheses, i.e. to make Bayes' Theorem computable and provide the unknowns on the r.h.s. above (as mentioned earlier you can in theory extract the prior and set of hypotheses from an algorithm by considering which outcome your algorithm would give when it saw a certain set of data, and then inverting Bayes' Theorem to find the unknowns).

Okay, this is the last thing I'll say here until/unless you engage with the Robins and Wasserman post that IlyaShpitser and I have been suggesting you look at. You can indeed pick a prior and hypotheses (and I guess a way to go from posterior to point estimation, e.g., MAP, posterior mean, etc.) so that your Bayesian procedure does the same thing as your non-Bayesian procedure for any realization of the data. The problem is that in the Robins-Ritov example, your prior may need to depend on the data to do this! Mechanically, this is no problem; philosophically, you're updating on the data twice and it's hard to argue that doing this is unproblematic. In other situations, you may need to do other unsavory things with your prior. If the non-Bayesian procedure that works well looks like a Bayesian procedure that makes insane assumptions, why should we look to Bayesian as a foundation for statistics?
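
(For readers who haven't seen the setup: a minimal sketch of the frequentist procedure at issue, the Horvitz-Thompson estimator, with toy data-generating choices of my own. It recovers E[Y] using only the known sampling probabilities, never modeling E[Y | X] -- the part a prior would have to get right:)

    import math
    import random

    def pi(x):                     # observation probability, known by design
        return 0.1 + 0.8 * x

    def true_mean_y(x):            # "complicated" truth the analyst never models
        return 0.5 + 0.4 * math.sin(37 * x)

    def horvitz_thompson(n=200_000):
        total = 0.0
        for _ in range(n):
            x = random.random()
            if random.random() < pi(x):                  # is Y observed?
                y = 1.0 if random.random() < true_mean_y(x) else 0.0
                total += y / pi(x)                       # inverse-probability weighting
        return total / n

    truth = sum(true_mean_y(i / 10_000) for i in range(10_000)) / 10_000
    print(horvitz_thompson(), truth)   # the estimate tracks the true E[Y]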

(I may be willing to bite the bullet of poor frequentist performance in some cases for philosophical purity, but I damn well want to make sure I understand what I'm giving up. It is supremely dishonest to pretend there's no trade-off present in this situation. And a Bayes-first education doesn't even give you the concepts to see what you gain and what you lose by being a Bayesian.)

Comment author: JQuinton 21 October 2014 06:18:27PM 2 points [-]

This is a quote from memory from one of my professors in grad school:

Last quarble, the shanklefaxes ulugled the flurxurs. The flurxurs needed ulugled because they were mofoxiliating, which caused amaliaxas in the hurble-flurble. The shanklefaxes domonoxed a wokuflok who ulugles flurxurs, because wokuflok nuxioses less than iliox nuxioses.

  1. When did the shanklefaxes ulugle the flurxurs?
  2. Why did the shanklefaxes ulugle the flurxurs?
  3. Who did they get to ulugle the flurxurs?
  4. If you were the shanklefaxes, would you have your flurxurs ulugled? Why or why not?
  5. Would you domonox a wokuflok who ulugles flurxurs instead of an iliox? Why or why not?

Notice how if you only memorize things, you can reasonably answer the first three questions but not the last two. But if you actually understand things, you can answer all five. Instead of memorizing things, you will get a lot further in life if you actually understand the reasoning behind them.

Comment author: shminux 21 October 2014 06:16:28PM *  0 points [-]

Obamacare, an excellent idea and long overdue, but implemented and deployed in the worst way possible, is a typical example. The Ebola crisis response is another. The handling of the Snowden affair... Take almost any issue, political or economic, international or domestic, and it has been botched pretty badly, not out of malice, but out of incompetence. Well, maybe Quantitative Easing is an exception; I am not qualified to judge.

Comment author: Troubadour 21 October 2014 05:44:07PM 1 point [-]

Does it work with adblock? I have preferences against being solicited with advertisements -- and I presume that is where the charity money is coming from.

In response to Noticing
Comment author: Vaniver 21 October 2014 05:31:50PM 0 points [-]

Someone who has studied marketing may unconsciously evaluate every ad that they see, and after seeing enough examples, gain a strong understanding of what counts as a good ad and what counts as a bad ad.

If anything, this may be a counterexample. Consider Comic Sans: as far as I can tell, most people actually like Comic Sans. But graphic designers hate it, and the technical reasons seem overall less significant than the tribal signal. Or consider OkTrends; they launched a data-driven project to figure out what profile pictures are better, and discovered that many of the profile pictures they thought were terrible were associated with higher success.

Comment author: Lumifer 21 October 2014 05:27:10PM *  0 points [-]

they're probably much better at identifying (and finding) pests than you are.

They also have a set of incentives which does not match yours.

Comment author: Brillyant 21 October 2014 05:26:19PM 0 points [-]

You're over-thinking it.

The only math needed is to decide how much benefit is lost using Goodsearch vs. whatever search engine you currently use—if it's a slower, less effective relevant-stuff-finding engine, etc., then it might very well not be worth it. Maybe Goodsearch is like AOL circa 1997? Or equivalent to Ask Jeeves?

You could also—even if it's a best-in-the-market engine functionality-wise—decide that whatever advertising you are exposed to has some negative value, but I'd take that to be largely a matter of preference if there was no loss of performance.

If Goodsearch is an equally valuable search tool to your current one, then switch. Not switching would be like refusing to put a Goodgarbage or Goodvacuum or Goodkitchentable in your home that promised to yield $0.01 to charity per use.

It's just a search engine. Assuming its functionality remains equal to other leading search engines (maybe a big "if"), it's a simple, one-time 10-minute switch in exchange for an ongoing $0.01 per search... or $XX.XX per year.

It seems to me this would be a pretty effective little fund-raising tool for a large(r) organization. Get a church congregation or school to change their search engines over to Goodsearch and fund charitable projects each year.

Unless Goodsearch is not a good search engine...

Comment author: Lumifer 21 October 2014 05:25:12PM *  0 points [-]

Well, if we were to approach this seriously, there are a few more factors in play.

On the benefits side you need to estimate the expected length of time that this scheme will be operational. It's not just GoodSearch being around, it's also them continuing to offer the same rate (and the price of generic eyeballs has been going down since as far back as I can remember and shows no signs of stopping) while providing adequate service.

You also need to figure out the appropriate discount rate since $1 in 2040 is quite different from $1 in 2014.

On the costs side you need to estimate how many additional reconfigurations you might need (browsers change, config files become corrupted, etc. etc.). Also, every time you find a particular Bing search inadequate and need to re-search using Google, that's more time cost which could easily swamp the initial 10-minute estimate. If you believe the Bing search to be inferior to Google's, you should also include the opportunity costs of missing something important without realizing it.

More importantly, you need to realize what the main cost is -- it's not reconfiguration time, it's you allowing yourself to be tracked by Bing, etc. (that's what the advertisers are actually paying for). That cost is hard to estimate and probably depends on the individual, but it exists and ignoring it is unwise.

P.S. By the way, it turns out Goodsearch doesn't donate 1c/search. It donates 50% of its revenue -- that's quite a different thing.
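
(To make the discounting point concrete, a sketch with made-up numbers:)

    annual_donation = 0.01 * 20 * 365   # assume 1c/search and 20 searches/day
    r = 0.05                            # assumed annual discount rate

    print(annual_donation / r)          # perpetuity value C / r = $1460

    def pv(years, c=annual_donation):
        # present value if the scheme only survives `years` years
        return sum(c / (1 + r) ** t for t in range(1, years + 1))

    print(pv(5), pv(25))                # scheme longevity dominates the answer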

Comment author: DanielLC 21 October 2014 05:18:11PM 0 points [-]

Still, I'm hoping we can make something that does something in addition to that.

Their children will be fine. You don't even need a breeding population. You just need to know how to make an egg, a sperm, and an artificial uterus.

In the end, perhaps this whole point is moot because it's unlikely an intelligence explosion will take long enough for there to be time for other researchers to construct an alternative AGI.

It might encounter another AGI as it spreads, although I don't think this point will matter much in the ensuing war (or treaty, if they decide on that).

Comment author: SteveG 21 October 2014 05:15:05PM 0 points [-]

At the same time this additional x1,000,000 or so hardware overhang is developing (there is a good chance that a significant hardware overhang existed before the AI was turned on in the first place), the system is in the process of being interfaced and integrated by the development team with an array of other databases and abilities.

Some of these databases and abilities were available to the system from the start. The development team has to be extremely cautious about what they interface with which copies of the AI - these decisions are probably more important to the rate of takeoff than creating the system in the first place.

Language Acquisition

Because of the centrality of language to human cognition, the relationship between language acquisition and takeoff speed is worth analyzing as a separate question from takeoff speed for an abstract notion of general intelligence.

Human-level language acquisition capability seems to be a necessary condition for developing human-level AGI, but I do not believe it is a necessary condition for developing an intelligence capable of manufacturing, or even of commerce, or of hiring people and giving them a set of instructions. (For this reason, among others, thinking about surpassing human level does not seem to be the right question to ask if we are debating policy.)

Here are three scenarios for advanced AI language acquisition:

1) The system, like a child, is initially capable of language acquisition, but in a somewhat different way. (Note that for children, language acquisition and object recognition skills develop at about the same time. For this reason, I believe that these skills are intertwined, although they have not been that intertwined in the development of AI systems so far.)

2) The system begins with parts of a single language somewhat hardwired.

3) The system performs other functions than language acquisition, and any language capability has to be interfaced in a second phase of development.

If 1) comes about, and the system has language acquisition capability initially, then it will be able to acquire all human languages it is introduced to very quickly. However, the system may still have conceptual deficits it is unable to overcome on its own. In the movies, the favorite one is a deficit of emotion understanding, but there could be others; for instance, a system that acquired language may not be able to do design. I happen to think that emotion understanding may prove more tractable than in the movies. So much of human language is centered around feelings that a significant level of emotion understanding (which differs from emotion recognition) is a requirement for some important portions of language acquisition. Some amount of emotion recognition and even emotion forecasting is required for successful interactions with people.

In case 2), if the system is required to translate from its first language, it will also be capable of communicating with people in these other languages within a very short time, because word and phrase lookup tables can be placed right in working memory. However, it may have lower comprehension and its phrasing might sound awkward.

In either case 1) or 2), roughly as soon as the system develops some facility at language, it will be capable of superhumanly communicating with millions of people at a time, and possibly with everyone. Why? Because computers have been capable of personalized communication with millions of people for many years already.

In case 3), the system was designed for other purposes but can be interfaced in a more hard-wired fashion with whatever less-than-complete forms of linguistic processing are available at the time. These less-than-complete abilities are already considerable today, and they will become even more considerable under any scenario other than disinclination to advance and government regulation.

A powerful sub-set of abilities from a list more like this:

planning, design, transportation, chemistry, physics, engineering, commerce, sensing, object recognition, object manipulation, and knowledge base utilization

might be sufficient to perform computer electronics manufacturing.

Little or no intelligence is required for a system to manufacture using living things.

Comment author: ChristianKl 21 October 2014 05:04:02PM 0 points [-]

That reminds me of Nassim Taleb, who purposefully inserted a fictional chapter (chapter 2) in The Black Swan to mess with bookstores' ideas of how to categorize books.

It would be interesting to know how many books never got published because publishers didn't want to publish books that don't clearly fit.

Comment author: Lumifer 21 October 2014 04:57:51PM *  2 points [-]

because of the potential for abuse later down the line.

I think we are in agreement about that.

It's also why I think large scale data mining is more dangerous than CCTV cameras.

It's a false dilemma; there is absolutely no reason why we must have one or the other and so must choose the lesser evil. We can choose neither. Of course, in reality it seems we will get both.

I don't expect there is a significant difference in large-scale data mining between the US and the UK. The NSA and MI5 are best buddies :-/

To abuse CCTV you need to change the laws to make new things illegal

Nope. You only need to see compromising (not necessarily illegal) information. If you capture footage of a minister going to visit his mistress, that's not illegal but it's useful blackmail material.

Apologies about my manner this past while.

No apologies necessary, that has been a pretty polite debate (by the internet standards, at least :-/) so far.

You don't like soft paternalism, what system do you go in for?

I hesitate to declare allegiance to a particular system, but my favoured direction is allowing people to do stupid things and then reap the consequences. I think autonomy trumps optimality.
