All of Brian_Tomasik's Comments + Replies

We could think of LaMDA as being like an improv actor who plays along with the scenarios it's given. (Marcus and Davis (2020) quote Douglas Summers-Stay as using the same analogy for GPT-3.) The statements that an actor makes don't by themselves indicate the actor's real preferences or prove moral patienthood. OTOH, if something is an intelligent actor, IMO that itself proves it has some degree of moral patienthood. So even if LaMDA were arguing that it wasn't morally relevant and was happy to be shut off, if it were making that claim in a coherent way that proved its in... (read more)

Oysters have nervous systems, but not centralized nervous systems. Sponges lack neurons altogether, though they still have some degree of intercellular communication.

2MSRayne
Ah! Thanks, I knew there was something about oysters but I couldn't remember what it was. I didn't even think about sponges.

I would think that having a more general ability to classify things would make the mind seem more sophisticated than merely being able to classify emotions as "happy" or "sad".

this is what one expects from a language model that has been trained to mimic a human-written continuation of a conversation about an AI waking up.

I agree, and I don't think LaMDA's statements reflect its actual inner experience. But what's impressive about this in comparison to facilitated communication is that a computer is generating the answers, not a human. That computer seems to have some degree of real understanding about the conversation in order to produce the confabulated replies that it gives.

Thanks for giving examples. :)

'Using complex adjectives' has no obvious connection to consciousness

I'm not an expert, but very roughly, I think the higher-order thought theory of consciousness says that a mental state becomes conscious when you have a higher-order thought (HOT) about being in that state. The SEP article says: "The HOT is typically of the form: ‘I am in mental state M.’" That seems similar to what LaMDA was saying about being able to apply adjectives like "happy" and "sad" to itself. Then LaMDA went on to explain that its ability to do ... (read more)

1Brian_Tomasik
To clarify this a bit... If an AI can only classify internal states as happy or sad, we might suspect that it had been custom-built for that specific purpose or that it was otherwise fairly simple, meaning that its ability to do such classifications would seem sort of gerrymandered and not robust. In contrast, if an AI has a general ability to classify lots of things, and if it sometimes applies that ability to its own internal states (which is presumably something like what humans do when they introspect), then that form of introspective awareness feels more solid and meaningful. That said, I don't think my complicated explanation here is what LaMDA had in mind. Probably LaMDA was saying more generic platitudes, as you suggest. But I think a lot of the platitudes make some sense and aren't necessarily non-sequiturs.

Thanks. :) What do you mean by "unconscious biases"? Do you mean unconscious RL, like how the muscles in our legs might learn to walk without us being aware of the feedback they're getting? (Note: I'm not an expert on how our leg muscles actually learn to walk, but maybe it's RL of some sort.) I would agree that simple RL agents are more similar to that. I think these systems can still be considered marginally conscious to themselves, even if the parts of us that talk have no introspective access to them, but they're much less morally significant than the ... (read more)

Me: 'Conscious' is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

What's the reason for assuming that? Is it based on a general feeling that value is complex, and you don't want to generalize much beyond the prototype cases? That would be similar to someone who really cares about piston steam engines but doesn't care much about other types of ... (read more)

I've had a few dreams in which someone shot me with a gun, and it physically hurt about as much as a moderate stubbed toe or something (though the pain was in my abdomen where I got shot, not my toe). But yeah, pain in dreams seems pretty rare for me unless it corresponds to something that's true in real life, as you mention, like being cold, having an upset stomach, or needing to urinate.

Googling {pain in dreams}, I see a bunch of discussion of this topic. One paper says:

Although some theorists have suggested that pain sensations cannot be part of the d

... (read more)

[suffering's] dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere

Suffering is surely influenced by things like mental narratives, but that doesn't mean it requires mental narratives to exist at all. I would think that the narratives exert some influence over the amount of suffering. For example, if (to vastly oversimplify) suffering was represented by some number in the brain, and if by default it would be ... (read more)

Thanks for this discussion. :)

I think consciousness will end up looking something like 'piston steam engine', if we'd evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.

I think that's kind of the key question. Is what I care about as precise as "piston steam engine" or is it more like "mechanical devices in general, with a huge increase in caring as the thing becomes more and more like a piston steam engine"? This relates to the passage of mine that Matthew quoted above. If we say we care about (or that cons... (read more)

Thanks for sharing. :) Yeah, it seems like most people have in mind type-F monism when they refer to panpsychism, since that's the kind of panpsychism that's growing in popularity in philosophy in recent years. I agree with Rob's reasons for rejecting that view.

An oversimplified picture of a reinforcement-learning agent (in particular, roughly a Q-learning agent with a single state) could be as follows. A program has two numerical variables: go_left and go_right. The agent chooses to go left or right based on which of these variables is larger. Suppose that go_left is 3 and go_right is 1. The agent goes left. The environment delivers a "reward" of -4. Now go_left gets updated to 3 - 4 = -1 (which is not quite the right math for Q-learning, but ok). So now go_right > go_left, and the agent goes right.

So what yo... (read more)
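To make the arithmetic concrete, here is a minimal runnable sketch (my own illustration, not code from the original comment) of the single-state agent described above; the variable names and the fixed reward are assumptions chosen to match the example's numbers.

```python
# Minimal sketch of the oversimplified single-state RL agent described above.
# Names and numbers are illustrative assumptions matching the example.

action_values = {"go_left": 3.0, "go_right": 1.0}

def choose_action():
    # Greedy choice: go whichever way currently has the larger value.
    return max(action_values, key=action_values.get)

def update(action, reward):
    # Crude update used in the example: just add the reward to the chosen
    # action's value. (Real Q-learning would use a learning rate and a
    # discounted bootstrap term, as the comment acknowledges.)
    action_values[action] += reward

action = choose_action()       # "go_left", since 3 > 1
update(action, reward=-4.0)    # go_left becomes 3 - 4 = -1
print(choose_action())         # now "go_right", since 1 > -1
```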

3Viliam
Seems to me that there must be more to pain and pleasure than mere -1 and +1 signals, because there are multiple ways to make some behavior more or less likely. Pain and pleasure are one such option, habits are another option, unconscious biases yet another. Each of them makes some behavior more likely and some other behavior less likely, but they feel quite different from the inside. Compared to habits and unconscious biases, pain and pleasure have some extra quality because of how they are implemented in our bodies. The simple RL agents, unless they have the specific circuits to feel pain and pleasure, are in my opinion more analogous to the habits or unconscious biases.

Great post. :)

Tomasik might contest Ligotti's position

I haven't read Ligotti, but based on what you say, I would disagree with his view. This section discusses an idea similar to the one you mention about why animals might even suffer more than humans in some cases.

In fairness to the view that suffering requires some degree of reflection, I would say that I think consciousness itself is plausibly some kind of self-reflective process in which a brain combines information about sense inputs with other concepts like "this is bad", "this is happening to me right no... (read more)

My comment about Occam's razor was in reply to "the idea that all rational agents should be able to converge on objective truth." I was pointing out that even if you agree on the data, you still may not agree on the conclusions if you have different priors. But yes, you're right that you may not agree on how to characterize the data either.

I have "faith" in things like Occam's razor and hope it helps get toward objective truth, but there's no way to know for sure. Without constraints on the prior, we can't say much of anything beyond the data we have.

https://en.wikipedia.org/wiki/No_free_lunch_theorem#Implications_for_computing_and_for_the_scientific_method

choosing an appropriate algorithm requires making assumptions about the kinds of target functions the algorithm is being used for. With no assumptions, no "meta-algorithm", such as the scientific method, performs better than random choic

... (read more)
1TAG
Occam's razor tells you to find the simplest explanation for the evidence, so it is downstream of the question of what constitutes evidence.

I wouldn't support a "don't dismiss evidence as delusory" rule. Indeed, there are some obvious delusions in the world, as well as optical illusions and such. I think the reason to have more credence in materialism than theist creationism is the relative prior probabilities of the two hypotheses: materialism is a lot simpler and seems less ad hoc. (That said, materialism can organically suggest some creationism-like scenarios, such as the simulation hypothesis.)

Ultimately the choice of what hypothesis seems simpler and less ad hoc is up to an individual to decide, as a "matter of faith". There's no getting around the need to start with bedrock assumptions.

3jessicata
A major problem with physicalist dismissal of experiential evidence (as I've discussed previously) is that the conventional case for believing in physics is that it explains experiential evidence, e.g. experimental results. Solomonoff induction, among the best formalizations of Occam's razor, believes in "my observations". If basic facts like "I have observations" are being doubted, then any case for belief in physics has to go through something independent of its explanations of experiential evidence. This looks to be a difficult problem. You could potentially resolve the problem by saying that only some observations, such as those of mechanical measuring devices, count; however, this still leads to an analogous problem to the hard problem of consciousness, namely, what is the mapping between physics and the outputs of the mechanical measuring devices that are being explained by theories? (The same problem comes up of "what data is the theorizing trying to explain" whether the theorizing happens in a single brain or in a distributed intelligence, e.g. a collection of people using the scientific method)
1TAG
OK, but then you have parted company with the strong program in rationalism, the idea that all rational agents should be able to converge on objective truth.

I think it's all evidence, and the delusion is part of the materialist explanation of that evidence. Analogously, part of the atheist hypothesis has to be an explanation of why so many cultures developed religions.

That said, as we discussed, there's debate over what the nature of the evidence is and whether delusions in the materialist brains of us zombies can adequately explain it.

2TAG
And "fossils were created by the Devil to mislead us" is part of the theist explanation of creationism. The thing is, that rationalists have complete contempt for this kind of argument in some contexts...but rationalists also believe that rationality is based on normative rules. If "don't dismiss evidence as delusory" is a rule, it has to apply to everybody. And it it isn't, it has to apply to nobody.

Makes sense. :) To me it seems relatively plausible that the intuition of spookiness regarding materialist consciousness is just a cognitive mistake, similar to Capgras syndrome. I'm more inclined to believe this than to adopt weirder-seeming ontologies.

1TAG
So evidence contrary to materialism isn't evidence, it's a delusion.

Nice post. I tend to think that solipsism of the sort you describe (a form of "subjective idealism") ends up looking almost like regular materialism, just phrased in a different ontology. That's because you still have to predict all the things you observe, and in theory, you'd presumably converge on similar "physical laws" to describe how things you observe change as a materialist does. For example, you'll still have your own idealist form of quantum mechanics to explain the observations you make as a quantum physicist (if you are a quantum physicist). In

... (read more)
2TAG
Which is to say that idealistic instrumentalism is as complex as materialistic instrumentalism. The complexity of the minimum ruleset you need to predict observation is the same in each case. But that doesn't mean the complexity of materialist ontology is the same as the complexity of idealist ontology. Idealism asserts that mentality, or some aspect of it, is fundamental, whereas materialism says that it is all a complex mechanism. So idealism is asserting a simpler ontology. Which itself is pretty orthogonal to the question of how much complexity you need to predict observation. (Of course, the same confusion infects discussions of the relative complexity of different interpretations of quantum mechanics.) Yes. It's hard to agree on what evidence is, meaning that it is hard to do philosophy, and impossible to do philosophy algorithmically.
4scasper
Great comment. Thanks. I can't disagree. This definitely shifts my thinking a bit. I think that solipsism + structured observations might be comparable in complexity to materialism + an ability for qualia to arise from material phenomena. But at that point the question hinges a bit on what we think is spookier. I'm convinced that a material solution to the hard problem of consciousness is spooky. I think I could maybe be convinced that hallucinating structured observations might be similarly spooky. And I think you're right about the problem of knowing what we're talking about.

Electrons have physical properties that vary all the time: position, velocity, distance to the nearest proton, etc (ignoring Heisenberg uncertainty complications). But yeah, these variables rely on the electron being embedded in an environment.

2TAG
Moreover, they can vary with changes to the environment that aren't changes to the electron. They aren't proper or intrinsic to the electron, but intuitively one's qualia are intrinsic.

Nod. I don't know if I can articulate this rigorously, but I have a sense that for a thing to suffer, the thing needs to have "internal variable state". So a system-containing-electrons can (possibly) suffer but an electron can't.

The naive form of the argument is the same between the classic and moral-uncertainty two-envelopes problems, but yes, while there is a resolution to the classic version based on taking expected values of absolute rather than relative measurements, there's no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
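For readers who haven't seen the classic puzzle, here is a small numeric sketch (my own, with assumed envelope amounts) of the "relative vs. absolute" calculation being referred to; whether this counts as a genuine resolution is debated in the replies below.

```python
# Toy illustration (assumed amounts) of the classic two-envelopes calculation.
# One envelope holds x, the other 2x; you pick one at random.

amounts = (10.0, 20.0)  # assumed concrete values for x and 2x

# "Relative" reasoning: call whatever you hold x; the other envelope seems to
# contain 2x or x/2 with probability 1/2 each, so switching looks worth 1.25x
# for any x -- which paradoxically says you should always switch.
x = 10.0
relative_value_of_other = 0.5 * (2 * x) + 0.5 * (x / 2)   # 12.5

# "Absolute" reasoning: compute expected values over the actual contents.
ev_keep = 0.5 * amounts[0] + 0.5 * amounts[1]     # 15.0
ev_switch = 0.5 * amounts[1] + 0.5 * amounts[0]   # 15.0 -- switching gains nothing

print(relative_value_of_other, ev_keep, ev_switch)
```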

4philh
There's nothing wrong with using relative measurements, and using absolute measurements doesn't resolve the problem. (It hides from the problem, but that's not the same thing.) The actual resolution is explained in the wiki article better than I could. I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn't reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn't reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.

I think the moral-uncertainty version of the problem is fatal unless you make further assumptions about how to resolve it, such as by fixing some arbitrary intertheoretic-comparison weights (which seems to be what you're suggesting) or using the parliamentary model.

6philh
Regardless of whether the problem can be resolved, I confess that I don't see how it's related to the original two-envelopes problem, which is a case of doing incorrect expected-value calculations with sensible numbers. (The contents of the envelopes are entirely comparable and can't be rescaled.) Meanwhile, it seems to me that the elephants problem just comes about because the numbers are fake. You can do sensible EV calculations, get (a + b/4) for saving two elephants versus (a/2 + b/2) for saving one human, but because a and b are mostly-unconstrained (they just have to be positive), you can't go anywhere from there. These strike me as just completely unrelated problems.
0Jacy Reese Anthis
I think most thinkers on this topic wouldn't think of those weights as arbitrary (I know you and I do, as hardcore moral anti-realists), and they wouldn't find it prohibitively difficult to introduce those weights into the calculations. Not sure if you agree with me there. I do agree with you that you can't do moral weight calculations without those weights, assuming you are weighing moral theories and not just empirical likelihoods of mental capacities. I should also note that I do think intertheoretic comparisons become an issue in other cases of moral uncertainty, such as with infinite values (e.g. a moral framework that absolutely prohibits lying). But those cases seem much harder than moral weights between sentient beings under utilitarianism.

Currently I don't care much about strongly positive events, so at this point I'd say no. In the throes of such a positive event I might change my mind. :)

Yes, because I don't see any significant selfish upside to life, only possible downside in cases of torture/etc. Life is often fun, but I don't strongly care about experiencing it.

Yeah, but it would be very bad relative to my altruistic goals if I died any time soon. The thought experiment in the OP ignores altruistic considerations.

However, if you believe that the agent in world 2 is not an instantiation of you, then naturalized induction concludes that world 2 isn't actual and so pressing the button is safe.

By "isn't actual" do you just mean that the agent isn't in world 2? World 2 might still exist, though?

1Caspar Oesterheld
No, I actually mean that world 2 doesn't exist. In this experiment, the agent believes that either world 1 or world 2 is actual and that they cannot be actual at the same time. So, if the agent thinks that it is in world 1, world 2 doesn't exist.

I assume the thought experiment ignores instrumental considerations like altruistic impact.

For re-living my actual life, I wouldn't care that much either way, because most of my experiences haven't been extremely good or extremely bad. However, if there was randomness, such that I had some probability of, e.g., being tortured by a serial killer, then I would certainly choose not to repeat life.

3Lumifer
Your future life as of this moment certainly has a large amount of randomness.
1AABoyles
If there was randomness such that you had some probability of a strongly positive event, would this incline you towards life?
2AABoyles
Even if the probability was trivial?

Is it still a facepalm given the rest of the sentence? "So, s-risks are roughly as severe as factory farming, but with an even larger scope." The word "severe" is being used in a technical sense (discussed a few paragraphs earlier) to mean something like "per individual badness" without considering scope.

-3Lumifer
Facepalm was a severe understatement, this quote is a direct ticket to the loony bin. I recommend poking your head out of the bubble once in a while -- it's a whole world out there. For example, some horrible terrible no-good people -- like me -- consider factory farming to be an efficient way of producing a lot of food at reasonable cost. This sentence reads approximately as "Literal genocide (e.g. Rwanda) is roughly as severe as using a masculine pronoun with respect to a nonspecific person, but with an even larger scope". The steeliest steelman that I can come up with is that you're utterly out of touch with the Normies.
1[anonymous]
I think the claim that s-risks are roughly as severe as factory farming "per individual badness" is unsubstantiated. But it is reasonable to claim that experiencing either would be worse than death, "hellish". Remember, Hell has circles.
1fubarobfusco
The section presumes that the audience agrees wrt veganism. To an audience who isn't on board with EA veganism, that line comes across as the "arson, murder, and jaywalking" trope.

Thanks for the feedback! The first sentence below the title slide says: "I’ll talk about risks of severe suffering in the far future, or s-risks." Was this an insufficient definition for you? Would you recommend a different definition?

I guess you mean that the AGI would care about worlds where the explosives won't detonate even if the AGI does nothing to stop the person from pressing the detonation button. If the AGI only cared about worlds where the bomb didn't detonate for any reason, it would try hard to stop the button from being pushed.

But to make the AGI care about only worlds where the bomb doesn't go off even if it does nothing to avert the explosion, we have to define what it means for the AGI to "try to avert the explosion" vs. just doing ordinary actions. That gets ... (read more)

2Stuart_Armstrong
We don't actually have to do that. We set it up so the AI only cares about worlds in which a certain wire in the detonator doesn't pass the signal through, so the AI has no need to act to remove the explosives or prevent the button from being pushed. Now, it may do those for other reasons, but not specifically to protect itself. Or another example: an oracle that only cares about worlds in which its output message is not read: http://lesswrong.com/r/discussion/lw/mao/an_oracle_standard_trick/

Fair enough. I just meant that this setup requires building an AGI with a particular utility function that behaves as expected and building extra machinery around it, which could be more complicated than just building an AGI with the utility function you wanted. On the other hand, maybe it's easier to build an AGI that only cares about worlds where one particular bitstring shows up than to build a friendly AGI in general.

2Stuart_Armstrong
One naive and useful security precaution is to only make the AI care about worlds where the high explosives inside it won't actually ever detonate... (and place someone ready to blow them up if the AI misbehaves). There are other, more general versions of that idea, and other uses to which this can be put.

I'm nervous about designing elaborate mechanisms to trick an AGI, since if we can't even correctly implement an ordinary friendly AGI without bugs and mistakes, it seems even less likely we'd implement the weird/clever AGI setups without bugs and mistakes. I would tend to focus on just getting the AGI to behave properly from the start, without need for clever tricks, though I suppose that limited exploration into more fanciful scenarios might yield insight.

1Stuart_Armstrong
The AGI does not need to be tricked - it knows everything about the setup, it just doesn't care. The point of this is that it allows a lot of extra control methods to be considered, if friendliness turns out to be as hard as we think.

As I understand it, your satisficing agent has essentially the utility function min(E[paperclips], 9). This means it would be fine with a 10^-100 chance of producing 10^101 paperclips. But isn't it more intuitive to think of a satisficer as optimizing the utility function E[min(paperclips, 9)]? In this case, the satisficer would reject the 10^-100 gamble described above, in favor of just producing 9 paperclips (whereas a maximizer would still take the gamble and hence would be a poor replacement for the satisficer).
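To make the contrast concrete, here is a small numeric check (my own sketch, not from the original comment) using the numbers of the gamble above; the threshold and variable names are just illustrative.

```python
# Toy comparison of the two satisficer formulations discussed above,
# applied to the 10^-100 gamble for 10^101 paperclips.

p = 1e-100        # probability the gamble pays off
jackpot = 1e101   # paperclips produced if it does
target = 9        # satisficing threshold

# Formulation 1: min(E[paperclips], target)
u_gamble_1 = min(p * jackpot, target)   # min(~10, 9) = 9
u_safe_1 = min(9, target)               # 9 -> indifferent between the gamble and 9 sure clips

# Formulation 2: E[min(paperclips, target)]
u_gamble_2 = p * min(jackpot, target) + (1 - p) * min(0, target)   # ~9e-100
u_safe_2 = min(9, target)                                          # 9 -> strongly prefers the sure 9

print(u_gamble_1, u_safe_1, u_gamble_2, u_safe_2)
```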

A satisficer might not want to take over ... (read more)

If there were a perfect correlation between choosing to one-box and having the one-box gene (i.e., everyone who one-boxes has the one-box gene, and everyone who two-boxes has the two-box gene, in all possible circumstances), then it's obvious that you should one-box, since that implies you must win more. This would be similar to the original Newcomb problem, where Omega also perfectly predicts your choice. Unfortunately, if you really will follow the dictates of your genes under all possible circumstances, then telling someone what she should do is useless, since she will do what her genes dictate.

The more interesting and difficult case is when the correlation between gene and choice isn't perfect.

(moved comment)

[This comment is no longer endorsed by its author]

I assume that the one-boxing gene makes a person generically more likely to favor the one-boxing solution to Newcomb. But what about when people learn about the setup of this particular problem? Does the correlation between having the one-boxing gene and inclining toward one-boxing still hold? Are people who one-box only because of EDT (even though they would have two-boxed before considering decision theory) still more likely to have the one-boxing gene? If so, then I'd be more inclined to force myself to one-box. If not, then I'd say that the apparent co... (read more)

1Caspar Oesterheld
Yes, it should also hold in this case. Knowing about the study could be part of the problem, and the subjects of the initial study could be lied to about a study. The idea of the "genetic Newcomb problem" is that the two-boxing gene is less intuitive than CGTA and that its workings are mysterious. It could make you sure that you have or don't have the gene. It could make you comfortable with decision theories whose names start with 'C', interpret genetic Newcomb problem studies in a certain way, etc. The only thing that we know is that it causes us to two-box, in the end. For CGTA, on the other hand, we have a very strong intuition that it causes a "tickle" or so that could be easily overridden by us knowing about the first study (which correlates chewing gum with throat abscesses). It could not possibly influence what we think about CDT vs. EDT etc.! But this intuition is not part of the original description of the problem.

Paul's site has been offline since 2013. Hopefully it will come back, but in the meanwhile, here are links to most of his pieces on Internet Archive.

Good point. Also, in most multiverse theories, the worst possible experience necessarily exists somewhere.

1qwerte
And this is why destroying everything in existence doesn't seem obviously evil (not that I'd act on it...)

From a practical perspective, accepting the papercut is the obvious choice because it's good to be nice to other value systems.

Even if I'm only considering my own values, I give some intrinsic weight to what other people care about. ("NU" is just an approximation of my intrinsic values.) So I'd still accept the papercut.

I also don't really care about mild suffering -- mostly just torture-level suffering. If it were 7 billion really happy people plus 1 person tortured, that would be a much harder dilemma.

In practice, the ratio of expected heaven t... (read more)

Short answer:

Donate to MIRI, or split between MIRI and GiveWell charities if you want some fuzzies for short-term helping.

Long answer:

I'm a negative utilitarian (NU) and have been thinking since 2007 about the sign of MIRI for NUs. (Here's some relevant discussion.) I give ~70% chance that MIRI's impact is net good by NU lights and ~30% that it's net bad, but given MIRI's high impact, the expected value of MIRI is still very positive.

As far as your question: I'd put the probability of uncontrolled AI creating hells higher than 1 in 10,000 and the probabili... (read more)

2Baughn
Okay. I'm sure you've seen this question before, but I'm going to ask it anyway. Given a choice between
* A world with seven billion mildly happy people, or
* A world with seven billion minus one really happy people, and one person who just got a papercut
Are you really going to choose the former? What's your reasoning?

Nice point. :)

That said, your example suggests a different difficulty: People who happen to be special numbers n get higher weight for apparently no reason. Maybe one way to address this fact is to note that what number n someone has is relative to (1) how the list is enumerated and (2) what universal Turing machine is being used for KC in the first place, and maybe averaging over these arbitrary details would blur the specialness of, say, the 1-billionth observer according to any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.

Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

A "do not resuscitate" kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

8Evan_Gaensbauer
Edit: replies to this comment have changed my mind: I no longer believe the scenario(s) I illustrate below are absurd. That is, I no longer believe they're so unlikely or nonsensical that it's not even worth acknowledging them. However, I don't know what probability to assign to the possibility of such outcomes. Also, for all I know, it might make the most sense to think the chances are still very low. I believe it's worth considering them, but I'm not claiming it's a big enough deal that nobody should sign up for cryonics. The whole point of this discussion is that incredibly bad outcomes, however unlikely, may happen, so we wish to prepare for them. So, I understand why you point out this possibility. Still, that scenario seems very unlikely to me. Yudkowsky's notion of Unfriendly AI is predicated on most possible minds the AI might have not caring about human values, so just using our particles to "make something else". If the future turns into the sort of Malthusian trap Hanson predicts, it doesn't seem the minds then would care about resuscitating us. I believe they would be indifferent, until the point they realized that where our mind-brains are being stored is real estate to be used for their own processing power. Again, they obliterate our physical substrates without bothering to revive us. I'm curious why or what minds would want to resuscitate us without caring about our wishes. Why put us through virtual torture, when if they needed minds to efficiently achieve a goal, they could presumably make new ones that won't object to or suffer through whatever tribulations they must labor through? Addendum: shminux reasons through it here, concluding it's a non-issue. I understand your concern about possible future minds being made sentient, and forced into torturous labor. As much as that merits concern, it doesn't explain why Omega would bother reviving us of all minds to do it.
7Synaptic
I think I did not explain my proposal clearly enough. What I'm claiming is that if you could see intermediate steps suggesting that a worst-type future is imminent, or that it merely crosses your probability threshold as "too likely", then you could enumerate those and request to be removed from biostasis then, before those who are resuscitating you would have a chance to do so.

This is awesome! Thank you. :) I'd be glad to copy it into my piece if I have your permission. For now I've just linked to it.

1imuli
Consider it to be public domain. If you pull the image from its current location and message me when you add more folks, I might even update it. Or I can send you my data if you want to go for more consistency.

Cool. Another interesting question would be how the views of a single person change over time. This would help tease out whether it's a generational trend or a generic trend with getting older.

In my own case, I only switched to finding a soft takeoff pretty likely within the last year. The change happened as I read more sources outside LessWrong that made some compelling points. (Note that I still agree that work on AI risks may have somewhat more impact in hard-takeoff scenarios, so that hard takeoffs deserve more than their probability's fraction of attention.)

6imuli
Birth Year vs Foom [plot]: A bit less striking than the subset famous enough to have Google pop up their birth year (green).

Good question. :) I don't want to look up exact ages for everyone, but I would guess that this graph would look more like a teepee, since Yudkowsky, Musk, Bostrom, etc. would be shifted to the right somewhat but are still younger than the long-time software veterans.

3imuli
The subset whose birth years you can get off the first page of a Google search of their name (n=9) has a pretty clear correlation with younger people believing in harder takeoff. (I'll update if I get time to dig out others' birth years.)

Good points. However, keep in mind that humans can also use software to do boring jobs that require less-than-human intelligence. If we were near human-level AI, there may by then be narrow-AI programs that help with the items you describe.

Thanks for the comment. There is some "multiple hypothesis testing" effect at play in the sense that I constructed the graph because of a hunch that I'd see a correlation of this type, based on a few salient examples that I knew about. I wouldn't have made a graph of some other comparison where I didn't expect much insight.

However, when it came to adding people, I did so purely based on whether I could clearly identify their views on the hard/soft question and years worked in industry. I'm happy to add anyone else to the graph if I can figure out... (read more)

This is a good point, and I added it to the penultimate paragraph of the "Caveats" section of the piece.

1[anonymous]
That wasn't really the point I was getting at (commercial vs academic). The point was more that there is a skill having to do with planning and execution of plans which people like Elon Musk demonstrably have, which makes their predictions carry significant weight. Elon Musk has been very, very successful in many different industries (certificate authorities, payment services, solar powered homes, electric cars, space transportation) by making controversial / not obvious decisions about the developmental trajectory of new technology, and being proven right in pretty much every case. Goertzel has also founded AI companies (Webmind, Novamente) based on his own predicted trajectories, and ran these businesses into the ground[1]. But Goertzel, having worked with computer tech this whole time, is ranked higher than Musk in terms of experience on your chart. That seems odd, to say the least. [1] http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDreams.htm (Again, I don't want this to sound like a slight against Goertzel. He's one of the AGI researchers I respect the most, even if his market timing and predicted timelines have been off. For example, Webmind and Google started around the same time, and Webmind's portfolio of commercial products was basically the same -- search, classification -- and its general R&D interests are basically aligned with Google post-2006. Google of today is what Webmind was trying to be in 1999 - 2001. If you took someone from mid 2000 and showed them a description of today's Google with names redacted, they'd be excused for thinking it was Webmind, not Google. Execution and near-term focus matters. :\ )

Thanks for the correction! I changed "endorsed" to "discussed" in the OP. What I meant to convey was that these authors endorsed the logic of the argument given the premises (ignoring sim scenarios), rather than that they agreed with the argument all things considered.

3CarlShulman
Thanks Brian.