All of G0W51's Comments + Replies

G0W5100

Panexistential risk is a good, intuitive name.

G0W5100

True. Also, the Great Filter is more akin to an existential catastrophe than to existential risk, that is, the risk of an existential catastrophe.

G0W5140

Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life i... (read more)

1_rpd
I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.
0polymathwannabe
I think the term is Great Filter.
G0W5100

That sounds about right.

G0W5100

It's later, but, unless I am mistaken, the arrival of the intelligence explosion isn't that much later than when most people will retire, so I don't think that fully explains it.

2ChristianKl
I think it's often double. Retiring in 40 years and expecting the intelligence explosion in 80 years.
G0W5100

People could vote for government officials who have FAI research on their agenda, but currently, I think few if any politicians even know what FAI is. Why is that?

G0W5100

Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than what would be socially optimal because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet, hardly anyone worries about it. Why?

5ChristianKl
Why do you think they are a similar distance in the future? If you take the LW median estimate for the likely arrival of the intelligence explosion, that's later than when most people are going to retire. If you look at the general population, most people consider the intelligence explosion even less likely.
6gjm
First: Most people haven't encountered the idea (note: watching Terminator does not constitute encountering the idea). Most who have have only a very hazy idea about it and haven't given it serious thought.

Second: Suppose you decide that both pension savings and the intelligence explosion have a real chance of making a difference to your future life. Which can you do more about? Well, you can adjust your future wealth considerably by changing how much you spend and how much you save, and the tradeoff between present and future is reasonably clear. What can you do to make it more likely that a future intelligence explosion will improve your life, and less likely that it'll make it worse? Personally, I can't think of anything I can do that seems likely to have non-negligible impact, nor can I think of anything I can do for which I am confident about the sign of whatever impact it does have. (Go and work for Google and hope to get on a team working on AI? Probably unachievable, not clear I could actually help, and who knows whether anything they produce will be friendly? Donate to MIRI? There's awfully little evidence that anything they're doing is actually going to be of any use, and if at some point they decide they should actually start building AI systems to experiment with their ideas, who knows? They might be dangerous. Lobby for government-imposed AI safety regulations? Unlikely to succeed, and if it did, it might turn out to impede carefully done AI research more than actually dangerous AI research, not least because one can do AI research in more than one of the world's countries. Try to build a friendly AI myself? Ha ha ha. Assassinate AI researchers? Aside from being illegal and immoral and dangerous, probably just as likely to stop someone having a crucial insight needed for friendly AI as to stop someone making something that will kill us all. Try to persuade other people to worry about unfriendly AI? OK, but they don't have any more
6[anonymous]
Because most people don't agree that 'it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future'.
G0W5100

I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I'm not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.

G0W5100

Improving my rationality. Are you looking for something more specific?

0[anonymous]
Yes. Epistemic rationality or instrumental rationality? If the former, what specific aspects of it are you looking to improve; if the latter, what specific goals are you looking to achieve?
G0W5100

How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?

2[anonymous]
Should is one of those sticky words that needs context. What are your goals for using LW?
G0W5100

What exactly do you suggest using to invest, then?

G0W5100

Some parties may be more likely to accelerate scientific progress than others, and those who do could decrease existential risk by decreasing the time spent in high-risk states, for example the period when there are dangerous nanotechnological weapons but other astronomical objects have not been colonized. This probably is not enough to justify voting, but I thought I would just let you know.

G0W5100

Presumably "employ the same strategy" should be interpreted loosely, as it seems problematic to give no consideration to agents who would use a slightly different allocation strategy.

Thanks for the idea. I will look into it.

G0W5180

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV) and how much weight each agent will be given?

Nick Bostrom's Superintelligence mentions that it is an open problem as to whether AIs, non-human animals, currently deceased people, etc should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than that of others. However, Bostrom does not actually answer these questions, other than slightly advocati... (read more)

Vaniver110

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV), and how much weight each agent will be given?

I don't think anyone has a satisfactory solution to what is inherently a political question, and I think people correctly anticipate that analyzing it through the lens of politics will lead to unsatisfying discussions.

3Gurkenglas
Thinking of the prisoners-dilemma-with-access-to-sourcecode, an obvious strategy would be to allocate negentropy to agents that would employ the same strategy in proportion to the probability that they would have ended up in the position to allocate the universe's negentropy.
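A minimal sketch of that allocation rule in code (every name and probability below is hypothetical, purely to make the proportionality concrete):

```python
# Sketch of the strategy described above: allocate negentropy only to agents
# that would use this same rule, in proportion to the (estimated) probability
# that each would have ended up as the one allocating the universe's negentropy.

def allocate_negentropy(total, agents):
    """agents: list of (name, uses_same_strategy, p_would_have_been_allocator)."""
    cooperators = [(name, p) for name, same, p in agents if same]
    mass = sum(p for _, p in cooperators)
    if mass == 0:
        return {}
    return {name: total * p / mass for name, p in cooperators}

# Hypothetical example: the agent that wouldn't reciprocate gets nothing;
# the others split the budget in proportion to their probabilities.
print(allocate_negentropy(1.0, [("humans", True, 0.6),
                                ("aliens", True, 0.3),
                                ("defector", False, 0.1)]))
```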
G0W5120

Is it okay to re-ask questions on open threads if they were not answered the last time they were asked on one? I had asked this question but received no answer, and I am concerned it would be spammy to re-ask.

2ChristianKl
LW isn't a place that has strict rules for issues like this. It operates on the principle that its users are generally smart and make decent decisions. If a person makes posting decisions that the community disapproves of, he will get downvoted.
5philh
I'd say ask it at most three times consecutively, probably no more than two. But two is absolutely fine, especially since you previously asked it quite late in the thread.
1MrMind
I personally wouldn't mind if you re-asked. It might happen that I might know an answer this week and not the past one, and usually open threads get forgotten really quickly.
G0W5100

My gut says that the performance in a vehicle collision will probably bring the head to a halt against a relatively immobile object, so the hat won't do much of anything as the crushable bits are crushed too fast to be effective.

I don't see how the latter clause follows from the former. You said that in the drop test, the impact reduction was roughly 25%. This isn't huge, but I can't say it "won't do much of anything." Were you thinking of something else to support your claim?

0Dorikka
No other source, but keep in mind that helmets are tuned for a certain force level. Too durable and the helmet does not reduce peak force, as it does not crush. Too weak and it crushes quickly, again with little reduction in peak force. This should just emphasize using the 25% number here, though, since the forces are more representative. Redacting "won't do much of anything" except as implied by 25%, but keep in mind that if peak accelerations are much higher than in the given case, the helmet will be less effective due to the above. This may or may not be the case in car crashes, depending on speed.
G0W5120

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV), and how much weight each agent will be given?

Nick Bostrom's Superintelligence mentions that it is an open problem as to whether AIs, non-human animals, currently deceased people, etc should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than that of others. However, Bostrom does not actually answer these questions, other than slightly advoca... (read more)

G0W5120

Don't worry, I don't mind math. Alas, I mainly have difficulty understanding why people act how they do, so I doubt mathematics will help much with that. I think I'm going to take the suggestion someone gave of reading more textbooks. A psychology course should also help.

G0W5100

Severity Index (I assume this is based on the head injury criterion?)

Actually, in an email they said the head of NOCSAE did the test, so presumably the NOCSAE Severity Index was used. An NOCSAE article says, "There is no measurable difference in safety of helmets with scores below the 1200 SI threshold." So in other words, in the test the hats did not protect against any significant damage, because no significant damage was done even without the hat. Despite this, the webpage said that, "The Crasche hat reduces the severity of blunt ... (read more)

1Dorikka
Interesting - thanks for checking this. If the Severity Index is claiming no significant damage below 1200, I think it may be incorrect or may have a different criterion for severe damage. Some helmet standards seem to be fairly insensitive, only accounting for moderate or severe brain injury, whereas MTBI can have long-lasting effects. Yes, I discount Severity Index claims, as the metric does not appear to give reasonable results. 188g is a crapload of linear acceleration, but the metric puts it under the threshold... I don't buy it, so I am left to judge on peak linear acceleration instead (shame that rotational acceleration was not measured...)
G0W5120

I don't see how this would really help unless I am trying to do original research.

2IlyaShpitser
What are you trying to do? There are a lot of gotchas w/ causality. Lots of wikipedia info is wrong, etc. If your thought process is "I want to learn about causes of things, but this seems like an awful lot of math..." consider that you may need to internalize some (not all!) of this math before you can talk about causes properly at all. It's like physics. Physics is handy, but there's some math. It's probably a good idea to learn a bit of physics if you are interested in the physical world, even if you aren't interested in doing original physics research. ---------------------------------------- I can generally point you in the right direction, but this will take some work from you, also.
G0W5150

Where can one find information on the underlying causes of phenomena? I have noticed that most educational resources discuss superficial occurrences and trends but not their underlying causes. For example, this Wikipedia article discusses the happenings in the Somali Civil War but hardly discusses the underlying motivations of each side and why the war turned out how it did. Of course, such discussions are often opinionated and have no clear-cut answers, perhaps making Wikipedia a sub-optimal place for them.

I know LW might not be the best place to ask thi... (read more)

3Lumifer
In social sciences the "causes" depend on your preferred analysis framework and are often highly contentious. For a "deeper-level understanding" I'd recommend reading many viewpoints which disagree with each other.
1Dagon
Typically academic books and papers are the only places that really try to identify cause and effect at a level of abstraction that makes you think you understand. Be aware, of course, that neither they nor you can actually understand it - human behavior is complex enough that we can't model individual choices very well, let alone the sum of billions of individual choices that add up to societal "phenomena" like wars and demographic shifts and stock market blips.
5IlyaShpitser
Read about causal inference.
5Dahlen
What kind of phenomena are we talking about? You should specify if you're referring more narrowly to social and historical phenomena, because that's where the biggest gaps between what one can say on the surface about them and what actually drove them are. It's also a very murky area in regards to specifying causality. The only reasonably effective method I've tried for this is to first read the Wikipedia article, to get an overview of the objective facts, events, numbers and so on, then try to find press articles about the topic, which are less objective but include more details.
G0W5100

My gut says that the performance in a vehicle collision will probably bring the head to a halt...

Presumably, the impact would cause the pedestrian to fly back in roughly the same direction the car was moving during the impact, rather than come to a complete stop. That said, I don't really know enough about the tests to know if this would make a difference in efficacy. Could you link the exact data you received?

0Dorikka
The data is posted above; I'm unlikely to get around to Dropboxing it so I can link it (as it was from an email). I agree with you re body movement in a vehicle collision. However, at some point your body would stop. If your head hit something while your body was in motion, the impacted object would likely have enough strength to bring the head to an abrupt halt. (Contrast with a knife being punched through paper mache - I would expect the force on the knife to be much lower than if it hit concrete, as it would go through the paper mache without much velocity change.)
G0W5100

Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.

G0W5100

Yes, but I'm looking to see if it increases existential risk more than it decreases it, and if the increase is significant.

G0W5100

Where exactly is the logical prior being used in the decision procedure? It doesn't seem like it would be used for calculating U, as U was implied to be computable. I don't see why it would be needed for p, as p could be computed from the complexity of the program, perhaps via Kolmogorov complexity. Also, what is the purpose of us? Can't the procedure just set us to be whatever U outputs?

G0W5100

Oh, I think I see. Confidence is a feeling, while credence is a belief.

G0W5110

I find it interesting that you both are underconfident and realize you are underconfident. Have you tried adjusting for underconfidence like you would any other cognitive bias? (At least you need not adjust for overconfidence!)

2Fluttershy
I don't have much social confidence, but social confidence need not be related to credence calibration. I still end up giving somewhat overconfident answers on CFAR's credence calibration game, despite being shy. My above comment could have been clearer: the bit about "my perception of the quality of certain things that I have written" has more to do with my self-worth, and less to do with my ability to judge the quality of my own writing.
3ike
Someone thinking they are underconfident does not prove they are not overconfident.
G0W51-10

Alternatively, the site could let the users determine what is good. Users could "like" or "dislike" articles, and these likes and dislikes would affect the reputation of the publisher. The higher the publisher's reputation, and the more likes and fewer dislikes an article has, the higher the article would rank when being searched for, and articles with sufficiently low rankings would be hidden. Think Stack Exchange for science.

It could be expanded in many ways, for example by weighting likes and dislikes from high-status users more heavily than those from low-status ones, or by using numeric ratings instead.
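A toy sketch of how that weighted scoring might work (the threshold, default weight, and all example numbers here are hypothetical):

```python
# Toy version of the scheme above: each like/dislike is weighted by the
# voter's reputation, the weighted sum ranks the article, and articles
# scoring below a cutoff are hidden from search results.

HIDE_THRESHOLD = -5.0  # assumed cutoff, chosen arbitrarily

def article_score(votes, reputation):
    """votes: list of (user_id, +1 for like / -1 for dislike);
    reputation: dict mapping user_id to that user's reputation weight."""
    return sum(direction * reputation.get(user, 1.0) for user, direction in votes)

def is_visible(votes, reputation):
    return article_score(votes, reputation) > HIDE_THRESHOLD

# Hypothetical example: one high-reputation like outweighs two low-reputation dislikes.
rep = {"alice": 10.0, "bob": 1.0, "carol": 1.0}
votes = [("alice", +1), ("bob", -1), ("carol", -1)]
print(article_score(votes, rep), is_visible(votes, rep))  # 8.0 True
```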

G0W5110

Good job. Why hasn't this been published in a journal?

6Fluttershy
Thanks; this comment made me happy. Part of the answer is that I generally have low self-esteem, which negatively affects my perception of the quality of certain things that I have written. Another part of the answer was that I wrote this with the very specific goal of estimating values of the "years of life added vs. age at castration" curve at different points in mind, which seems much more narrow than the goal of doing basic science work, which is most of what journals publish. Additionally, many journals have publication fees, which I would have to pay out of pocket. Others charge readers access fees; I'd rather people be able to access my work freely. As things currently stand, I might still be able to mention this work during interviews as an example of a time when I noticed others didn't seem to be working on a certain problem and took action myself, if the interviewer didn't seem to be prejudiced against transhumanist or LGBT folks.
G0W5140

Oppression could cause an existential catastrophe if the oppressive regime is never ended.

G0W5150

I have heard (from the book Global Catastrophic Risks) that life extension could increase existential risk by giving oppressive regimes increased stability by decreasing how frequently they would need to select successors. However, I think it may also decrease existential risk by giving people a greater incentive to care about the far future (because they could be in it). What are your thoughts on the net effect of life extension?

-3[anonymous]
Is there anything that can't somehow be spun into increasing existential risk? The biggest existential risk is being alive at all in the first place.
8pcm
One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio to imply fewer wars. See http://earthops.org/immigration/Mesquida_Wiener99.pdf. Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.
3Username
Radical life extension might lead to overpopulation and wars that might escalate to existential risk level danger.
G0W5140

The book Global Catastrophic Risks states that it does not appear plausible that molecular manufacturing will not come into existence before 2040 or 2050. I am not at all an expert on molecular manufacturing, but this seems hard to believe, given how little work seems to be going into it. I couldn't find any sources discussing when molecular manufacturing will come into existence. Thoughts?

1[anonymous]
There are reasons very little work is going into it - the concept makes very little sense compared to manipulating biological systems or making systems that work similar to biological systems. See http://www.sciencemag.org/content/347/6227/1221.short or this previous post of mine: http://lesswrong.com/lw/hs5/for_fai_is_molecular_nanotechnology_putting_our/97rl
G0W5100

But other than self-importance, why don't people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?

G0W5100

I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play, as knowledgeable people would be more likely to answer than the ignorant would be.

1MrMind
Surely AI is a concept that's more and more present in Western culture, but only as fiction, as far as I can tell. No man in the street takes it seriously, as in "it's really starting to happen". Possibly the media are paving the way for a change in that, as the surge of AI-related movies seems to suggest, but I would bet it's still an idea very far from their realm of possibilities. Also, once the reality of an AI was established, it would still be a jump to believe in the possibility of an intelligence superior to humans', a leap that for me is tiny but that I suspect would not be so small for many (self-importance and all that).
G0W5110

Why don't people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence's creation isn't being advocated out of fears of it being unFriendly, then why don't more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?

4IffThen
FWIW, I have been a long-time reader of SF, have long been a believer in strong AI, am familiar with friendly and unfriendly AIs and the idea of the singularity, but hadn't heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely normal, but arose from a context close to normal. My thought process until I started reading LessWrong and related sites was basically split between "scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks -- playing chess, obscure math problems, managing complicated data traffic" and "intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever with lots of dead ends". Most of what I had read in terms of spontaneous AI was fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI. I also believe that science reporting downplays the AI aspects of computer advances. Siri, self-driving cars, etc. are no longer referred to as AI in the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything that we have now is framed as just an interesting program, not an 'intelligence' of any sort.
4[anonymous]
If you're not reading about futurism, it's unlikely to come up. There aren't any former presidential candidates giving lectures about it, so most people have never heard of it. Politics isn't about policy as Robin Hanson likes to say.
6MrMind
One answer could be that people don't really think that a superintelligence is possible. It doesn't even enter in their model of the world.
G0W5120

Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive and non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, there still is good material on these topics, often in the form of LW blog posts.

So, what is the cause of the absence of a single, comprehensive list? Such a list sounds... (read more)

0Vaniver
The short answer seems to be a combination of "tastes differ," "starting points differ," and "destinations differ."
6ChristianKl
The tricky thing is to summarize both recommendations for books and those against them. We had a book recommendation survey after Europe-LWCW, and Thinking Fast and Slow got 5 people in favor and 4 against it. The top nonfiction recommendations were: Influence by Cialdini, Getting Things Done, Gödel Escher Bach and The Charisma Myth. Those four also got no recommendations against them.
G0W5120

See the post in the sequence "37 Ways That Words Can Be Wrong" called "Sneaking in Connotations". It seems to be roughly what you're talking about.

G0W5120

Here's a potential existential risk. Suppose a chemical is used for some task or made as a byproduct of another task, especially one that spreads it throughout the atmosphere. Additionally, suppose it causes sterility, but takes a very long time to do so. Perhaps such a chemical could attain widespread use before its deleterious effects are discovered, and by then, it would have already sterilized everyone, potentially causing an existential catastrophe. I know the probability of this scenario seems very small compared to that of other risks, but is it worthy of consideration?

1drethelin
There is a very short window for this to be a serious existential threat, on the order of a few decades. If mass sterilization does not happen soon, our technological ability to make babies using cloning and other technologies will outpace the need for genital-based reproduction.
tim100

On the face of it, I don't feel that this particular risk differentiates itself enough from "what if [insert subtle end-of-times scenario here]?" to be worthy of specific consideration. It's a lot of what ifs and perhapses.

6D_Malik
Interesting. Very small concentrations of the chemical would have to sterilize practically everyone they contacted - else it would just cause humanity to very rapidly evolve resistance, or maybe kill off the developed world. Reminds me of the decline in testosterone levels over the past couple decades, which might be due to endocrine-disrupting compounds in the water supply and in plastics and food, but which hasn't been enough to sterilize much of the population.
G0W5100

Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.

3Unknowns
If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed but trying to evolve microbes into intelligent life is less likely by far.
G0W5140

What are some recommended readings for those who want to decrease existential risk? I know Nick Bostrom's book Superintelligence, How can I reduce existential risk from AI?, and MIRI's article Reducing Long-Term Catastrophic Risks from Artificial Intelligence are useful, but what else? What about non-AI-related existential risks?

G0W5100

I've been reading the discussion between Holden et al on the utility of charities aimed at directly decreasing existential risk, but the discussion seems to have ended prematurely. It (basically) started with this post, then went to this post. Holden made a comment addressing the post, but I think it didn't fully address the post and I don't think Holden's comment was fully addressed either. Is there any place that continues the discussion?

G0W5110

That said, I don't accept any of the arguments given here for why it's unacceptable to assign a very low probability to a proposition. I think there is a general confusion here between "low subjective probability that a proposition is correct" and "high confidence that a proposition isn't correct"; I don't think those two things are equivalent.

I don't think you've really explained why you don't accept the arguments in the post. Could you please explain why and how the difference between assigning low probability to something and havi... (read more)

G0W5100

Karnofsky has, as far as I know, not endorsed measures of charitable effectiveness that discount the utility of potential people.

Actually, according to this transcript, on page four, Holden finds the claim that the value of creating a life is "some reasonable" ratio of the value of saving a current life to be very questionable. More exactly, the transcript said:

Holden: So there is this hypothesis that the far future is worth n lives and this causing this far future to exist is as good as saving n lives. That I meant to state as an accurate

... (read more)
G0W5100

I don't think you were being obtuse. Your post wasn't bad per se; it was just off-topic, as (unless I am misinterpreting it) it doesn't really add anything new about rationality or applied rationality.

Also, in case you had trouble locating the open thread, just click "Discussion" at the top of the page, then click the link under "Latest Open Thread" on the right of the page.

G0W5110

I don't think Less Wrong discussion is the best place for poetry, though someone please correct me if I am mistaken.

09eB1
I think it would be better posted to the open thread.
G0W5100

You're absolutely right. I'm not sure how I missed or forgot about reading that.

G0W5100

The article said the leverage penalty "[penalizes] hypotheses that let you affect a large number of people, in proportion to the number of people affected." If this is all the leverage penalty does, then it doesn't matter if it takes 3^^^3 atoms or units of computation, because atoms and computations aren't people.

That said, the article doesn't precisely define what the leverage penalty is, so there could be something I'm missing. So, what exactly is the leverage penalty? Does it penalize how many units of computation, rather than people, you ca... (read more)
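One hedged way to read the quoted description as a formula, assuming the penalty simply divides the prior by the number of people a hypothesis lets you affect (this is a gloss, not something the article states in this form):

\[ P_{\text{leveraged}}(H) \;\propto\; \frac{P_{\text{prior}}(H)}{N(H)}, \]

where N(H) is the number of people the hypothesis H puts you in a position to affect, so a claim of affecting 3^^^3 people picks up a prior factor on the order of 1/3^^^3.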

2Eliezer Yudkowsky
G0W5120

What if the mugger says he will give you a single moment of pleasure that is 3^^^3 times more intense than a standard good experience? Wouldn't the leverage penalty not apply and thus make the probability of the mugger telling the truth much higher?

I think the real reason the mugger shouldn't be given money is that people are more likely to be able to attain 3^^^3 utils by donating the five dollars to an existential risk-reducing charity. Even though the current universe presumably couldn't support 3^^^3 utils, there is a chance of being able to create or ... (read more)

0Eliezer Yudkowsky
I don't think you can give me a moment of pleasure that intense without using 3^^^3 worth of atoms on which to run my brain, and I think the leverage penalty still applies then. You definitely can't give me a moment of worthwhile happiness that intense without 3^^^3 units of background computation.
G0W5110

The second problem can easily be explained by having your utility function not be linear in the number of non-destroyed universes.

G0W5110

Is Solomonoff induction a theorem for making optimal probability distributions or a definition of them? That is to say, has anyone proved that Solomonoff induction produces probability distributions that are "optimal," or was Solomonoff induction created to formalize what it means for a prediction to be optimal? In the former case, how could they define optimality?

(And another question: I posted this a couple days ago on the last open thread, but it was closed before I got a response. Is it okay to repost it?)
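For reference, Solomonoff induction is usually presented as a definition: a universal prior over sequences, with prediction by conditioning on the observed prefix; the optimality results (e.g., bounds on total expected prediction error against any computable environment) are separate theorems proved about that prior. The prior is commonly written as

\[ M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}, \]

the sum over all programs p whose output on a universal (monotone) machine U begins with the string x.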

0MrMind
Oops, I answered there.