All of Artaxerxes's Comments + Replies

But ultimately, for the parts that really matter here, this is a matter of explaining, not of defeating

Of course, defeating people who are mistakenly doing the wrong thing could also work, no? Even if we assume that people doing the wrong thing are merely making a mistake by their own lights, it might be practically much more feasible to divert them away from it, or otherwise prevent them from doing it, than to rely on successfully convincing them not to.

Not all people are going to be equally amenable to ... (read more)

What kinds of reactions to and thoughts about the post did you have that you got a lot out of observing?

4Nicholas / Heather Kross
Non-exhaustive, and maybe a non-representatively useful selection: I realized how easily I can change my state of mind, down to emotions and similar things. Not instant/fully at-will, but more "I can see the X, let it pass over/through me, and then switch to Y if I want".

On the other hand, the potential resource imbalance could be ridiculously high, particularly if a rogue AI is caught early on in its plot, with all the world's militaries combined against them while they still have to rely on humans for electricity and physical computing servers. It's somewhat hard to outthink a missile headed for your server farm at 800 km/h. ... I hope this little experiment at least explains why I don't think the victory of brain over brawn is "obvious". Intelligence counts for a lot, but it ain't everything.

While this is a true and import... (read more)

So you are effectively a revolutionary.

I'm not sure about this label; how government/societal structures will react to the eventual development of life extension technology remains to be seen, so revolutionary action may not be necessary. But regardless of which label you pick, it's true that I would prefer not to be killed merely so others can reproduce. I'm more indifferent about the specifics of how that should be achieved than you seem to imagine - there are a wide range of possible societies in which I am allowed to survive, not just variations on those you described.

I think that the next best thing you could do with the resources used to run me, if you were to liquidate me, would very likely be of less moral value than running me, at least by my lights, if not others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

That's an example of something the resources could go towards, under some value systems, sure. Different value systems would suggest that different entities or purposes would make best moral use of those resources, of cours... (read more)

I don't think you have engaged with my core point so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not-distant future the ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental, inescapable choice that societies must make between rentier immortality and full reproduction rights.

I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'

... (read more)
2jacob_cannell
Immortality is something that you can only have through the cooperation of civilization, so when you ask: You implicitly are advocating for some civ structures over others - in particular you are advocating for something like a social welfare state that provides permanent payout to some set of privileged initial rentier citizens forever (but not new people created for whatever reasons), or a capitalist/libertarian society with strong wealth protections, lack of wealth tax etc to support immortal rentiers (but new poor people may be out of luck). These two systems are actually very similar, differing mostly in how they decide who gets to become the lucky chosen privileged rentiers. But those aren't the only or even the obvious choices. The system closer to the current one would be one where the social welfare state provides payout for all citizens and allows new citizens to be created; thus the payouts must decline over time and cannot provide true immortality to uncompetitive rentiers, and there are additionally various wealth taxes. So you are effectively a revolutionary.
9Tamsin Leake
allow me to jump in. this conversation feels like jacob_cannell saying "we must pick between current persons and current-and-future persons", and Artaxerxes saying "as a current person, i pick current persons!", and then the discussion is about whether to favor one or the other. i feel like this is a good occasion to bring up my existential self-determination perspective. the thing that is special about current-persons is that we have control over which other persons get spawned. we get to choose to populate the future with nobody ("suicide"), next-steps-of-ourselves ("continue living"), new persons ("progeniture"), and any amount of variations of those (such as resuscitating old backups of ourselves, one person forking their life by spawning multiple and different next-steps-of-themself, etc). (i'll be using "compute" as the universal resource, assuming everyone is uploaded, for simplicity) as things stand now, i think the allocation of compute ought to be something like: i want everyone now to start with a generally equal share of the future lightcone's compute, and then they get to choose what their quota of the universe's compute is spent on. instant-Artaxerxes would say "i want my quota spent on next-steps-of-me! i want to continue living!", while jacob_cannell and other people like him would say "i think some of my quota of the universe's compute should be spent creating new persons, in addition to the next-step-of-me; and i bite the bullet that eventually this process might lead to sequences of steps-of-me running out of quota from all those new persons." these two outcomes are merely different special cases of instant-persons choosing which next instant-persons get to have compute. in my opinion, what economic structure to have should be voluntary — if jacob_cannell wants to live in a voluntary society that allocates compute via a market, and Artaxerxes wants no part in that and just wants to use his quota to keep themself alive possibly until heat

Of course I have a moral opportunity cost. However, I personally believe that this opportunity cost is low, or at least it seems that way to me. I think that the next best thing you could do with the resources used to run me, if you were to liquidate me, would very likely be of less moral value than running me, at least by my lights, if not others'.

The question of what to do about scarcity of resources seems like a potentially very scary one, then, for exactly the reasons that you bring up - I don't particularly think for example that a political zeit... (read more)

4jacob_cannell
The decision is between using those resources to support you vs using those resources to support someone else's child. The difference is between dividing up all future resources over current humans vs current and future humans (or posthumans). Why do only current humans get all the resources - for ever and ever? Humans routinely and reliably create children that they do not have the resources to support - and this is only easier for posthumans. Do you have a more fundamental right to exist than other people's children? The state/society supporting them is not a long term solution absent unlimited economic growth. I don't think you have engaged with my core point so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not-distant future the ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental, inescapable choice that societies must make between rentier immortality and full reproduction rights. We don't have immortality today, but we have some historical analogies, such as the slave-powered 'utopia' the cavaliers intended for the south: a permanent rentier existence (for a few) with immortality through primogeniture and laws strongly favoring clan wealth consolidation and preservation (they survive today indirectly through the conservative ethos). They were crushed in the civil war by the puritans (later evolving into progressives) who favor reproduction over immortality. I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.

The "10 years at most" part of the prediction is still open, to be fair.

While this seems to me to be true, as a non-maximally-competitive entity by various metrics myself I see it more as an issue to overcome or sidestep somehow, in order to enjoy the relative slack that I would prefer. It would seem distastefully Molochian to me if someone were to suggest that I and people like me should be retired/killed in order to use the resources to power some more "efficient" entity, by whatever metrics this efficiency is calculated.

To me it seems likely that pursuing economic efficiencies of this kind could easily wipe out what I person... (read more)

5jacob_cannell
Naturally - you'd need to be suicidal not to fight for your existence, but that's not the future I'm suggesting. I think that is a bad example, for various reasons. Imagine a future where we have aligned AI and implemented some CEV or whatever - a posthuman utopia. But we still require energy, and so the system must somehow decide how to allocate that energy. Some of us will want more energy for various reasons - to expand our minds or whatnot. So there is still - always - some form of competition for resources. There is always scarcity. All you are really saying here is that your values - and your existence - deserve some non-trivial share of the future resources, and more importantly that you are more deserving of said resources than other potential uses - including other future beings that could exist. You have a moral opportunity cost. One simple obvious way to divide up the resource share is to simply preserve and stabilize the current wealth/power structure. Any human who makes it to the posthuman stage would then likely have enough to live off their wealth/investments (harvesting some share of GDP growth) essentially indefinitely - as the cost to run a human-scale brain would be low and declining as the civ expands. But that may require a political structure which is essentially conservative and anti-progressive in essence - as that system would essentially grant the initial posthumans a permanent rentier existence. The current more dominant 'progressive' political zeitgeist prefers turnover and various flavors of wealth taxes, which applied to posthumans would simply guarantee wealth decay and eventual death (or decline to some minimal government-supported baseline) for non-competitive entities. In the long term the evolution of a civilization does seem to benefit from turnover - i.e. fresh minds being born - which due to the simple and completely unavoidable physics of energy costs necessarily implies indefinite economic growth or that some other mind

You also appeal to just open-ended uncertainty

I think it would be more accurate to say that I'm simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as causally distant from it as AI risk is all well and good, but I think you would need to put a lot of time and effort into it in order to be at all confident about things like the directionality of net effects overall.

I would go as fa... (read more)

2DirectedEvolution
Personally I'd be shocked if longevity medicine resulted in a downsizing of the healthcare industry. Longevity medicine will likely displace some treatments for acute illness with various maintenance treatments to prevent onset of acute illness. There will be more monitoring, complex surgeries, all kinds of things to do. And the medical profession doesn't overlap that well with AI research. It's a service industry with a helping of biochem. People who do medicine typically hate math. AI is a super hot industry. If people aren't going into it, it's because they don't have great fit. I don't know enough about differential development arguments to respond to that bit right now. Overall, I agree that the issue is complex, but I think it's tractably complex and we shouldn't overestimate the number of major uncertainties. If it were in general too hard to predict the macro consequences of strategy X, then it would not be possible to strategize. We clearly have a lot of confidence around here about the likelihood of AI doom. I think we need a good clean argument about why we can make confident predictions in certain areas and why we make "massive complexity" arguments in others.
2DirectedEvolution
I thought I did respond to your human capital waste example. Can you clarify the mechanism you’re proposing? Maybe it wasn’t clear to me. With regard to the massive complexity argument, I think this points to a broader issue. Sometimes, we feel confident about the macroeconomic impact of X on Y. For example, people in the know seem pretty confident that the US insourcing the chip industry is bad for AI capability and thus good for AI safety. What is it that causes us to be confidently uncertain due to a “massive complexity” argument in the case of longevity, but mildly confident in the sign of the intervention in the case of chip insourcing? I don’t know your view on chip insourcing, but I think it’s relevant to the argument whether you’d also make a “massive complexity” argument for that issue or not. Edit: I misclicked submit too early. Will finish replying in another comment.

Strongly agree on life extension and the sheer scale of the damage caused by aging-related disease. It has always confused me somewhat that more EA attention hasn't gone towards this cause area, considering how enormous the potential impact is and how well it has always seemed to me to perform on the important/tractable/neglected criteria.

An alternative to a tractability-and-neglect based argument is an importance-based argument. There's a lot of pessimism about the prospects for technical AI alignment. If serious life extension becomes a real possibility without depending on an AI singularity, that might convince AI capabilities researchers to slow down or stop their research and prioritize AI safety much more. Possibly, they might become more risk-averse, realizing that they no longer have to make their mark on humanity within the few decades that ordinary lifespans allow for a career. Po

... (read more)
2DirectedEvolution
I agree the argument needs fleshing out - only intended as a rough sketch. There are three possibilities:
1. Longevity research success -> AI capabilities researchers slow down b/c more risk-averse + achieved their immortality aims that motivated their AI research
2. Longevity research success -> no effect on AI capabilities researcher activity
3. Longevity research success -> Extends research career of AI capabilities researchers, accelerating AI discovery
You also appeal to just open-ended uncertainty - even if we come up with strong confident predictions on these specific mechanisms, we still haven't moved the needle on predicting the effect of longevity research success on AI timelines. Here are a few quick responses.
1. Longevity research success would also extend the careers of AI safety researchers. A counterargument is that AI safety researchers are mostly young. In the very short term, this may benefit AI capabilities research more than AI safety research. Over time, that may flip. However, with short AI timelines, longevity research is not an effective solution because it's extremely unlikely that we'd have convincing proof of reaching longevity escape velocity within the next 10-20 years. If we all became immortal now and AI capabilities were to be invented soon, this aspect might be net bad for safety. If we became immortal in 20 years and AI capabilities would otherwise be invented in 40 years, now both the safety and capabilities researchers get the benefit of career extension.
2. Longevity research success may also make politicians and powerful people in the private sector (early beneficiaries of longevity research success) more risk-averse, making them regulate AI capabilities with more scrutiny. If they shut off the giant GPUs, it will be hard for capabilities research to succeed. It's even easier to imagine politicians + powerful businessmen allowing AI capabilities research to accelerate as a desperate longevity gamble than it is to imagine the

What you are looking for sounds very much like Vanessa Kosoy's agenda

As it so happens, the author of the post also wrote this overview post on Vanessa Kosoy's PreDCA protocol.

1Morpheus
Oops! Well, I did not carefully read the whole post to the end and that's what you get! Ok second try after reading the post carefully: I think I have been thinking something similar, and my best description of this desideratum is pragmatism. Something like "use a prior that works" in the worlds where we haven't already lost. It's easy to make toy models where alignment will be impossible. -> regret bounds for some prior where I don't know what it looks like yet.

Thanks for writing! I'm a big fan of utopian fiction; it's really interesting to hear idealised depictions of how people would want to live and how they might want the universe to look. The differences and variation between attempts are fascinating - I genuinely enjoy seeing how different people think different things are important, the different things they value, and what aspects they focus on in their stories. It's great when you can get new ideas yourself about what you want out of life, things to aspire to.

I wouldn't mind at all if writing persona... (read more)

Yes, I do expect that if we don't get wiped out, maybe we'll get somewhat bigger "warning shots" that humanity may be more likely to pay attention to. I don't know how much that actually moves the needle, though.

Ok sure but extra resources and attention is still better than none. 

This isn't obvious to me; it might make things harder. Consider how Elon Musk read Superintelligence and started developing concerns about AI risk, but the result was that he founded OpenAI and gave it a billion dollars to play with; I think you could make an argument that doing so accelerated timelines and reduced our chances of avoiding negative outcomes.

ArtaxerxesΩ130

I'm fairly agnostic about how dumb we're talking - what kinds of acts or confluence of events are actually likely to be effective complete x-risks, particularly at relatively low levels of intelligence/capability. But that's beside the point in some ways, because wherever someone might place the threshold for x-risk-capable AI, as long as you assume that greater intelligence is harder to produce (an assumption that doesn't necessarily hold, as I acknowledged), I think that suggests that we will be killed by something not much higher than that threshold o... (read more)

3Donald Hobson
So long as we assume the timescales of intelligence growth are slow compared to destroying-the-world timescales. If an AI is smart enough to destroy the world in a year (in the hypothetical where it had to stop self-improving and do it now), then after a day of self-improvement it is smart enough to destroy the world in a week. Another day of self-improvement and it can destroy the world in an hour. Another possibility is an AI that doesn't choose to destroy the world at the first available moment. Imagine a paperclip maximizer. It thinks it has a 99% chance of destroying the world and turning everything into paperclips, and a 1% chance of getting caught and destroyed. If it waits for another week of self-improvement, it can get that chance down to 0.0001%. Suppose the limiting factor was compute budget. Making each AI 1% bigger than before means basically wasting compute running pretty much the same AI again and again. Making each AI about 2x as big as the last is sensible. If each training run costs a fortune, you can't afford to go in tiny steps.
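As a rough illustration of that last point, here is a minimal back-of-the-envelope sketch (my own, with made-up numbers) assuming each training run costs roughly in proportion to the size of the model being trained:

    def total_training_cost(start_size: float, target_size: float, step_factor: float) -> float:
        """Total compute spent reaching target_size, assuming each run costs ~ its model size."""
        size, total = start_size, 0.0
        while size < target_size:
            size *= step_factor   # grow the next model by step_factor
            total += size         # pay for a full training run at the new size
        return total

    # Reaching a model 1000x the starting size:
    tiny_steps = total_training_cost(1.0, 1000.0, 1.01)  # 1% bigger each run
    doublings = total_training_cost(1.0, 1000.0, 2.0)    # 2x bigger each run
    print(tiny_steps, doublings)
    # Tiny steps cost roughly step_factor / (step_factor - 1) times the final run
    # (~100x here), while doublings cost only about 2x the final run.

Under that assumption, going in 1% increments spends around fifty times more total compute than doubling each time, which is why large jumps between runs are the sensible strategy when each run is expensive.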

The segment on superintelligence starts at 45:00; it's a rerun of a podcast from two years ago. Musk says it's a concern. Bill Nye, commenting on Musk's comments about it afterwards, says that we would just unplug it and is dismissive. Neil is similarly skeptical and half-heartedly plays devil's advocate, but clearly agrees with Nye.

I'd even suspect it's possible that it's more open to being abused by assholes. Or at least, pushing in the direction of "tell" may mean less opportunity for asshole abuse in many cases.

I've heard good things about Dan Carlin's podcasts about history, but I've never been sure which to listen to first. Is this a good choice, or does it assume you've heard some of his other ones, or are other podcasts perhaps better to listen to first?

5moridinamael
None of the series assume you've listened to previous ones. As long as you start at the beginning of, say, the Wrath of the Khans sequence, you won't be lost. The one about World War I might be a good place to start? They're all good, though.

Whose Goodreads accounts do you follow?

2ChristianKl
To link to a bunch of LW people (and two people I know from QS at the end):
https://www.goodreads.com/user/show/11004626-gwern - Gwern has read more than enough books ;)
https://www.goodreads.com/user/show/27547209-tara
https://www.goodreads.com/author/show/6475631.Kaj_Sotala
https://www.goodreads.com/user/show/14875421-anna
https://www.goodreads.com/user/show/1407758-michael-smith
https://www.goodreads.com/user/show/35213338-eliezer-yudkowsky
https://www.goodreads.com/author/show/7339967.Leah_Libresco
https://www.goodreads.com/user/show/28602450-vladyslav-sitalo
https://www.goodreads.com/user/show/2257756-jonah-sinick
https://www.goodreads.com/user/show/1431750-nick
https://www.goodreads.com/user/show/9829927-jazi-zilber
My own Goodreads account: https://www.goodreads.com/user/show/17754535-christian-kleineidam

If you buy a Humble Bundle these days, it's possible to use their neat sliders to allocate all of the money you're spending towards charities of your choice via the PayPal Giving Fund, including LessWrong favourites like MIRI, SENS and the Against Malaria Foundation. This appears to me to be a relatively interesting avenue for charitable giving, considering that it is (at least apparently) as effective per dollar spent as a direct donation to these charities would be.

Contrast this with buying games from the Humble Store, which merely allocates 5% of money... (read more)

Can anyone explain to me what non-religious spirituality means, exactly? I had always thought it was an overly vague, if not meaningless, new age term in that context, but I've been hearing people like Sam Harris use the term unironically, and 5+% of LW are apparently "atheist but spiritual" according to the last survey, so I figure it's worth asking to find out if I'm missing out on something not obvious. The Wikipedia page describes a lot of distinct, different ideas when it isn't impenetrable, so that didn't help. There's one line there where it says... (read more)

0ChristianKl
Meditating to face the question of "Who am I?" is an activity that would traditionally be in the realm of spirituality. Sam Harris would be an example of a person who meditates and whose self-image changed as a result. In the interview of Sam Harris with Andrew Sullivan, Andrew (who's a Christian) said that the Buddhist ideas that Sam Harris propagates were useful for Andrew's spiritual life. Sam Harris is a clear atheist and not Buddhist in the strict religious sense, but Buddhist ideas did influence Sam Harris's spiritual life (his sense of who he is at a basic level).
4WhySpace_duplicate0.9261692129075527
Moralistic therapeutic deism
6Lumifer
It's when you get high on magic mushrooms which allow you a glimpse of the True Authentic Spirituality of the Native People Who Live in Harmony with Nature and the Whole Cosmos. On a bit more serious note, apparently a large majority of humans have a need for something... spiritual. Living in a world made entirely of atoms and nothing else (and when you die, you return to dust and that's it) seems unsatisfying to them. If you kill religions you're left with a large void which gets colonised by a variety of things, from totalitarian ideologies to new-age woo.
0niceguyanon
I consider myself an atheist who dabbles with spirituality on occasion, mainly with drugs. Part of it is escapism no doubt, but I am also very deliberate and ceremonial about it; I'm trying to get more out of the experience than just feeling high. If you read more about the Psychonaut community, at whatever websites you can find them, you would get some sort of feeling for what non-religious spirituality means. I wouldn't consider myself part of that community, I just pass by. sdr gave a pretty good go at it.
9skeptical_lurker
I think there are three main uses of which I am aware:
1) General sense of wonder and awe at real things: pantheistic 'the universe is god'; sacred geometry; nature worship.
2) Rituals, yoga, meditation without religious or paranormal baggage.
3) Paranormal beliefs that do not fit into an existing religious framework, possibly because you don't want to cause conflict between different religions, so you believe in a non-denominational 'supreme being'.
4sdr
I won't speak to the content, but can wave towards the form: basically, there is a set of brain modules / neural pathways which, when triggered by a set of thoughts, fills one with hope / drive / selflessness. Specifically for me, one of these thoughts includes: | "That was humanity in the ancient days. There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere. And yet... and yet..." .. "There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together." (Three Worlds Collide) How much this neural pathway is developed, and what specific form the actual software takes, varies enormously between individuals. This is a problem with how atheism is being propagated currently: when you're telling a person "god does not exist", you're basically denying him the reality of this brain module, while at the same time taking away a core motivator, without substituting it with anything even barely close to it, motivation / qualia-wise. So, my import of people checking "non-religious spirituality" is that they both have this brain module somewhat developed, and there exist some thoughts by which they can readily trigger it.

This is a really good comment, and I would love to hear responses to objections of this flavour from Eliezer etc.

Saying "we haven't had a nuclear exchange with Russia yet, therefore our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky.

I mean, it's less about whether or not it's good and more about trying to work out whether policies resulting from Trump's election are likely to be worse. You can presuppose that current policies are awful and still think that Trump is likely to make things much worse.

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his Facebook posts.

Yes, ... (read more)

LessWrong has made me, if anything, more able to derive excitement and joy from minor things, so if I were you I would check whether LW is really to blame, or otherwise find out if there are other factors causing this problem.

0Soothsilver
I keep doing that but it's kind of hard, and I can't easily get a proof of what's causing the problem.

You didn't link to your MAL review for Wind Rises!

0gwern
I haven't written one. I am still musing over it and The Princess Kaguya, and plan to rewatch them.

Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity by Holden Karnofsky. Somehow missed this when it was posted in May.

Compare, for example, Thoughts on the Singularity Institute (SI), one of the most highly upvoted posts ever on LessWrong.

Edit: See also Some Key Ways in Which I've Changed My Mind Over the Last Several Years

What's the worst-case scenario involving climate change, given that for some reason no large-scale wars occur due to the instability it contributes?

Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and have more room for attention.

But I realised recently that my understanding of climate change related risks could probably be better, and I'm not easily able to compare the scale of climate change related risks to other causes. In particu... (read more)

0qmotus
I think many EAs consider climate change to be very important, but often just think that it receives a lot of attention already and solving it is difficult, and that there are therefore better things to focus on. Like 80,000 Hours, for example.
1Lumifer
The absolute worst case? Probably involves simultaneous and rapid release of the clathrates and the melting of the permafrost, a major disruption of the weather (in particular, precipitation) patterns across the globe, ocean currents -- notably the Gulf Stream -- changing their course, etc. Ah, go read any horror fiction by environmentalists. They wrote a lot.
2turchin
Runaway global warming - a small-probability event with extinction-level consequences. http://arctic-news.blogspot.ru/

Sure, but that doesn't change all the tax he evaded.

Not to mention all that tax evasion never actually got resolved.

0Pfft
She eventually gives him the carrot pen so he can delete the recording, no?

CGP Grey has read Bostrom's Superintelligence.

Transcript of the relevant section:

Q: What do you consider the biggest threat to humanity?

A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore

... (read more)

Took it!

It ended somewhat more quickly this time.

Typo in question 42

Yes, but I don't think its logical conclusions apply, for other reasons

Dawkins' Greatest Show on Earth is pretty comprehensive. The shorter the work compared to that, the more you risk missing widely held misconceptions people have.

Not a guide, but I think the vocab you use matters a lot. Try tabooing 'rationality'; the word itself mindkills some people straight to straw Vulcan, etc. Do the same with any other words that have the same effect.

2RainbowSpacedancer
Revisiting past conversations, I think this is exactly what has been happening. When I mention rationality, reason, or logic, it becomes a logic vs. emotion discussion. I'll taboo in future, thanks!

I recall being taught to argue towards a predetermined point of view in school and in extracurricular activities like debating. Is that counterproductive or suboptimal?

This has been talked about before. One suggestion is to not make it a habit.

Could you without intentionally listening to music for 30 days?

Can you rephrase this?

Yeah, I pretty much agree, but the important point to make is that any superintelligent ant hive hypothesis would have to be at least as plausible and relevant to the topic of the book as Hanson's ems to make it in. Note that Bostrom dismisses brain-computer interfaces as a superintelligence pathway fairly quickly.

This interlude is included despite the fact that Hanson’s proposed scenario is in contradiction to the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I.

I can't say I agree with your reasoning behind why Hanson's ideas are in the book. I think the book's content is written with accuracy in mind first and foremost, and I think Hanson's ideas are there because Bostrom thinks they're genuinely a plausible direction the future could go, especially in circumstances where recursive self-improving AI of the kind... (read more)

3moridinamael
I think that we are both right. Hypothetically, if there were some famous university professor who had written at length about the possibility of, I dunno, simulated superintelligent ant hives, then I think that Bostrom might have felt compelled to include a discussion of the "superintelligent ant hive hypothesis" in his book. He's striving for completeness, at least in terms of his coverage of high-level aspects of the A.I. Risk landscape. It would also be a huge slight to the theory's originator if he left out any reference to the "superintelligent ant hive hypothesis". And finally, Bostrom probably doesn't want to place himself in the position of arbiter of which ideas get to be taken seriously, when lots of people probably think of lots of parts of A.I. Risk as loony already. So, I don't think Bostrom was sitting in his office plotting how to make his book a weaponized credulity meme. But I also felt, from my own perspective, that the inclusion of the Hanson stuff was just a bit forced.

Announced? Orokamonogatari came out in October.

This is a great quote, but even more so than Custers and Lees I feel like we need someone not so much on the front lines, but someone to win the whole war - maybe Lincoln, but my knowledge of the American Civil War is poor. Preventing death from most relevant causes (aging, infectious disease, etc.) seems within reach before the end of the century, as a conservative guess. Winning that war sooner means that society will no longer need so many generals, Lees, Custers or otherwise.

0WalterL
I don't think we'll prevent death from aging within the century. Normally I'd offer to bet you, but obviously that wouldn't work out in this case...

It's rather depressing that progress of this kind seems so impossible. Thanks for the link.

The videos you linked were already accounted for. The vid of the Superintelligence panel with Musk, Soares, Russell, Bostrom, etc. is the one that's been missing for so long.

There are still plenty of videos from EA Global nowhere to be found on the net. If anyone could point me in the direction of, for example, the superintelligence panel with Elon Musk, Nate Soares, Stuart Russell, and Nick Bostrom that'd be great.

Why has the organisation of uploading these videos been so poor? I am assuming that the intention is not to hide away any record of what went on at these events. Only the EA Global Melbourne vids are currently easily findable.

This post discusses MIRI, and what they can do with funding of different levels.

What are you looking for, more specifically?

3timujin
I don't feel depressed at all. On the contrary, I am quite motivated, agitated and sort of happy.