All of Cheibriados's Comments + Replies

How do I put it so as not to offend anyone... I think this is the right discussion for me to say that although I perceive this comment as positive, it is definitely not one I would wish to allocate my attention to, given the choice. I would have expected such posts to get downvoted. I suggest two separate voting systems: one for positive fuzzy feelings, one for worthiness of attention. My hope is that this would mitigate the reluctance to downvote (or to not upvote) that stems from the social nature of humans. I.e. we could continue not discouraging each other while still having a useful conversation.

8Raemon
Yeah. This is the sort of comment I'd consider tagging "off-topic" (probably while responding publicly to it to note that I am happy about the comment, so that the off-topic-ness clearly comes across as good management of people's attention rather than as a mean rebuke).
"In fact, their bodies are possibly so optimized for their current hunting strategy that higher intelligence might only trip them up."

It is much more likely that intelligence beyond this point simply costs too much relative to the benefit. Brains use a lot of energy.

2moridinamael
True. Can't ignore the fact that, as far as we know, hominids are the only animals that figured out fire, which essentially acts as a multiplier on the nutrients we can access from a given unit of food.

"The order in which the conscion visits your person-slices makes no difference to what it’s like to be you". Then how does it make a difference? What does it even mean for a conscion to visit someone before someone else? If it makes no difference, then you should adapt the theory to reflect that. And then we are left with two sets of points of spacetime (those visited by the conscion and those not), which sounds rather epiphenomenal.

It is worth mentioning that, unlike nuclear war, where the nature of the threat was visible to politicians and the public alike, alignment seems to be a problem that not even all AI researchers understand. That in itself probably rules out a direct political solution. But even politics in the narrow sense can be utilized with a bit of creativity (e.g. by giving politicians a motivation more direct than saving the world, grounded in things they can understand without having to believe the weird-sounding claims of cultish-looking folks).

1MichaelA
(Very late to this thread) The failure to recognise/understand/appreciate the problem does seem an important factor. And if it were utterly unchangeable, maybe that would mean all efforts need to just go towards technical solutions. But it's not utterly unchangeable; in fact, it's a key variable which "political" (or just "not purely technical") efforts could intervene on to reduce AI x-risk. E.g., a lot of EA movement building, outreach by AI safety researchers, Stuart Russell's book Human Compatible, etc., is partly targeted at getting more AI researchers (and/or the broader public or certain elites like politicians) to recognise/understand/appreciate the problem. And by doing so, it could have other benefits, like increasing the amount of technical work on AI safety, influencing policies that reduce the risk of different AI groups rushing to the finish line and compromising on safety, etc.

I think this was actually somewhat similar in the case of nuclear war. Of course, the basic fact that nuclear weapons could be very harmful is a lot more obvious than the fact that AGI/superintelligence could be very harmful. But the main x-risk from nuclear war is nuclear winter, and that's not immediately obvious - it requires quite some modelling, and is unlike anything people have seen in their lifetimes. And according to Toby Ord in The Precipice (page 65):

So in that case, technical work on understanding the problem was communicated to politicians (this communication being a non-technical intervention), and helped make the potential harms clearer to politicians, which helped lead to a political (partial) solution.

Basically, I think that technical and non-technical interventions are often intertwined or support each other, and that we should see current levels of recognition that AI risk is a big deal as something we can and should intervene to change, not as something fixed.