Since the arguments that AI alignment is hard don't depend on any specifics about our level of intelligence, shouldn't those same arguments convince a future AI to refrain from engaging in self-improvement? More specifically, if the argument that we should expect a more intelligent AI we build to have a...
A key assumption in most x-risk arguments for AI is that the ability of an agent to exert control over the world increases rapidly with intelligence. After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the...
This is crossposted from my blog. While I think the ideas here are solid, the presentation still needs some work, so I'd also appreciate comments on the presentation so I can turn this into a more polished essay, e.g., is the second section worth keeping and what should...
Introduction Most people on LessWrong seem to be some kind of hedonic consequentialist. They think states with less suffering and more joy are better. Moreover, it is intuitive that if you can cause some improvement in human well-being to be achieved, then (other things being equal) it is better...
The recent article on Overcoming Bias suggesting the Fermi paradox might be evidence that our universe is indeed a simulation prompted me to wonder how one would go about gathering evidence for or against the hypothesis that we are living in a simulation. The Fermi paradox isn't very good evidence, but there...