This is a difficult line to thread: while I can't be sure which awakening experiences you're opposed to in particular (incidentally, see the later paragraphs on the variations between them), as a general category they seem to be the consequence of your intuitive world-model losing its mysterious "self" node, which is replaced by a more gears-like representation of internal mental states and their mechanisms.
However, you might be able to make it more difficult to "look" in that direction by using vipassana-style meditations with limited time. This should lea...
I liked the parts about Moloch and human nature at the beginning, but the AI aspects seem to be unfounded anthropomorphism, applying human ideas of 'goodness' or 'arbitrariness [as an undesirable attribute]' despite there being reasons to doubt they apply to non-human motivation.
...But I think another scenario is plausible as well. The way the world works is… understandable. Any intelligent being can understand Meditations On Moloch or Thou Art Godshatter. They can see the way incentives work, and the fact that a superior path
The main advantage of Intelligence Augmentation is that we know that our current minds are both generally or near-generally intelligent and more-or-less aligned with our values, and we also have some level of familiarity with how we think (edit: and likely must link our progress in IA to our understanding of our own minds, due to the neurological requirements).
So we can find smaller interventions that are certainly, or at least almost certainly, going to have no effect on our values, and then test them over long periods of time, using prior knowledge of hu...
"Like if we increased yearly economic growth by 5% (for example 2% to 2.1%), what effect would you expect that to have?"
In my personal experience, academics tend to prefer working on visibly beneficial problems; Manhattan Projects and AI alignment groups both exist (detrimental and non-obviously beneficial, respectively), but for the most part we get projects like eco-friendly technology and efficient resource allocation in specified domains.
Due to this, greater economic growth means more resources to bring to bear for other scient...
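As a rough sanity check on how much a shift like the one quoted above compounds (a sketch with purely illustrative numbers; `total_growth` is just a hypothetical helper, not anything from the quoted comment):

```python
# Illustrative only: how a 2% -> 2.1% yearly growth rate compounds over decades.
def total_growth(rate: float, years: int) -> float:
    """Multiplicative growth factor after `years` of constant yearly growth."""
    return (1 + rate) ** years

baseline = total_growth(0.020, 50)  # ~2.69x over 50 years
boosted = total_growth(0.021, 50)   # ~2.83x over 50 years
print(f"Extra resources after 50 years: {boosted / baseline - 1:.1%}")  # ~5.0%
```

So even a seemingly tiny bump in the growth rate translates into a few percent more total resources available for research a generation or two out.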
"Working on global poverty seems unlikely to be a way of increasing our chances of succeeding at alignment. If anything, this would likely increase both the number of future alignment and capacity researchers. So it's unlikely to significantly increase our chances."
A fair point regarding alignment (I hadn't thought about how it would affect AI researchers as well), but I was thinking more from the perspective of X-risk in general.
AI alignment is one issue that doesn't seem to be significantly affected either way by this, but we also have things like ...
One method would be to take advantage of low-hanging fruit not directly related to X-risk. Clearly motivation isn't enough to solve these problems (and I'm not just talking about alignment), so we should be trying to optimize all our resources, and that includes removing major bottlenecks, such as (to use an imagined example) hunger killing intelligent, benevolent potential researchers in particular areas because of a badly designed shipping route.
A real-life example of this would be the efforts of the Rationalist community to promote more efficient m...
As a Babble this is excellent, and many of these ideas (e.g. optimizing income streams, motivating or participating in groups) seem to be necessary prerequisites for being in a position to make progress on X-risk problems.
But I think the nature of such problems (namely, that they have already been attempted by many other individuals, with at least some centralized organizations where those individuals share their experiences to avoid duplicating effort) means that any undirected Babble will primarily encounter lines of inquiry that have already been addressed, ...
The main issue with AGI Alignment is that the AGI is more intelligent than us, meaning that making it stay within our values requires both perfect knowledge of our values and some understanding of how to constrain it to share them.
If this is truly an intractable problem, it still seems that we could escape the dilemma by focusing on efforts in Intelligence Augmentation, e.g. through Mind Uploading and meaningful encoding/recoding of digitized mind-states. Granted, it currently seems that we will develop AGI before IA, but if we could shift focus enough to reverse this trend, then AGI would not be an issue, as we ourselves would have superior intelligence to our creations.
To expand on this (though I only participated in the sense of reading the posts and a large portion of the comments): my reflective preference was to read enough to form a satisfactorily reliable view of the evidence presented and of what it implied about the reliability of data and analyses from the communities in question. And I succeeded in doing so (at least according to my model of my current self's upper limits for understanding a complex sociological situation without any personally-observed data).
But I could feel that the above preference was...
This link seems to be assuming that one's prior internal state does not influence the initial mental representation of data in any way. I don't have any concrete studies to share refuting that, but let's consider a thought experiment.
Say someone really hates trees. Like 'trees are the scum of the earth, I would never be in any way associated with such disgusting things' hates trees. It's such a strong hate, and they've dwelled on it for so long (trees are quite common, after all, it's not like they can completely forget about them), that it's bled over int...
Keeping in mind I haven’t gotten a chance to read the paper itself… the learning process is the main breakthrough, because it creates agents that can generalise to multiple problems. There are admittedly commonalities between the different problems (e.g. the physics), but the same learning process applied to something like board game data might make a “general board game player”, or perhaps even something like a “general logistical-pipeline-optimiser” on the right economic/business datasets. The ability to train an agent to succeed on such diverse proble
This seems to be distinct from List of Links, but they're similar enough that it might still be a merge candidate.
My initial ideas (e.g. cases where time is important) are pretty well captured by other comments, but in reviewing my thoughts I noticed some assumptions I was making, which might themselves qualify as additional requirements to eradicate trade:
A) I assumed that the skill-download feature includes knowledge downloading and that no task requires more 'knowledge+skills in active use at a time' than the human brain can feasibly handle. If this is violated, specialization is still somewhat valuable despite free and presumably-unrestricted knowledge-sharing.
If you ...
My reading of that last point was that the government has an incentive to declare the vaccines valid solutions to COVID-19 even if they haven’t been properly tested for efficacy and side effects, in the spirit of downplaying the risks of the epidemic. And similarly (in the spirit of steelmanning), the companies developing the vaccines need to do visibly better than their competitors, and preferably release before or simultaneously with them, for the sake of profits; incentives which also push toward incomplete or inadequate testing procedures.
However, my prior for...
Weighing in here because this is a suboptimality I've often encountered when speaking with math-oriented interlocutors (including my past self):
The issue here is an engineering problem, not a proof problem. Human minds tend to require a lot of cognitive resources to hold provisional definitions for terms that, outside this specific context, either have no definition in their minds or a drastically different one.
Structuring your argument as a series of definitions is fine when making a proof in a mathematical language, since comprehensibility is not a ...