I don't really understand why this is at all important. Do you expect (or endorse) users to... vote on posts solely by reading a list of titles, without clicking through to refresh their memories of what the posts are about (and, as a natural corollary, seeing who the authors are)? What's the purpose of introducing inconveniences and hiding information when that information will very likely be found anyway?
I get the importance of marginalist thinking, and of pondering what incentives you are creating for the median and/or marginal voting participant, blah blah blah, but if there is ever a spot on the Internet where superficiality is at its lowest and the focus is on substance over form, the LW review process might well be it.
In light of that, this question just doesn't seem (to a relative outsider like me) worth pondering all that much.
The three most important paragraphs, extracted to save readers the trouble of clicking on a link:
The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.
[...]
The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades.
[...]
These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect U.S. and allied military personnel and ensure mission success.
I appreciate your response, and I understand that you are not arguing in favor of this perspective. Nevertheless, since you have posited it, I have decided to respond to it myself and expand upon why I ultimately disagree with it (or at the very least, why I remain uncomfortable with it because it doesn't seem to resolve my confusions).
I think revealed preferences show I am a huge fan of explanations of confusing questions that ultimately claim the concepts we are reifying are inconsistent/incoherent, and that instead of hitting our heads against the wall over and over, we should take a step back and ponder the topic at a more fundamental level first. So I am certainly open to the idea that “do I nonetheless continue living (in the sense of, say, anticipating the same kinds of experiences)?” is a confused question.
But, as I see it, there are a ton of problems with applying this general approach in this particular case. First of all, if anticipated experiences are an ultimately incoherent concept that we cannot analyze without first (unjustifiably) reifying a theory-laden framework, how precisely are we to proceed from an epistemological perspective? When the foundation of 'truth' (or at least of what I conceive it to be) is based on comparing and contrasting what we expect to see with what we actually observe experimentally, doesn't the entire edifice collapse once the essential constituent piece of 'experiences' breaks down? Recall the classic (and eternally underappreciated) paragraph from Eliezer:
I pause. “Well . . .” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’ ”
What exactly do we do once we give up on precisely pinpointing the phrases "I believe", "my [...] hypotheses", "surprised", "my predictions", etc.? Nihilism, attractive as it may be to some from a philosophical or 'contrarian coolness' perspective, is not decision-theoretically useful when you have problems to deal with and tasks to accomplish. Note that while Eliezer himself is not what he considers a logical positivist, I think I... might be?
I really don't understand what "best explanation", "true", or "exist" mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.
This isn't just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (what seems to me to be a) free-floating sense, I don't understand what it can mean to have evidence for or against such a proposition. So I don't understand how I am even supposed to ever justifiably change my mind on this topic, even if I were to accept it as something worth discussing on the object level.
My whole theory of epistemology, and everything else logically downstream of it (aka virtually everything I believe), relies on the thesis (axiom, if you will) that there is a 'me' out there doing some sort of 'prediction + observation + updating' in response to stimuli from the outside world. I get that this might be like reifying ghosts in a Wentworthian sense when you drill down on it, but I still have desires about the world, dammit, even if they don't make coherent sense as concepts! And I want them to be fulfilled regardless.
And, moreover, one of those preferences is maintaining a coherent flow of existence, avoiding changes that would be tantamount to death (even if they are not as literal as 'someone blows my brains out'). As a human being, I have preferences over what I experience too, not just over what state the random excitations of quantum fields in the Universe are in at some point past my expiration date. As far as I can see, the hard problem of consciousness (i.e., the nature of qualia) has not come close to being solved; any answer to it would have to give me a practical handbook for answering the initial questions I posed to jbash.
Edit: This comment misinterpreted the intended meaning of the post.
Practical CF, more explicitly: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the same conscious experience as that brain, in the specific sense of thinking literally the exact same sequence of thoughts in the exact same order, in perpetuity.
I... don't think this is necessarily what @EuanMcLean meant? At the risk of conflating his own perspective and ambivalence on this issue with my own, this is a question of personal identity and whether the computationalist perspective, generally considered a "reasonable enough" assumption to almost never be argued for explicitly on LW, is correct. As I wrote a while ago on Rob's post:
As TAG has written a number of times, the computationalist thesis seems not to have been convincingly (or even concretely) argued for in any LessWrong post or sequence (including Eliezer's Sequences). What has been argued for, over and over again, is physicalism, and then more and more rejections of dualist conceptions of souls.
That's perfectly fine, but "souls don't exist and thus consciousness and identity must function on top of a physical substrate" is very different from "the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain's electronic circuit," and the latter has never been given compelling explanations or evidence. This is despite the fact that the particular conclusions that have become part of the ethos of LW about stuff like brain emulation, cryonics, etc., necessarily rely on the latter, not the former.
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of... what is physically there (which, to the best of our current understanding, can be represented through quantum-mechanical probability amplitudes), not the conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and its entire chemical make-up in favor of its electrical connections. As I have mentioned, that is a mere model that represents a very lossy compression of what is going on; it is not the same as the real thing, and conflating the two is an error that has been going on here for far too long. Of course, it very well might be the case that Rob and the computationalists are right about these issues, but the explanation up to now should make it clear why it is on them to provide evidence for their conclusion.
I recognize you wrote in response to me a while ago that you "find these kinds of conversations to be very time-consuming and often not go anywhere." I understand this, and I sympathize to a large extent: I also find these discussions very tiresome, which became part of why I ultimately did not engage too much with some of the thought-provoking responses to the question I posed a few months back. So it's totally ok for us not to get into the weeds of this now (or at any point, really). Nevertheless, for what it's worth, the "everyday experience" thermostat example does not seem to me like an argument in favor of computationalism over physicalism-without-computationalism, since the primary generator of my intuition that my identity would be the same in that case is the literal physical continuity of my body throughout that process. I just don't think there is a "prosaic" (i.e., bodily-continuity-preserving) analogue or intuition pump for the case of WBE or similar scenarios in this respect.
Anyway, in light of footnote 10 in the post ("The question of whether such a simulation contains consciousness at all, of any kind, is a broader discussion that pertains to a weaker version of CF that I will address later on in this sequence"), which to me draws an important distinction between a brain-simulation having some consciousness/identity versus having the same consciousness/identity as that of whatever (physically-instantiated) brain it draws from, I did want to say that this particular post seems focused on the latter and not the former, which seems quite decision-relevant to me:
jbash: These various ideas about identity don't seem to me to be things you can "prove" or "argue for". They're mostly just definitions that you adopt or don't adopt. Arguing about them is kind of pointless.
sunwillrise: I absolutely disagree. The basic question of "if I die but my brain gets scanned beforehand and emulated, do I nonetheless continue living (in the sense of, say, anticipating the same kinds of experiences)?" seems the complete opposite of pointless, and the kind of conundrum in which agreeing or disagreeing with computationalism leads to completely different answers.
Perhaps there is a meaningful linguistic/semantic component to this, but in the example above, it seems that understanding the nature of identity is decision-theoretically relevant to how one should think about whether WBE would be good or bad (in this particular respect, at least).
All of these ideas sound awesome and exciting, and precisely the right kind of use of LLMs that I would like to see on LW!
It's looking like the values of humans are far, far simpler than a lot of evopsych literature and Yudkowsky thought, and related to this, values are less fragile than people thought 15-20 years ago, in the sense that values generalize far better OOD than people used to think 15-20 years ago
I'm not sure I like this argument very much as it currently stands. It's not that I believe anything you wrote in this paragraph is wrong per se; it's more that it misses the mark a bit in terms of framing.
Yudkowsky had (and, AFAICT, still has) a specific theory of human values in terms of what they mean in a reductionist framework, where it makes sense (and is rather natural) to think of (approximate) utility functions of humans and of Coherent Extrapolated Volition as things-that-exist-in-the-territory.
I think a lot of writing and analysis, summarized by me here, has cast a tremendous amount of doubt on the viability of this way of thinking and has revealed what seem to me to be impossible-to-patch holes at the core of these theories. I do not believe "human values" in the Yudkowskian sense ultimately make sense as a coherent concept that carves reality at the joints; I instead observe a tremendous number of unanswered questions and apparent contradictions that throw the entire edifice into disarray.
But this reorientation of thinking about what it means to satisfy human values has been supplemented by several shifts: "prosaic" alignment researchers pivoting more towards intent alignment, as opposed to doomed-from-the-start paradigms like "learning the true human utility function" or ambitious value learning; a recognition that realism about (AGI) rationality is likely just straight-up false, and that the very specific set of conclusions MIRI-clustered alignment researchers have reached about what AGI cognition will be like is entirely overconfident and seems contradicted by our modern observations of LLMs; and, ultimately, an increased focus on the basic observation that full value alignment simply is not required for a good AI outcome (or at the very least to prevent AI takeover). So it's not so much that human values (to the extent such a thing makes sense) are simpler; it's more that fulfilling those values is just not needed to nearly as high a degree as people used to think.
Mainly, minecraft isn't actually out of distribution, LLMs still probably have examples of nice / not-nice minecraft behaviour.
Is this inherently bad? Many of the tasks that will be given to LLMs (or scaffolded versions of them) in the future will involve, at least to some extent, decision-making and processes whose analogues appear somewhere in their training data.
It still seems tremendously useful to see how they would perform in such a situation. At worst, it provides information about a possible upper bound on the alignment of these agentized versions: yes, maybe you're right that you can't say they will perform well in out-of-distribution contexts if all you see are benchmarks and performance on in-distribution tasks; but if they show gross misalignment on tasks that are in-distribution, then this suggests they would likely do even worse when novel problems are presented to them.
a lot of skill ceilings are much higher than you might think, and worth investing in
The former doesn't necessarily imply the latter in general, because even if we are systematically underestimating the realistic upper bound for our skill level in these areas, we would still have to deal with diminishing marginal returns to investing in any particular one. As a result, I am much more confident that the former claim is correct for the average LW reader than that the latter is. In practice, my experience tells me that you often have "phase changes" of sorts, where the response to an increase in skill level is binary rather than continuous: either you've hit the activation-energy threshold, and thus unlock the self-reinforcing loop of benefits that flow from the skill (once you can apply it properly and iterate on it or use it recursively), or you haven't, in which case any measurable improvement is minimal. It's thus often more important to get past the critical point than to make marginal improvements either before or after hitting it.
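To make the "phase change" intuition a bit more concrete, here is a minimal toy sketch in Python (purely illustrative: the two curves, the threshold value, and every number in it are made-up assumptions, not claims about any real skill):

```python
import numpy as np

# Toy illustration only: all parameters below are made up.
effort = np.linspace(0, 10, 101)  # hypothetical units of invested effort

# Model 1: smooth diminishing marginal returns.
diminishing = np.log1p(effort)

# Model 2: "phase change" -- little payoff until an activation threshold,
# then a large, self-reinforcing jump (modeled here as a logistic curve).
threshold = 5.0
phase_change = 3.0 / (1.0 + np.exp(-4.0 * (effort - threshold)))

# Marginal value of one additional unit of effort under each model.
marginal_dim = np.gradient(diminishing, effort)
marginal_pc = np.gradient(phase_change, effort)

for e_idx in (10, 50, 90):  # effort = 1, 5, 9
    print(f"effort={effort[e_idx]:>4}: "
          f"diminishing marginal return={marginal_dim[e_idx]:.3f}, "
          f"phase-change marginal return={marginal_pc[e_idx]:.3f}")
```

Under the first model, the most valuable unit of effort is the very first one; under the second, almost all of the value is concentrated around the critical point, which is the asymmetry I'm gesturing at.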
On the other hand, many of the skills you mentioned afterwards in your comment seem relatively general-purpose, so I could totally be off-base in these specific cases.
The document seems to try to argue that Uber cannot possibly become profitable. I would be happy to take a bet that Uber will become profitable within the next 5 years.
Ah, gotcha. Yes, that seems reasonable.