Obviously. That's why it's connected to this blog post.
I'm not saying that it looks like you're copying your views, I'm saying that the updates look like movements towards believing in a certain sort of world: the sort of world where it's natural to be optimistically working together with other people on projects that are fulfilling because you believe they'll work. (This is a super empathizable-with movement, and a very common movement to make. Also, of course this is just one hypothesis.) For example, moving away from theory and "big ideas", as well as moving towards incremental / broadly-good-seeming progres...
I note that almost all of these updates are (weakly or strongly) predicted by thinking of you as someone who is trying to harmonize better with a nice social group built around working together to do "something related to AI risk".
How are you telling the difference between "evolution aligned humans to this thing that generalized really well across the distributional shift of technological civilization" vs. "evolution aligned humans to this thing, which then was distorted / replaced / cut down / added to by the distributional shift of technological civilization"?
Isn't a major point of purifiers to get rid of pollutants, including tiny particles, that gradually but cumulatively damage respiration over long-term exposure?
From Owen's post: "I’d suggested her as a candidate earlier in the application process, but was not part of their decision-making process". "Unrelated job offer" is a bad description of that. I don't see the claim about hosting in the post, but that would soften things a little if true.
Anyway, it's not a random blog post! If it was a post about how many species of flowers there are or whatever, then my comment wouldn't make sense. But it's not random! It's literally about acting wholesomely! His very unwholesome behavior is very relevant to a post he's making to the forum of record about what wholesome behavior is!
It makes sense, but I think it's missing that adults who try to want in the current social world get triggered and/or traumatized as fuck because everyone else is behaving the way you describe.
I specifically think it's well within the human norm, i.e. that most of the things I read are written by a person who has done worse things, or who would do worse things given equal power. I have done worse things, in my opinion. There's just not a blog post about them right now.
I think that's not a great characterization of what happened. IIRC, Owen was not the person who "flew her out" (she was flown out for an unrelated job offer), he did not "surprise her" in the relevant sense (he was asked whether he could host her by other people), and they were in-general pretty close and had talked about adjacent stuff already.
Overall, my sense is Owen did mess up with a bunch of this stuff, but I don't think it makes sense for that to follow him around to all random blogposts he writes. In-general posts on LW are pseudonymous...
I agree, but he should be more forthcoming!
@Zach Stein-Perlman @habryka Since I guess you don't understand what I'm saying: If someone's going to read an essay about a topic that's entwined with soulcrafting, and that essay is written by someone who has some amount of poison in them, then the reader should be aware of this. Care to say what you disagree with about that?
Speaking for myself, I don't agree with any of it. From what I have read, I don't agree that the author's personal issues demonstrate "some amount of poison in them" outside the human norm, or in some way that would make me automatically skeptical of anything they said "entwined with soulcrafting." And I certainly don't agree that a reader "should be aware" of nonspecific problems that an author has which aren't even clearly relevant to something they wrote. I would give the exact opposite advice -- to try to focus on the ideas first before involving preconceptions about the author's biases.
I hope owencb won't let this prevent him from continuing to post on this topic.
climate?
My guess is that a good way to start is to write a short or medium length post that talks about one thing that seems really interesting to you, that LessWrong readers probably haven't heard about / thought about.
The standard that you seem to be suggesting is Kafkaesque. Someone accuses you of something, you prove them false, but that doesn't count because of strategic meanings of words. What?
But imagine this from the other side of a conflict. There's a social norm:
Don't isolate people (e.g. because it makes them vulnerable, e.g. to abuse).
Now a hypothetical (cartoonishly explicit) bad actor comes along and says "Aha, I know what to do, I will use my soft power to isolate my employee, but only from some people, and that way I'm not "isolating" them, but I ca...
Any of them. My point is that "climb!" is kind of like a message about the territory, in that you can infer things from someone saying it, and in that it can be intended to communicate something about the territory, and can be part of a convention where "Climb!" means "There's a bear!" or whatever; but still, "Climb!" is, besides being an imperative, a word that's being used to bundle actions together. Actions are kinda part of the territory, but as actions they're also sort of internal to the speaker (in the same way that a map is also part of the territ...
If someone wants to be classified as "... has XY chromosomes, is taller-on-average, has a penis..." and they aren't that, then it's a pathological preference, yeah. But categories aren't just for describing territory, they're also for coding actions. If a human says "Climb!" to another human, is that a claim about the territory? You can try to infer a claim about reality, like "There's something in reality that makes it really valuable for you to climb right now, assuming you have the goals that I assume you have".
If someone says "call me 'he' ", it could ...
On reflection, this post seems subtly but deeply deranged, assuming this is true:
People living 50, or 100, or 200 years ago didn't have nearly this much trouble dating.
If that's true, then all this stuff is beside the point, and the question is what changed.
categories are useful insofar as they compress information by "carving reality at the joints";
I think from context you're saying "...are only useful insofar...". Is that what you're saying? If so, I disagree with the claim. Compressing information is a key way in which categories are useful. Another key way in which categories are useful is compressing actions, so that you can in a convenient way decide and communicate about e.g. "I'm gonna climb that hill now". More to the point, calling someone "he" is mixing these two things together: you're both kin...
Sorry, the 159-word version leaves out some detail. I agree that categories are often used to communicate action intentions.
The academic literature on signaling in nature mentions that certain prey animals have different alarm calls for terrestrial and aerial predators, which elicit different evasive maneuvers: for example, vervet monkeys will climb trees when there's a leopard or hide under bushes when there's an eagle. This raises the philosophical question of what the different alarm calls "mean": is a barking vervet making the denotative statement, ...
You can't just use redefinitions to turn trans women similar to cis women.
What does this mean? It seems like if the original issue is something about whether to call an XY-er "she" if the XY-er asks for that, then, that's sort of like a redefinition and sort of not like a redefinition... Is the claim something like:
Eliezer wants to redefine "woman" to mean "anyone who asks to be called 'she' ". But there's an objective cluster, and just reshuffling pronouns doesn't make someone jump from being typical of one cluster to typical of the other.
...Trans wo
Are you claiming that Zack is claiming that there's no such thing as gender? Or that there's no objective thing? Or that there's nothing that would show up in brain scans? I continue to not know what the basic original object-level disagreement is!
Ok. (I continue to not know what the basic original object-level disagreement is!)
I certainly haven't read even a third of your writing about this. But... I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?
Separately, isn't the obvious correct position simply: there's a bunch of objective stuff about the differences between men and women; there's uncertainty about exactly how these clusters overlap / are violated in real life, e.g. as described in the previous par...
"that person, who wants to be treated in the way that people usually treat men"
Incidentally, one of the things I dislike about this framing is that gender stereotypes / scripts "go both ways". That is, it should be not just "treated like a man" but also "treat people like men do."
Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?
Focusing on brains seems like the wrong question to me. Brains matter due to their effect on psychology, and psychology is easier to observe than neurology.
Even if psychology is similar in some ways, it may not be similar in the ways that matter though, and in fact the ways that matter need not be restricted to psychology. Even if trans women are psychologically the same as cis women, trans ...
I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains?
That's a bit like saying that it's "factually unknown" whether there's an invisible dragon in the garage.
Neuroscientists measure a lot of things about brains. If "develop like female brains" has to be defined in a way that doesn't show up in any metric neuroscientists can measure, then it's "factually unknown" only in the same empty sense as the dragon.
...Or is that not a crux for any
It's not just epistemic confusion that can be most easily corrected with good evidence and arguments. That's what I think we're talking about.
But these people are in control of most institutions in our society. It's not a small problem.
I totally agree with what you say! ... And that's why I'm on the side of those against the system of conflict between groups of people with common interests amongst themselves, against the side of those in favor of that system.
That taking sides in this way is paradoxical (cf. the paradox of intolerance) is why I asked:
How can those against the class system gain appropriate class consciousness without being thereby destroyed?
A key aspect of that is to not look away from the fact that there is a class struggle between those in favor of class struggle a...
Jesus Christ. Savages on LessWrong.
Well, I wrote about this here: https://www.lesswrong.com/posts/tMtMHvcwpsWqf9dgS/class-consciousness-for-those-against-the-class-system
But the internet loves to downvote without explaining why...
Ooh. That makes a lot of sense and is even better... I simply didn't realize there were inline reacts! Kudos.
IDK the reasons.
I guess there's a reason for not having it on top-level posts, but I miss having it on top-level posts.
"Trust" is like "invest". It's an action-policy; it's related to beliefs, such as "this person will interpret agreements reasonably", "this person will do mostly sane things", "this person won't breach contracts except in extreme circumstances", etc., but trust is the action-policy of investing in plans that only make sense if the person has those properties.
Overall feels like it's ok, but very frustrating because it feels like it could be so much better. But I don't think this is mainly about the software of LW; it's about culture more broadly being in decay (or more precisely, all the methods of coordinating on visions having been corrupted, and new ones not gaining steam while defending their boundaries).
A different thing: This is a problem for everyone, but: stuff gets lost. https://www.lesswrong.com/posts/DtW3sLuS6DaYqJyis/what-are-some-works-that-might-be-useful-but-are-difficult It's bad, and there's a worldwide problem of indexing the Global Archives.
I appreciate these views being stated clearly, and at once feel positively toward the author, and also am shaking my head No. As others have pointed out, the mistake theory here is confused.
I think it's not exactly wrong. The way in which it's right is this:
If people doing AGI research understood what we understand about the existential risk of AGI, most of them would stop, and AGI research would go much slower.
In other words, most people are amenable to reason on this point, in the sense that they'd respond to reasons to not do something that ...
What do you mean? Surely they aren't offering this for anyone who writes anything manically. It would be nice if someone volunteered to do that service more often though.
I think you're right that it will take work to parse; it's definitely taking me work to parse! Possibly what you suggest would be good, but it sounds like work. I'll see what I think after the dialogue.
The analogy from historical evolution is the misalignment between human genes and human minds, where the rise of the latter did not result in extinction of the former. It plausibly could have, but that is not what we observe.
The analogy is that the human genes thing produces a thing (human minds) which wants stuff, but the stuff it wants is different from what the human genes want. From my perspective you're strawmanning and failing to track the discourse here to a sufficient degree that I'm bowing out.
For evolution in general, this is obviously pattern measure, and truly can not be anything else.
This sure sounds like my attempt elsewhere to describe your position:
...There's no such thing as misalignment. There's one overarching process, call it evolution or whatever you like, and this process goes through stages of creating new things along new dimensions, but all the stages are part of the overall process. Anything called "misalignment" is describing the relationship of two parts or stages that are contained in the overarching process. The overarchin
I'm saying that you, a bio-evolved thing, are saying that you hope something happens, and that something is not what bio-evolution wants. So you're a misaligned optimizer from bio-evolution's perspective.
A different way to maybe triangulate here: Is misalignment possible, on your view? Like does it ever make sense to say something like "A created B, but failed at alignment and B was misaligned with A"? I ask because I could imagine a position, that sort of sounds a little like what you're saying, which goes:
...There's no such thing as misalignment. There's one overarching process, call it evolution or whatever you like, and this process goes through stages of creating new things along new dimensions, but all the stages are part of the overall process. Anyth
The original argument that your OP is responding to is about "bio evolution". I understand the distinction, but why is it relevant? Indeed, in the OP you say:
For the evolution of human intelligence, the optimizer is just evolution: biological natural selection. The utility function is fitness: gene replication count (of the human defining genes).
So we're talking about bio evolution, right?
I'm saying that the fact that you, an organism built by the evolutionary process, hope to step outside the evolutionary process and do stuff that the evolutionary process wouldn't do, is misalignment with the evolutionary process.
The search process is just searching for designs that replicate well in environment.
This is a retcon, as I described here:
If you run a big search process, and then pick a really extreme actual outcome X of the search process, and then go back and say "okay, the search process was all along a search for X", then yeah, there's no such thing as misalignment. But there's still such a thing as a search process visibly searching for Y and getting some extreme and non-Y-ish outcome, and {selection for genes that increase their relative frequency in the gene pool} is an example.
Ok so the point is that the vast vast majority of optimization power coming from {selection over variation in general} is coming more narrowly from {selection for genes that increase their relative frequency in the gene pool} and not from {selection between different species / other large groups}. In arguments about misalignment, evolution refers to {selection for genes that increase their relative frequency in the gene pool}.
If you run a big search process, and then pick a really extreme actual outcome X of the search process, and then go back and say "ok...
Of course - and we'd hope that there is some decoupling eventually! Otherwise it's just "be fruitful and multiply", forever.
This "we'd hope" is misalignment with evolution, right?
Say you have a species. Say you have two genes, A and B.
Gene A has two effects:
A1. Organisms carrying gene A reproduce slightly MORE than organisms not carrying A.
A2. For every copy of A in the species, every organism in the species (carrier or not) reproduces slightly LESS than it would have if not for this copy of A.
Gene B has two effects, the reverse of A:
B1. Organisms carrying gene B reproduce slightly LESS than organisms not carrying B.
B2. For every copy of B in the species, every organism in the species (carrier or not) reproduces slightly MORE than ...
At some point the post was negative karma, I think; without anyone giving any indication as to why. A savage would be someone unable to think, which is evidenced by downvoting important antimemes without discussion.