I would say "the thing that contains the inheritance particles" rather than "the inheritance particle". "Particulate inheritance" is a technical term within genetics and it refers to how children don't end up precisely with the mean of their parents' traits (blending inheritance), but rather with some noise around that mean, which particulate inheritance asserts is due to the genetic influence being separated into discrete particles with the children receiving random subsets of their parent's genes. The significance of this is that under blending inheritance, the genetic variation between organisms within a species would be averaged away in a small number of generations, which would make evolution by natural selection ~impossible (as natural selection doesn't work without genetic variation).

Isn't singular learning theory basically just another way of talking about the breadth of optima?
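
To illustrate what I mean by breadth: the volume of the near-optimal set scales differently depending on how degenerate the optimum is, which is roughly what SLT's learning coefficient tracks. A toy sketch of that volume-scaling idea (hand-picked 1-D losses, not actual SLT machinery):

```python
import numpy as np

# Volume of the near-optimal set {w : L(w) < eps} for two losses with the
# same minimum. The volume scales like eps^lambda, where lambda is the
# learning coefficient: 1/2 for L(w) = w^2, 1/4 for L(w) = w^4, so the
# more degenerate ("broader") optimum has far more low-loss volume.
w = np.linspace(-1, 1, 2_000_001)
for eps in [1e-2, 1e-4, 1e-6]:
    vol2 = np.mean(w**2 < eps) * 2  # measure of the interval [-1, 1] is 2
    vol4 = np.mean(w**4 < eps) * 2
    print(f"eps={eps:.0e}: vol(w^2)={vol2:.4f} ~ {2*eps**0.5:.4f}, "
          f"vol(w^4)={vol4:.4f} ~ {2*eps**0.25:.4f}")
```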

The tricky part is, on the margin I would probably use various shortcuts, and it's not clear where those shortcuts end short of just getting knowledge beamed into my head.

I already use LLMs to tell me facts, explain things I'm unfamiliar with, handle tedious calculations/coding, generate simulated data/brainstorming and summarize things. Not much, because LLMs are pretty bad, but I do use them for this and I would use them more on the margin.

$I$ isn't a reference frame; rather, if $w \in W$ is a world, then the elements of $f^{-1}(w)$, aka the interpretations that map to $w$, are the reference frames for $w$.

Essentially when dealing with generalized reference frames that contain answers to questions such as "who are you?", the possible reference frames are going to depend on the world (because you can only be a real person, and which real people there are depends on what the world is). As such, "reference frames" don't make sense in isolation, rather one needs a (world, reference frame) pair, which is what I call an "interpretation".

An idea I've been playing with recently:

Suppose you have some "objective world" space $W$. Then in order to talk about subjective questions, you need a reference frame, which we could think of as the members of a fiber $f^{-1}(w)$ of some function $f : I \to W$, for some "interpretation space" $I$.

The interpretations themselves might abstract to some "latent space" $L$ according to a function $g : I \to L$. Functions of $L$ would then be "subjective" (depending on the interpretation they arise from), yet still potentially meaningfully constrained, based on $W$. In particular, if some structure in $W$ lifts homomorphically up through $f$ and down through $g$, you get exactly the same structure in $L$. (And these obviously compose nicely since they're just spans $W \xleftarrow{f} I \xrightarrow{g} L$, so far.)
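
As a minimal sketch of the lifting condition, with toy finite sets standing in for $W$, $I$, and $L$ (the "structure" here is just a labeling of worlds, the simplest case): a labeling $h$ on $W$ lifts up through $f$ and descends through $g$ exactly when $h \circ f$ is constant on each fiber of $g$.

```python
# Toy span W <-f- I -g-> L. The "structure" is a labeling h: W -> X.
# h lifts through f (as h∘f on interpretations) and descends through g,
# yielding the same structure on L, iff h∘f is constant on g's fibers.

W = {"w1", "w2"}
I = {"a1", "a2", "b1", "b2"}                          # interpretations
f = {"a1": "w1", "a2": "w1", "b1": "w2", "b2": "w2"}  # which world each interprets
g = {"a1": "x", "a2": "x", "b1": "y", "b2": "y"}      # abstraction to latents
assert set(f.values()) <= W

def descend(h_on_W):
    """Push a labeling h: W -> X through the span; fail if ill-defined on L."""
    h_on_L = {}
    for i, latent in g.items():
        val = h_on_W[f[i]]
        if latent in h_on_L and h_on_L[latent] != val:
            raise ValueError(f"labeling does not descend: fiber {latent} is mixed")
        h_on_L[latent] = val
    return h_on_L

print(descend({"w1": "red", "w2": "blue"}))  # {'x': 'red', 'y': 'blue'}
```

Here $g$'s fibers happen to refine $f$'s, so every labeling descends; redefining g so that a single latent covers interpretations of different worlds would make `descend` raise instead.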

The key question is what kind of space/algebra to preserve. I can find lots of structures that work well for particular abstractions, but it seems like the theory would have to be developed separately for each type of structure, as I don't see any overarching one.

On the one hand, I do see your point that in some cases it's important not to make people think I'm referring to malfunctioning sensors. On the other hand, malfunctioning sensors would be an instance of the kind of thing I'm talking about, in the sense that information from a malfunctioning sensor is ~useless for real-world tasks (unless you don't realize it's malfunctioning, in which case it might be cursed).

I'll think about alternative terms that clarify this.

But the way to resolve definitional questions is to come up with definitions that make it easier to find general rules about what happens. This illustrates one way one can do that: by picking edge-cases so they scale nicely with the rules that occur in normal cases. (Another example would be defining 1 as not a prime number, which preserves the rule that every number has a unique prime factorization.)

I guess to expand:

If you use a Markov chain to transduce another Markov chain, the belief state geometry should roughly resemble a tensor product of the two Markov chains, but taking some dependencies into account.

However, let's return to the case of tensoring two independent variables. If the neural network is asked to learn that, it will presumably shortcut by representing them as a direct sum.
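
To spell out the independent case concretely (toy transition matrices with arbitrary numbers): the joint chain's transition matrix is the Kronecker/tensor product of the two, but a product belief state is determined by its two marginals, so concatenating them (a direct sum) carries the same information.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # transition matrix of chain 1
B = np.array([[0.5, 0.5],
              [0.3, 0.7]])       # transition matrix of chain 2
joint_T = np.kron(A, B)          # tensor product: 4x4 joint transition matrix

p = np.array([0.6, 0.4])         # belief over chain 1's state
q = np.array([0.1, 0.9])         # belief over chain 2's state
joint_belief = np.kron(p, q)     # 4 numbers, but only 2+2 degrees of freedom

# Evolving the joint belief is the same as evolving the marginals separately,
# so storing (p, q) as a direct sum loses nothing for independent chains.
print(np.allclose(joint_belief @ joint_T, np.kron(p @ A, q @ B)))  # True
```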

Due to the dependencies, the direct sum representation doesn't work if you are transducing it, and ideally we'd arguably want something like a tensor product. But in practice, there may be a shortcut between the two, where the neural network learns some compressed representation that mixes the transducer and the base together.

(A useful mental picture for understanding why I care about this: take the base to be "the real world" and the transducer to be some person recording data from the real world into text. Understanding how the base and the transducer relate to the learned representation of the transduction tells you something about how much the neural network is learning the actual world.)

You're wrong about the dynamic portrayed in The Fountainhead. I suspect you might also be wrong about the dynamics portrayed in Ayn Rand's other books, though I don't know for sure as I haven't read them.

Intuitively, you'd measure altruism by the sum of one's contributions over all the people one is contributing to. In practice, you could measure this by wealth (which'd seem sensible because people pay for what they want), by unique regard for the poor and weak (which'd also seem sensible because the poor and weak have fewer resources to communicate their needs), and by reputation (which'd also seem sensible because of Aumann's agreement theorem). But then Ayn Rand shows conditions where these seemingly-sensible measurements fail catastrophically.

Consider, for instance, the case of Gail Wynand; he sought wealth, thinking it would grant him power. But because economic inequality was relatively low, the only way to earn wealth was to appeal to a lot of people, i.e. he had to be high in the preference ranking obtained by summing many different people's preference rankings together.

If you sum together a bunch of variables, the resulting scores will become an extremely accurate measure of the general factor(s) which contribute positively to all of these variables. Because it is the general factor shared across common people, you could abstract this as "the common man". In seeking wealth, one becomes forced to follow this general factor, as Gail Wynand did.
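
A toy simulation of this summing effect (arbitrary numbers; each person's judgment is modeled as a shared factor plus independent idiosyncratic noise): with enough judges, the sum correlates almost perfectly with the shared factor, even though no individual judgment does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_judges = 10_000, 100

g = rng.normal(size=n_people)   # the shared "general factor"
# Each judge's rating loads on the shared factor plus independent noise.
ratings = g[:, None] + rng.normal(scale=2.0, size=(n_people, n_judges))

total = ratings.sum(axis=1)
print(np.corrcoef(total, g)[0, 1])          # ~0.98: the sum tracks the factor
print(np.corrcoef(ratings[:, 0], g)[0, 1])  # ~0.45: any single judge doesn't
```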

Now, what is this general "the common man" factor? It's people's opinions, based on whatever they care about, from sex to justice to whatever. But notably, nobody has time to get properly informed about everything, and a lot of people don't have the skills to make sound judgments on things, so the general factor underlying people's opinions consists of the most superficial, ignorant positions people could take.

Gail Wynand sought wealth because he thought it would grant him power to escape the control of the common man. However, the only way to gain and keep wealth was to submit absolutely to the common man's judgement, as he found out the hard way.

Now let's go back to Peter Keating. A major theme of the book is that there were two things he was torn between seeking: his own desires (especially Catherine Halsey, but also to some extent his own passions in painting and in architecture) versus his reputation (among strangers, though the book shows how his mother kept particular tabs on it).

The issue Peter Keating faced wasn't that he liked honestly-earned wealth and disliked dishonestly-earned wealth. It was partly the same as Gail Wynand's: that reputation is based on people's most superficial, ignorant judgments. But it was deeper than that, because Gail Wynand was aware that people's judgments were trash and mostly let them judge him as trash, whereas Peter Keating respected their judgments and tried to change himself to fit them.

I think this may have led him to practice dissociating from his own desires and instead going with others' judgment. Though I don't remember reading much of this process in the story, perhaps because it was already far along at the beginning (as shown with his relationship with his mother). Either way, we see how, despite liking Catherine Halsey, he ends up marrying Dominique Francon (whom he's also attracted to, but differently), under the assumption that this is better for his reputation.

But the problem was, there was an endless line of men aiming to impress Dominique Francon by deferring to her desires and by building a strong reputation. If Dominique really wanted that, then Peter Keating would have had much tougher competition than he really had. But instead what Dominique wanted was someone who had his own strength in judgment, and while Peter Keating showed some potential in that (at least in recognizing the social game and standing up to her jabs), he eventually degenerated fully into just deferring to social judgments, and Dominique lost interest in him.

And ultimately this was also how Peter Keating lost everything else. In the competition to bend to others' judgment, there were younger people with less historical baggage who could do so better. He wasn't particularly good at anything, and so he ended up supporting and building his life around an ideology which said that being good is evil, such that his lack of goodness became a virtue.

That's the distinction between Rand's heroes and villains. The heroes want to get rich by means of doing genuinely good work that other people will have a genuine self-interest in paying for. The villains want to wield power by means of psychological manipulation, guilt-tripping and blackmailing the people who can do good work into serving their own parasites and destroyers.

One important thing to notice is that Howard Roark used some means that would be considered quite immoral by ordinary standards. He blew up the Cortlandt, he raped Dominique Francon (sort of, it seems to me that Ayn Rand has a strange understanding of rape, but that's beside the point), he poached Austen Heller from John Erik Snyte, and he often socially put people in situations he knew they could not handle.

The difference between Howard Roark and Peter Keating isn't that Howard Roark wants to use good means and Peter Keating wants to use scummy means. The difference is that Howard Roark has a form of contribution that he cares about making and whose quality he himself is able to judge, whereas Peter Keating doesn't chase much object-level preference himself, but instead tries to be good as per others' judgment.

Something people have occasionally noticed about my intellectual style is that I like to win arguments. I take pride and pleasure in pointing out flaws in other people's work in the anticipation of the audience appreciating how clever I am for finding the hole in someone's reasoning.

...

Overall, when I look at the world of discourse I see, the moral I draw is not that collaborative truth-seeking is bad.

It's that collaborative truth-seeking doesn't exist. The people claiming to be collaborative truth-seekers are lying. Given that everyone wants to be seen as right, the question is: are you going to try to be seen as right by means of providing valid evidence and reasoning, or by—other means?

Or to put it another way: the commenter who admits they care for status is all right. They're usually worth the status they earn. They won't lie for it. They don't have to. But watch out for the commenter who yells too loudly how much they scorn status. Watch out particularly for the one who yells that others must scorn it. They're after something much worse than status.

This seems more similar to Peter Keating than to Howard Roark. Perhaps most similar to Gail Wynand, except Gail Wynand at least had a sort of contempt for their judgment. You'd still be dancing for the judgment of people who are not really paying attention.
