Thank you. You phrased the concerns about "integrating with a bigger picture" better than I could. To temper the negatives, I see at least two workable approaches, plus a framing for identifying more.
What interfaces are you planning to provide that other AI safety efforts can use? Blog posts? Research papers? Code? Models? APIs? Consulting? Advertisements?
Ah. Thank you, that is perfectly clear. The Wikipedia page for Scalar Field makes sense with that too. A scalar field is a function that takes values in some canonical units, and so it transforms only on the right of f under a perspective shift. A vector field is (effectively) both defined on and valued in the same space, and so it transforms both on the left and right of v under a perspective shift.
I updated my first reply to point to yours.
Reading the Wikipedia page on scalar field, I think I understand the confusion here. Scalar fields are supposed to be invariant under changes in reference frame, assuming a canonical coordinate system for space.
Take two reference frames P(x) and G(x). A scalar field S(x) needs to satisfy:
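(writing the condition out as I read it, with P and G taken as maps from canonical coordinates into each frame's coordinates)

S(P(x)) = S(G(x)) for all x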
Meaning the value of S(x) should not change with reference frame. A scalar field is a vector field that commutes with perspective transformations. Maybe that's what y...
Interesting. That seems to contradict the explanation for Lie Algebras, and it seems incompatible with commutators in general, since with commutators all operators involved need to be compatible with both composition and precomposition (otherwise AB - BA is undefined). I guess scalar fields are not meant to be operators? That doesn't quite work, since they're supposedly used to describe energy, which is often represented as an operator. In any case, I'll have to keep that in mind when reading about these things.
Thanks for the explanation. I found this post that connects your explanation to an explanation of the "double cover." I believe this is how it works:
EDIT: This post is incorrect. See the reply chain below. After correcting my misunderstanding, I agree with your explanation.
The difference you're describing between vector fields and scalar fields, mathematically, is the difference between composition and precomposition. Here it is more precisely:
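A minimal sketch in symbols (my notation, assuming a linear perspective shift T): a scalar field transforms by precomposition only,

S ↦ S ∘ T⁻¹,

while a vector field transforms by both composition and precomposition,

v ↦ T ∘ v ∘ T⁻¹.

A scalar field's outputs live in canonical units, so only its inputs get reindexed; a vector field's outputs live in the same space as its inputs, so both sides transform.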
In the 2D matrix representation, the basis element corresponding to the real part of a quaternion is the identity matrix. So scaling the real part results in scaling the (real part of the) diagonal of the 2D matrix, which corresponds to a scaling operation on the spinor. It incidentally plays the same role on 3D objects: it scales them. Plus, it plays a direct role in rotations when it's -1 (180 degree rotation) or 1 (0 degree rotation). Same as with i, j, and k, the exact effect of changing the real part of the quaternion isn't obvious from inspection whe...
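A small numpy sketch of the scaling claim, using one common convention for the 2D complex matrix representation (the exact convention is an assumption on my part):

```python
import numpy as np

def quat_to_matrix(a, b, c, d):
    """One common 2x2 complex representation of a + b*i + c*j + d*k."""
    return np.array([[ a + b*1j,  c + d*1j],
                     [-c + d*1j,  a - b*1j]])

# The real part's basis element is the identity matrix...
print(np.allclose(quat_to_matrix(1, 0, 0, 0), np.eye(2)))  # True

# ...so scaling the real part scales the diagonal (a scaling operator
# on the spinor), and a real part of -1 gives -I (180-degree rotation).
print(quat_to_matrix(2.5, 0, 0, 0))   # 2.5 * identity
print(quat_to_matrix(-1, 0, 0, 0))    # -identity
```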
I don't know why other people say it, but I can explain why it's nice to say it.
Logic and reason indicate the robustness of a claim, but you can have lots of robust, mutually-contradictory claims. A robust claim is one that contradicts neither itself nor other claims it associates with. The other half is how well it resonates with people. Resonance indicates how attractive a claim is through authority, consensus, scarcity, poetry, or whatever else.
Survive and spread through robustness and resonance. That's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but the fact that it's so...
That was a fascinating post about the relationship with Berkeley. I wonder how the situation has changed in the last two years since people became more cognizant of the problem. Note that some of the comments there refute your idea that the community never had enough people for multiple hubs. NYC and Melbourne in particular seemed to have plenty of people, but they dissipated after core members repeatedly got recruited by Berkeley.
It seems like Berkeley was overtly trying to eat other communities, but EA did it just by being better at a thing many Rational...
Your understanding is correct. Your Petrov Day strategy is the only thing I believe causes harm in your post.
I'll see if I can figure out what exactly was frustrating about the post, but I can't make promises about my ability to introspect at that level or to remember the origins of my feelings from last night.
These are the things I can say with high certainty:
I did -2. It wasn't punishment, and definitely not for saying social penalty. I think social penalties are perfectly fine approaches for some problems, particularly ones where fuzzy coordination yields value greater than the complexity it entails.
I do feel frustration, but definitely not anger. The frustration is over the tenuous connection, which in my mind leads to a false sense of understanding.
I feel relatively new to LW so I'm still trying to figure out when I give a -1 and when I give a -2. I felt that the tenuous connection in combination with the net-negative advice warranted a -2.
EDIT: I undid my -2 in light of this comment thread.
Do you think it makes more sense for you to punish the perpetrator after you're dead or after they're dead?
Replication is a decent strategy until secrets get involved, and this world runs on a lot of secrets that people will not back up. Even when it comes to publicly accessible things, there's a very thick and very ambiguous line between private data and public data. See, for example, the EU's right to be forgotten. This is a minor issue post-nuke, but it means gathering support for a backup effort will be difficult.
Access control is a decent strategy onc...
In light of some of the comments on the supposed impossibility of relocating a hub, I figured I'd suggest a strategy. This post says nothing about the optimality of creating/relocating a hub, it only suggests a method for doing so. I'm obviously not an experienced hub relocator in real life, but evidently, I'll play one on the internet for the sake of discussion. Please read what follows as an invitation to brainstorm.
We could pick a second hub instead of a new first hub. We don't need consensus or even a plurality. We just need critical mass in a location other than Berkeley. Preferably that new location would cater to a group that's not well-served by Berkeley so we can get more total people into a hub. If we're being careful, we should worry about Berkeley losing its critical mass as a result of the second hub; however, I don't think that's a likely outcome.
There's some loss from splitting people across two hubs rather than getting everyone into one hub. However, I s...
I think NYC was long a 'second hub', and there were a bunch of third-tier hubs, but I think the relationships between the hubs never really worked out to make a happy global community. Here's a post about some previous context. I also suspect that the community has never really had enough people or commitment to have 'critical mass' for multiple hubs, and this is part of the problem.
I think there are some systems that have successfully figured this out. I am optimistic about a bunch of current EA student groups at top universities, many of which I visited ...
Understood. I do think it's significant though (and worth pointing out) that a much simpler definition yields all of the same interesting consequences. I didn't intend to just disagree for the sake of getting clearer terminology. I wanted to point out that there seems to be a simpler path to the same answers, and that simpler path provides a new concept that seems to be quite useful.
This can turn into a very long discussion. I'm okay with that, but let me know if you're not, so I can probe only the points that are likely to resolve. I'll raise the contentious points regardless, but I don't want to draw focus to them if there's little motivation to discuss them in depth.
I agree that a split in terminology is warranted, and that "defect" and "cooperate" are poor choices. How about this:
The "expected coalition strategy" is, let's say, "no one gets any". By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?
In my view, yes. If we agreed that no one should get any resources, then it's a violation for you to get resources or for you to deceive me into getting resources.
I think the difference is in how the two of us view a strategy. In my view, it's perfectly acceptable for the coalition strategy to include a clause like "it's okay to do X if i...
I think your focus on payoffs is diluting your point. In all of your scenarios, the thing enabling a defection is the inability to view another player's strategy before committing to a strategy. Perhaps you can simplify your definition to the following:
You can define a function that assigns a strategy to every possible coalition. Given an expected coalition strategy C, if the payoff for any sub-coalition strategy SC is greater than th...
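One way the cut-off condition might be completed (my notation, so treat it as a sketch): let f assign a strategy f(S) to every coalition S ⊆ N, and let C = f(N) be the expected coalition strategy. Then a sub-coalition S has a defection available whenever

u_S(f(S), C restricted to N∖S) > u_S(C),

i.e., whenever S does better playing its own coalition strategy against everyone else still following C than it does under C.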
"That's a good point, but I think you're behind on some of the context on what we were discussing. Can you try to get more of a feel for the conversation before joining it?"
I don't see how your comment contradicts the part you quoted. More pressure doesn't lead to more change (in strategy) if resistance increases as well. That's consistent with what /u/SquirrelInHell stated.
That mass corresponds to "resistance to change" seems fairly natural, as does the correspondence between "pressure to change" and impulse. The strange part seems to be the correspondence between "strategy" and velocity. Distance would be something like strategy * time.
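The identity underneath is just the impulse-momentum relation J = Δp = m·Δv, so Δv = J/m. Read through the analogy: change in strategy = (pressure to change)/(resistance to change), which is exactly why more pressure needn't produce more change if resistance grows along with it.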
Does a symmetry in time correspond to a conservation of energy? Is energy supposed to correspond to resistance? Maybe, though that's a little hard to interpret, so it's a little difficult to apply Lagrangian or Hamiltonian mechanics. The interpretation of energ...
The process you went through is known in other contexts as decategorification. You attempted to reduce the level of abstraction, noticed a potential problem in doing so, and concluded that the more abstract notion was not as well-conceived as you imagined.
If you try to enumerate questions related to a topic (Evil), you will quickly find that you (1) repeatedly tread the same ground, (2) are often unable to combine findings from multiple questions in useful ways, and (3) are often unable to identify questions worth answering, let alone a hierarchy that ...
"But hold up", you say. "Maybe that's true for special cases involving competing subagents, ..."
I don't see how the existence of subagents complicates things in any substantial way. If the existence of competing subagents is a hindrance to optimality, then one should aim to align or eliminate subagents. (Isn't this one of the functions of meditation?) Obviously this isn't always easy, but the goal is at least clear in this case.
It is nonsensical to treat animal welfare as a special case of happiness and suffering. This is because ani...
I meant it as "This seems like a clear starting point." You're correct that I think it's easy to not get lost with those two starting points.
In my experience with other fields, it's easy to get frustrated and give up. Getting lost is quite a bit rarer. You'll have to click through a hundred dense links to understand your first paper in machine learning, as with any other field. If you can trudge through that, you'll be fine. If you can't, you'll at least know what to ask.
Also, are you not curious about how much initiative people have regarding the topics they want to learn?
A question for people asking for machine learning tutors: have you tried just reading through OpenAI blog posts and running the code examples they embed or link? Or going through the TensorFlow tutorials?
Yes. I follow authors, I ask avid readers similar to me for recommendations, I observe best-of-category polls, I scan through collections of categorized stories for topics that interest me, I click through "Also Liked" and "Similar" links for stories I like. My backlog of things to read is effectively infinite.
I see. Thanks for the explanation.
How so? I thought removing the border on each negation was the right way.
I gave an example of where removing the border gives the wrong result. Are you asking why "A is a subset of Not Not A" is true in a Heyting algebra? I think the proof goes like this:
May...
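For completeness, the usual derivation, using the Heyting definition ¬A := (A → 0) and the adjunction X ≤ (A → B) iff X ∧ A ≤ B:

¬A ≤ (A → 0) by definition, so ¬A ∧ A ≤ 0, hence A ∧ ¬A ≤ 0, hence A ≤ (¬A → 0) = ¬¬A.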
I guess today I'm learning about Heyting algebras too.
I don't think that circle method works. "Not Not A" isn't necessarily the same thing as "A" in a Heyting algebra, though your method suggests that they are the same. You can try to fix this by adding or removing the circle borders through negation operations, but even that yields inconsistent results. For example, if you add the border on each negation, "A or Not A" yields 1 under your method, though it should not in a Heyting algebra. If you remove the border on each negat...
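A concrete instance in the standard topological model (open subsets of ℝ, with ¬A = the interior of the complement of A): take A = (0,1) ∪ (1,2). Then ¬A = (−∞,0) ∪ (2,∞), so ¬¬A = (0,2), which strictly contains A: Not Not A ≠ A. Likewise, A ∨ ¬A = ℝ ∖ {0,1,2}, which is not all of ℝ, so "A or Not A" is not 1.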
I am, and thanks for answering. Keep in mind that there are ways to make your intuition more reliable, if that's a thing you want.
Fair enough. I have a question then. Do you personally agree with Bob?
Algebraic reasoning is independent of the number system used. If you are reasoning about utility functions in the abstract and if your reasoning does not make use of any properties of numbers, then it doesn't matter what numbers you use. You're not using any properties of finite numbers to define anything, so the fact of whether or not these numbers are finite is irrelevant.
The original post doesn't require arbitrarily fine distinctions, just 2^trillion distinctions. That's perfectly finite.
Your comment about Bob not assigning a high utility value to anything is equivalent to a comment stating that Bob's utility function is bounded.
It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, bounded utility functions cannot be decomposed into independent (additive or multiplicative, these are the only two options) subcomponents if the number of subcomponents is unknown. Any utility function that is summed or multiplied over an unknown number of independent (e.g.) societies must be unbounded*. Does that mean you believe that utility functions can't be aggregated over independent societies or that no two societies can contribute...
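A sketch of the argument behind the starred claim: suppose U = u_1 + ... + u_n over n independent societies, where each society can contribute at least some ε > 0 regardless of n. Then U ≥ n·ε, which exceeds any proposed bound B as soon as n > B/ε. So a bounded U forces either dependence among the contributions or per-society contributions that vanish as n grows; the multiplicative case is the same argument with factors of at least 1 + ε.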
Also it's unclear to me what the connection is between this part and the second.
My bad, I did a poor job explaining that. The first part is about the problems of using generic words (evolution) with fuzzy decompositions (mates, predators, etc) to come to conclusions, which can often be incorrect. The second part is about decomposing those generic words into their implied structure, and matching that structure to problems in order to get a more reliable fit.
I don't believe that "I don't know" is a good answer, even if it's often the correct one...
The cell example is an example of evolution being used to justify contradictory phenomena: the exact same justification is used to reach two opposing conclusions. If you thought there was nothing wrong with those two examples being used as they were, then there is something wrong with your model.
The second set of explanations has fewer, more reliably determinable dependencies, and their reasoning is more generally applicable.
That is correct, they have zero prediction and compressi...
Would your answer change if I let you flip the coin until you lost? Based on your reasoning, it should not. Despite it being an effectively-guaranteed extinction, the infinitesimal chance is overwhelmed by the gains in the case of infinitely many good coin flips.
I would not call the Kelly strategy risk-averse. I imagine that word to mean "grounded in a fantasy where risk is exaggerated". I would call the second strategy risk-prone. The difference is that the Kelly strategy ends up being the better choice in realistic cases, whereas the second str...
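A quick simulation sketch of the difference (the even-money 60/40 coin, round count, and trial count are illustrative assumptions, not numbers from this thread):

```python
import random

def median_final_bankroll(fraction, rounds=100, p_win=0.6, trials=10_000):
    """Median bankroll after repeatedly betting `fraction` of the bankroll
    on an even-money coin that wins with probability p_win."""
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake if random.random() < p_win else -stake
        finals.append(bankroll)
    finals.sort()
    return finals[trials // 2]

kelly = 2 * 0.6 - 1  # Kelly fraction for an even-money bet: 2p - 1 = 0.2
print(median_final_bankroll(kelly))  # median grows well above 1
print(median_final_bankroll(1.0))    # all-in: median 0; one loss is ruin
```

The all-in strategy has the higher expected value, but its median outcome is ruin, while the Kelly strategy's median grows. That's the sense in which Kelly is the better choice in realistic cases rather than "risk-averse".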
Dagon: You can artificially bound utility to some arbitrarily low "bankruptcy" point. The lack of a natural one isn't relevant to the question of whether a utility function makes sense here. On treating utility as a resource, if you can make decisions to increase or decrease utility, then you can play the game. Your basic assumption seems to be that people can't meaningfully make decisions that change utility, at which point there is no point in measuring it, as there's nothing anyone can do about it.
I believe the point about unintuitively high utilities and upper-bounded utilities deserves another post.
Regarding the Buckingham Pi Theorem (BPT), I think I can double my recommendation that you try to understand the Method of Lagrange Multipliers (MLM) visually. I'll try to explain in the following paragraph knowing that it won't make much sense on first reading.
For the Method of Lagrange Multipliers, suppose you have some number of equations in n variables. Consider the n-dimensional space containing the set of all solutions to those equations. The set of solutions describes a k-dimensional manifold (meaning the surface of the manifold forms a k-dimensiona...
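The algebra under that picture, for reference: at a constrained optimum of f on the manifold {x : g(x) = 0}, ∇f must be normal to the manifold, i.e. a linear combination of the constraint gradients:

∇f(x*) = Σ_j λ_j ∇g_j(x*).

A tiny worked instance: maximize f(x, y) = x + y subject to g(x, y) = x² + y² − 1 = 0. Then (1, 1) = λ·(2x, 2y) gives x = y = 1/(2λ), the constraint forces λ = ±1/√2, and the maximum sits at x = y = 1/√2 with f = √2.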
See my response below to WhySpace on getting started with group theory through category theory. For any space-oriented field, I also recommend looking at the topological definition of a space. Also, for any calculus-heavy field, I recommend meditating on the Method of Lagrange Multipliers if you don't already have a visual grasp of it.
I don't know of any resource that tackles the problem of developing models via group theory. Developing models is a problem of stating and applying analogies, which is a problem in category theory. If you want to understand t...
Or is it that a true sophisticate would consider where and where not to apply sophistry?
Information on the discussion board is front-facing for some time, then basically dies. Yes, you can use the search to find it again, but that becomes less reliable as discussion of TAPs increases. It's also antithetical to the whole idea behind TAP.
The wiki is better suited for acting as a repository of information.
I don't understand what point you're making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can't have them or their equivalent. It's obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It's obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them because... well that's what neurons do, and neurons seem to be doing the bulk of the heavy lifting as f...
"Group" is a generalization of "symmetry" in the common sense.
I can explain group theory pretty simply, but I'm going to suggest something else. Start with category theory. It is doable, and it will give you the magical ability of understanding many math pages on Wikipedia, or at least the hope of being able to understand them. I cannot overstate how large an advantage this gives you when trying to understand mathematical concepts. Also, I don't believe starting with group theory will give you any advantage when trying to understand cat...
The distinction between "ideal" and "definition" is fuzzy the way I'm using it, so you can think of them as the same thing for simplicity.
Symmetry is an example of an ideal. It's not a thing you directly observe. You can observe a symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you've never seen. You can construct a symmetry that you've never seen, and you can do it algorithmically based on your idea of what symmetries are given a bit of t...
Fair enough, though I disagree with the idea of using the discussion board as a repository of information.
Is there ever a case where priors are irrelevant to a distinction or justification? That's the difference between pure Bayesian reasoning and alternatives.
OP gave the example of the function of organs for a different purpose, but it works well here. To a pure Bayesian reasoner, there is no difference between saying that the heart has a function and saying that the heart is correlated with certain behaviors, because priors alone are not sufficient to distinguish the two. Priors alone are not sufficient to distinguish the two because the distinction has to d...
I think you're missing an important edge case where all of your resolved subsystems are in agreement that their collective desires are simultaneously compatible and unattainable without enormous amounts of motivation, which is something that an arms race can provide. Adaptation isn't just about spinning cycles and causing stress. It does have actual tangible outcomes, and not all of those outcomes are bad. Though I think for most people, your advice is probably close enough to the right advice.