sen · 127

Thank you. You phrased the concerns about "integrating with a bigger picture" better than I could. To temper the negatives, I see at least two workable approaches, plus a framing for identifying more workable approaches.

  • Enable other safety groups to use and reproduce Conjecture's research on CogEms so those groups can address more parts of the "bigger picture" using Conjecture's findings. Under this approach, Conjecture becomes a safety research group, and the integration work of turning that research into actionable safety efforts becomes someone else's task.
  • Understand the societal motivations for taking short-term steps toward creating dangerous AI, and demonstrate that CogEms are better suited for addressing those motivations, not just the motivations of safety enthusiasts, and not just hypothetical motivations that people "should" have. To take an example, OpenAI has taken steps toward building dangerous AI, and Microsoft has taken the further dangerous step of attaching a massive search database to it, exposing the product to millions of people, and kicking off an arms race with Google. Individual decision-makers were involved in that process; it wasn't just "Big Company does Bad Thing because that's what big companies do." Why did they make those decisions? What was the decision process for those product managers? Who created the pitch that convinced the executives? Why didn't Microsoft's internal security processes mitigate more of the risks? What would it have taken for Microsoft to release a CogEm instead of Sydney? The answer is not just research advances. Finding the answers would involve talking to people familiar with these processes, ideally people who were somehow involved. Once safety-oriented people understand these things, it will be much easier for them to replace more dangerous AI systems with CogEms.
  • As a general framework, there needs to be more liquidity between safety research and the high-end AI capabilities market, and products introduce liquidity between research and markets. Publishing research addresses one part of that by enabling other groups to productize the research. Understanding societal motivations addresses another part, and it would typically fall under "user research." Clarity on how others can use your product is another part, one that typically falls under a "go-to-market strategy." There's also market awareness & education, which helps people understand where to use products; then the sales process, which helps people through the "last mile" efforts of actually using the product; then the nebulous process of scaling everything up. As far as I can tell, this is a minimal set of steps required for getting the high-end AI capabilities market to adopt safety features, and it's effectively the industry-standard approach.

As an aside, I think CogEms are a perfectly valid strategy for creating aligned AI. It doesn't matter if most humans have bad interpretability, persuadability, robustness, ethics, or whatever else. As long as it's possible for some human (or collection of humans) to be good at those things, we should expect that some subclass of CogEms (or collection of CogEms) can also be good at those things.

sen · 20

What interfaces are you planning to provide that other AI safety efforts can use? Blog posts? Research papers? Code? Models? APIs? Consulting? Advertisements?

sen · 30

Ah. Thank you, that is perfectly clear. The Wikipedia page for Scalar Field makes sense with that too. A scalar field f is a function that takes values in some canonical units, and so under a perspective shift it transforms only on the right of f (by precomposition with the coordinate change). A vector field v (effectively) takes values both on and in the same space, and so under a perspective shift it transforms both on the left and on the right of v.
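Here's a minimal numerical sketch of that rule (my own illustrative example, not from the original discussion), using numpy and a 2D rotation R as the perspective shift: the scalar field picks up only a precomposition (the "right" factor), while the vector field also picks up a matrix factor on the left.

```python
import numpy as np

# Perspective shift: rotate the frame by theta. For a rotation,
# the inverse is the transpose.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def S(x):  # scalar field: a plain number at each point
    return x[0]**2 + 2 * x[1]

def V(x):  # vector field: a vector at each point
    return np.array([x[1], -x[0]])

def S_new(x):  # transforms only on the right: S'(x) = S(R^-1 x)
    return S(R.T @ x)

def V_new(x):  # transforms on both sides: V'(x) = R V(R^-1 x)
    return R @ V(R.T @ x)

# Consistency check: the transformed fields, evaluated at the
# rotated point, reproduce the original values (rotated, for V).
x = np.array([1.2, -0.7])
assert np.isclose(S_new(R @ x), S(x))
assert np.allclose(V_new(R @ x), R @ V(x))
```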

I updated my first reply to point to yours.

sen · 10

Reading the Wikipedia page on scalar fields, I think I understand the confusion here. Scalar fields are supposed to be invariant under changes in reference frame, assuming a canonical coordinate system for space.

Take two reference frames P(x) and G(x). A scalar field S(x) needs to satisfy:

  • S(x) = P'(x)S(x)P(x) = G'(x)S(x)G(x), where P'(x) is the inverse of P(x) and G'(x) is the inverse of G(x).

Meaning the inference of S(x) should not change with reference frame. A scalar field is a vector field that commutes with perspective transformations. Maybe that's what you meant?
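A tiny numeric illustration of that commutation condition (my own sketch, treating the field's value at a point as a matrix): a scalar value acts as a multiple of the identity, so it commutes with any perspective transformation, while a generic matrix value does not.

```python
import numpy as np

theta = 0.3
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # perspective transformation
P_inv = P.T  # the inverse of a rotation is its transpose

S = 2.5 * np.eye(2)                 # scalar value: multiple of the identity
M = np.array([[0., 1.], [0., 0.]])  # generic matrix value

assert np.allclose(P_inv @ S @ P, S)      # scalar: S = P'(x)SP(x) holds
assert not np.allclose(P_inv @ M @ P, M)  # generic matrix: it doesn't
```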

I wouldn't use the phrase "transforms trivially" here since a "trivial transformation" usually refers to the identity transformation. I wouldn't use a head tilt example either since a lot of vector fields are going to commute with spatial rotations, so it's not good for revealing the differences. And I think you got the association backwards in your original explanation: scalar fields appear to represent quantities in the underlying space unaffected by head tilts, and so they would be the ones "transforming in the opposite direction" in the analogy since they would remain fixed in "canonical space".

sen · 10

Interesting. That seems to contradict the explanation for Lie Algebras, and it seems incompatible with commutators in general, since with commutators all operators involved need to be compatible with both composition and precomposition (otherwise AB - BA is undefined). I guess scalar fields are not meant to be operators? That doesn't quite work either, since they're supposed to be used to describe energy, which is often represented as an operator. In any case, I'll have to keep that in mind when reading about these things.

sen · 30

Thanks for the explanation. I found this post that connects your explanation to an explanation of the "double cover." I believe this is how it works:

  • Consider a point on the surface of a 3D sphere. Call it the "origin".
  • From the perspective of this origin point, you can map every point of the sphere to a 2D coordinate. The mapping works like this: Imagine a 2D plane going through the middle of the sphere. Draw a straight line (in the full 3D space) from the selected origin to any other point on the sphere. Where the line crosses the plane, that's your 2D vector representation of the other point. Under this visualization, the origin point should be mapped to a 2D "point at infinity" to make the mapping smooth. This mapping gives you a one-to-one conversion between 2D coordinate systems and points on the sphere.
  • You can create a new 2D coordinate system for sphere surface points using any point on the sphere as the origin. All of the resulting coordinate systems can be smoothly deformed into one another. (Points near the origin always map to large coordinates, points on the opposite side of the sphere always map close to (0,0), and the coordinates change smoothly as you move the origin smoothly.)
  • Each choice of origin on the surface of the sphere (and therefore each 2D coordinate system) corresponds to two unit-length quaternions. You can see this as follows. Pick any choice of i,j,k values from a unit quaternion. There are now either 1 or 2 choices for what the real component of that quaternion might have been. If i,j,k alone have unit length, then there's only one choice for the real component: zero. If i,j,k alone do not have unit length, then there are two choices for the real component since either a positive or a negative value can be used to make the quaternion unit length again.
  • Take the set of unit quaternions that have a real component close to zero. Consider the set of 2D coordinate systems created from those points. In this region, each coordinate system corresponds to two quaternions EXCEPT at the points where the quaternion's real component is 0. This exceptional case prevents a one-to-one mapping between coordinate transformations and quaternion transformations.
  • As a result, there's no "smooth" way to reduce the two-to-one mapping from quaternions to coordinate systems down to a one-to-one mapping. Any mapping would require either double-counting some quaternions or ignoring some quaternions. Since there's a one-to-one mapping between coordinate systems and candidate origin points on the surface of the sphere, this means there is also no one-to-one mapping between quaternions and points on the sphere.
  • No matter what smooth map you choose from SU(2) (the unit quaternions) to SO(3) (rotations of the sphere), the map must do the equivalent of collapsing the distinction between quaternions with positive and negative real components. And so the double cover corresponds to two sheets: one of positive-real-component quaternions over the sphere, and one of negative-real-component quaternions over the sphere. Within each sheet, there's a smooth one-to-one correspondence between quaternions and coordinate systems, but across sheets there is not.
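As a concrete check of that collapse (my own sketch; the quaternion-to-rotation formula is the standard one): a unit quaternion and its negation produce the same rotation, so any smooth assignment of quaternions to rotations of the sphere must merge the two sheets somewhere.

```python
import numpy as np

def quat_to_rot(q):
    # Standard rotation matrix for a unit quaternion (w, x, y, z).
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.3, 0.2, -0.5, 0.1])
q = q / np.linalg.norm(q)

# q and -q are distinct unit quaternions, but they induce the same
# rotation: the two-to-one mapping described above.
assert np.allclose(quat_to_rot(q), quat_to_rot(-q))
```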

sen · 1 · -2

EDIT: This post is incorrect. See the reply chain below. After correcting my misunderstanding, I agree with your explanation.

The difference you're describing between vector fields and scalar fields, mathematically, is the difference between composition and precomposition. Here it is more precisely:

  • Pick a change-of-perspective function P(x). The output of P(x) is a matrix that changes vectors from the old perspective to the new perspective.
  • You can apply the change-of-perspective function either before a vector field V(x) or after a vector field. The result is either V(x)P(x) or P(x)V(x).
  • If you apply P(x) before, the vector field applies a flow in the new perspective, and so its arrows "tilt with your head."
  • If you apply P(x) after, the vector field applies a flow in the old perspective, and so the arrows don't tilt with your head.
  • You can replace the vector field V(x) with a 3-scalar field and see the same thing.

Since both composition and precomposition apply to both vector fields and scalar fields in the same way, that can't be something that makes vector fields different from scalar fields.

As far as I can tell, there's actually no mathematical difference between a vector field in 3D and a 3-scalar field that assigns a 3D scalar to each point. It's just a choice of language. Any difference comes from context. Typically, vector fields are treated like flows (though not always), whereas scalar fields have no specific treatment.

Spinors are represented as vectors in very specific spaces, specifically spaces where there's an equivalence between matrices and spatial operations. Since a vector is something like the square root of a matrix, a spinor is something like the square root of a spatial operation. You get Dirac Spinors (one specific kind of spinor) from "taking the square root of Lorentz symmetry operations," along with scaling and addition between them.
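One concrete sense of that "square root" (my own sketch, using the Pauli matrices as the matrix-to-spatial-operation equivalence): a 3D vector v maps to the matrix v·σ, and squaring that matrix recovers the squared length of v.

```python
import numpy as np

# Pauli matrices: the standard equivalence between 3D directions
# and 2x2 complex matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

v = np.array([1.0, -2.0, 0.5])
M = v[0]*sx + v[1]*sy + v[2]*sz  # matrix equivalent of the vector v

# M squares to |v|^2 times the identity, which is the sense in
# which M behaves like a square root of a spatial quantity.
assert np.allclose(M @ M, np.dot(v, v) * np.eye(2))
```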

As far as spinors go, I think I prefer your Lorentz Group explanation for the "what," though I prefer my Clifford Algebra one for the "how." The Lorentz Group explanation makes it clear how to find important spinors. For me, the Clifford Algebra makes it clear how the rest of the spinors arise from those important spinors, and it makes it clear that they're the "correct" representation when you want to sum spatial operations, as you would with wavefunctions. It's interesting that the intuition doesn't transfer the way I expected; the intuition-transfer problem here is harder than I thought.

Note: Your generalization only accounts for unit vectors, and spinors are NOT restricted to unit vectors. They can be scaled arbitrarily. If they couldn't, ψ†ψ would be uniform at every point. You probably know this, but I wanted to make it explicit.

sen · 10

In the 2×2 complex matrix representation, the basis element corresponding to the real part of a quaternion is the identity matrix. So scaling the real part results in scaling the (real part of the) diagonal of the matrix, which corresponds to a scaling operation on the spinor. It incidentally plays the same role on 3D objects: it scales them. Plus, it plays a direct role in rotations when it's -1 (180 degree rotation) or 1 (0 degree rotation). Same as with i, j, and k, the exact effect of changing the real part of the quaternion isn't obvious from inspection when it's summed with other non-zero components. For example, it's hard to tell by inspection what the 2 or the 3j is doing in the quaternion 2+3j.
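A quick numpy sketch of that correspondence (my own example, using one standard embedding of quaternions into 2×2 complex matrices; sign conventions vary by source): the real part maps to the identity matrix, and matrix multiplication reproduces quaternion multiplication.

```python
import numpy as np

def quat_to_mat(a, b, c, d):
    # One standard 2x2 complex matrix representation of the
    # quaternion a + bi + cj + dk (conventions vary).
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

# The real part corresponds to the identity matrix, so scaling the
# real part scales the diagonal:
assert np.allclose(quat_to_mat(2, 0, 0, 0), 2 * np.eye(2))

# Matrix multiplication reproduces quaternion multiplication,
# e.g. i*j = k and i*i = -1:
i_ = quat_to_mat(0, 1, 0, 0)
j_ = quat_to_mat(0, 0, 1, 0)
k_ = quat_to_mat(0, 0, 0, 1)
assert np.allclose(i_ @ j_, k_)
assert np.allclose(i_ @ i_, -np.eye(2))
```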

In total, quaternions represent scaling, rotation, and any mix of the two. I should have been clearer about that in the post. Spinors for quaternions do include any "state changes" resulting from the real part of the quaternion as well as any changes resulting from the i, j, and k components, so the spinor does use all degrees of freedom.

The change in representation between 2-quaternion and 4-complex spinors is purely notational. It doesn't affect any of the math or underlying representations. Since a quaternion operation can be represented by a 2x2 complex matrix, you can represent a 2-quaternion operation as the tensor product of two 2x2 complex matrices, which gives you a 4x4 complex matrix. That's where the 4x4 gamma matrices come from: each is a tensor product of two 2x2 Pauli (or identity) matrices. For all calculations and consequences, you get the exact same answers whether you choose to represent the operations and spinors as quaternions or complex numbers.
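To make the tensor-product claim concrete, here's a sketch (my own, using the Weyl basis, one common convention) that builds 4x4 gamma matrices as Kronecker products of 2x2 Pauli/identity matrices and verifies the Clifford algebra relation they must satisfy.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Weyl-basis gamma matrices as tensor (Kronecker) products of
# 2x2 matrices: gamma^0 = sigma_x (x) I, gamma^k = i*sigma_y (x) sigma_k.
g0 = np.kron(sx, I2)
gammas = [g0] + [np.kron(1j * sy, s) for s in (sx, sy, sz)]

# Verify the defining Clifford relation:
# gamma^mu gamma^nu + gamma^nu gamma^mu = 2 eta^{mu nu} I.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for m in range(4):
    for n in range(4):
        anti = gammas[m] @ gammas[n] + gammas[n] @ gammas[m]
        assert np.allclose(anti, 2 * eta[m, n] * np.eye(4))
```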

sen · 32

I don't know why other people say it, but I can explain why it's nice to say it.

  • log P(x) behaves nicely in comparison to P(x) when it comes to placing iterated bets. When you maximize P(x), you're susceptible to high-risk, high-reward scenarios, even when they lead to failure with probability arbitrarily close to 1. The same is not true when maximizing log P(x). I'm cheating here since this only really makes sense when big-P refers to "principal" (i.e., the thing growing or shrinking with each bet) rather than "probability". There's a small simulation of this after the list.
  • p(x) doesn't vary linearly with the controls we typically have, so calculus intuition tends to break down when used to optimize p(x). Log p(x) does usually vary linearly with the controls we typically have, so we can apply more calculus intuition to optimizing it. I think this happens because of the way we naturally think of "dimensions of" and "factors contributing to" a probability and the resulting quirks of typical maximum entropy distributions.
  • Log p(x) grows monotonically with p(x) wherever p(x) > 0, so the result is the same whether you argmax log p(x) or p(x).
  • p(x) is usually intractable to calculate, but there's a slick trick to approximate it using the Evidence Lower Bound (ELBO), which involves dealing with log p(x) rather than p(x) directly; there's a toy version after the list. Saying log p(x) calls that trick to mind more easily than saying just p(x).
  • All the cool papers do it.
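On the first bullet, here's a small simulation (my own, with illustrative numbers): betting everything each round maximizes expected principal but almost surely ruins you, while the fraction that maximizes expected log principal (the Kelly fraction) actually grows the typical outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_outcome(fraction, p_win=0.6, rounds=1000, trials=2000):
    # Bet `fraction` of principal each round at even odds; return
    # the median final principal over many independent trials.
    wins = rng.random((trials, rounds)) < p_win
    growth = np.where(wins, 1 + fraction, 1 - fraction)
    return np.median(growth.prod(axis=1))

# Expected principal per round is maximized by betting everything,
# but a single loss zeroes the principal, so ruin is near-certain:
print(median_outcome(1.0))  # ~0.0

# Expected log principal is maximized at the Kelly fraction
# 2*p_win - 1 = 0.2, which grows the median outcome enormously:
print(median_outcome(0.2))  # >> 1
```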
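And on the ELBO bullet, a toy discrete version (my own sketch) showing both that the bound never exceeds log p(x) and that it's tight at the true posterior:

```python
import numpy as np

# Tiny model: latent z in {0, 1}, with a fixed observation x.
p_z = np.array([0.4, 0.6])          # prior p(z)
p_x_given_z = np.array([0.1, 0.7])  # likelihood p(x|z) at our x

log_p_x = np.log(np.sum(p_z * p_x_given_z))  # exact log evidence

def elbo(q):
    # Evidence lower bound for an approximate posterior q(z):
    # sum_z q(z) * log(p(x, z) / q(z)).
    joint = p_z * p_x_given_z
    return np.sum(q * (np.log(joint) - np.log(q)))

assert elbo(np.array([0.5, 0.5])) <= log_p_x  # always a lower bound

# The bound is tight at the true posterior p(z|x):
posterior = p_z * p_x_given_z / np.exp(log_p_x)
assert np.isclose(elbo(posterior), log_p_x)
```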

sen · 50

Logic and reason indicate the robustness of a claim, but you can have lots of robust, mutually contradictory claims. A robust claim is one that contradicts neither itself nor other claims it associates with. The other half is how well it resonates with people. Resonance indicates how attractive a claim is through authority, consensus, scarcity, poetry, or whatever else.

Survive and spread through robustness and resonance. That's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but the fact that it's so common for two such people to hold contradictory claims indicates that their real metric is much weaker than truth. I'll posit that the real metric in such scenarios is robustness.

Not all disagreements will separate cleanly into true/false categorizations. Gödel proved that one.
