All of DaemonicSigil's Comments + Replies

This was a fun little exercise. We get many "theory of rationality" posts on this site, so it's very good to also have some chances to practice figuring out confusing things mixed in. The various coins each teach good lessons about ways the world can surprise you.

Anyway, I think this was an underrated post, and we need more posts in this general category.

Running parallel to the spin axis would be fine, though.

3mako yass
I guess since it sounds like they're going to be about a km long and 20 stories deep there'll be enough room for a nice running track with minimal upspin/downspin sections.

Anthropic shadow isn't a real thing, check this post: https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively

Also, you should care about worlds proportional to the square of their amplitude.

5Christopher King
It's actually interesting to consider why this must be the case. Without it, I concede that maybe some sort of Quantum Anthropic Shadow could be true. I'm thinking it would lead to lots of wacky consequences.
2avturchin
I know this post and have two problems with it: what they call "anthropic shadow" is not the proper term, as Bostrom defined anthropic shadow as underestimation of past risks based on the fact of survival, in his article of the same name. But it's ok.  The more serious problem is that quantum immortality and angel immortality eventually merge: for example, if we survive 10 LHC failures because of QI, we most likely survive only on those timelines where some alien stops the LHC. So both QI and angel immortality can be true and support one another, and there is no contradiction.

Thanks for making the game! I also played it, just didn't leave a comment on the original post. Scored 2751. I played each location for an entire day after building an initial food stockpile, and so figured out the timing of Tiger Forest and Dog Valley. But I also did some fairly dumb stuff, like assuming a time dependence for other biomes. And I underestimated Horse Hills, since when I foraged it for a full day, I got unlucky and only rolled a single large number. For what it's worth, I find these applet things more accessible than a full-on D&D.Sci (... (read more)

Have to divide by number of airships, which probably makes them less safe than planes, if not cars. I think the difficulty is mostly that a large surface area exposed to the wind makes the ships difficult to control. (Edit: looking at the list on Wikipedia, this is maybe not totally true. A lot of the crashes seem to be caused by equipment failures too.)

Are those things that you care about working towards?

No, and I don't work on airships and have no plans to do so. I mainly just think it's an interesting demonstration of how weak electrostatic forces can be.

6Measure
I think this is mostly about how weak air is against dielectric breakdown.
2bhauth
On the other hand, the hydrogen pushing against the airship membrane is also an electrostatic force.

Yep, Claude sure is a pretty good coder: Wang Tile Pattern Generator

This took 1 initial write and 5 change requests to produce. The most manual effort on my end was looking at unicode ranges to see which ones had distinctive-looking glyphs in them. (Sorry if any of these aren't in your computer's glyph library.)

I've begun worshipping the sun for a number of reasons. First of all, unlike some other gods I could mention, I can see the sun. It's there for me every day. And the things it brings me are quite apparent all the time: heat, light, food, and a lovely day. There's no mystery, no one asks for money, I don't have to dress up, and there's no boring pageantry. And interestingly enough, I have found that the prayers I offer to the sun and the prayers I formerly offered to 'God' are all answered at about the same 50% rate.

-- George Carlin

Everyone who earns money exerts some control by buying food or whatever else they buy. This directs society to work on producing those goods and services. There's also political/military control, but that kind of control is likewise held by humans (a much narrower set of them).

3tailcalled
Actually, this is the sun controlling the world, not humans. The sun exerts control by permitting plants to grow, and their fruit creates an excess of organic energy, which permits animals like humans to live. Humans have rather limited choice here; we can redirect the food by harvesting it and guarding against adversaries, but the best means to do so are heavily constrained by instrumental matters. Locally, there is some control in that people can stop eating food and die, or overeat and become obese. Or they can choose what kinds of food to eat. But this seems more like "control yourself" than "control the world". The farmers can choose how much food to supply, but if a farmer doesn't supply what is needed, then some other farmer elsewhere will supply it, so that's more "control your farm" than "control the world". The world revolves around the sun.

Okay, I'll be the idiot who gives the obvious answer: Yeah, pretty much.

6tailcalled
Who, by what metric, in what way?

Very nice post, thanks for writing it.

Your options are numbered when you refer to them in the text, but are listed as bullet points originally. Probably they should also be numbered there!

Now we can get down to the actual physics discussion. I have a bag of fairly unrelated statements to make.

  • The "center of mass moves at constant velocity" thing is actually just as solid as, say, conservation of angular momentum. It's just less famous. Both are consequences of Noether's theorem, angular momentum conservation arising from symmetry under rotations and the

... (read more)
8Ben
Numbering the options properly is a good idea, done. To answer your points:

* This is interesting. Symmetry under rotations gives us conservation of angular momentum. Symmetry under translations gives conservation of linear momentum. You are saying symmetry under boosts gives conservation of centre-of-mass velocity. Although in "normal" situations (billiard balls colliding) conservation of centre-of-mass velocity is a special case of conservation of linear momentum - which I suppose is why I have not heard of it before. I need to look at this more as I find I am still confused. Intuitively I feel like if translation symmetry is doing momentum for us, boost symmetry should relate to a quantity with an extra time-derivative in it somewhere. There is no symmetry under angular boosts, which I imagine is why fly-wheels (or gyroscopes) allow for an "internal reaction drive" for angular velocity.
* I did not know that the kinetic and canonical momentum had different values in other fields. That makes option (1) more believable.
* Yes, the k-vector (wavevector) certainly extends by a factor of n. So if you want your definition of "momentum" to be linear in wavevector then you are stuck with Minkowski.
* I believe that at the interface between the water and the air we will have a partial reflection of the light. The reflected component of the light has an evanescent tail associated with it that tunnels into the air gap. If we had more water on the other side of the air gap then the evanescent tail would be converted back into a propagating wave, and the light would not reflect from the first water interface in the first place. As the evanescent tail has a length of the order of a wavelength, random gaps between the atoms in water or glass don't mess with the propagating light wave: the wavelength is so much longer than those tiny gaps that they do not contribute. Applying this picture to your question, I think we would expect to interpolate smooth...

Good point, the whole "model treats tokens it previously produced and tokens that are part of the input exactly the same" thing and the whole "model doesn't learn across usages" thing are also very important.

When generating each token, they "re-read" everything in the context window before predicting. None of their internal calculations are preserved when predicting the next token, everything is forgotten and the entire context window is re-read again.

Given that KV caching is a thing, the way I chose to phrase this is very misleading / outright wrong in retrospect. While of course inference could be done in this way, it's not the most efficient, and one could even make a similar statement about certain inefficient ways of simulating a person's thoughts.
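As a toy illustration that the cached computation is mathematically identical (a single attention head with made-up dimensions; `attend`, `Wk`, etc. are names invented for this sketch, not real inference code):

import torch
import torch.nn.functional as F

def attend(q, K, V):
    # the newest position's query attends over all keys/values so far
    w = F.softmax(q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    return w @ V

d = 16
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
xs = torch.randn(10, d)  # a growing "context window" of token embeddings

K_cache, V_cache = torch.empty(0, d), torch.empty(0, d)
for t in range(len(xs)):
    # naive decoding: recompute keys/values for the whole prefix every step
    K_naive, V_naive = xs[:t + 1] @ Wk, xs[:t + 1] @ Wv
    # KV cache: only compute the new row, reuse the rest unchanged
    K_cache = torch.cat([K_cache, xs[t:t + 1] @ Wk])
    V_cache = torch.cat([V_cache, xs[t:t + 1] @ Wv])
    q = xs[t] @ Wq
    assert torch.allclose(attend(q, K_naive, V_naive),
                          attend(q, K_cache, V_cache), atol=1e-6)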

If I... (read more)

7gwern
I think it's a very important thing to know about Transformers, as our intuition about these models is that there must be some sort of hidden state or on the fly adaptation, and this is at least potentially true of other models. (For example, in RNNs, it's a useful trick to run the RNN through the 'context window' and then loop back around and input the final hidden state at the beginning, and 'reread' the 'context window' before dealing with new input. Or there's dynamic evaluation, where the RNN is trained on the fly, for much better results, and that is very unlike almost all Transformer uses. And of course, RNNs have long had various kinds of adaptive computation where they can update the hidden state repeatedly on repeated or null inputs to 'ponder'.) But I don't think your rewrite is better, because it's focused on a different thing entirely, and loses the Memento-like aspect of how Transformers work - that there is nothing 'outside' the context window. The KV cache strikes me as quibbling: the KV cache is more efficient, but it works only because it is mathematically identical and is caching the computations which are identical every time. I would just rewrite that as something like,

Let's say we have a bunch of datapoints in $\mathbb{R}^n$ that are expected to lie on some lattice, with some noise in the measured positions. We'd like to fit a lattice to these points that hopefully matches the ground truth lattice well. Since just by choosing a very fine lattice we can get an arbitrarily small error without doing anything interesting, there also needs to be some penalty on excessively fine lattices. This is a bit of a strange problem, and an algorithm for it will be presented here.

method

Since this is a lattice problem, the first question to jump ... (read more)
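To pin down the objective concretely, here's an illustrative scoring function for a candidate basis (just a sketch of the fit-versus-fineness tradeoff, not the algorithm from the post; `lam` is a hypothetical penalty weight):

import numpy as np

def lattice_score(B, points, lam=1.0):
    # B: candidate basis matrix (rows are basis vectors); points: (N, n) array.
    # round coordinates in the basis to get approximate nearest lattice points
    # (Babai rounding), then score residuals plus a penalty on fine lattices.
    coeffs = np.rint(points @ np.linalg.inv(B))
    residuals = points - coeffs @ B
    return np.sum(residuals ** 2) + lam / abs(np.linalg.det(B))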

This might be worth pinning as a top-level post.

The amount of entropy in a given organism stays about the same, though I guess you could argue it increases as the organism grows in size. Reason: The organism isn't mutating over time to become made of increasingly high entropy stuff, nor is it heating up. The entropy has to stay within an upper and lower bound. So over time the organism will increase entropy external to itself, while the internal entropy doesn't change very much, maybe just fluctuates within the bounds a bit.

It's probably better to talk about entropy per unit mass, rather than entropy de... (read more)

2tailcalled
I mean, actually it is. Plus accumulation of various kinds of damage, experiences, etc. which makes it differ from other organisms. Looking it up, apparently people drop very slightly in temperature when they age, which I guess might dominate the entropy considerations (though I guess that is due to slowly dying, so it also seems compatible with entropy being related to life, if reduction in life is related to reduction in entropy). Couldn't it be reasonable to say that entropy increases as a sign of increased vitality while growing up to adulthood, and then afterwards has a mixture of an infinitesimal increasing effect from life experience and a moderate decreasing effect associated with vitality breakdown? But if we go by unit mass, shouldn't we count both the entropy in the air and the entropy in the organic matter, since they're both related to the original mass that goes into life, meaning therefore life still increases entropy?
2quetzal_rainbow
I think the correct unit is "per particle" or "per mole".

Speaking of which, I wonder if multi-modal transformers have started being used by blind people yet. Since we have models that can describe images, I wonder if it would be useful for blind people to have a device with a camera and a microphone and a little button one can press to get it to describe what the camera is seeing. Surely there are startups working on this?

2DaemonicSigil
Cool, Facebook is also on this apparently: https://x.com/PicturesFoIder/status/1840677517553791440

Yes. See Be my AI.

a device with a camera and a microphone and a little button

Why the retrofuturistic description of a smartphone?

Found this paper on insecticide costs: https://sci-hub.st/http://dx.doi.org/10.1046/j.1365-2915.2000.00262.x

It's from 2000, so anything listed here would be out of patent today.

4bhauth
Here are the costs from the above link: It's worth noting that countries (such as India) have the option of simply not respecting a patent when the use is important and the fees requested are unreasonable. Also, patents aren't international; it's often possible to get around them by simply manufacturing and using a chemical in a different country.

hardening voltage transformers against ionising radiation

Is ionization really the mechanism by which transformers fail in a solar storm? I thought it was that changes in the Earth's magnetic field induced large currents in long transmission lines, overloading the transformers.

1Remmelt
Good question!  Will look into it / check more if I have the time. 

Sorry for the self-promotion, but some folks may find this post relevant: https://www.lesswrong.com/posts/uDXRxF9tGqGX5bGT4/logical-share-splitting (ctrl-F for "Application: Conditional prediction markets")

tldr: Gives a general framework that would allow people to make this kind of trade with only $N in capital, just as a natural consequence of the trading rules of the market.

Anyway, I definitely agree that Manifold should add the feature you describe! (As for general logical share splitting, well, it would be nice, but probably far too much work to convert the existing codebase over.)

IMO, a very good response, which Eliezer doesn't seem to be interested in making as far as I can tell, is that we should not be making the analogy natural selection <--> gradient descent, but rather: human brain learning algorithm <--> gradient descent; natural selection <--> us trying to build AI.

So here, the striking thing is that evolution failed to solve the alignment problem for humans. I.e. we have a prior example of strongish general intelligence being created, but no prior examples of strongish general intelligence being aligne... (read more)

3lemonhope
Yeah you can kind of stop at "we are already doing natural selection." The devs give us random variation. The conferences and the market give us selection. The population is large, the mutation rate is high, the competition is fierce, and replicating costs $0.25 + 10 minutes.

People here might find this post interesting: https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d

The author argues that search algorithms will play a much larger role in AI in the future than they do today.

I remember reading the EJT post and left some comments there. The basic conclusions I arrived at are:

  • The transitivity property is actually important and necessary; one can construct money-pump-like situations if it isn't satisfied (with cyclic preferences A≻B≻C≻A, an adversary can charge you a penny per swap and cycle you around forever). See this comment
  • If we keep transitivity, but not completeness, and follow a strategy of not making choices inconsistent with our previous choices, as EJT suggests, then we no longer have a single consistent utility function. However, it looks like the behaviour can still be roughly described as "picking a utility function at
... (read more)
6Thomas Kwa
Note that if the distribution of utility under the prior is heavy-tailed, you can get infinite utility even with arbitrarily low relative entropy, so the optimal policy is undefined. In the case of goal misspecification, optimization with a KL penalty may be unsafe or get no better utility than the prior.

If you're working with multidimensional tensors (eg. in numpy or pytorch), a helpful pattern is often to use pattern matching to get the sizes of various dimensions. Like this: batch, chan, w, h = x.shape. And sometimes you already know some of these dimensions, and want to assert that they have the correct values. Here is a convenient way to do that. Define the following class and single instance of it:

class _MustBe:
  """ class for asserting that a dimension must have a certain value.
      the class itself is private, one should import a particular object,
      "must_be", in order to use the functionality. usage:
      `batch, chan, must_be[32], must_be[32] = x.shape` """
  def __setitem__(self, key, value):
    assert key == value, f"expected dimension {key}, but got {value}"

must_be = _MustBe()

... (read more)

Oh, very cool, thanks! Spoiler tag in markdown is:

:::spoiler
text here
:::

Heh, sure.

Promote $f$ from a function to a linear operator on the space of functions, $\hat{f}$. The action of this operator is just "multiply by $f$". We'll similarly define $\widehat{If}, \widehat{I^2f}, \ldots$ meaning to multiply by the first, second integral of $f$, etc.

Observe (this is just integration by parts, written as an operator identity):

$$D^{-1}\hat{f} = \widehat{If} - D^{-1}\,\widehat{If}\,D$$

Now we can calculate what we get when applying $D^{-1}$ $k$ times. The calculation simplifies when we note that all terms are of the form $\widehat{I^{j}f}\,D^{j-k}$. Result:

$$D^{-k}\hat{f} = \sum_{j\ge k} (-1)^{j-k}\binom{j-1}{j-k}\,\widehat{I^{j}f}\,D^{j-k}$$

Now we apply the above operator to $p$:

$$I^k(fp) = \sum_{j\ge k} (-1)^{j-k}\binom{j-1}{j-k}\,(I^{j}f)\,p^{(j-k)}$$

The sum terminates because $p$ is a polynomial.
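For a quick sanity check of the final formula, take $f = \exp$ (so that $I^j f = \exp$ for every $j$) and a small polynomial; differentiating the claimed $k$-th antiderivative $k$ times should recover $fp$:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)  # every antiderivative I^j f is again exp(x)
p = x**3       # the polynomial factor
k = 2          # order of antidifferentiation

deg = int(sp.degree(p, x))
rhs = sum((-1)**(j - k) * sp.binomial(j - 1, j - k) * f * sp.diff(p, x, j - k)
          for j in range(k, k + deg + 1))

# differentiating the claimed k-th antiderivative k times should give back f*p
assert sp.simplify(sp.diff(rhs, x, k) - f * p) == 0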

... (read more)
6notfnofn
Very nice! Notice that if you write $r = j - k$ and $I$ as $D^{-1}$, and play around with binomial coefficients a bit, we can rewrite this as:

$$D^{-k}\hat{f} = \sum_{r\ge 0}\binom{-k}{r}\,\widehat{D^{-(k+r)}f}\,D^{r}$$

(By the way, how do you spoiler tag?)

Use integration by parts:

$$\int p(x)\,f(x)\,dx = p(x)\,(If)(x) - \int p'(x)\,(If)(x)\,dx$$

Then $p'$ is another polynomial (of smaller degree), and $If$ is another "nice" function, so we recurse.

4notfnofn
This is true, but I'm looking for an explicit, non-recursive formula that handles the general case of the kth anti-derivative (instead of just the first). The solution involves doing something funny with formal power series, like in this post.

Other people have mentioned sites like Mechanical Turk. Just to add another thing in the same category, apparently now people will pay you for helping train language models:

https://www.dataannotation.tech/faq?

Haven't tried it yet myself, but a roommate of mine has and he seems to have had a good experience. He's mentioned that sometimes people find it hard to get assigned work by their algorithm, though. I did a quick search to see what their reputation was, and it seemed pretty okay:

... (read more)
1Tigerlily
Thanks, I have done DataAnnotation already a few months back. It's true that it's difficult to get assignments there after you finish the first one or two. They supposedly have tons of work for people who specialize in certain tech roles, but that obviously won't apply to most people. There is also virtually no way to contact anyone who works at DataAnnotation if you have questions. But I have made a few dollars there.

Linkpost for: https://pbement.com/posts/threads.html

Today's interesting number is 961.

Say you're writing a CUDA program and you need to accomplish some task for every element of a long array. Well, the classical way to do this is to divide up the job amongst several different threads and let each thread do a part of the array. (We'll ignore blocks for simplicity, maybe each block has its own array to work on or something.) The method here is as follows:

for (int i = threadIdx.x; i < array_len; i += 32) {
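    // warp-stride loop: with 32 threads, thread t handles elements t, t+32, t+64, ...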
    arr[i] = ...;
}

So the threads make the foll... (read more)

So once that research is finished, assuming it is successful, you'd agree that many worlds would end up using fewer bits in that case? That seems like a reasonable position to me, then! (I find the partial-trace kinds of arguments that people make pretty convincing already, but it's reasonable not to.)

2TAG
The other problem is that MWI is up against various subjective and non-realist interpretations, so it's not the case that you can build an ontological model of every interpretation.

MW theories have to specify when and how decoherence occurs. Decoherence isn't simple.

They don't actually. One could equally well say: "Fundamental theories of physics have to specify when and how increases in entropy occur. Thermal randomness isn't simple." This is wrong because once you've described the fundamental laws and they happen to be reversible, and also aren't too simple, increasing entropy from a low entropy initial state is a natural consequence of those laws. Similarly, decoherence is a natural consequence of the laws of quantum mechanics (with a not-too-simple Hamiltonian) applied to a low entropy initial state.

-1TAG
MW has to show that decoherence is a natural consequence, which is the same thing. It can't be taken on faith, any more than entropy should be. Proofs of entropy were supplied a long time ago; proofs of decoherence of a suitable kind are a work in progress.

Good post, and I basically agree with this. I do think it's good to mostly focus on the experimental implications when talking about these things. When I say "many worlds", what I primarily mean is that I predict that we should never observe a spontaneous collapse, even if we do crazy things like putting conscious observers into superposition, or putting large chunks of the gravitational field into superposition. So if we ever did observe such a spontaneous collapse, that would falsify many worlds.

Amount of calculation isn't so much the concern here as the amount of bits used to implement that calculation. And there's no law that forces the amount of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn't have to do.

In particular, I mentioned earlier that Copenhagen has to have rules for when measurements occur and what basis they occur in. How does MWI incur a similar cost? What does MWI have to compute that Copenhagen doesn't that uses up the same number of bits of source code?

Like, yes, an expect... (read more)

0TAG
And vice versa. You can do unnecessary calculation under any interpretation, so that's an uninteresting observation. The important point is that the minimum amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths, they just ... interpret it ... differently. In particular, a follower of many worlds has to discard unobserved results in the same way as a Copenhagenist -- it's just that they interpret doing so as the unobserved results existing in another branch, rather than being snipped off by collapse. The maths is the same, the interpretation is different. You can also do the maths without interpreting it, as in Shut Up And Calculate. This gets back to a long-standing confusion between Copenhagen and objective collapse theories (here, I mean, not in the actual physics community). Copenhagen, properly speaking, only claims that collapse occurs on or before measurement. It also claims that nothing is known about the ontology of the system before collapse -- it's not the case that anything "is" a wave function. An interpretation of QM doesn't have to have an ontology, and many don't. Which, of course, is another factor that renders the whole Kolmogorov complexity approach inoperable. Objective collapse theories like GRW do have to specify when and how collapse occurs... but MW theories have to specify when and how decoherence occurs. Decoherence isn't simple.

Right, so we both agree that the randomness used to determine the result of a measurement in Copenhagen, and the information required to locate yourself in MWI is the same number of bits. But the argument for MWI was never that it had an advantage on this front, but rather that Copenhagen used up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure. (Not to decide the outcome of the collapse, those random bits are already spoken for. Just the source code of the procedure that collapses the ... (read more)

2TAG
Again: that's that much less calculation the reader of the tape has to do.

Disagree.

If you're talking about the code complexity of "interleaving": If the Turing machine simulates quantum mechanics at all, it already has to "interleave" the representations of states for tiny things like an electron being in a superposition of spin states or whatever. This must be done in order to agree with experimental results. And then at that point not having to put in extra rules to "collapse the wavefunction" makes things simpler.

If you're talking about the complexity of locating yourself in the computation: Inferring which world you're in is... (read more)

2TAG
I'm not talking about the code complexity of interleaving the SI's output. I am talking about interpreting the serial output of the SI ... de-interleaving, as it were. If you account for that, then the total complexity is exactly the same as Copenhagen, and that's the point. I'm not a dogmatic Copenhagenist, so that's not a gotcha. Basically, the amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths, they just ... interpret it ... differently. The SI argument for MWI only seems to work because it encourages the reader to neglect the complexity implicit in interpreting the output tape.

This notion of faith seems like an interesting idea, but I'm not 100% sure I understand it well enough to actually apply it in an example.

Suppose Descartes were to say: "Y'know, even if there were an evil Daemon fooling every one of my senses for every hour of the day, I can still know what specific illusions the Daemon is choosing to show me. And hey, actually, it sure does seem like there are some clear regularities and patterns in those illusions, so I can sometimes predict what the Daemon will show me next. So in that sense it doesn't matter whether my... (read more)

To be clear, I'm definitely pretty sympathetic to TurnTrout's type error objection. (Namely: "If the agent gets a high reward for ingesting superdrug X, but did not ingest it during training, then we shouldn't particularly expect the agent to want to ingest superdrug X during deployment, even if it realizes this would produce high reward.") But just rereading what Zack has written, it seems quite different from what TurnTrout is saying and I still stand by my interpretation of it.

  • eg. Zack writes: "obviously the line itself does not somehow contain a repr
... (read more)

It's the same thing for piecewise-linear functions defined by multi-layer parameterized graphical function approximators: the model is the dataset. It's just not meaningful to talk about what a loss function implies, independently of the training data. (Mean squared error of what? Negative log likelihood of what? Finish the sentence!)

This confusion about loss functions...

I don't think this is a confusion, but rather a mere difference in terminology. Eliezer's notion of "loss function" is equivalent to Zack's notion of "loss function" curried with the... (read more)
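(A two-line sketch of the currying, with hypothetical names: `mse` is a loss function in Zack's sense; the closure it returns is one in Eliezer's sense.)

def mse(pred, target):
    # Zack's sense: scores predictions against targets
    return ((pred - target) ** 2).mean()

def curry_with_data(loss, X, Y):
    # Eliezer's sense: the data is baked in, so it scores a model directly
    return lambda model: loss(model(X), Y)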

The issue seems more complex and subtle to me.

It is fair to say that the loss function (when combined with the data) is a stochastic environment (stochastic due to sampling the data), and the effect of gradient descent is to select a policy (a function out of the function space) which performs very well in this stochastic environment (achieves low average loss).

If we assume the function-approximation achieves the minimum possible loss, then it must be the case that the function chosen is an optimal control policy where the loss function (understood as incl... (read more)

Could you give an example of knowledge and skills not being value neutral?

(No need to do so if you're just talking about the value of information depending on the values one has, which is unsurprising. But it sounds like you might be making a more substantial point?)

Fair enough for the alignment comparison, I was just hoping you could maybe correct the quoted paragraph to say "performance on the hold-out data" or something similar.

(The reason to expect more spread would be that training performance can't detect overfitting but performance on the hold-out data can. I'm guessing some of the nets trained in Miller et al did indeed overfit (specifically the ones with lower performance).)

More generally, John Miller and colleagues have found training performance is an excellent predictor of test performance, even when the test set looks fairly different from the training set, across a wide variety of tasks and architectures.

Seems like figure 1 from Miller et al is a plot of test performance vs. "out of distribution" test performance. One might expect plots of training performance vs. "out of distribution" test performance to have more spread.

1Nora Belrose
I doubt there would be much difference, and I think the alignment-relevant comparison is to compare in-distribution but out-of-sample performance to out-of-distribution performance. We can easily do i.i.d. splits of our data, that's not a problem. You might think it's a problem to directly test the model in scenarios where it could legitimately execute a takeover if it wanted to.

In this context, we're imitating some probability distribution, and the perturbation means we're slightly adjusting the probabilities, making some of them higher and some of them lower. The adjustment is small in a multiplicative sense, not an additive sense, hence the use of exponentials. Just as a silly example, maybe I'm training on MNIST digits, but I want the 2's to make up 30% of the distribution rather than just 10%. The math described above would let me train a GAN that generates 2's 30% of the time.

I'm not sure what is meant by "the difference from... (read more)
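For concreteness, here's the arithmetic for that MNIST example (hypothetical numbers; assuming only the 2's get a multiplicative boost while the other classes just renormalize):

import math

p2, target = 0.10, 0.30  # 2's are currently 10% of the distribution, want 30%

# multiplicative tilt: p'(c) proportional to p(c) * exp(dH(c)), with dH = 0
# for every class except "2". solve p2*b / (p2*b + (1 - p2)) = target for b:
b = target * (1 - p2) / (p2 * (1 - target))
print(f"boost factor exp(dH) = {b:.3f}, so dH on class 2 = {math.log(b):.3f}")
# -> boost factor exp(dH) = 3.857, so dH on class 2 = 1.350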

2Gunnar_Zarncke
Thanks. Your ΔH looked like ∇Q from gradient descent, but you don't intend to take derivatives, nor maximize x, so I was mistaken.  

Perturbation Theory in Machine Learning

Linkpost for: https://pbement.com/posts/perturbation_theory.html

In quantum mechanics there is this idea of perturbation theory, where a Hamiltonian $H$ is perturbed by some change $\Delta H$ to become $H + \Delta H$. As long as the perturbation is small, we can use the technique of perturbation theory to find out facts about the perturbed Hamiltonian, like what its eigenvalues should be.

An interesting question is if we can also do perturbation theory in machine learning. Suppose I am training a GAN, a diffuser, or some other machine lea... (read more)

2Gunnar_Zarncke
What is the difference between a perturbation and the difference from a gradient in SGD? Both seem to do the same thing. I'm just pattern-matching and don't know too much of either.

I don't think we should consider the centroid important in describing the LLM's "ontology". In my view, the centroid just points in the direction of highest density of words in the LLM's space of concepts. Let me explain:

The reason that embeddings are spread out is to allow the model to distinguish between words. So intuitively, tokens with largeish dot product between them correspond to similar words. Distinguishability of tokens is a limited resource, so the training process should generally result in a distribution of tokens that uses this resource in a... (read more)

1Chris_Leong
If there's a 100 tokens for snow, it probably indicates that it's a particularly important concept for that language.

Mathematically, convergence just means that the distance to some limit point goes to 0 in the limit. There's no implication that the limit point has to be unique, or optimal. Eg. in the case of Newton fractals, there are multiple roots and the trajectory converges to one of the roots, but which one it converges to depends on the starting point of the trajectory. Once the weight updates become small enough, we should say the net has converged, regardless of whether it achieved the "optimal" loss or not.

If even "converged" is not good enough, I'm not sure wh... (read more)

4TurnTrout
You raise good points. I agree that the mathematical definition of convergence does not insinuate uniqueness or optimality, thanks for reminding me of that.

On the actual object level: for the word "optimal", people already usually say "converged" for that meaning, and I think that's a good choice.

2TurnTrout
I personally dislike "converged" because it implies that the optimal policy is inevitable. If you reach that policy, then yes you have converged. However, the converse ("if you have not reached an optimal policy, then you have not converged") is not true in general. Even in the supervised regime (with a stationary data distribution) you can have local minima or zero-determinant saddle points (i.e. flat regions in the loss landscape).

Relatedly, you bring up adversarial examples in a way that suggests that you think of them as defects of a primitive optimization paradigm, but it turns out that adversarial examples often correspond to predictively useful features that the network is actively using for classification, despite those features not being robust to pixel-level perturbations that humans don't notice—which I guess you could characterize as "weird squiggles" from our perspective, but the etiology of the squiggles presents a much more optimistic story about fixing the problem wit

... (read more)

You Can Just Put an Endpoint Penalty on Your Wasserstein GAN

Linkpost for: https://pbement.com/posts/endpoint_penalty.html

When training a Wasserstein GAN, there is a very important constraint that the discriminator network must be a Lipschitz-continuous function. Roughly we can think of this as saying that the output of the function can't change too fast with respect to position, and this change must be bounded by some constant $K$. If the discriminator function is given by $D(x)$, then we can write the Lipschitz condition for the discriminator as:

$$|D(x_1) - D(x_2)| \le K\,\|x_1 - x_2\|$$

... (read more)
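Just to sketch what such a penalty could look like in code (my guess at a construction matching the title, not necessarily the post's exact formulation; sampled real/fake pairs serve as the "endpoints", with K = 1):

import torch

def endpoint_penalty(disc, x_real, x_fake, K=1.0):
    # penalize pairs whose difference quotient exceeds the Lipschitz bound K,
    # i.e. violations of |D(x1) - D(x2)| <= K * ||x1 - x2||
    diff = (disc(x_real) - disc(x_fake)).abs().flatten()
    dist = (x_real - x_fake).flatten(start_dim=1).norm(dim=1)
    return torch.relu(diff - K * dist).pow(2).mean()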

Yes. I think Beff was speaking imprecisely there. In order to be consistent with what he's written elsewhere, he should have said something like: "maximizing the rate of free energy dissipation".
