
Hmm, so I'm very wary of defending tropical geometry when I know so little about it; if anyone more informed is reading please jump in! But until then, I'll have a go.

tropical geometry might be relevant to ML, for the simple reason that the functions coming up in ML with ReLU activation are PL

I'm not sure I agree with this argument.

Hmm, even for a very small value of 'might'? I'm not saying that someone who wants to contribute to ML needs to seriously consider learning some tropical geometry, just that if one already knows tropical geometry it's not a crazy idea to poke around a bit and see if there are applications.

The use of PL functions is by no means central to ML theory, and is an incidental aspect of early algorithms.

I agree this is an important point. I don't actually have a good idea what activation functions people use in practice these days. Thinking about asymptotic linearity makes me think of the various papers appearing that use polynomial activation functions. Do you have an opinion on this? For people in algebraic geometry it's appealing, as it generates lots of AG problems (maybe very hard ones), but I don't have a good feeling as to whether it has much to do with 'real life' ML. I can link to some of the papers I'm thinking of if that's helpful, or maybe you are already a bit familiar with them.

I don't see why one wouldn't just use ordinary currents here (currents on a PL manifold can be made sense of after smoothing, or in a distribution-valued sense, etc.).

I think you’re right; this paper just came to mind because I was reading it recently.

whether tropical geometry has ever been useful (either in proving something or at least in reconceptualizing something in an interesting way) in linear programming.

A little googling suggests there are some applications. This paper seems to give an application of tropical geometry to the complexity of linear programming: https://inria.hal.science/hal-03505719/document

And this list of conference abstracts seems to give further applications: https://him-application.uni-bonn.de/fileadmin/him/Workshops/TP3_21_WS1_Abstracts.pdf

Whether they are 'convincing' I leave up to you.

1  Algebraic geometry in general (including tropical geometry) isn't good at dealing with deep compositions of functions, and especially approximate compositions.

Fair, though one might also see that as an interesting challenge. I don't have a feeling as to whether this is for really fundamental reasons, or whether people just haven't tried very hard yet.

2 [….] I simply can't think of any behavior that is at all meaningful from an AG-like perspective where the questions of fan combinatorics and degrees of polynomials are replaced by questions of approximate equality.

There are plenty of cases where "high degree" is enough (Faltings's Theorem is the first thing that comes to mind, but there are lots). But I agree that "degree approximately 5" feels quite unnatural.

Hi Dmitry,

To me it seems not unreasonable to think that some ideas from tropical geometry might be relevant to ML, for the simple reason that the functions coming up in ML with ReLU activation are PL, and people in tropical geometry have thought seriously about PL functions. Of course this does not guarantee that there is anything useful to be said!

One possible example that comes to mind in the context of your post here is the concept of polyhedral currents. As I understand it, here the notion of "density of polygons" is used as a kind of proxy for the derivative of a PL function? But I think the theory of polyhedral currents gives a much more general theory of differentiation of PL functions. Very naively, rather than just recording the locus where the function fails to be linear, one also records how much the derivative changes when crossing the walls; see the sketch below. I learnt about this from a paper of Mihatsch: https://arxiv.org/pdf/2107.12067 but I'm certain there are older references.
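To illustrate the 1-dimensional case (a purely illustrative numpy sketch; the network and numbers are made up): a one-hidden-layer ReLU network is PL, and one can record not just where it fails to be linear but also by how much the slope jumps at each wall.

```python
import numpy as np

# A tiny one-hidden-layer ReLU network f(x) = sum_i a_i * relu(w_i * x + b_i).
# Such a function is piecewise linear: it is linear away from the "walls"
# x = -b_i / w_i where an individual ReLU switches on or off.
rng = np.random.default_rng(0)
w, b, a = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

def f(x):
    return np.sum(a * np.maximum(w * x + b, 0.0))

walls = np.sort(-b / w)

# Crossing the wall of ReLU i, the derivative of f jumps by a_i * |w_i|.
# Recording these jumps, and not just the wall locations, is the extra
# data described above.
eps = 1e-6
for x0 in walls:
    left = (f(x0 - eps) - f(x0 - 2 * eps)) / eps
    right = (f(x0 + 2 * eps) - f(x0 + eps)) / eps
    print(f"wall at x = {x0:+.3f}: slope jump {right - left:+.3f}")
```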

I'm really a log person, I don't know the tropical world very well; sorry if what I write does not make sense!

Get that agreement in writing.

I'm not sure that would be particularly reassuring to me (writing as one of the contributors). First, how would one check that the agreement had been adhered to (maybe it's possible, I don't know)? Second, people in my experience often don't notice they are training on data (as mentioned in a post above by ozziegooen).

I think this is a key point. Even the best possible curriculum, if it has to work for all students at the same rate, is not going to work well. What I really want (both for my past-self as a student, and my present self as a teacher of university mathematics) is to be able to tailor the learning rate to individual students and individual topics (for student me, this would have meant 'go very fast for geometry and rather slowly for combinatorics'). And while we're at it, can we also customise the learning styles (some students like to read, some like to sit in class, some to work in groups, etc)?

This is technologically more feasible than it was a decade ago, but seems far from common.

Thanks Charlie.

Just to be double-sure, the second process was choosing the weight in a ball (so total L2 norm of weights was <= 1), rather than on a sphere (total norm == 1), right?

Yes, exactly (though in a ball of some constant radius c, which may not be 1, but this turns out not to matter).
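For concreteness, here is one standard way to sample the two distributions being compared (a sketch; the constant radius c is set to 1 for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100   # number of weights
c = 1.0   # the constant radius; per the above, its exact value doesn't matter

# Uniform on the sphere {||w|| = c}: Gaussian sample rescaled to norm c.
g = rng.normal(size=d)
on_sphere = c * g / np.linalg.norm(g)

# Uniform in the ball {||w|| <= c}: same direction, with radius drawn with
# density proportional to r^(d-1), i.e. c * U^(1/d) for U uniform on [0, 1].
in_ball = on_sphere * rng.uniform() ** (1.0 / d)

print(np.linalg.norm(on_sphere))  # exactly c (up to rounding)
print(np.linalg.norm(in_ball))    # at most c
```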

Is initializing weights that way actually a thing people do?

Not sure (I would like to know). But what I had in mind was initialising a network with small weights, then doing a random walk ('undirected SGD'), and then looking at the resulting distribution. Of course this will be more complicated than the distributions I use above, but I think the shape may depend quite a bit on the details of the SGD. For example, I suspect that the result of something like adaptive gradient descent may tend towards more spherical distributions, but I haven't thought about this carefully.
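A toy version of the experiment I have in mind (entirely illustrative; 'undirected SGD' is modelled here simply as steps in random directions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 100, 10_000, 1e-2

w = 1e-3 * rng.normal(size=d)      # small initialisation
for _ in range(steps):
    w += lr * rng.normal(size=d)   # undirected step: a random direction

# For this isotropic walk the endpoint is approximately Gaussian, hence
# spherically symmetric. An adaptive method that rescales each coordinate
# separately would distort this shape, which is the speculation above.
print(np.linalg.norm(w), w.std())
```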

If training large neural networks only moves the parameters a small distance (citation needed), do you still think there's something interesting to say about the effect of training in this lens of looking at the density of nonlinearities?

I hope so! I would want to understand what norm the movements are 'small' in (L2, L∞, ...).
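To see why the choice of norm matters (an illustrative sketch with made-up sizes): a movement that is tiny in L∞ can still be large in L2 once the parameter count is large.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1_000_000                                  # hypothetical parameter count
w_init = rng.normal(size=d)
w_final = w_init + 1e-3 * rng.normal(size=d)   # tiny per-coordinate movement

delta = w_final - w_init
print("L2:  ", np.linalg.norm(delta))   # ~ 1e-3 * sqrt(d), i.e. about 1.0
print("Linf:", np.abs(delta).max())     # stays around 5e-3
```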

LayerNorm looks interesting, I'll take a look.

Maths at my Dutch university also has homework for quite a few of the courses, which often counts for something like 10-20% of the final grade. It can usually be submitted online, so you only need to be physically present for exams. However, there are a small number of courses that are exceptions to this and actually require attendance to some extent (e.g. a course on how to give a scientific presentation, where a large part of the course consists of students giving and commenting on each other's presentations - not so easy to replace that learning experience with a single exam at the end).

But this differs between Dutch universities.

I suspect the arXiv might not be keen on an account that posts papers by a range of people (not including the account-owner as coauthor). This might lead to heavier moderation/whatever. But I could be very wrong!

Some advice for getting papers accepted on arXiv

As some other comments have pointed out, there is a certain amount of moderation on arXiv. This is a little opaque, so below is an attempt to summarise some things that are likely to make it easier to get your paper accepted. I'm sure the list is very incomplete!

In writing this I don't want to give the impression that posting things to arXiv is hard; I currently have 28 papers there, have never had a single problem or delay with moderation, and the submission process generally takes me under 15 minutes these days.

  1. Endorsement. When you first attempt to submit a paper you may need to be endorsed. JanBrauner kindly offered below to help people with endorsements; I might also be able to do the same, but I've never posted in the CS part of arXiv, so I'm not sure how effective this would be. However, it is better still to avoid needing endorsement at all. To this end, use an academic email address if you have one; this is quite likely to already be enough. Also, see below on subject classes (endorsement requirements depend on which subject class(es) you want to post in).

  2. Choosing subject classes. Each paper gets one or more subject classes, like cs.AI; see https://arxiv.org/category_taxonomy for a list. Some subject classes attract more junk than others, and the ones that attract more junk are more heavily moderated. In mathematics it is math.GM (General Mathematics) that attracts the most junk, and hence is most heavily moderated. I guess most people here are looking at cs.AI; I don't know what moderation is like there. But one easy thing is to minimise cross-listing (adding additional subject classes to your paper), since then you are moderated by all of them.

  3. Write in (La)TeX, and submit the .tex file. You don't have to do this, but it is standard and preferred by the arXiv, and I suspect it makes it less likely that your paper gets flagged for moderation. It is also an easy way to make sure your paper looks like a serious academic paper.

  4. It is possible to submit papers on behalf of third parties. I've never done this, and I suspect such papers will be more heavily moderated.

  5. If you have multiple authors, it doesn't really matter who submits. After the submission is posted you are sent a 'paper password' allowing coauthors to 'claim' the paper; it is then associated with their arXiv account, ORCID, etc. (ORCID is optional, but a really good idea, and free).

Finally, a request: please be nice to the moderators! They are generally unpaid volunteers doing a valuable service to the community (e.g. making sure I don't have to read nonsense proofs of the Riemann hypothesis every morning). Of course it doesn't feel good if your paper gets held up, but please try not to take it personally.

The arXiv really prefers that you upload in TeX. For the author, this makes it less likely that your paper will be flagged for moderation, etc. (I guess). So if it were possible to export to TeX, I think that for the purposes of uploading to arXiv this would be substantially better. Of course, I don't know how much more/less work it is…

Hi Charlie,

If you can give a short (precise) description of an agent that does the task, then you have written a short program that solves the task. If you then need more space to 'explain what the agent would do', you are saying that there also exists a less efficient/compact way to specify the solution; from this perspective the latter is not so relevant.

David
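In symbols (my own paraphrase, writing $K$ for Kolmogorov complexity and $|D|$ for the length of a description $D$): if $D$ is a precise description of an agent that does the task, then

$$K(\text{solution}) \le |D| + O(1),$$

and appending an explanation $E$ of what the agent would do only witnesses the weaker bound $K(\text{solution}) \le |D| + |E| + O(1)$; the explanation does not improve the shortest specification.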
