gjm

Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.

If you're wondering why some of my very old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.

Comments
gjm

I think you're using "memetic" to mean "of high memetic fitness", and I wish you wouldn't. No one uses "genetic" in that way.

An idea that gets itself copied a lot (either because of "actually good" qualities like internal consistency, doing well at explaining observations, etc., or because of "bad" (or at least irrelevant) ones like memorability, grabbing the emotions, etc.) has high memetic fitness. Similarly, a genetically transmissible trait that tends to lead to its bearers having more surviving offspring with the same trait has high genetic fitness. On the other hand, calling a trait genetic means that it propagates through the genes rather than being taught, formed by the environment, etc., and one could similarly call an idea or practice memetic if it comes about by people learning it from one another rather than (e.g.) being instinctive or a thing that everyone in a particular environment invents out of necessity.

When you say, e.g., "lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc." I am pretty certain you mean "of high memetic fitness" rather than "people aware of it are aware of it because they learned of it from others rather than because it came to them instinctively or they reinvented it spontaneously because it was obvious from what was around them".

(It would be possible, though I'd dislike it, to use "memetic" to mean something like "of high memetic fitness for 'bad' reasons" -- i.e., liable to be popular for the sort of reason that we might not appreciate without the notion of memes. But I don't think that can be your meaning in the words I quoted, which seem to presuppose that the "default" way for a piece of work to be "memetic" is for it to be of high quality.)

gjm

Unless I misread, it said "mRNA" before.

gjm

Correction: the 2024 Nobel Prize in Medicine was for the discovery of microRNA, not mRNA, which is also important but a different thing.

gjm

I think it's more "Hinton's concerns are evidence that worrying about AI x-risk isn't silly" than "Hinton's concerns are evidence that worrying about AI x-risk is correct". The most common negative response to AI x-risk concerns is (I think) dismissal, and it seems relevant to that to be able to point to someone who (1) clearly has some deep technical knowledge, (2) doesn't seem to be otherwise insane, (3) has no obvious personal stake in making people worry about x-risk, and (4) is very smart, and who thinks AI x-risk is a serious problem.

It's hard to square "ha ha ha, look at those stupid nerds who think AI is magic and expect it to turn into a god" or "ha ha ha, look at those slimy techbros talking up their field to inflate the value of their investments" or "ha ha ha, look at those idiots who don't know that so-called AI systems are just stochastic parrots that obviously will never be able to think" with the fact that one of the people you're laughing at is Geoffrey Hinton.

(I suppose he probably has a pile of Google shares so maybe you could squeeze him into the "techbro talking up his investments" box, but that seems unconvincing to me.)

gjm

Pedantic correction: you have some sizes where you've written e.g. 20' x 20' and I'm pretty sure you mean 20" x 20".

(Also, the final note saying pixel art is good for crisp upscaling and you should start with the lowest-resolution version seems very weird to me, though the way it's worded makes it unlikely that this is a mistake; another sentence or so elaborating on why this is a good idea would be interesting to me.)

gjm

So maybe e.g. the (not very auto-) autoformalization part produced a theorem-statement template with some sort of placeholder where the relevant constant value goes, and AlphaProof knew it needed to find a suitable value to put in the gap.
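(To make that concrete, here's a toy Lean sketch of the kind of template I have in mind. The names and the mechanism are my own illustration; AlphaProof's internal representation hasn't been published.)

    -- Toy illustration only, not AlphaProof's actual code: the answer is a
    -- separate definition the search must fill in, rather than a constant
    -- given in the problem statement.
    def answer : Nat := 2            -- candidate value found during search

    theorem toyProblem : 1 + 1 = answer := by
      rfl                            -- goes through only for the right value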

gjm

I'm pretty sure what's going on is:

  • The system automatically generates candidate theorems it might try to prove, expressing possible answers, and attempts to prove them (see the sketch after this list).
  • In this case, the version of the theorem it ended up being able to prove was the one with 2 in that position. (Which is just as well, since -- I assume, not having actually tried to solve the problem for myself -- that is in fact the unique number for which such a theorem is true.)
  • So the thing you end up getting a proof of includes the answer, but not because the system was told the answer in advance.
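(Here's a toy guess-and-verify loop in Python showing what I mean. It's my reconstruction, not DeepMind's code; try_prove is a hypothetical stand-in for the prover, with "proving" faked by checking a trivial arithmetic fact.)

    def try_prove(statement):
        # Hypothetical stand-in for a prover: returns a proof object or None.
        lhs, rhs = statement
        return "proof" if lhs == rhs else None

    def solve(candidate_answers):
        for answer in candidate_answers:
            statement = (1 + 1, answer)     # theorem template with this answer plugged in
            proof = try_prove(statement)    # attempt a proof; may fail
            if proof is not None:
                return answer, proof        # only a provable candidate survives
        return None

    print(solve(range(5)))                  # -> (2, 'proof')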

It would be nice to have this more explicitly from the AlphaProof people, though.

[EDITED to add:] Actually, as per the tweet from W T Gowers quoted by "O O" elsewhere in this thread, we do have it explicitly, not from the AlphaProof people but from one of the mathematicians the AlphaProof people engaged to evaluate their solutions.

gjm

The AlphaZero algorithm doesn't obviously not involve an LLM. It has a "policy network" to propose moves, and I don't know what that looks like in the case of AlphaProof. If I had to guess blindly I would guess it's an LLM, but maybe they've got something else instead.

gjm

"I don't think this [sc. that AlphaProof uses an LLM to generate candidate next steps] is true, actually."

Hmm, maybe you're right. I thought I'd seen something that said it did that, but perhaps I hallucinated it. (What they've written isn't specific enough to make it clear that it doesn't do that either, at least to me. They say "AlphaProof generates solution candidates", but nothing about how it generates them. I get the impression that it's something at least kinda LLM-like, but could be wrong.)
