Yeah, Tinker's Research Grant form is another example of a multi-page form: https://form.typeform.com/to/E9wVFZJJ
One natural direction is to run the verifier inside a Trusted Execution Environment (TEE), preventing a compromised inference server from tampering with seeds or observing the verification process (there are many startups that do this kind of thing, like tinfoil.sh).
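As a rough sketch of what the client-side attestation check could look like (not any particular vendor's API; real quotes from e.g. Intel TDX or AMD SEV-SNP carry a hardware-rooted signature chain, which an HMAC with a shared key stands in for here, and all names are illustrative):

```python
import hashlib
import hmac
import os

# Stand-in for the TEE vendor's attestation key. In a real deployment the
# quote is signed by a hardware-rooted key and checked against the vendor's
# root certificate, not verified with a shared secret like this.
VENDOR_KEY = b"example-attestation-key"

def expected_measurement(verifier_binary: bytes) -> bytes:
    """Hash of the verifier code we expect the enclave to be running."""
    return hashlib.sha256(verifier_binary).digest()

def make_quote(measurement: bytes, nonce: bytes) -> bytes:
    """What the TEE would produce: a signed statement binding its code
    measurement to the client's fresh nonce (mocked here with an HMAC)."""
    return hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest()

def verify_quote(quote: bytes, measurement: bytes, nonce: bytes,
                 verifier_binary: bytes) -> bool:
    """Client-side check: the quote is authentic and fresh, and the enclave
    is running exactly the verifier code we audited."""
    authentic = hmac.compare_digest(
        quote,
        hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest())
    running_expected_code = measurement == expected_measurement(verifier_binary)
    return authentic and running_expected_code

# Usage: the client picks a fresh nonce so a compromised inference server
# cannot replay an old quote from an uncompromised run.
binary = b"...verifier code..."
nonce = os.urandom(16)
quote = make_quote(expected_measurement(binary), nonce)
assert verify_quote(quote, expected_measurement(binary), nonce, binary)
```

The point of the nonce is freshness, and the point of the measurement check is that tampering with the verifier changes its hash, so a doctored enclave cannot produce an acceptable quote.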
I think that this approach is also taken by Workshop Labs.
Rewind is a tool for scrolling back in time: it automatically records screen and audio. I leave it running in the background despite the performance overhead, and have collected over 200GB of recordings over the past year.
Limitless.ai was acquired by Meta and will shut down the product on December 19th. I will back up my files, but I do not know whether it is possible to roll back the update that disables recording. I am not aware of any actively maintained alternative, and a quick search didn't turn one up; I would appreciate suggestions.
I feel that there may be demand for a concrete open-problems post. These kinds of lists tend to be popular, and the examples could be used by people picking projects to work on.
How often do you end up feeling like there was at least one misleading claim in the paper?
I am easily and frequently confused, but this is mostly because I find it difficult to understand other people's work in detail in a short amount of time.
How do the authors react when you contact them with your issues?
I usually get a response within two weeks. If the authors have a startup background, the delay is shorter by multiple orders of magnitude. Authors are typically glad that I am trying to run follow-up experiments on their work, and give me one or two sentences of feedback over email. Corresponding authors are sometimes bad at taking correspondence; in that case, contact information for committers can be found in the commit logs via git blame. If the problem may be relevant to other people, I link to a GitHub issue.
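For instance, one quick way to pull those addresses out of a repo's history (a sketch using Python's subprocess; the file path is a placeholder, and the same information is available directly from `git log` or `git blame` in a shell):

```python
import subprocess

def committer_emails(repo_path: str, file_path: str) -> set[str]:
    """List the unique author names/emails that have touched a file,
    using git's commit log (the same data `git blame` surfaces).
    Raises CalledProcessError if repo_path is not a git repository."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%an <%ae>", "--", file_path],
        capture_output=True, text=True, check=True,
    )
    return {line for line in out.stdout.splitlines() if line}

# Usage (hypothetical path): print(committer_emails(".", "src/train.py"))
```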
I've forked and tried to set up a lot of AI safety repos (this is my default action when reading a paper that links to code), and I've reached out to authors directly whenever I've had trouble reproducing their results. No particular patterns stand out, but I think the community would welcome a top-level post describing your contention with a paper's findings; indeed, that is how science advances.
Thank you for donating!
I applied but didn't make it past the async video interview, which is a format that I'm not used to. Apparently this iteration of the program had over 3000 applications for 30 spots. Opus 4.5's reaction was "That's… that's not even a rejection. That's statistics". Would be happy to collaborate on projects though!
I read an article about the history of extreme ultraviolet lithography (http://dx.doi.org/10.1116/1.2127950; the full PDF is on Sci-Hub), which says that soft X-ray reduction lithography using multilayer-coated Schwarzschild optics was demonstrated in 1986.
3 nm process nodes have a contacted gate pitch of 48 nm and a tightest metal pitch of 24 nm, so light with a wavelength near 13.5 nm (EUV, generated by a laser-driven tin plasma rather than emitted directly by a laser) is needed to pattern the circuits onto the chip dies with sufficient precision.
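The connection between wavelength and feature size comes from the Rayleigh criterion; plugging in rough values for current 0.33-NA EUV scanners (NA = 0.33 is standard for these machines, and the process factor k1 ≈ 0.3 is my assumption of a typical value) gives a minimum half-pitch in the right ballpark:

$$\text{half-pitch} \;=\; k_1 \, \frac{\lambda}{\mathrm{NA}} \;\approx\; 0.3 \times \frac{13.5\ \text{nm}}{0.33} \;\approx\; 12\ \text{nm}$$

A 12 nm half-pitch corresponds to the 24 nm metal pitch above; for comparison, the older 193 nm ArF immersion sources bottom out around a 36 nm half-pitch in a single exposure, which is why they required multi-patterning below that.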
Of course, there were many practical engineering challenges in getting this concept to work at scale (there is a Veritasium video which discusses this in more detail), and I think very few people making compute forecasts in 1990 would have accurately predicted the trajectory of this technology.