EU AI Policy / Mechatronics Engineer - Co-Lead @AI Standards Lab
LW feature request/idea - In posts that have lots of in-text links to other posts, perhaps add an LLM-generated 1-2 sentence (context-informed) summary in the hover preview?
I assume that for someone who has been around the forum for many years, various posts are familiar enough that name-dropping them in a link is sufficient to give context. But if I have to click a link and read 4+ other posts as I am going through one post, perhaps the LW UI could fairly easily build in that feature.
(suggesting it as a feature since LW does seem like a place that experiments with features not too different from this - of course, I can always ask for an LLM summary manually if I need to)
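To make the suggestion a bit more concrete, here is a rough sketch of what the summarization call might look like (TypeScript, since I believe that is roughly what the LW codebase uses; the OpenAI client, model name, and function shape are all placeholder assumptions on my part, not actual LW code):

```typescript
// Hypothetical sketch of the hover-preview summary idea - not LW's actual code.
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// Cache summaries so each (passage, linked post) pair is summarized at most once.
const summaryCache = new Map<string, string>();

async function linkHoverSummary(
  linkedPostText: string,    // body of the linked post
  surroundingContext: string // paragraph in the current post that contains the link
): Promise<string> {
  const cacheKey = `${surroundingContext.slice(0, 80)}|${linkedPostText.slice(0, 80)}`;
  const cached = summaryCache.get(cacheKey);
  if (cached) return cached;

  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder - any cheap summarization model would do
    messages: [
      {
        role: "system",
        content:
          "Summarize the linked post in 1-2 sentences, focusing on why it is " +
          "relevant to the passage that links to it.",
      },
      {
        role: "user",
        content: `Linking passage:\n${surroundingContext}\n\nLinked post:\n${linkedPostText}`,
      },
    ],
  });

  const summary = response.choices[0].message.content ?? "";
  summaryCache.set(cacheKey, summary);
  return summary;
}
```

The main design point is that the summary is conditioned on the passage doing the linking, not just the linked post - that is what would make it "context informed" rather than a generic abstract.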
I have had the Boox Nova Air (7-inch) for nearly 2 years now - a bit small for reading papers but great for books and blog posts. You can run Google Play apps, and even set up a Google Drive sync to automatically transfer PDFs/EPUBs onto it. At some point I might get the 10-inch version (the Note Air).
Another useful feature is taking notes inside PDFs, by highlighting and then handwriting the note via the Gboard handwrite-to-text keyboard. Not as smooth as on an iPad, but a pretty good way to annotate a paper.
This was very interesting, looking forward to the follow-up!
On the "AIs messing with your evaluations" bit (and checking whether the AI is capable of/likely to do so), I'm curious whether there is any published research on this.
Hmm, in that case maybe I misunderstood the post. My impression wasn't that he was saying AI literally isn't a science anymore, but more that the engineering work is getting too far ahead of the science - and that in practice most ML progress now is just ML engineering, where understanding is only a means to an end (and so is not as deep as it would be if it were science-first).
I would guess that engineering gets ahead of science pretty often, but maybe in ML it's more pronounced: the hype/money investment; the perceived relatively low stakes (unlike aerospace, or medical robotics, which is my field) not scaring ML engineers into actually caring about deep understanding; and perhaps also the inscrutable nature of ML - if it were easy to understand, spending resources to do so wouldn't be so unappealing.
I don't really have a take on where the inelegance comes into play here.
While theoretical physics is less "applied science" than chemistry, there's still a real difference between chemistry and chemical engineering.
For context, I am a Mechanical Engineer, and while I do occasionally check the system I am designing and try to understand/verify how well it is working, I am fundamentally not doing science. The main goal is solving a practical problem (i.e., with as little theoretical understanding as is sufficient), whereas in science the understanding is the main goal, or at least closer to it.
So basically, post hoc ergo propter hoc (the post hoc fallacy):
If winning happened after rationality (in this case, after any action you judge to be rational under whatever definition you prefer), it does not follow that it happened because of it.
This was a great read! Personally I feel it ended too quickly - even without going into gruesome details, one more paragraph or so of concluding material was needed. But overall I really enjoyed it.
I'm trying to think of ideas here. As a recap of what I think the post says:
^let me know if I am understanding correctly.
Some ideas/thoughts:
I might have more thoughts later on.
(for context, I have recently become involved in governance work on the EU AI Act)
Signal boost for the "username hiding on homepage" feature in settings - it seems cool; I'll see if it changes how I use LW.
I also wonder about a "hide karma by default" option, though I'm less sure that would actually achieve the intended purpose, as karma can be a good filter when skimming comments rather than reading them in detail.