Thanks for reporting this! Most likely it was because the 'window height' value wasn't excluding the parts covered by mobile browser UI. I'm now specifically using 'inner height', which should fix it.
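For context, a minimal sketch of what that fix might look like (assuming a web frontend; the helper name and the mock object are hypothetical illustrations, not the actual code):

```javascript
// Hypothetical sketch: on mobile, a 100vh-based layout or
// document.documentElement.clientHeight can include areas hidden behind
// the browser's UI chrome, while window.innerHeight reflects only the
// visible viewport.
function usableViewportHeight(win) {
  // Prefer innerHeight; fall back to clientHeight for older environments.
  return win.innerHeight ?? win.document.documentElement.clientHeight;
}

// Example with a mocked window object (for illustration):
const mockWindow = {
  innerHeight: 640,
  document: { documentElement: { clientHeight: 700 } },
};
console.log(usableViewportHeight(mockWindow)); // 640
```

The key point is simply sizing against the visible viewport rather than the nominal window height.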
Wow I wish I had searched before beginning my own summary project.
The projects aren't quite interchangeable though. Mine are significantly longer than these, but are intended to be acceptable replacements for the full text, for less patient readers.
Thank you, I hadn't noticed the difference but I agree that complacency is not the message.
I think I can word things the way you have and spread a positive message.
Thanks a lot, you've un-stumped me.
I'm in the process of summarizing The Twelve Virtues of Rationality and don't feel good about writing the portion on perfectionism:
"...If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right."
Sounds like destructive advice for a lot of people. I could add a personal disclaimer, or adjust the tone away from "never feel satisfied" towards "don't get complacent", though that's beyond what I feel a summarizer ought to do.
Similarly, the 'argument' virtue sounds like bad advice to take literally, unless tempered with a 'shut up and be socially aware' virtue.
I'd appreciate any perspective on this or what I should do.
In future, should I post summaries individually, or grouped together like this?
Individual posts are more linkable and discoverable, but having a single post for a full sequence of summaries might be more ergonomic to read and discuss.
Thanks for your thoughts, I'm glad I asked.
You're right my goal isn't very well defined yet. I'm mostly thinking along the lines of the https://non-trivial.org and https://ui.stampy.ai projects. I'd need a better understanding of beginner readers to communicate with them well. I'm not confident that I'll write great summaries on the first try, but I imagine any serious issues can be solved with some feedback and iteration.
Would summarizing LessWrong writings to be more concise and beginner-friendly be a valuable project? Several times I've wanted to introduce people to the ideas, but couldn't expect them to actually get through the Sequences (which are optimized for things other than concision).
Is lowering the barrier to entry to rationality considered a good thing? It sounds intuitively good, but I could imagine concerns about the techniques being misused, or some benefit to keeping a minimum barrier to entry.
Any fail states I should be concerned about? I anticipate shorter content is easier to immediately forget, giving an illusion of learning.
Thanks for your time. Please resist any impulse to tell me what you think I want to hear :)
I think that list covers the top priorities I can think of. I really loved the Embedded Agency illustrated guide (though to be honest it still leads to brain implosions and giving up for most people I've sent it to). I'd love to see more areas made more approachable that way.
Good point on avoiding duplication of effort. I suppose most courses would correspond to a series of nodes in the wiki graph, but the course would want slightly different writing for flow between points, and maybe extended metaphors or related images.
I guess the size of typical Stampy cards has a lot to do with how much that kind of additional layering would be needed. Smaller cards are more reusable but may take more effort in gluing together cohesively.
Maybe it'd be beneficial to try outlining the topics worth covering, like a curriculum with course outlines. That might help us learn things like how often the nodes form long chains versus being densely linked.
Inspired by https://non-trivial.org, I logged in to ask if people thought a very-beginner-friendly course like that would be valuable for the alignment problem - then I saw Stampy. Is there room for both? Or maybe a recommended beginner path in Stampy styled similarly to non-trivial?
There's a lot of great work going on.
I'm thinking of artificial communities and trying to manufacture the benefits of normal human communities.
If you imagine yourself feeling encouraged by the opinions of an LLM wrapper agent, how would that have been accomplished?
I'm getting stuck on creating respect and community status. It's hard to see LLMs as an ingroup (with good reason).