At the moment, a post is marked as "read" as soon as you open it. I understand it is useful not to have to mark every post as "I read this", but it means that if I just look at a post for 10 seconds to see whether it interests me, it gets marked as read. I would prefer a setting for an "I have to mark posts as <read> manually" mode, with a small checkbox at the bottom of each post.
I think it's mostly that people complain when something gets worse but don't praise an update that improves the UX.
If a website or app gets made worse, people get upset and complain, but if the UI gets improved, people generally don't constantly praise the designers. A couple of people will probably comment on an improved design, but not nearly as many as when the UI gets worse.
So whenever someone mentions a change it is almost always to complain.
If I just look at the software I am using right now:
I think this might play a really big role. I'm a teenager, and I and all the people I knew during school were very political. At parties people would occasionally talk about politics, in school talking about politics was very common, people occasionally went to demonstrations together, and during the EU Parliament election we had a school-wide election to see how our school would have voted. Basically, I think 95% of students, starting at about age 14, had some sort of idea about politics, and most probably had one party they preferred.
We were probably most concerned ...
Deutsch has also written elsewhere about why he thinks AI doom is unlikely, and I find his other arguments on this subject more convincing. For me personally, he is the person who gives me the greatest sense of optimism for the future. Some of his strongest arguments are:
Hi, thanks for the advice.
Do you, or other people, know why your comment is getting downvoted? Right now it's at -5, so I have to assume the general LW audience disagrees with your advice. Presumably people think it is really hard to become an ML researcher? Or do they think we already have enough people in ML, so we don't need more?
I am interested in working on AI alignment but doubt I'm clever enough to make any meaningful contribution, so how hard is it to work on AI alignment? I'm currently a high school student, so I could basically plan my whole life so that I end up a researcher or software engineer or something else. Since alignment is very difficult and very intelligent people are already working on it, it seems like I would have to be almost some kind of math/computer/ML genius to help at all. I'm definitely above average, my IQ is like 121 (I know the limitations of IQ...
Interesting. Do you have any recommendations on how to do this most effectively? At the moment I'm
Questions I'd have:
- Is Google Sheets good for something like this or are there better programs?
- Any adv