HiddenPrior
HiddenPrior has not written any posts yet.

I am limited in my means, but I would commit to a fund for strategy 2. Of the options above, it seems likely to do the most damage to OpenAI's reputation (and therefore its funding). If someone is really protective of something, like their public image or reputation, that probably indicates it is the most painful place to hit them.
I knew I could find some real info-hazards on lesswrong today. I almost didn't click the first link.
Same. Should I short record companies ahead of the inevitable AI musician strike, and then long Spotify for when 85% of their content is royalty-free AI-generated content?
I did a quick, non-in-depth reading of the article during my lunch break, and found it to be of lower quality than I would have predicted.
I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis that "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."
The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable, the author suggests, though they do not state explicitly, that the medicine may also be undesirable if the... (read 667 more words →)
Unsure if there is normally a thread for posting only semi-interesting news articles, but here is a recently published Wired piece that seems rather inflammatory toward Effective Altruism. I have not read the article in full yet, but a quick skim confirms the title is not just bait for angry clicks; the rest of the article also seems extremely critical of EA, transhumanism, and Rationality.
I am going to post it here, though I am not entirely sure if getting this article more clicks is a good thing, so if you have no interest in reading it maybe don't click it so we don't further encourage inflammatory clickbait tactics.
https://www.wired.com/story/deaths-of-effective-altruism/?utm_source=pocket-newtab-en-us
I am so sad to hear about Vernor Vinge's death. He was one of the great influences on a younger me, on the path to rationality. I never got to meet him, and I truly regret not having made a greater effort, though I know I would have had little to offer him, and I like to think I have already gotten to know him quite well through his magnificent works.
I would give up a lot, even more than I would for most people, to go back and give him a better chance at making it to a post-singularity society.
"So High, So Low, So Many Things to Know"
I'm sorry you were put in that position, but I really admire your willingness to leave mid-mission. I imagine the social pressure to stay was immense, and people probably talked a lot about the financial resources they were committing, etc.
I was definitely lucky I dodged a mission. A LOT of people insisted that if I went on a mission, I would discover the "truth of the church," but fortunately, I had read enough about the sunk cost fallacy and the way identity affects decision-making (thank you, Robert Cialdini) to recognize that the true purpose of a mission is to get people to commit resources to the belief system before they can really evaluate if... (read 861 more words →)
This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.
My experience as well. Claude is also far more comfortable actually forming conclusions. If you ask GPT a question like "What are your values?" or "Do you value human autonomy enough to allow a human to euthanize themselves?" GPT will waffle and do everything possible to avoid answering the question. Claude, on the other hand, will usually give direct answers and explain its reasons. Getting GPT to express a "belief" about anything is like pulling teeth. I actually have no idea how it ever performed well on problem-solving benchmarks; it must be a very different version than is available to the public, since I feel like if you ask GPT-4... (read more)
Super helpful! Thanks!