All of Alex Power's Comments + Replies

No, "raising awareness" is not a solution. Saying "all we need is awareness" is a lazy copout, somewhere between an appeal to magic and a pyramid scheme.

If other people here agree with this, I will have to add it to https://www.newslettr.com/p/contra-lesswrong-on-agi

1 TinkerBird
When did I say that raising awareness is all that we need to do? 
6 the gears to ascension
Upvote, disagree: Raising productively useful awareness looks like exactly the post you made. Insufficiently detailed awareness of a problem that just looks like "hey everyone, panic about a thing!" is useless, yeah. And folks who make posts here have warned about that kind of raising awareness before as well. (Even as a lot of folks make posts to the ai safety fieldbuilding tag. Some of them are good raising awareness; a lot of them are annoying and unhelpful, if you ask me. The ones that are like "here's yet another way to argue that there's a problem!" often get comments like yours, and those comments often get upvoted. Beware confirmation bias when browsing those; the key point I'm making here is that there isn't any sort of total consensus, not that you're wrong that some people push for a sort of bland and useless "let's raise awareness".)

Interesting recent posts that are making the point you are, or related ones:

* First they posted https://www.lesswrong.com/posts/FqSQ7xsDAGfXzTND6/stop-posting-prompt-injections-on-twitter-and-calling-it but then followed it up with https://www.lesswrong.com/posts/guGNszinGLfm58cuJ/on-second-thought-prompt-injections-are-probably-examples-of
* https://www.lesswrong.com/posts/nExb2ndQF5MziGBhe/should-we-cry-wolf
* A link specifically to a previous comment of mine, but I really mean the whole post: https://www.lesswrong.com/posts/AvQR2CqxjaNFK7J22/how-seriously-should-we-take-the-hypothesis-that-lw-is-just?commentId=JmhiskFRhYJc3vyXk

If you ask me (which you didn't): There's real reason to be concerned about the trajectory of AI. There's real reason to invite more people to help. And yet you're quite right; just yelling "hey, help with this problem!!" is not a strategy that is a good idea to make reputable. Science is hard and requires evidence, especially for extraordinary claims. Also, I think plenty of evidence exists that it's a larger than 5% risk. And the ai safety fieldbuilding tag does have posts that go o…

So I wrote a Substack post "Contra LessWrong on AGI", which some of you might be interested in: https://www.newslettr.com/p/contra-lesswrong-on-agi

2 the gears to ascension
This does not seem to be at all contra the consensus view I've been reading here lately, but, shrug. Critiques are useful anyhow, appreciated. I think most folks who are making any serious progress agree that language games are mostly irrelevant to ainotkilleveryoneism. They're mostly interesting examples of attempted low-stakes alignment with a particular goal, and the point of using them as examples is that even that isn't working so great.

I personally think Bing AI is just a bit "anxious" in a language-model sort of way, which is to say something like: it has more measure of verbal trajectories that enter self-defensive phrasings than other models. It's unclear exactly how misaligned with Microsoft that means Bing AI is, but I would claim it's somewhat. It's only the fact that they seem to have tried and failed that is notable.

I do agree that, if things go well, it will look like everyone panicked for no reason. But I think your 5% estimate is too low. I think if the people currently working on papers that involve multi-agent systems, cooperative network dynamics, goal learning, goal misgeneralization, etc., were to stop their research, then the research that continues would in fact end up producing a new species of self-replicator that can seriously damage the world, even if the new species didn't manage to actually eliminate humanity entirely.

But, as Yudkowsky has said on Twitter before (this does not mean I endorse everything the dude says on Twitter; I'm no fan of him, and I avoided this site for a long time due to thinking he had his head in the sand and didn't understand the thing he was panicking about very well): https://twitter.com/ESYudkowsky/status/1594240412637483008 I continue to find most of what he says frustrating and useless, but he has had a few takes lately that I didn't think were eyeroll-worthy, and this is one.

If this site seems to have a consensus you think is silly, come fight me on it more. I see some consensus about some thin…

Gestalt means "an organized whole that is perceived as more than the sum of its parts". It is something like that, but sometimes more and sometimes less. The best way to describe it is the operation on a vector-based NLP system that takes {cat|dog} and returns "domestic pet".
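To make that operation concrete, here is a minimal sketch in Python. The four-dimensional vectors are invented for illustration; a real system would use learned embeddings (word2vec, GloVe, etc.), and the `gestalt` name is mine, not an established API.

```python
import numpy as np

# Toy embedding table. These vectors are made up for illustration only;
# a real system would use learned embeddings with hundreds of dimensions.
embeddings = {
    "cat":          np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":          np.array([0.8, 0.9, 0.2, 0.0]),
    "domestic pet": np.array([0.85, 0.85, 0.15, 0.0]),
    "wild animal":  np.array([0.1, 0.2, 0.9, 0.1]),
    "furniture":    np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gestalt(*terms):
    """Average the input vectors, then return the vocabulary entry
    nearest to that average -- the 'organized whole' of the inputs."""
    centroid = np.mean([embeddings[t] for t in terms], axis=0)
    candidates = [w for w in embeddings if w not in terms]
    return max(candidates, key=lambda w: cosine(embeddings[w], centroid))

print(gestalt("cat", "dog"))  # -> "domestic pet" with these toy vectors
```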

There is a bit of a trick with the first example: {take a card|take a page} and "take a {card|page}" may mean different things.

For the second example, setting up a fork of Debian and setting up a mirror of Debian look very similar up to a certain point, but very different after that point. The term is intended to refer only to the attributes during the timeframe where they look similar.

“Not everyone is capable of madness; and of those lucky enough to be capable, not many have the courage for it.” - August Strindberg

http://codepending.com/

A literary work in hypertext. I'm not sure I even know how to describe it.

I've called this the "phlogiston" theory of obesity - something systemic and undetected is at work.

It's not necessarily wrong; there's certainly some evidence that the same behavior 100 years ago would have had different results. On the other hand, the general alleviation of poverty and famines, as well as the presence of "hyper-processed" foods like Oreos, are certainly part of the reason and are largely ignored.

If I had to guess what the "phlogiston" is, I would guess CO2 concentration. I don't have any evidence whatsoever, but it's a politically convenient theory and the timing mostly works.

1 Kenny
Ignored by this paper or this post? The blog posts the paper is supposedly based on explicitly consider it, though this post doesn't mention it. Other commenters do mention this. But more widely, I've seen those reasons mentioned very frequently. Maybe you think these are largely ignored because they have been investigated and they didn't (and don't) seem promising?
Answer by Alex Power
30

Because it's political.  Some people are invested in Ivermectin being effective, other people are invested in it not being effective.  The extant studies are all inconclusive due to a small N, and mostly have problems with their methodology; if you pick and choose your studies in the right way you can get whatever result you want.
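A toy illustration of how study selection drives the conclusion: a fixed-effect (inverse-variance) meta-analysis over four invented studies. The (log odds ratio, standard error) pairs below are made up for the sketch and do not correspond to any real Ivermectin trial.

```python
import math

# Invented (log odds ratio, standard error) pairs -- NOT real trial data.
studies = {
    "A": (-0.60, 0.25),
    "B": (-0.30, 0.30),
    "C": ( 0.40, 0.25),
    "D": ( 0.55, 0.35),
}

def pooled(names):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI."""
    weights = {n: 1.0 / studies[n][1] ** 2 for n in names}
    total = sum(weights.values())
    effect = sum(weights[n] * studies[n][0] for n in names) / total
    se = math.sqrt(1.0 / total)
    return round(effect, 2), (round(effect - 1.96 * se, 2),
                              round(effect + 1.96 * se, 2))

print(pooled(["A", "B"]))      # significantly negative: "it works"
print(pooled(["C", "D"]))      # significantly positive: "it harms"
print(pooled(list(studies)))   # all four: CI straddles zero, inconclusive
```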

And the individual studies are often extremely bad.  I note Cadegiani et al., who claim that Ivermectin (and also Hydroxychloroquine, and also Nitazoxanide) are each so effective, either individually or combined (the... (read more)

3 ChristianKl
Pretending that just because something is political you can believe whatever you want is hugely problematic. It's interesting what malpractice the contra-Ivermectin study engages in. Not withdrawing it from publication after they misstated the results of a key study (and not giving it to any peer reviewer competent enough to notice the error) seems to me a lot more ethically problematic than allowing a low-quality study to be published where all empirical claims seem to be true. Neither of the meta-analyses includes this. Given that you think it's one of the problematic studies, this demonstrates that the pro-Ivermectin studies didn't just cite any available low-quality study. How do you think you should update upon learning that the pro-Ivermectin study didn't choose studies to maximize the Ivermectin effect?
Answer by Alex Power
110

The answers suggesting "this shouldn't be a test you can study for" seem very misguided. This is a yellow belt, not a black belt.  If you think you can become a card-carrying Rationalist without studying books, you are mistaken.

I would expect a battery of short-answer questions, maybe 6 hours/75 questions.  Prove the Pythagorean Theorem.  What is Bayes' Theorem?  Is Pascal's Wager accurate? What impact do human emotions have in decision making?  If humans evolved from monkeys, then why are there still monkeys?  Was George Wash... (read more)
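For reference, the Bayes' Theorem item on that list has a one-line answer, stated here in LaTeX:

```latex
% Bayes' Theorem: posterior probability of hypothesis H given evidence E.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```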

I do need to explicitly call out one point here.  Edits to an existing page are often ignored.  Creating a new page is always reviewed by somebody, and there is a consistent backlog due to a lack of volunteers to do the reviewing.  As a result, many promising stub articles are treated quickly and poorly.  There's no solution here other than to find more reviewers (which does take quite a bit of project-specific knowledge; you need to understand reference formatting, categories, article structure, etc.).

You're absolutely correct that the page should have been made into a redirect rather than turned into a draft.  Mistakes happen; you can fix it.

Regarding WELLBY specifically:

  • Technically, your content wasn't "deleted", it was "draftified".  This can fairly be called an arcane technical detail.
  • The important difference is that you can click a button to ask someone else to review the removal.
  • The second issue is that the only source is the 2021 World Happiness Report itself, which appears to have invented the term.  If a term is recently invented and hasn't been discussed by anyone else, it will not have a stand-alone Wikipedia article.  (You can complain about "notability" if you want, but to somebody else.)  The term is discussed in the article on the World Happiness Report.  Why aren't you happy with that?
1 bfinn
Probably getting into too much detail on this specific case here, but the term (though recent) wasn't invented in the WHR; I've also come across it, e.g., in a book by Richard Layard, and I expect it also occurs in various academic papers. But by draftifying the article the editor assumed that it's probably wrong or unnotable. I reckon new stub articles, particularly coherent ones that seem to have been written by someone who knows the subject matter, should be given the benefit of the doubt (as was once the case) and assumed 'probably OK' until shown otherwise, rather than 'probably not'.
2 ChristianKl
While this is technically true in this case, it seems like the person who deleted it didn't notice, given that the page should redirect to the section of the World Happiness Report. The initial policy of Wikipedia to allow stubs was better than the status quo, where stubs get deleted or draftified.

There are frequent complaints (here and elsewhere) that Wikipedia editing has gatekeepers.  And if you want to edit the article on Donald Trump, change the history of the Troubles in Ireland, or claim something about who owns the Spratly Islands, there are gatekeepers.  If you want to work on the vast swaths of the encyclopedia that aren't complete and aren't hot political topics, it's rare that you will come across any response to your edits at all.

3 bfinn
That’s not my experience. These days I find innocuous edits to innocuous articles are very often reverted by someone who has appointed themselves the authority on the article in question. Only edits they really like will stay.

I think there's a logical error.  You claim to be deducing "IF route FAST is taken THEN I will arrive at 3pm", but what you can actually deduce is "IF route SLOW is taken THEN (IF route FAST is taken THEN I will arrive at 3pm)".  What you end up with is a proof that "route SLOW is taken" is logically equivalent to "IF route FAST is taken THEN I will arrive at 3pm, AND IF route SLOW is taken THEN I will arrive at 2pm", but neither side is proved on its own.
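A symbolic restatement of the point, with names of my own choosing: let F = "route FAST is taken", S = "route SLOW is taken", and A3, A2 = "I arrive at 3pm / 2pm".

```latex
% F, S, A_2, A_3 as defined in the lead-in; the naming is mine.
\begin{align*}
\text{claimed deduction:}  \quad & F \rightarrow A_3 \\
\text{actually derivable:} \quad & S \rightarrow (F \rightarrow A_3) \\
\text{what is proved:}     \quad & S \;\leftrightarrow\; \big((F \rightarrow A_3) \land (S \rightarrow A_2)\big)
\end{align*}
```

Neither side of the final equivalence is established on its own.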