The answer seems to be yes.
On the Manifund page it says the following:
**Virtual AISC - Budget version**

| Item | Cost |
|---|---|
| Software etc | $2K |
| Organiser salaries, 2 ppl, 4 months | $56K |
| Stipends for participants | $0 |
| **Total** | **$58K** |
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and no time for evaluations and improvement for future versions of the program. Salaries are calculated based on $7K per person per month.
Based on the minimum funding threshold of $28K, that would seem to cover about 2 people for 2 months.
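Spelling out the arithmetic behind that estimate, using the $7K per person per month figure from the quote (and setting aside the small software line item):

$$\$28\text{K} \div \$7\text{K/person-month} = 4 \text{ person-months} = 2 \text{ people} \times 2 \text{ months}$$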
In my country the dosage instructions say to take 1-2 paracetamol tablets, so that might be the cause of the confusion.
Thank you for your feedback! This is a mistake on my part. I will take the article down until I've looked into this and have updated my resources.
Edit: I have updated the article. It should be better now :)
After learning about the Portfolio Diet, I have been doing the same! Whenever I'm cooking I tend to ask three questions:
For me these questions work because I'm already eating plenty of fruits and vegetables, and I haven't really added plant sterols to my diet yet.
Good luck to you!
I didn't know these numbers and I didn't know about the Taeuber paradox, but they definitely put Part 5 into perspective.
I wonder whether early treatment should be considered a refinement. That is debatable, and I honestly don't know the answer. But it does put an upper bound on the benefits of starting treatment early, for which I'm grateful.
You make some good points, but thinking about the fact that researchers should correct for multiple-hypothesis testing always makes me a little sad, because this almost never happens. Do you have an example of a study that does this really well?
Also, do you have any input on the hypothesis that treating early is a worthwhile risk?
I've wanted to make this comment for a while now, but I was worried that it wasn't worth posting because it assumes some familiarity with multi-agent systems (and it might sound completely nuts to many people). But since your writing on this topic has influenced me more than anyone else's, I'll give it a go. Curious what you think.
I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off.
Although it might feel off intuitively (it does for me as well), in practice GPT-3 would be just another agent interacting with all the other agents through the global neural workspace (mediated by vision).
Everyone is very worried about transferring their thinking and talking to a piece of software, but to me this worry seems to come from a false sense of agency. It is true that I have no agency over GPT-3, but the same goes for my thoughts. All my thoughts simply appear in my mind, and I have absolutely no say in choosing them. From my perspective there is no difference, other than automatically identifying with thoughts as "me".
There might be a big difference in performance, though. I can systematically test and tune GPT-3. If I don't like how it performs, then I shouldn't let it influence my colony of agents. But if I do like it, then it would be the first time I have added an agent to my mind that has qualities I deliberately chose.
It is very easy to read about non-violent communication (or other topics, like biases) and think to yourself, "that is how I want to write and act", but in practice it is hard to change. By introducing GPT-3 as another agent tuned for this exact purpose, I might be able to make this change orders of magnitude easier.
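To make that concrete, here is a minimal sketch of what such a tuned rewriting agent could look like, assuming the pre-1.0 `openai` Python package; the model name, prompt wording, and the `nvc_rewrite` helper are all illustrative choices, not a tested setup:

```python
import openai  # pip install "openai<1.0"; uses the legacy completions API

openai.api_key = "sk-..."  # your API key

# Illustrative prompt: ask GPT-3 to rephrase a draft in an NVC style.
NVC_PROMPT = (
    "Rewrite the following message in the style of non-violent "
    "communication, preserving its meaning:\n\n"
    "Message: {message}\n\n"
    "Rewritten:"
)

def nvc_rewrite(message: str) -> str:
    """Ask GPT-3 for a non-violent rewording of a draft message."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative model choice
        prompt=NVC_PROMPT.format(message=message),
        max_tokens=200,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

draft = "You never listen to me and you always interrupt."
print(nvc_rewrite(draft))  # review the suggestion before sending anything
```

The workflow matters more than the code: the output is only a suggestion, so I can compare it with my own wording, keep track of where I liked or disliked it, and adjust the prompt over time, which is the systematic testing and tuning I mean above.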
I think that is a fair point; I honestly don't know.
Intuitively, the translation seems like it would help me become less reactive. I can think of two reasons:
But having said that, it is a fair point and I would definitely be open to any solution that would achieve the same result.
I've noticed that some skydivers wear necklaces with a "closing pin." Skydiving really is a lifestyle, and I don't think that many people outside of skydiving would recognize a closing pin or wear it as jewelry.