I don't know if you're purposely being antagonistic, but I'll respond because I try to assume that people are arguing in good faith.
You predict nothing of that sort in the linked comment. The antibody test being negative is a distinct event from immunization.
In the first linked comment, I said that "there's a 1-2% chance here that you've effectively immunized yourself from COVID." In the second linked comment, I clarified that an antibody test would be the predictor of immunization.
I picked 50% because of the comment:
...My rough guess is that there's
My prediction that the antibody test would come back negative: https://www.lesswrong.com/posts/niQ3heWwF6SydhS7R/making-vaccine?commentId=hgwzegWcLEYrZMmZj
You predict nothing of that sort in the linked comment. The antibody test being negative is a distinct event from immunization.
No one took me up on a bet: https://www.lesswrong.com/posts/niQ3heWwF6SydhS7R/making-vaccine?commentId=h8mqNdAypszWYQ7dh.
You offered 50/50 odds for an event that johnswentworth gave a 29% likelihood (the highest estimate anyone gave). It's quite obvious why nobo...
I don't understand the argument about SAD.
Should I conclude from my inability to find any published studies on the Internet testing this question that there is some fatal flaw in my plan that I’m just not seeing?
A simple Google search shows thousands of articles addressing this very solution. The first Google result I found is a paper from 1984 with 2,758 citations: https://jamanetwork.com/journals/jamapsychiatry/article-abstract/493246
...We report our preliminary attempts to modify these depressions by manipulating environmental lighting conditions. We
A simple Google search shows thousands of articles addressing this very solution.
The solution in the paper you link is literally the solution Eliezer described trying, and not working:
As of 2014, she’d tried sitting in front of a little lightbox for an hour per day, and it hadn’t worked.
(Note that the "little lightbox" in question was very likely one of these, which you may notice have mostly ratings of 10,000 lux rather than the 2,500 cited in the paper. So, significantly brighter, and despite that, didn't work.)
It does sound like you misunderstood...
You can buy nasal sprays over-the-counter, while I can't think of a single injectable medicine that you can buy legally without a prescription. I don't think the "stab people in the arm" argument is very strong.
Would you like to make a friendly wager? (Either Dentin, or johnswentworth, or anyone else making their own vaccine.) We can do 50/50, since it's in between our estimates. If you have two positive back-to-back antibody tests within 2 months, you win (assuming you don't actually contract COVID, which I trust you'd be honest about). If not, I win. To start off, I'm willing to put down $100, but I'm happy to go up or down.
My estimate for whether or not I would test positive on a blood test was only about 50%, since blood isn't the primary place that the response is generated. I'm already betting a substantial amount of money (peptide purchases and equipment) that this will be helpful, and I see no reason to throw an additional $50 on a break-even bet here.
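The break-even claim is simple expected-value arithmetic. A quick sketch using the $100 stake offered above and the stated 50% credence (the stake size is purely illustrative):

```python
# Expected value of taking an even-odds bet, from the bettor's perspective.
# Numbers from the thread: ~50% credence in a positive antibody test,
# and a $100 stake offered (used here only for illustration).
p_win = 0.50
stake = 100

ev = p_win * stake - (1 - p_win) * stake
print(ev)  # 0.0 -- break-even at 50% credence, so no incentive to bet
```

At any credence above 50% the same formula goes positive, which is why even-odds offers only attract bettors more confident than the odds imply.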
I would, however, be happy to commit to sharing results, whether they be positive or negative.
... and now it occurs to me that if Lesswrong had a 'public precommitments' feature, I would totally use it.
Vaccines that are brought to clinical trials have a 33.4% approval rate, which seems like a reasonable estimate of the chances that this vaccine works if executed correctly.
I don't follow. Don't vaccines go through trials on cells, mice, and primates before clinical trials? So unless radvac has done similar testing, this 33.4% isn't comparable.
Props to you for taking action here, this is some impressive stuff.
That being said, I'm extremely skeptical that this will work; my belief is that there's a 1-2% chance here that you've effectively immunized yourself from COVID.
What do you believe is the probability of success?
Why are established pharmaceutical companies spending billions on research and using complex mRNA vaccines when simply creating some peptides and adding it to a solution works just as well?
My rough guess is that there's a 75% probability of effectively full immunity, and a 90% probability of severity reduction. This is a pretty well tested and understood vaccine mechanism, and the goal isn't "perfect immunity" so much as "prime the immune system so it doesn't spend a week guessing about what antibodies it needs to combat the virus effectively".
As to why established companies don't do it, I believe it's partially logistics, and largely red tape. Logistics first (though it should be noted that at least some of these could likely be tackled with...
With Sam Altman (CEO of YCombinator) talking so much about AI safety and risk over the last 2-3 months, I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.
Although on second thought, they're aiming for different goals. While MIRI is focused on safety once strong AI occurs, OpenAI is trying to actually speed up the research of strong AI.
Nate Soares says there will be some collaboration between OpenAI and MIRI:
This isn't bad, though I feel like:
This I call "pretending to be Wise". Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses - refusing to sum up evidence - refusing to pass judgment - refusing to take sides - staying above the fray and looking down with a lofty and condescending gaze - which is to say, signaling wisdom by saying and doing nothing - well, that I find particularly pretentious.
would apply to the XKCD example, but not to the people claiming that the Lebanon attacks should've been publicized more than the Paris attacks. I hope I'm not treading too much into political territory here.
Is there a good word for https://xkcd.com/774/? The closest word I can think of is "countersignaling", but it doesn't precisely describe it. I've noticed this sort of behavior a lot on Facebook recently, with the Paris terrorist attacks.
Whenever the conjunction fallacy is brought up, it always irks me, because it doesn't seem like a real fallacy. In the example given by Rationality A to Z, "[...] found that experimental subjects considered it less likely that a strong tennis player would lose the first set than that he would lose the first set but win the match."
There are two possible interpretations of this statement:
1) The fallacious interpretation: P(Lose First Set) < P(Lose First Set and Win Match)
2) P(Lose First Set) < P(Win Match | Lose First Set), which is a valid and n...
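The gap between the two readings can be made concrete with a few numbers (the probabilities below are made up for illustration, not taken from the study):

```python
# A = strong player loses the first set; B = he wins the match.
# Hypothetical numbers chosen only to illustrate the two readings.
p_A = 0.30            # P(lose first set)
p_B_given_A = 0.60    # P(win match | lost first set): strong players often recover

p_A_and_B = p_A * p_B_given_A  # P(lose first set AND win match) = 0.18

# Reading 1: a conjunction can never be more probable than either conjunct,
# so ranking P(A and B) above P(A) is a genuine fallacy.
assert p_A_and_B <= p_A

# Reading 2: the conditional P(B | A) CAN exceed P(A), so a subject who
# parses "lose the first set but win the match" as a conditional isn't
# committing any fallacy.
assert p_B_given_A > p_A
```

So whether the experimental result shows a fallacy depends entirely on which reading the subjects used.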
Looks like it has been addressed in Conjunction Controversy (Or, How They Nail It Down):
...A further experiment is also discussed in Tversky and Kahneman (1983) in which 93 subjects rated the probability that Bjorn Borg, a strong tennis player, would in the Wimbledon finals "win the match", "lose the first set", "lose the first set but win the match", and "win the first set but lose the match". The conjunction fallacy was expressed: "lose the first set but win the match" was ranked more probable than"
I haven't really looked into it, but there was an odd message that he left in his IAMA in regards to Girardian philosophy: http://www.reddit.com/r/IAmA/comments/2g4g95/peter_thiel_technology_entrepreneur_and_investor/ckfn9rj?context=3 . Would love for anyone who knows more to jump in.
As noted in http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/, they haven't even started yet. Also, just replicating a study they cite in their rationality training would be a good step.
...One of the future premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.) -- we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; and o
On CFAR's front page:
In the process, we’re breaking new ground in studying the long-term effects of rationality training on life outcomes using randomized controlled trials.
Despite CFAR's 2-3 year existence (probably longer informally, as well) they have yet to publish a single paper on these "randomized controlled trials". I would advise not donating until they make good on their claims.
edit: I've also made some notes on CFAR and their use of science as an applause light in previous comments.
Thinking about a quote from HPMOR (the podcast is quite good, if anyone was interested):
...But human beings had four times the brain size of a chimpanzee. 20% of a human's metabolic energy went into feeding the brain. Humans were ridiculously smarter than any other species. That sort of thing didn't happen because the environment stepped up the difficulty of its problems a little. Then the organisms would just get a little smarter to solve them. Ending up with that gigantic outsized brain must have taken some sort of runaway evolutionary process, something
Personally, I prefer more produced podcasts, in the style of Serial, Freakonomics, etc., because very few people are good interviewees. I would like to hear more if you could improve the microphone quality - I couldn't distinguish some words, even upon relistening. I'm sure the person behind the HPMOR Podcast would offer more tips if you contacted him.
Hey Dan, thanks for responding. I wanted to ask a few questions:
You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?
"0 to 10, are you glad you came?" This is a biased question, because it presupposes that the person is glad. A comparable negative framing would be "0 to 10, are you dissatisfied that you came?" Would it be possible to anonymize and post the survey questions and data?
...We also sent out a survey earlier this year to 20 randomly selected alumni who had attende
Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective?
I don't believe I had a high level of knowledge on the specific topics they were teaching (behavior change, and the like). I did study some cognitive science in my undergraduate years, and I take issue with the 'science'.
Do you think your experience was typical?
I believe that the majority of people don't get much, if anything, from CFAR's rationality...
I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". I tried it and it didn't work for me. YMMV.
One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like an attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? didn't pay rent? they could have made up scientific-sounding words and it would have ...
(This is Dan from CFAR again)
We have a fair amount of data on the experiences of people who have been to CFAR workshops.
First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response...
(Dan from CFAR here)
Hi cursed - glad to hear your feedback, though I'm obviously not glad that you didn't have a good experience at the CFAR events you went to.
I want to share a bit of information from my point of view (as a researcher at CFAR) on 1) the role of the cognitive science literature in CFAR's curriculum and 2) the typical experience of the people who come to a CFAR workshop. This comment is about the science; I'll leave a separate comment about thing 2.
Some of the techniques that CFAR teaches are based pretty directly on things from the academi...
Cryonics ideas in practice:
"The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique."
Thanks, I made an edit you might not have seen: I mentioned I do have experience with calculus (differential, integral, multi-var) and discrete math (basic graph theory, basic proofs), and am just filling in some gaps since it's been a while since I've done 'math'. I imagine I'll get through the first two books quickly.
Can you recommend some algebra/analysis/topology books that would be a natural progression of the books I listed above?
I'm interested in learning pure math, starting from precalculus. Can anyone give advice on which textbooks I should use? Here's my current list (a lot of these textbooks were taken from MIRI and LW's best textbook list):
I'm w...
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal,...
"from 11PM to 5PM PST on Saturday, Jan. 4th."
Guessing you meant 11AM. Edit: The Eventbrite link says 11AM to 7PM. Which is it?
I wasn't convinced by testimonials from CFAR camps (and as a student, the price deterred me), but with a money-back guarantee, the expected value of spending 6 hours at CFAR seems to outweigh whatever else I would do with the time. Tempted to go.