I would not interpret your case as severe according to this grade. The specification "some assistance usually required" seems to mean that your reaction is so bad you need help eating/washing/using the toilet, which I assume was not the case for you. While staying in bed all day is a "marked reduction" of activity -- especially for a young, healthy person -- there is still considerable room for further worsening before you're at a point where it's life-threatening.
While the wording could be clearer, if my interpretation is correct I would agree this is an OK grading to use.
What about Israel? The R-value is the highest since the first wave despite ~60% vaccinated with, afaict, mostly mRNA vaccines: OurWorldInData Link.
Luckily, hospitalizations/deaths also do not appear to be strongly affected.
What was wrong with the original plan to open source the vaccine to any company that wanted to make it and have them compete to scale up production?
Bill Gates himself addresses this question here: https://youtu.be/Grv1RJkdyqI?t=558
The reasoning as I understand it: Vaccine production is too complicated for open access to work well. There is a significant risk that something goes wrong and the vaccine factory has to shut down, so it is better for that factory to be producing a vaccine they know exactly how to make. Oxford partnering with AstraZeneca ensures th...
Most importantly, this framing is always about drawing contrasts: you're describing ways that your culture _differs_ from that of the person you're talking to. Keep this point in the forefront of your mind every time you use this method: you are describing _their_ culture, not just yours. [...] So, do not ever say something like "In my culture we do not punish the innocent" unless you also intend to say "Your culture punishes the innocent" -- that is, unless you intend to start a fight.
Does this also apply to your own personal culture (whether aspiring or as-is), or "just" the broader context culture?
We're talking about a tool for communicating with many different people with many different cultures, and with people whose cultures you don't necessarily know very much about. So the bit you quoted isn't just making claims about my culture, or even one of the (many) broader context cultures; it's making claims about the correct prior over all such cultures.
But what claims exactly? I intended these two:
"Retreat" in the sense of a spiritual retreat, but with the topic of rationality instead of meditation or spirituality. Following the same principle as, e.g. the Czech EA retreat.
"Rationality" as it is generally understood on LessWrong. So this is aimed at people who aspire to be more rational and want to interact with like-minded people.
[Note: mostly just me trying to order my thoughts, kind of hoping someone can see and tell me where my confusion comes from]
So the key insight regarding suffering seems to be that pain is not equal to suffering. Instead there is a mental motion (flinching away from pain) that produces (or is equal to?) suffering. And whereas most people see pain as intrinsically bad, Looking allows you to differentiate between the pain and the flinching away, realizing that pain in and of itself is not bad. It also allows you to get rid of the flinching away, thus eliminat...
I haven't really understood where the fakeness in the framework is. And the other comments also don't seem to acknowledge that it is a fake framework, which I interpret as people taking this framework at face value to be true or real. I suspect I haven't quite understood what is meant by "fake framework".
I'm currently seeing two main ways in which I can make the fakeness make sense to me:
I haven't really understood where the fakeness in the framework is.
Well, by my model of epistemic hygiene, it's therefore especially important to label it "fake" as you step into using it. Otherwise you risk forgetting that it's an interpretation you're adding, and when you can't notice the interpretations you're adding anymore then you have a much harder time Looking at what's true.
In my usage, "fake" doesn't necessarily mean "wrong". It means something more like "illusory". T...
Is there a difference between what you are describing and simply having a more or less nuanced view on the matter? It seems like you're confirming exactly what Paul Graham describes. You've made your identity as a mathematician smaller and are thus no longer threatened by people expressing certain opinions on math. But there are still things fundamental to your identity as a mathematician that need protecting. If someone says "math is useless", does that not evoke a feeling of needing to defend math?
Could you paint a more detailed picture of what you mean by happiness? There is a wide range of things that can be called happiness, and I assume you only mean some of them. In particular, I don't think you mean the happiness you feel when you get a reward, because that's what we are actually optimized to achieve.
how will writing it again change anything?
Why should anyone answer this question? Kaj has already answered it above, but you don't understand the answer. How will writing it again change anything? You still won't understand it. This request for an explanation makes no sense whatsoever. It's not that you understand the answer, have some complaint, and want it to be better in some way; you just won't understand.
You claim you want to be told when you're mistaken, but you completely dismiss any and all arguments. You're just like "thes...
I suspect you may be thinking of the thing where people prefer e.g. a (A1) 100% chance of winning 100€ (how do I make a dollar sign?) to a (A2) 99% chance of winning 105€, but at the same time prefer (B2) a 66% chance of winning 105€ to (B1) a 67% chance of winning 100€. This is indeed irrational, because it means you can be exploited. But depending on your utility function, it is not necessarily irrational to prefer both A1 to A2 and B1 to B2.
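To make the inconsistency explicit, here is the arithmetic (normalizing u(0) = 0): preferring A1 to A2 means u(100) > 0.99 · u(105), while preferring B2 to B1 means 0.66 · u(105) > 0.67 · u(100), i.e. u(100) < (0.66/0.67) · u(105) ≈ 0.985 · u(105). Holding both preferences at once would require 0.99 · u(105) < u(100) < 0.985 · u(105), which is impossible for any positive u(105). Preferring A1 and B1 together imposes no such contradiction, so a sufficiently risk-averse utility function can rationalize that pair.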
The current topic is epistemology, not the color of the sky, so you don't get to gloss over epistemology as you might in a conversation about some other topic.
So because the discussion in general is about epistemology, you won't accept any argument whose epistemology isn't specified, even if the topic of that argument doesn't pertain directly to epistemology; but if the discussion were about something else, you would just engage with the arguments regardless of the epistemology others are using?
That seems… unlikely to work well...
Given some data and multiple competing hypotheses that explain the data equally well, the laws of probability tell us that the simplest hypothesis is the likeliest. We call this principle of preferring simpler hypotheses Occam's Razor. Moreover, using this principle works well in practice. For example, in machine learning, a simpler model will often generalize better. Therefore I know that Occam's Razor is "any good". Occam's Razor is a tool that can be used for problems as described by the italicized text above. It makes no claims ...
how do you know Occam's Razor is any good?
Imo chapter 28 of this book gives a good sense of why Occam's Razor is good. I'll try to explain it briefly here as I understand it.
Suppose we have a class of simple models with three free binary parameters, and a class of more complex models with ten free binary parameters. We also have some data, and we want to know which model we should choose to explain the data. A priori, each of the 2³ = 8 parameter sets for the simple model has a probability of 1/8 of being the best one, whereas for the complex m...
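To make the arithmetic concrete, here is a minimal numeric sketch of that argument. The assumption that exactly one parameter set in each class fits the data perfectly (and the rest not at all) is my simplification for illustration, not from the book:

```python
# Minimal sketch of the Bayesian Occam's razor argument. Illustrative
# assumption: exactly one parameter set in each model class fits the
# data perfectly (likelihood 1) and all others not at all (likelihood 0).

def posterior_odds(n_simple_params=3, n_complex_params=10):
    settings_simple = 2 ** n_simple_params    # 8 parameter sets
    settings_complex = 2 ** n_complex_params  # 1024 parameter sets

    # Each class spreads its prior evenly over its parameter sets, so
    # the evidence for a class is (prior mass of the fitting set) * 1.
    evidence_simple = 1 / settings_simple
    evidence_complex = 1 / settings_complex

    # With equal priors on the two classes, the posterior odds equal
    # the ratio of evidences.
    return evidence_simple / evidence_complex

print(posterior_odds())  # 128.0: the simple class is favored 128:1
```

The complex class pays for its flexibility by spreading its prior mass thinner, which is exactly the automatic penalty on complexity.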
OK, if I'm interpreting this correctly, "consistency" could be said to be the ability to make a plan and follow through with it, barring new information or unexpected circumstances. So the actions the CDT agent has available aren't just "say yes" and "say no" but also "say yes, get into the car, and bring the driver 1000$ once you are in the city", interpreting all of that as a single action.
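A toy version of this "composite action" reading (the utilities, action names, and the perfect-predictor assumption are mine, just for illustration, not any canonical formulation):

```python
# Toy model of Parfit's hitchhiker with composite actions, as described
# above. Utilities and the perfect-predictor assumption are illustrative.

DIE = -1_000_000       # left in the desert
RIDE_AND_PAY = -1_000  # reach the city, minus the 1000$ payment

def outcome(action):
    # The driver perfectly predicts whether you will actually pay and
    # only gives you a ride if she predicts payment; a mere promise
    # (or a lie) doesn't help.
    if action == "say yes, ride to the city, then pay 1000$":
        return RIDE_AND_PAY
    return DIE

actions = [
    "say no",
    "say yes but never pay",
    "say yes, ride to the city, then pay 1000$",
]

# Treating the whole plan as one action, even a CDT agent picks the
# plan that includes paying.
print(max(actions, key=outcome))
```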
However, in that case it is not necessary to distinguish between detecting lies and simulating.
How is detecting lies fundamentally different from simulation? What is a lie? If I use a memory charm on myself such that I honestly believe I will pay 1000$, but only until I arrive in the city, would that count as a lie? Isn't the whole premise of the hitchhiker problem that the driver cannot be tricked? Aren't you just saying "Ah, but if the driver can be tricked in this way, this type of decision theorist can trick her!"?
Is anything known about how many people who weren't already rationalists have been inspired by HPMOR to make a serious effort at being rational and changing the world, and (even harder to find out) what they have actually done as a result?
I have been keeping track of which people have read at least parts of HPMOR either directly or indirectly because of my recommendation, so I think I can give at least a rough idea of what the answer may look like.
All of this is as far as I know; I haven't directly asked many of the people about this.
Including my
Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure.
It may be worth collaborating with the EA community on this, since there is considerable overlap, both in participants and in the kinds of surveys people may be interested in.
I'd consider putting FRI closer to Effective Altruism, since they are also concerned with suffering more generally.
Do you have criteria for including fiction? Other relevant fiction I am aware of:
A Bluer Shade of White: About being able to enhance your own intelligence, but less about AI and more about transhumanism.
The Metropolitan Man: Also less about AI, more about existential risk.
Crystal trilogy: Roughly human-level AI as the main character.
Also, Vernor Vinge is spelled with an 'o'.
I think both private non-anonymous reactions and public anonymous reactions are likely to be valuable, whereas public non-anonymous reactions could be potentially harmful and private anonymous reactions seem mostly useless.
"I've seen this" coming from the parent poster and "nice post" are valuable feedback for the author of the post/comment, but less useful information for other people so it would best be private and non-anonymous.
Reactions that say something about the content of a comment, like "interesting" or "con
StackExchange also has a minimum reputation requirement for votes to count. When you try to vote on something, it displays a box saying the vote was recorded, but doesn't change the publicly displayed vote count.
What I don't like about the way it is implemented on StackExchange is that it seems you cannot take back a vote until you have enough reputation for your votes to count.
Besides protecting the vote count from being distorted by newcomers, I think the main advantage is that it makes it much harder to farm a bunch of karma with sockpuppet accounts.
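A minimal sketch of how such a reputation gate might work (the names and the threshold value are made up for illustration; this is not StackExchange's actual implementation):

```python
# Reputation-gated voting as described above: every vote is recorded,
# but only votes from established users move the public score.

MIN_REP_TO_COUNT = 15  # illustrative threshold

class Post:
    def __init__(self):
        self.votes = {}  # voter id -> +1 / -1 (every vote is recorded)

    def cast_vote(self, voter_id, value):
        self.votes[voter_id] = value

    def displayed_score(self, reputation):
        # Low-rep votes are stored but silently ignored, which also
        # blunts karma farming with fresh sockpuppet accounts.
        return sum(v for voter, v in self.votes.items()
                   if reputation.get(voter, 0) >= MIN_REP_TO_COUNT)

reputation = {"alice": 120, "newbie": 1}
post = Post()
post.cast_vote("alice", +1)
post.cast_vote("newbie", +1)   # recorded, but doesn't change the score
print(post.displayed_score(reputation))  # -> 1
```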
From what I've read so far, I think Information Theory, Inference and Learning Algorithms does a rather good job of conveying the intuitions behind topics.
It reminds me a lot of the "mastermind group" thing, where we had weekly hangouts to talk about our goals etc. The America/Europe group eventually petered out (see here for a retrospective by regex); the Eurasia/Australia group appears to be ongoing, albeit with only two (?) participants.
There have also been online reading groups for the Sequences, iirc. I don't know how those went, though.
forums, wikis, open source software
I see a few relevant differences:
I think this is a great idea, likely to have positive value for participants. So, applying the Hamming question to this, I think two things are important.
To survive and to increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.
But, as you also point out, evolution "selects on the criterion of ingroup reproductive fitness", which does select for a specific type of mind and ethics, especially if you also have the constraint that the agent should be intelligent. As far as I am aware, all of the animals considered the most intelligent are social animals (octopi may be an excep...
There is also Bertrand, which is organic. Judging by the ingredients, it would probably be pretty tasty, but it costs 9€ per day.
Slate Star Codex also wrote about this: http://slatestarcodex.com/2016/01/11/schizophrenia-no-smoking-gun/
Yeah, the estimates will always be subjective to an extent, but whether you choose historic figures, or all humans and fictional characters that ever existed, or whatever, shouldn't make a huge difference to your results, because in Bayes' formula the ratio P(C|E)/P(C)¹ should always be roughly the same, regardless of filter.
¹ C: coin exists
E: person existed
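For reference: by Bayes' theorem, P(C|E)/P(C) = P(E|C)/P(E), so the claim is that the likelihood ratio P(E|C)/P(E), i.e. how much likelier a person's existence becomes once you assume the coin exists, is roughly independent of which reference class of people you start from.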
The AI analogue would be: if the AI has the capacity to wirehead itself, it can make itself enter the color-perception subroutines. Whether something new is learned depends on the remaining brain architecture. I would say that, in the case of humans, it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that for some people with strong visualization (in a broad sense) abilities, it is possible to know what an experience feels like without experiencing it first hand, by synthesizing a new experience from pr...
From the cover text of How to Build a Brain it seems the main focus is on the architecture of Spaun, and I suspect it does not actually give a proper introduction to other areas of computational neuroscience. That said, I wouldn't be surprised if it is the most enjoyable book on the topic that you can find. I have read Computational Neuroscience by Hanspeter Mallot, which is very short, weird, and not very good. I'm currently about halfway through Theoretical Neuroscience by Dayan and Abbott. My impression is that it might be decent for people with a s...
Human Learning and Memory, by David A. Lieberman (2012)
A well-written overview of current knowledge about human learning and memory. Of special interest:
The preprint for the article on cognitive decline due to long COVID was shared in the LessWrong Telegram group last October. I looked over it at the time and wrote down some notes, which I will reproduce here. Note that I haven't looked through the final version of the article to check whether things still match up.