I intended to bring it up as plausible, but not explicitly say that I thought it was p>0.5 (it wasn't a firm belief, and I didn't want others to make any Bayesian update). I wanted to read arguments about its plausibility. (Some pretty convincing arguments: SBF's high level of luxury consumption, and that he took potentially all of the Alameda shares away from Alameda's EA cofounder, Tara Mac Aulay.)
If it is plausible, even if it isn't p>0.5, then it's possible SBF wasn't selfish, in which case that's a reason for EA to focus more on inculcating...
Someone on sneerclub said that he is falling on his sword to protect EA's reputation; I don't have a good counterargument to that.
This conversation won't go over well in court, so if he is selfish, then this conversation probably reflects mental instability.
The idea that he was trying to distance himself from EA to protect EA doesn't hold together because he didn't actually distance himself from EA at all in that interview. He said ethics is fake, but it was clear from context that he meant ordinary ethics, not utilitarianism.
"keeping (instead of a list of ideas for projects) a list"
This may be implied, but it could be helpful to be explicit if you mean "literally keep a list, such as in an online document and/or a physical document".
I would take a look at "World Systems" theory as an account of how the modern balances of power and wealth developed.
Ironically, World Systems Theory is discredited in economics departments for reasoning similar to this criticism of Diamond: both ignore the established practices of an academic field, and both explain things that never happened.
Is it more than 30% likely that in the short term (say 5 years), Google isn't wrong? If you had applied massive scale to the AI algorithms of 1997, you would have gotten better performance, but would the result have been economically useful? Is it possible we're in a similar situation today, where the real-world applications of AI are already good enough, and additional performance is worth less than the money spent on extra compute? (Self-driving cars are perhaps the closest example: clearly they would be economically valuable, but what if the compute to train them cost 20 billion US dollars? Your competitors will catch up eventually; could you make enough profit in the interim to pay for that compute?)
How slow does it have to get before a quantitative slowing becomes a qualitative difference? AI Impacts (https://aiimpacts.org/price-performance-moores-law-seems-slow/) estimates that price/performance used to improve by an order of magnitude (base 10) every 4 years, but that it now takes 12 years.
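To make the estimate concrete, here's a quick check of the annual improvement factor it implies, assuming "an order of magnitude every k years" means a constant per-year multiplier of 10^(1/k):

```python
# Annual price/performance improvement implied by "10x every k years".
for years_per_10x in (4, 12):
    annual_factor = 10 ** (1 / years_per_10x)
    print(f"10x every {years_per_10x} years -> {annual_factor:.2f}x per year")
# 10x every 4 years  -> 1.78x per year
# 10x every 12 years -> 1.21x per year
```

Compounded over five years, 1.78x/year gives roughly 18x, while 1.21x/year gives only about 2.6x, which is one way to quantify when the slowing starts to feel qualitative.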
With regard to "How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?"
I think a possibly better question might be "How should you develop intellectually, in order to become the kind of person who would have considered both geocentrism and heliocentrism plausible with probability less than 0.5 and greater than 0.1 during the Copernican revolution?"
edit: The above may have caused confusion; here is an alternative phrasing of the same idea:
who would have considered geoce...
I disagree. The point of the post is not that these theories were on balance equally plausible during the Renaissance. It's written so as to overemphasize the evidence for geocentrism, but that's mostly to counterbalance standard science education.
In fact, one of my key motivations for writing it -- and a point where I strongly disagree with people like Kuhn and Feyerabend -- is that I think heliocentrism was more plausible during that time. It's not that Copernicus, Kepler, Descartes, and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.
Any idea why?
Is it possibly a deliberate strategy to keep average people away from the intellectual movement (which would result in increased intellectual quality)? If so, I as an average person should probably respect this desire and stay away.
Possibly there should be two communities for intellectual movements: one with a thickly walled garden to develop ideas among quality intellectuals, and a separate one with a thinly walled garden to convince a broader audience and drive adoption of those ideas?
Your comment is quite clear and presents an important idea, thank you.
Why is the original comment about coffee in the presentation lacking in context? Is it deliberately selectively quoted to have less context in order to be provocative?
I think this is honest and I'm thankful to have read it.
Probably I'm biased and/or stupid, but with regard to Slavoj's comment "Coffee without cream is not the same as coffee without milk." [the article's author requests being charitable to this comment], the most charitable I can convince myself to be is "maybe this postmodernist ideology is an ideology specifically designed to show how ideology can be stupid - in this way, postmodernists have undermined other stupid ideologies by encouraging deconstruction of ideology t
I think this might be confounded: the kind of people with sufficient patience or self-discipline or something (call it factor X) are the kind of people who both read the sequences in full and produce quality content. (This would cause a correlation between the two behaviors without the sequences necessarily causing improvement.)
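A toy simulation of that confounding story, assuming a latent factor X drives both behaviors while neither causes the other; the conditional probabilities still come out correlated:

```python
import random

random.seed(1)
N = 10_000
factor_x = [random.gauss(0, 1) for _ in range(N)]           # latent patience/self-discipline
read_seq = [x + random.gauss(0, 1) > 1 for x in factor_x]   # behavior 1: read the sequences
quality = [x + random.gauss(0, 1) > 1 for x in factor_x]    # behavior 2: produce quality content
                                                            # (no causal link between 1 and 2)
p_read = sum(q for r, q in zip(read_seq, quality) if r) / sum(read_seq)
p_noread = sum(q for r, q in zip(read_seq, quality) if not r) / (N - sum(read_seq))
print(f"P(quality | read sequences)     = {p_read:.2f}")
print(f"P(quality | did not read them)  = {p_noread:.2f}")  # noticeably lower, despite zero causation
```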
Here's a post by Scott Sumner (an economist with a track record) about how taxing positional goods does make sense:
The main problem with taxing positional goods is that the consumption just moves to another country.
I don't have an economics degree, but:
1) governments could cooperate to tax positional goods (such as with a treaty)
2) governments could repair the reduced incentive to work hard by lowering taxes on the rich
3) these 2 would result in lower prices for non-positional goods
4) governments could offset the lost tax revenue by lowering welfare spending, which (3) makes possible without leaving recipients worse off
The flaw I can think of (there are probably others) is that workers in positional goods industries might lose their jobs.
What other flaws are there or why isn't this happening already?
Regarding 'relax constraints that make real resources artificially scarce': why not both your idea and the OP's idea to tax positional goods? In the long run, the earth/our future light cone really is only so big, so don't we need any and all possible solutions to make a utopia?
Is there any product like an adult pacifier that is socially acceptable to use?
I am struggling with the self-control not to interrupt people, and I am afraid for my job.
EDIT: In the meantime (or long-term, if it works), I'll use less caffeine (currently 400mg daily) to see if that helps.
Efficient charity: you don't need to be an altruist to benefit from contributing to charity
Effective altruism rests on two philosophical ideas: altruism and utilitarianism.
In my opinion, even if you're not an altruist, you might still want to use statistics to learn about charity.
Some people believe that they have an ethical obligation to cause net 0 suffering. Others might believe they have an ethical obligation to cause only an average amount of suffering. In these cases, in order to reduce suffering to an acceptable level, efficient charity might be ...
Disclaimer: I may not be the first person to come up with this idea
What if, for dangerous medications (such as, possibly, 2,4-dinitrophenol (DNP)), the medication were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company / some government agency / an independent security company)?
Could this be useful to prevent overdoses?
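For what it's worth, the "time-dependent key from a trusted source" part already resembles TOTP one-time passwords. A minimal sketch, assuming an HOTP/TOTP-style scheme (RFC 4226/6238) with a secret shared between the dispenser and the supervising party; the names and the 6-hour window are hypothetical:

```python
import hashlib
import hmac
import struct
import time

STEP_SECONDS = 6 * 60 * 60  # hypothetical: one dosing window every 6 hours

def dose_code(secret: bytes, now: float) -> str:
    """Derive a 6-digit code for the current dosing window (HOTP/TOTP-style)."""
    counter = struct.pack(">Q", int(now // STEP_SECONDS))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def dispenser_accepts(received_code: str, secret: bytes) -> bool:
    """The dispenser recomputes the code locally and releases at most one dose per window."""
    return hmac.compare_digest(received_code, dose_code(secret, time.time()))
```

The supervised location computes the code and sends it to the patient or device; the dispenser needs no network connection, only a clock and the shared secret.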
Disclaimer: Not remotely an expert at biology, but I will try to explain.
One can think of the word "gene" as having multiple related uses.
Use 1: "Genotype". Even if we have different-colored hair, we likely both have the same "gene" for hair, which could be considered shared with chimpanzees. If you could rewrite DNA nucleobases, you could change your hair color without changing the gene itself; you would merely be changing the "gene encoding". The word "genotype" refers to a "function" which takes ...
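The comment is cut off, but assuming the truncated sentence meant "a function which takes an encoding and returns a trait", here is a toy illustration of the gene-vs-encoding distinction; the sequences are hypothetical placeholders, not real DNA:

```python
# The "gene" is the mapping itself; each key is one possible "gene encoding".
# Rewriting nucleobases switches keys without changing the gene.
HAIR_COLOR_GENE = {
    "ATCG...A": "brown",  # hypothetical encoding
    "ATCG...G": "blond",  # hypothetical encoding
}

def phenotype(encoding: str) -> str:
    """The genotype-as-function view: encoding in, trait out."""
    return HAIR_COLOR_GENE.get(encoding, "unknown")

print(phenotype("ATCG...A"))  # brown
print(phenotype("ATCG...G"))  # blond -- same gene, different encoding, different trait
```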
Thank you. I initially wrote my function with the idea of making it one (of many) "lower bound"(s) on how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought "this works fine as a general theory, not just a lower bound".
Thank you for helping me think more clearly.
"How dire [do] the real world consequences have to be before it's worthwhile debating dishonestly"?
~~My brief answer is:~~
One lower bound is:
If the amount that rationality affects humanity and the universe is decreasing over the long term. (Note that if humanity is destroyed, the amount that rationality affects the universe probably decreases).
~~This is also my answer to the question "what is winning for the rationalist community"?~~
~~Rat...~~
If the author could include a hyperlink for Richard Wiseman where he is first mentioned, it might prevent readers from being confused and not realizing that you are describing actual research. (I was confused in this way for about half of the article.)
I wonder if there's a chance of the program that always cooperates winning or tying.
If all the other programs are extremely well-written, they will all cooperate with the program that always cooperates (or else they aren't extremely well-written, or they are violating the rules by attempting to trick other programs).
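A minimal sketch of the setting, assuming a tournament format where each entrant is a function that sees its opponent's source code and returns "C" (cooperate) or "D" (defect); the bot names and the source-string check are made up for illustration:

```python
import inspect

def cooperate_bot(opponent_source: str) -> str:
    """Always cooperates, no matter what it is shown."""
    return "C"

def cautious_bot(opponent_source: str) -> str:
    """Cooperates only if the opponent visibly always cooperates (a crude source check)."""
    return "C" if 'return "C"' in opponent_source else "D"

# Shown cooperate_bot's source, cautious_bot cooperates with it:
print(cautious_bot(inspect.getsource(cooperate_bot)))  # C
```

cautious_bot's string check is deliberately crude; a more serious entrant might simulate the opponent instead of pattern-matching its source.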
RE "Should we then draw different conclusions from their experiments?"
I think, depending on the study's hypothesis and random situational factors, a study like the first can be in the garden of forking paths. A study which stops at n=100 when it reaches a predefined statistical threshold isn't guaranteed to have also reached that threshold had it kept running until n=900 (see the simulation sketch below).
Suppose a community of researchers is split in half (this is intended to match the example in this article but increase the imagined sample size of studies to more than 1 st...
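A small simulation of the optional-stopping point above, assuming a fair coin (null true) and a two-sided normal-approximation test: studies that stop at n=100 upon reaching p<0.05 often would not have been significant at n=900.

```python
import math
import random

def p_value(heads: int, n: int) -> float:
    """Two-sided normal-approximation p-value against a fair-coin null."""
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(z / math.sqrt(2))

random.seed(0)
stopped_early = still_significant = 0
for _ in range(2_000):
    heads = 0
    heads_at_100 = 0
    for i in range(1, 901):
        heads += random.random() < 0.5
        if i == 100:
            heads_at_100 = heads
    if p_value(heads_at_100, 100) < 0.05:  # "stop at n=100 if the threshold is reached"
        stopped_early += 1
        if p_value(heads, 900) < 0.05:
            still_significant += 1

print(stopped_early, still_significant)  # most early "hits" are not significant at n=900
```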