All of Ilverin the Stupid and Offensive's Comments + Replies

RE "Should we then draw different conclusions from their experiments?"

I think, depending on the study's hypothesis and random situational factors, a study like the first can be in the garden of forking paths. A study that stops at n=100 because it has crossed a predefined statistical threshold isn't guaranteed to still be past that threshold had it kept running until n=900.

Suppose a community of researchers is split in half (this is intended to match the example in this article but increase the imagined sample size of studies to more than 1 st... (read more)
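Here is a minimal simulation sketch of the optional-stopping point above (my illustration, not part of the original comment; the sample sizes and threshold are just the ones from the example): under a true null effect, a study that looks "significant" at an early peek often no longer does once the full sample is in.

```python
import random
import statistics

def run_study(max_n, stop_at, z_threshold=1.96):
    """Simulate one study of a true-null effect, 'peeking' once at n=stop_at."""
    data = [random.gauss(0, 1) for _ in range(max_n)]

    def significant(sample):
        # One-sample z-statistic against a true mean of zero
        z = statistics.mean(sample) / (statistics.stdev(sample) / len(sample) ** 0.5)
        return abs(z) > z_threshold

    return significant(data[:stop_at]), significant(data)

random.seed(0)
results = [run_study(900, 100) for _ in range(2000)]
early_hits = [r for r in results if r[0]]
still_significant = sum(1 for r in early_hits if r[1])
print(f"'Significant' at n=100: {len(early_hits)} of 2000; "
      f"still significant at n=900: {still_significant}")
```

With these (arbitrary) settings, roughly 5% of null studies clear the threshold at the n=100 peek, and most of those are typically no longer past it at n=900.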

I intended to bring it up as plausible, but not explicitly say that I thought it was p>0.5 (because it wasn't a firm belief and I didn't want others to do any Bayesian update). I wanted to read arguments about its plausibility. (Some pretty convincing arguments are SBF's high level of luxury consumption and the fact that he took away potentially all Alameda shares from the EA cofounder of Alameda, Tara Mac Aulay).

If it is plausible, even if it isn't p>0.5, then it's possible SBF wasn't selfish, in which case that's a reason for EA to focus more on inculcating... (read more)

Someone on sneerclub said that he is falling on his sword to protect EA's reputation; I don't have a good counterargument to that.

This conversation won't go over well in court, so if he is selfish, it probably reflects mental instability.

0Ben Pace
I'm not trying to be rude, but if you can't see a perspective from which that is nonsensical, then perhaps you are not strong enough in the relevant way to read what bullies say about those that they hate.
3Nanda Ale
I see a lot of the EA discussion is worried about the public consequences of SBF using EA to justify bad behavior. What if people unfairly conclude EA ideas corrupt people's thinking and turn them into SBF-alikes? And some concern that EA genuinely could do this. If you think that is the big danger, then I understand how you might conclude SBF saying "I never believed the EA stuff, it was all an act" is better for EA. It's a valid thing to worry about (especially about your own thinking), but it's online rationalists who are worried about this. Looking at this as an outsider, I think this is missing the forest for the trees. Much of the public starts from the assumption that rich people giving to charities is all a big scam, generally. Just a means to enrich or empower themselves. EA's biggest donor admitting it was a scam all along is not protecting EA, it's confirming this model in everyone's minds. They knew it all along! Every future EA donor will be trivially pattern-matched to have the same motives as SBF. I enjoy reading discussions of EA's role in suboptimal Kelly bet-size conclusions. But big picture, that is not the biggest danger by far.

The idea that he was trying to distance himself from EA to protect EA doesn't hold together because he didn't actually distance himself from EA at all in that interview. He said ethics is fake, but it was clear from context that he meant ordinary ethics, not utilitarianism.

"keeping (instead of a list of ideas for projects) a list"

This may be implied, but it might be helpful to be explicit if you mean "literally keep a list, such as in an online document and/or a physical document".

2Pattern
I think that higher precision isn't always needed (or used efficiently).
3Nathan Helm-Burger
That's already what TPUs do, basically

I would take a look at “World Systems” theory as an idea behind the development of the modern balances of power and wealth.

Ironically, World Systems Theory is discredited in economics departments for reasoning similar to this criticism of Diamond: both ignore the established practice of an academic field, and both explain things that never happened.

Is it more than 30% likely that in the short term (say 5 years), Google isn't wrong? If you applied massive scale to the AI algorithms of 1997, you would get better performance, but would your result be economically useful? Is it possible we're in a similar situation today, where the real-world applications of AI are already good enough and additional performance is worth less than the money spent on extra compute? (Self-driving cars are perhaps the closest example: clearly they would be economically valuable, but what if the compute to train them would cost 20 billion US dollars? Your competitors will catch up eventually; could you make enough profit in the interim to pay for that compute?)

8Andy Jones
I'd say it's at least 30% likely that's the case! But if you believe that, you'd be pants-on-head loony not to drop a billion on the 'residual' 70% chance that you'll be first to market on a world-changing trillion-dollar technology. VCs would sacrifice their firstborn for that kind of deal.

How slow does it have to get before a quantitative slowing becomes a qualitative difference? AI Impacts (https://aiimpacts.org/price-performance-moores-law-seems-slow/) estimates that price/performance used to improve by an order of magnitude (base 10) every 4 years, but that it now takes 12 years.
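To make the gap concrete (my own arithmetic, not a figure from the AI Impacts post), compare what each trend buys over the same 12-year span:

```latex
\underbrace{10^{12/4}}_{\text{old trend: }10\times\text{ per 4 years}} = 1000\times
\qquad\text{vs.}\qquad
\underbrace{10^{12/12}}_{\text{new trend: }10\times\text{ per 12 years}} = 10\times
```

A factor-of-100 shortfall over 12 years is the kind of gap where a "merely quantitative" slowdown starts to look qualitative.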

With regard to "How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?"

I think a possibly better question might be "How should you develop intellectually, in order to become the kind of person who would have considered both geocentrism and heliocentrism plausible with probability less than 0.5 and greater than 0.1 during the Copernican revolution?"

edit: May have caused confusion, alternative phrasing of same idea:

who would have considered geoce... (read more)

I disagree. The point of the post is not that these theories were on balance equally plausible during the Renaissance. It's written so as to overemphasize the evidence for geocentrism, but that's mostly to counterbalance standard science education.

In fact, one of my key motivations for writing it -- and a point where I strongly disagree with people like Kuhn and Feyerabend -- is that I think heliocentrism was more plausible during that time. It's not that Copernicus, Kepler, Descartes, and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.

Any idea why?

Is it possibly a deliberate strategy to keep average people away from the intellectual movement (which would result in increased intellectual quality)? If so, I, as an average person, should probably respect this desire and stay away.

Possibly there should be 2 communities for intellectual movements: one community with a thickly walled garden to develop ideas with quality intellectuals, and a separate community with a thinly walled garden in order to convince a broader audience to drive adoption of those ideas?

1scarcegreengrass
Yes, I think a big aspect of postmodernist culture is speaking in riddles because you want to be interacting with people who like riddles. I don't think that the ability to understand a confusingly-presented concept is quite the same thing as intellectual quality, however. I think it's a more niche skill.
4Said Achmiz
You can't understand Zizek, or Zizek's "strategy", if you approach it in so straightforward a way. And that's the point. It's not "average" people who are being "kept away", it's enlightened people who are being filtered for. By "enlightened", of course, I do not mean the Zen notion, or any such thing, nor do I even use the term normatively; I only mean that those who have independently had the experiences and reached the understandings necessary to apprehend what Zizek is saying, will be able to do so. That is the filter. If Zizek explains to you in plain language what he is saying, you may understand it; but that is counterproductive, because if you need his points explained to you in plain language then you are not the sort of person he is speaking to. Conversely, if you listen to Zizek and do not understand him, you may later have the relevant experiences and reach the relevant understandings, and apprehend his points retroactively. Many things work this way.

Your comment is quite clear and presents an important idea, thank you.

Why is the original comment about coffee in the presentation lacking in context? Is it deliberately selectively quoted to have less context in order to be provocative?

2SilentCal
Why speak in riddles? Because sometimes solving a puzzle teaches you more than being told the solution. As an observation about coffee, Zizek's statement is true in its way but not especially useful. His broader point is "you should think about history and context more." So he presents you with two physically identical items, coffee without milk and coffee without cream, so that you can be surprised by noticing that there's potentially an important difference, and that surprise will make you update towards considering context and history as well as present physical makeup.
9ChristianKl
Zizek sounds just as ridiculous when you hear him speak in context.
2quanticle
I think it's more that, on a slide you necessarily have to remove context in order to keep the presentation legible (both metaphorically and literally) for the audience. Walls of text in tiny print don't make for good slides.

I think this is honest and I'm thankful to have read it.

Probably I'm biased and/or stupid, but with regard to Slavoj's comment "Coffee without cream is not the same as coffee without milk." [this article's author requests being charitable to this comment], the most charitable I can convince myself to be is "maybe this postmodernist ideology is an ideology specifically designed to show how ideology can be stupid - in this way, postmodernists have undermined other stupid ideologies by encouraging deconstruction of ideology t

... (read more)
9quanticle
It's an illustration of postmodernism's insistence on looking at the context of a thing in addition to the thing itself. A modernist would look at coffee-without-cream and coffee-without-milk and say, "So what, they're both black coffee, right?" But a postmodernist would say, "Yes, they're both black coffee, but the choices that led to each being black coffee were different." That history, that context, is different between the two coffees, and thus they're different. Another way of thinking about it is, "Is an (ex-)Jewish atheist different from an (ex-)Catholic atheist?"

I think this might be confounded: the kind of people with sufficient patience or self-discipline or something (call it factor X) are the kind of people who both read the Sequences in full and produce quality content. (This would cause a correlation between the two behaviors without the Sequences necessarily causing improvement.)

Here's a post by Scott Sumner (an economist with a track record) about how taxing positional goods does make sense:

http://www.themoneyillusion.com/?p=26694

0Benquo
Paul's arguing for punitive taxes on positional goods for the sake of reducing wasteful consumption. I think Sumner's mostly trying to argue that the social costs of taxing the consumption of the rich are low. I agree with the latter point, for roughly the same reason I disagree with the former; I think wasteful conspicuous consumption's a side-effect of limited opportunities for more substantive consumption or investment.

The main problem with taxing positional goods is that the consumption just moves to another country.

I don't have an economics degree, but:

1) governments could cooperate to tax positional goods (such as with a treaty)

2) governments could repair the reduced incentive to work hard by lowering taxes on the rich

3) these 2 would result in lower prices for non-positional goods

4) governments could adjust for the lost tax revenue by lowering welfare spending, which (3) makes possible because cheaper non-positional goods stretch each welfare dollar further

The flaw I can think of (there are probably others) is that workers in positional goods industries might lose their jobs.

What other flaws are there or why isn't this happening already?

0ChristianKl
https://en.wikipedia.org/wiki/Luxury_tax Bush senior did pass such a tax but the Clinton administration allowed it to be repealed.

Regarding 'relax constraints that make real resources artificially scarce' - why not both your idea and the OP's idea to tax positional goods? In the long run the earth/our future light cone really is only so big, so don't we need any and all possible solutions to make a utopia?

1Benquo
Attention is scarce. I wouldn't lobby against such a tax, but I would advise people not to put energy into advocating for one, because I think it's an especially inefficient solution.

Is there any product like an adult pacifier that is socially acceptable to use?

I am struggling with self-control to not interrupt people and am afraid for my job.

EDIT: In the meantime (or long-term if it works) I'll use less caffeine (currently 400mg daily) to see if that helps.

2MrMind
How about a lollipop? It's almost the same thing, and since Inspector Kojak it's become much more socially acceptable, even cool, if you pull it off well. If you are a woman, though, you'll likely suffer some sexual objectification (what a surprise!).
5Lumifer
It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.
8SithLord13
Could chewing gum serve as a suitable replacement for you?

Efficient charity: you don't need to be an altruist to benefit from contributing to charity

Effective altruism rests on two philosophical ideas: altruism and utilitarianism.

In my opinion, even if you're not an altruist, you might still want to use statistics to learn about charity.

Some people believe that they have an ethical obligation to cause net zero suffering. Others might believe they have an ethical obligation to cause only an average amount of suffering. In these cases, in order to reduce suffering to an acceptable level, efficient charity might be ... (read more)

Disclaimer: I may not be the first person to come up with this idea

What if, for dangerous medications (such as, possibly, 2,4-dinitrophenol (DNP)), the medication were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company/some government agency/an independent security company)?

Could this be useful to prevent overdoses?
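A rough sketch of what the "time-dependent cryptographic key" could look like, assuming a TOTP-style (RFC 6238) scheme with a shared secret; the function name and parameters here are hypothetical, and this says nothing about the physical-security issues raised in the reply below.

```python
import hashlib
import hmac
import struct
import time

def dispense_code(shared_secret: bytes, interval_s: int = 6 * 3600) -> str:
    """Hypothetical TOTP-style dispense code: the trusted source and the
    dispenser are provisioned with the same secret, and both derive the same
    6-digit code for the current dosing window, so one code unlocks at most
    one dose per interval."""
    counter = int(time.time()) // interval_s            # current dosing window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# Example: the supervised server prints the code for the current window;
# the dispenser, holding the same secret, accepts only that code.
print(dispense_code(b"provisioned-shared-secret"))
```

This is the same construction ordinary authenticator apps use; as the reply below notes, the hard part would be the tamper-resistance of the physical device rather than the cryptography.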

29eB1
There are already dispensing machines that dispense doses on a timer. They are mostly targeted at people who need reminding (e.g. Alzheimer's), though, rather than people who may want to take too much. I don't think the cryptographic security would be the problem in that scenario, but the physical security of the device. You would need some trusted way to reload it and it would have to be very difficult to open even though it would presumably just be sitting on your table at home, which is a very high bar. It could possibly be combined with always-on tamper reporting and legal threats to make the idea of tampering with it less appealing, though.
3Lumifer
If the dispensing device is "locked" against the user and you want to enforce dosing you don't need any crypto keys. Just make the device have an internal clock and dispense a dose every X hours. In the general case, the device is externally controlled and then people who have control can do whatever they want with it. I'm still not seeing a particular need for a crypto key.

Disclaimer: Not remotely an expert at biology, but I will try to explain.

One can think of the word "gene" as having multiple related uses.

Use 1: "Genotype". Even if we have different hair colors, we likely both have the same "gene" for hair, which could be considered shared with chimpanzees. If you could re-write DNA nucleobases, you could change your hair color without changing the gene itself; you would merely be changing the "gene encoding". The word "genotype" refers to a "function" which takes ... (read more)

Thank you. I initially wrote my function with the idea of making it one (of many) "lower bound"(s) of how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought that "this works fine as a general theory, not just a lower bound".

Thank you for helping me think more clearly.

"How dire [do] the real world consequences have to be before it's worthwhile debating dishonestly"?

M̶y̶ ̶b̶r̶i̶e̶f̶ ̶a̶n̶s̶w̶e̶r̶ ̶i̶s̶:̶

One lower bound is:

If the amount that rationality affects humanity and the universe is decreasing over the long term. (Note that if humanity is destroyed, the amount that rationality affects the universe probably decreases).

T̶h̶i̶s̶ ̶i̶s̶ ̶a̶l̶s̶o̶ ̶m̶y̶ ̶a̶n̶s̶w̶e̶r̶ ̶t̶o̶ ̶t̶h̶e̶ ̶q̶u̶e̶s̶t̶i̶o̶n̶ ̶"̶w̶h̶a̶t̶ ̶i̶s̶ ̶w̶i̶n̶n̶i̶n̶g̶ ̶f̶o̶r̶ ̶t̶h̶e̶ ̶r̶a̶t̶i̶o̶n̶a̶l̶i̶s̶t̶ ̶c̶o̶m̶m̶u̶n̶i̶t̶y̶"̶?̶

R̶a̶t... (read more)

5Mestroyer
Downvoted for the fake utility function. "I won't let the world be destroyed because then rationality can't influence the future" is an attempt to avoid weighing your love of rationality against anything else. Think about it. Is it really that rationality isn't in control any more that bugs you, not everyone dying, or the astronomical number of worthwhile lives that will never be lived? If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?

If the author could include a hyperlink to Richard Wiseman when he is first mentioned, it might prevent readers from getting confused and failing to realize that you are describing actual research. (I was confused in this way for about half of the article.)

9ESRogs
Agreed, especially since the name Wiseman sounds like it could be symbolic. Also, what book is being talked about here?

I wonder if there's a chance of the program that always cooperates winning/tying.

If all the other programs are extremely well-written, they will all cooperate with the program that always cooperates (or else they aren't extremely well-written, or they are violating the rules by attempting to trick other programs).

[This comment is no longer endorsed by its author]