Dumbledore's Army


Hard disagree with point 1. The fact that humanity hasn't tried to hide is not counter-evidence to the Dark Forest theory. If the Dark Forest is correct, the prediction is that all non-hiding civilisations will be destroyed. We don't see anyone else out there, not because every civilisation decided to hide, but because only hiders survived.

To be clear: the prediction of the Dark Forest theory is that if humanity keeps being louder and noisier, we will at some point come to the attention of an elder civilisation and be destroyed. I don't know what probability to put on this theory being correct. I doubt it ranks higher than AI in terms of existential risk.

I do know that 'we haven't been destroyed yet, barely 100 years after inventing radio' is only evidence that there are no ultra-hostile civilisations within about 50 light-years which also have the capability to detect even the very weakest radio signals from an antique Marconi. (50 light-years because, in the roughly 100 years since radio was invented, there has only been time for a signal to travel 50 light-years out and for a light-speed strike to travel 50 light-years back.) It is not evidence that we won't be destroyed in future when our signals reach more distant civs and/or we make more noticeable signals.

A parable to elucidate my disagreement with parts of Zvi's conclusion:

Jenny is a teenager in a boarding school. She starts cutting herself using razors. The school principal bans razors. Now all the other kids can't shave and have to grow beards (if male) and have hairy armpits. Jenny switches to self-harming with scissors. The school principal bans scissors. Now every time the students receive a package they have to tear it open with their bare hands, and anyone physically weak or disabled has to go begging for someone to help them. Jenny smashes a mirror into glass shards, and self-harms with those. The principal bans mirrors...

Any sane adult here would be screaming at the principal. "No! Stop banning stuff! Get Jenny some psychological treatment!" 

Yes, I know the parable is not an exact match to sports betting, but it feels like a similar dynamic. There are some people with addictive personality disorder and they will harm themselves by using a product that the majority of the population can enjoy responsibly[1]. (Per a comment above, 39% of the US population bet online, of whom only a tiny fraction will have a gambling problem.) The product might be gambling, alcohol, online porn, cannabis, or something else. New such products may be invented in future. But, crucially, if one product is not available, then these people will very likely form an addiction to something else[2]. That is what 'addictive personality disorder' means.

Sometimes the authorities want to ban the product in order to protect addicts. But no one ever asks the question: how can we stop them from wanting to self-harm in the first place? Because just banning the current popular method of self-harming is not a solution if people will go on to addict themselves to something else instead[3]. I feel that the discourse has quietly assumed a fabricated option: if these people can't gamble then they will be happy unharmed non-addicts. 

I am a libertarian, and I have great sympathy for Maxwell Tabarrok's arguments. But in this case, I think the whole debate is missing a very important question. We should stop worrying about whether [specific product] is net harmful and start asking how we can fix the root cause of the problem by getting effective treatment to people with addictive personality disorder. And yes, inventing effective treatment and rolling it out on a population scale is a much harder problem than just banning [target of latest moral panic]. But it's the problem that society needs to solve. 

 

  1. In my parable the principal bans 'useful' things, while authorities responding to addictive behaviour usually want to ban entertainment. That isn't a crux for me. Entertainment is intrinsically valuable, and banning it costs utility - potentially large amounts of utility when multiplied across an entire population.

  2. Or more than one something else. People can be addicted to multiple things; I'm just eliding that for readability.

  3. Zvi quotes research saying that legalising sports betting resulted in a 28% increase in bankruptcies, which suggests it might be more financially harmful than whatever other addictions people had before, but that's about as much as we can say.

I agree with your first paragraph. I think the second is off-topic in a way that encourages readers, and possibly you yourself, to get mind-killed. Couldn't you use a less controversial topic as an example? (Very nearly any topic is less controversial.) And did you really need to compound the problem by assigning motivations to the people you disagree with? That's a really good way to start a flame war.

I spent eighteen months working for a quantitative hedge fund, so we were using financial data - that is, accounts, stock prices, things that are inherently numerical (not, say, trying to define employee satisfaction). We got the data from dedicated financial data vendors, the majority from a single large company that had already spent a lot of effort standardising it and making it usable. We still spent a lot of time on data cleaning.
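To give a flavour of what that cleaning involves, here is a minimal, hypothetical sketch in Python/pandas. The file name, column names, and fill rules are all made up for illustration; this is not the actual pipeline we used.

    import pandas as pd

    # Hypothetical vendor feed of daily close prices (illustrative columns only).
    raw = pd.read_csv("vendor_prices.csv", parse_dates=["date"])

    clean = (
        raw.drop_duplicates(subset=["ticker", "date"])  # vendors sometimes resend rows
           .sort_values(["ticker", "date"])
    )

    # Drop rows with impossible values rather than guessing corrections.
    clean = clean[clean["close"] > 0]

    # Forward-fill short gaps (e.g. holidays) per ticker, but leave long
    # gaps alone so they get flagged downstream instead of papered over.
    clean["close"] = clean.groupby("ticker")["close"].ffill(limit=3)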

The education system also tells students which topics they should care about and think about. Designing a curriculum is a task all by itself, and if done well it can be exceptionally helpful. (As far as I can tell, most universities don't do it well, but there are probably exceptions.)

A student who has never heard of, say, a Nash equilibrium isn't spontaneously going to Google for it, but if it's listed as a major topic in the game theory module of their economics course, then they will. And yes, it's entirely plausible that, once students know what to Google for, they will find YouTube or Wikipedia more helpful than their official lecture content. Telling people they need to Google for Nash equilibria is still a valuable function.

As Richard Kennaway said, there are no essences of words. In addition to the points others have already made, I would add: Alice learns what the university tells her to; she follows a curriculum that someone else sets. Bob chooses his own curriculum; he decides for himself what he wants to learn. In practice, that indicates a huge difference in their personalities, and it probably means that they end up learning different things.

While it's certainly possible that Bob will choose a curriculum similar to a standard university course, most autodidacts end up picking a wildly different one. Maybe the university's standard chemistry course includes an introduction to medical drugs and biochemistry, and Bob already knows he doesn't care about that, so he can skip it. Maybe the university's standard course hardly mentions superconducting materials, but Bob unilaterally decides to read everything about them and make them his main focus of study.

The argument given relies on a potted history of the US. It doesn't address the relative success of UK democracy - which even British constitutional scholars sometimes describe as an elective dictatorship, and which notoriously doesn't give a veto to minorities. It doesn't address the history of France, Germany, Italy, Canada, or any other large successful democracy, none of which uses the US system and most of which aren't even presidential.

If you want to make a point about US history, fine. If you want to talk about democracy, please try drawing from a sample size larger than one.

Answer by Dumbledore's Army

I second GeneSmith’s suggestion to ask readers for feedback. Be aware that this is something of an imposition and that you’re asking people to spend time and energy critiquing what is currently not great writing. If possible, offer to trade - find some other people with similar problems and offer to critique their writing. For fiction, you can do this on CritiqueCircle but I don’t know of an organised equivalent for non-fiction.

The other thing you can do is to iterate. When you write something, say to yourself that you are writing the first draft of X. Then go away and do something else, come back to your writing later, and ask how you can edit it to make it better. You already described problems like using too many long sentences. So edit your work to remove them. If possible, aim to edit the day after writing - it helps if you can sleep on it. If you have time constraints, at least go away and get a cup of coffee or something in order to separate writing time from editing time.

Answer by Dumbledore's Army

First, I just wanted to say that this is an important question and thank you for getting people to produce concrete suggestions.

Disclaimer: I'm not a computer scientist, so I'm approaching the question from the point of view of an economist. As such, I found it easier to come up with examples of bad regulation than of good regulation.

Some possible categories of bad regulation:

1. It misses the point.

  • Example: a regulation only focused on making sure that the AI can’t be made to say racist things, without doing anything to address extinction risk.
  • Example: a regulation that requires AI-developers to employ ethics officers or risk management or similar without any requirement that they be effective. (Something similar to cyber-security today: the demand is that companies devote legible resources to addressing the problem, so they can’t be sued for negligence. The demand is not that the resources are used effectively to reduce societal risk.)

NB: I am implicitly assuming that a government which misses the point will pass bad regulation and then stop, because it feels it has now addressed 'AI safety'. That is, passing bad legislation makes it less likely that good legislation is passed.

2. It creates bad incentives.

  • Example: from 2027, the government will cap maximum permissible compute for training at whatever maximum was used by that date. Companies are incentivised to race to do the biggest training runs they can before that date.
  • Example: restrictions or taxes on compute apply to all AI companies unless they’re working on military or national security projects. Companies are incentivised to classify as much of their research as possible as military, meaning the research still goes ahead, but it’s now much harder for independent researchers to assess safety, because now it’s a military system with a security classification.
  • Example: the regulation makes AI developers liable for harms caused by AI but makes an exception for open-source projects. There is now a financial incentive to make models open-source.

3. It is intentionally accelerationist, without addressing safety.

  • A government that wants to encourage a Silicon Valley-type cluster in its territory offers tax breaks for AI research over and above existing tax credits. Result: they are now paying people to go into capabilities research, so there is a lot more capabilities research.
  • Industrial policy, or supply-chain friendshoring, that results in a lot of new semiconductor fabs being built (this is an explicit aim of America's CHIPS Act). The result is a global glut of chip capacity, and training AI ends up a lot cheaper than in a free-market situation.

Although clown attacks may seem mundane on their own, they are a case study proving that powerful human thought steering technologies have probably already been invented, deployed, and tested at scale by AI companies, and are reasonably likely to end up being weaponized against the entire AI safety community at some point in the next 10 years.

I agree that clown attacks seem to be possible. I accept a reasonably high probability (c. 70%) that someone has already done this deliberately - the wilful denigration of the Covid lab-leak hypothesis seems like a good candidate, as you describe. But I don't see evidence that deliberate clown attacks are widespread. And specifically, I don't see evidence that they are being used by AI companies. (I suspect that most current uses are by governments.)

I think it's fair to warn that clown attacks might be used against the AI-not-kill-everyone community, and that this might already have happened, but you need a lot more evidence before asserting that it has. If anything, the opposite has occurred: the CEOs of all the major AI companies signed on to the declaration stating that AGI is a potential existential risk. I don't have quantitative proof, but from reading a wide range of media over the last couple of years, I get the impression that the media and general public are increasingly persuaded that AGI is a real risk, and mostly no longer deride the AGI-concerned as low-status crazy sci-fi people.
