All of xiann's Comments + Replies

I agree with the central point of this, and the anti-humanism is where the e/acc crowd turns entirely repugnant. But in reference to the generative AI portion, the example doesn't really land for me, because I think the issue at its core pits two human groups against each other: the artists who would like to make a stable living off their craft, and the consumers of art who'd like less scarcity of art, particularly the marginally-creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works ... (read more)

2dr_s
You're partly right: of course one side of the issue is just that the companies are undercutting the art market by offering a replacement product at prices that are impossible to compete with, but from seeing the complaints and viewpoints of artists, the copyright violation aspect of it is also a big deal to most of them. If only because, while someone undercutting you is already bad, someone undercutting you by stealing your own know-how and turning it against you adds insult to injury. To some extent I think people are focusing on this due to the belief that if not for the blatant copyright violations, the kind of large training sets required for powerful AI models would be economically unviable, and it's fairly likely that they're right (at least for now). Also, the kind of undercutting that we're seeing with AI would be fundamentally impossible with human artists. You could have one work 16 hours a day with only bread, water and a straw mat to sleep on and they wouldn't be one tenth as productive as an AI model that can spit out a complete digital image in seconds with little more energy use than a large gaming computer. So we're at a point where quantity becomes a quality of its own - the AI art generation economy is so fundamentally removed from the human art creation market that it doesn't just compete, it straight up takes a sledgehammer to it and then pisses on the pieces. I also don't think that AI art here is responding to end-user demand. Digital art is infinitely reproducible and already so abundant that most people wouldn't know what to do with it. The most critical end-user application where someone might not easily find replacements for their very specific needs is, well, porn. That's certainly one application that AI art is good for, but not one most companies explicitly monetize, for image reasons. Other than that, I'd say the biggest demand that AI art satisfies is that of middlemen who need art to enhance some other project: game developers (RPG portraits, V

Assuming Sam was an abuser, what would hacking wifi signals accomplish that the level of shadowbanning described would not? It strikes me as unlikely because it doesn't seem to have much reward even in the world where Sam is the abuser.

I know this post will seem very insensitive, so I understand if it gets downvoted (though I would also say that's the very reason sympathy-exploitation tactics work), but I would like to posit a 3rd fork to the "How to Interpret This" section: That Annie suffers from a combination of narcissistic personality disorder and false memory creation in service of the envy that disorder spawns. If someone attempted to fabricate a story that was both maximally sympathy-inducing and reputation-threatening for the target, I don't think you could do much better than t... (read more)

4jjaksic
Are a person's mental disorders (especially ones that started in early childhood) the person's own fault, or are they possibly a consequence of trauma or abuse? If you abuse someone as a child, they are very likely to develop some mental disorders (the greater the abuse, the more severe and long-lasting they're likely to be). Is it then fair to say, "This person's claims of abuse have no merit, just look at their mental disorders" (as in, a "crazy person's" claims should not be believed)?

I agree; I'm reminded of the quote about history being the search for better problems. The search for meaning in such a utopian world (from our perspective) thrills me, especially when I think about all the suffering that exists in the world today. The change may be chaotic & uncomfortable, but if I consider my personal emotions about the topic, it would be more frightening for the world to remain the same.

I should have been more precise. I'm talking about the kind of organizational capabilities required to physically ensure that no AI unauthorized by a central authority can be created. Whether aligned AGI exists (and presumably, in this case, is loyal to said authority over other factions of society that may become dissatisfied) doesn't need to factor into the conversation much.

That may well be the price of survival, nonetheless I felt I needed to point out the very likely price of going down that route. Whether that price is worth paying to reduce x-risk from p(x... (read more)

This might sound either flippant or incendiary, but I mean it sincerely: Wouldn't creating an enforcement regime powerful enough to permanently, reliably guarantee no AGI development require the society implementing that regime to be far more stable over future history than any state has thus far been? And more importantly, wouldn't it introduce an incredible risk of creating societies that most liberal democracies would find sub-optimal (to put it mildly), which would then be locked in even without AGI, due to that same hyper-stability?

This plan seems likely to sacrifice most future value itself, unless the decision-making humans in charge of the power of the enforcement regime act purely altruistically.

6otto.barten
First, I don't propose 'no AGI development'. If companies can create safe and beneficial AGIs (burden of proof is on them), I see no reason to stop them. On the contrary, I think it might be great! As I wrote in my post, this could e.g. increase economic growth, cure disease, etc. I'm just saying that I think that existential risk reduction, as opposed to creating economic value, will not (primarily) originate from alignment, but from regulation.

Second, the regulation that I think has the biggest chance of keeping us existentially safe will need to be implemented with or without aligned AGI. With aligned AGI (barring a pivotal act), there will be an abundance of unsafe actors who could run the AGI without safety measures (also by mistake). Therefore, the labs themselves propose regulation to keep almost everyone but themselves from building such AGI. The regulation required to do that is almost exactly the same.

Third, I'm really not as negative as you are about what it would take to implement such regulation. I think we'll keep our democracies, our freedom of expression, our planet, everyone we love, and we'll be able to go anywhere we like. Some industries and researchers will not be able to do some things they would have liked to do because of regulation. But that's not at all uncommon. And of course, we won't have AGI as long as it isn't safe. But I think that's a good thing.

"Normally when Cruise cars get stuck, they ask for help from HQ, and operators there give the vehicles advice or escape routes. Sometimes that fails or they can’t resolve a problem, and they send a human driver to rescue the vehicle. According to data released by Cruise last week, that happens about an average of once/day though they claim it has been getting better."

From the Forbes write-up of GM Cruise's debacle this weekend. I think this should update people's estimate of how complete FSD is somewhat downward. I think commenters here are being too optimistic about curr... (read more)

That is one example, but wouldn't we typically assume there is some worst example of judicial malpractice at any given time, even in a healthy democracy? If we begin to see a wave of openly partisan right- or left-wing judgements, that would be cause for concern, particularly if they overwhelm the ability of the Supreme Court to overrule them. The recent dueling rulings over mifepristone were an example of this (both the original ruling and the reactive ruling), but that is again a single example so far.

I actually think the more likely scenario than a fascistic b... (read more)

When you say "force demand to spread out more", what policies do you propose, and how confident are you that this is both easier to accomplish than the YIMBY solution and leads to better outcomes?

My default (weak) assumption is that a policy requiring more explicit force is more likely to produce unintended negative consequences as well as greater harm if unpopular. So a ban on A has a higher bar to clear for me to be on board than a subsidy of B over A. My initial reaction to the sentence "force demand to spread out more" is both worry at how heavy-handed... (read more)

6bhauth
I don't want to get into reasons for desirability of suburban vs high-density areas, which is a topic of its own, but clearly a lot of people prefer to live in lower-density areas than NYC. Here are some actions I support based on the above model:

1. More antitrust action. I think America has oligopolies that are bad for consumers anyway, and corporate consolidation means more centralization of corporate leadership in a few top cities - and then every layer of managers wants to live close to the layer above them. I support breaking up many big companies.
2. If a department/subsidiary of a company is localized to a region, the management of that department/subsidiary should be legally required to live and work in that region, rather than where the top corporate leadership is. (Apart from that being partly zero-sum competition, I think companies act largely according to the desires of management, so forcing middle management to do things it doesn't want to can improve overall welfare.)
3. Remote work being an option should, in some cases, be legally required. I think management sometimes forces workers to come to an office just for its own self-gratification.

Feeling unsafe is probably not a free action though; as far as we can tell, cortisol has a deleterious effect on both physical health & mental ability over time, and the effect becomes more pronounced with continuous exposure. So the cost of feeling unsafe all the time, particularly if one feels less safe (or maintains more readiness) than the situation warrants, is hurting your prospects in the situations where the threat doesn't come to pass (the majority outcome).

The most extreme examples of this are preppers; if society collapses they do well for themselves, but in most worlds they simply have an expensive, presumably unfun hobby and inordinate amounts of stress about an event that doesn't come to pass.

3CronoDAS
Yeah, things close to full-blown doomsday don't happen very often. The most common is probably literal war (as in Ukraine and Syria), and the best response to that on an individual level is usually "get the hell away from where the fighting is." Many of the worst natural disasters are also best handled by simply evacuating. If you don't have to evacuate, or didn't have time to and don't die in the immediate aftermath, your worst problems might be the local utilities shutting down for a while and needing to find alternative sources of water and heat until they're fixed. The potential natural disasters for which I think doomsday-level prepping might actually make a difference are volcanoes and geomagnetic storms, because they could cause problems on a continent-wide or global scale, and "go somewhere unaffected" or "endure the short-term disruptions until things go back to normal" might not work. Volcanoes can block the sun and cripple global agriculture, and a giant electromagnetic pulse could cause enough damage to both the power grid and to natural gas pipelines that it could take years to rebuild them. (Impacts from space might also be on the list, depending on the severity.)

That's pretty well tailored to the community here, but there are still some red flags. How would them sending money to you and you donating it to MIRI "accelerate the value"? Also, why would a legit matcher not simply want confirmation of your donation without them ever touching the money?

Not to mention, is it really this easy to use anti-fraud tools to perpetrate fraud?

2philh
Some companies do donation-matching, so if an employee donates $X to a charity then the company will also donate $X. The scammer is pretending "I don't work for such a company, but I'd like to double my donation, so I'd like to send money to you and you donate it and then your company also donates".