PotteryBarn
PotteryBarn has not written any posts yet.

"Why would you voluntarily make your daily life actually unpleasant just to increase an already high income that you'll probably have less time to enjoy anyway?"
IIRC, dentists have some of the highest rates of depression and suicide of any profession. As for whether this means things could only get better under a new business model, whether increased earnings would translate to earlier retirements (and by extension a lower supply of labor), or whether dentists would prefer to keep their current earnings over trying a potentially more intensive job, I can't say.
I think the term "Market Failure" describes an interesting phenomenon, and there should be some term for situations where negative externalities are being generated, a social good is underproduced, etc. At the same time, it is easy to see how "market failure" gives laypeople additional connotations.
Specifically, I agree that this phenomenon generalizes beyond what most people think of as "markets" (i.e. private firms doing business). I can see where this would bias most people's hasty analysis away from potential free-market solutions and towards the status quo or cognitively simple solutions ("we just ought to pass a law! Let's form a new agency to enforce stricter regulations!") without also... (read more)
I'm a person who is unusually eager to bite bullets when it comes to ethical thought experiments. Evolved vs. created moral patients is a new framework for me, and I'm trying to think about how big a bullet I'd be willing to bite when it comes to privileging evolution, especially if the future could include a really large number of created entities exhibiting agentic behavior relative to evolved ones.
I can imagine a spectrum of methods of creation that resemble evolution to various degrees. A domesticated dog seems more "created" and thus "purposed" by the evolved humans than a wolf, who can't claim a creator in the same way, but they seem to be morally... (read more)
I have seen several proposals for solving alignment (such as OpenAI's Superalignment initiative) that involve harnessing incremental or "near-human level" AI to advance alignment research. I recall from recent interviews that Eliezer is skeptical of this approach, at least partially on the grounds that an AI sufficiently advanced to contribute meaningfully to alignment work would 1.) already be dangerous in its own right, and 2.) be capable of deceiving human alignment researchers with false or insufficient proposals to advance alignment research.
Would it be possible to resolve the second problem by holding the AI directly accountable not to human researchers or an overseer AI, as is commonly suggested, but instead to a supermajority view of... (read more)
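(A minimal sketch of what such a supermajority check might look like, assuming the supermajority is taken over several independent overseer models; the overseer functions and threshold below are purely hypothetical stand-ins, not anything from the comment or from real systems.)

```python
# Purely hypothetical sketch: accept an AI-generated alignment proposal only if
# a supermajority of independent overseers approve it.
from typing import Callable, List

Overseer = Callable[[str], bool]  # takes a proposal text, returns approve/reject

def supermajority_accepts(proposal: str,
                          overseers: List[Overseer],
                          threshold: float = 2 / 3) -> bool:
    """Accept only if the fraction of approving overseers meets the threshold."""
    approvals = sum(1 for overseer in overseers if overseer(proposal))
    return approvals / len(overseers) >= threshold

# Dummy overseers standing in for real evaluators.
dummy_overseers: List[Overseer] = [
    lambda p: "interpretability" in p,
    lambda p: len(p) > 20,
    lambda p: not p.endswith("?"),
]
print(supermajority_accepts("Run interpretability probes on the mesa-optimizer.",
                            dummy_overseers))
```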
Maybe somewhat unrelated, but does anyone know if there's been an effort to narrate HP:MoR using AI? I have several friends who I think would really enjoy it, but who can't get past the current audiobook narration. I mostly agree with them, although it's better at 1.5x.
Sorry for the late reply; I haven't commented much on LW and I didn't appreciate the time it would take for someone to reply to me, so I missed this until now. If I reply to you, Ape in the coat, does that notify dr_s too?
If I understand dr_s's quotation, I believe he's responding to the post I referenced. "How Many Lives Does X-Risk Work Save from Non-Existence" includes pretty early on:
Whenever I say "lives saved" this is shorthand for "future lives saved from nonexistence." This is not the same as saving existing lives, which may cause profound emotional pain for people left behind, and some may consider more tragic than future... (read more)
Thank you for a very thorough post. I think your writing has served me as a more organized account of some of my own impressions opposing longtermism.
I agree with CrimsonChin that there's a lot in your post many longtermists would agree with, including the practicality of focusing on short-term sub-goals. Also, I personally believe that initiatives like global health, poverty reduction, etc. probably improve the prospects of the far future, even if their expected value seems lower than that of X-risk mitigation.
Nonetheless, I still think we should be motivated by the immensity of the future even if it is offset by tiny probabilities and there are huge margins of error,... (read more)
Thank you for making these threads. I have been reading LW off and on for several years and this will be my first post.
My question: Is purposely leaning into creating a human wireheader an easier alignment target to hit than the more commonly touted goal of creating an aligned superintelligence that prevents the emergence of other potentially dangerous superintelligences, yet somehow reliably leaves humanity mostly in the driver's seat?
If the current forecast on aligning superintelligent AI is so dire, is there a point where it would make sense to just settle for ceding control and steering towards creating a superintelligence very likely to engage in wireheading humans (or post-humans)? I'm imagining... (read more)
The possibility of sudden spikes in prices during a crisis also incentivizes "speculators" to stockpile the good in advance. If they buy during normalcy in anticipation of a crisis, they smooth out the price curve by raising the price before the crisis has hit. As they hold the inventory in the meantime, these speculators are effectively creating a buffer in an otherwise potentially just-in-time supply chain.
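(As a toy illustration of that smoothing effect, with made-up numbers and a hypothetical linear demand curve of my own choosing: withdrawing units in the normal period and releasing them during the crisis narrows the price gap between the two periods.)

```python
# Toy illustration (my own sketch, not from the comment): how a speculator who
# stockpiles before a crisis can smooth prices across two periods.
# Assumes a simple linear inverse-demand curve p = a - b * quantity; all
# numbers are made up.

def price(quantity_available: float, a: float = 100.0, b: float = 1.0) -> float:
    """Linear inverse demand: price falls as more of the good is available."""
    return a - b * quantity_available

normal_supply = 80.0   # units available in a normal period
crisis_supply = 40.0   # units available when the crisis disrupts supply
stockpile = 10.0       # units a speculator buys early and releases in the crisis

# Without speculation: price is low in normal times, spikes during the crisis.
p_normal_no_spec = price(normal_supply)            # 100 - 80 = 20
p_crisis_no_spec = price(crisis_supply)            # 100 - 40 = 60

# With speculation: early buying removes units from the normal-period market
# (raising its price a little), and releasing them later adds units during the
# crisis (lowering its price), narrowing the gap between the two periods.
p_normal_spec = price(normal_supply - stockpile)   # 100 - 70 = 30
p_crisis_spec = price(crisis_supply + stockpile)   # 100 - 50 = 50

print(f"No speculator:   normal={p_normal_no_spec}, crisis={p_crisis_no_spec}")
print(f"With speculator: normal={p_normal_spec}, crisis={p_crisis_spec}")
```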