roha

Comments
Consider chilling out in 2028
roha · 3d

"it's psychologically appealing to have a hypothesis that means you don't have to do any mundane work"

I don't doubt that something like inverse bike-shedding can be a driving force for some individuals to focus on the field of AI safety. I highly doubt that it explains why the field and its associated risk predictions exist in the first place, or that their validity should be questioned on such grounds, yet this seems to happen in the article unless I'm entirely misreading it. From my point of view, the broader debate already overemphasizes psychological factors, and it would be desirable to get back to the object level, whether through theoretical or empirical research; both have their value. On this latter point we seem to partially agree, even though there is more than one path to arrive at it.

Consider chilling out in 2028
roha · 4d

The point is addressed, though with an unnecessarily polemic tone:

  • "Suppose that what's going on is, lots of very smart people have preverbal trauma."
  • "consider the possibility that the person in question might not be perceiving the real problem objectively because their inner little one might be using it as a microphone and optimizing what's "said" for effect, not for truth."

It is alright to consider it. But I find it implausible that a wide range of accomplished researchers lay out arguments, collect data, interpret what has and hasn't been observed, and conclude that our current trajectory of AI development poses significant existential risk, potentially on short timelines, all because a majority of them have a childhood trauma that blurs their epistemology on this particular issue but not on others where success criteria could already be observed.

Consider chilling out in 2028
roha · 5d

I'm close to getting a postverbal trauma from having to observe all the mental gymnastics around the question of whether building a superintelligence without reliable methods to shape its behavior is actually dangerous. Yes, it is. No, that fact does not depend on whether Hinton, Bengio, Russell, Omohundro, Bostrom, Yudkowsky, et al. were held as babies.

EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024
roha · 1y

Further context about the "recent advancements in the AI sector have resolved this issue" paragraph:

  • Contained in an a16z letter to the UK Parliament: https://committees.parliament.uk/writtenevidence/127070/pdf/
  • Contained in an a16z letter to Biden, signed by Andreessen, Horowitz, LeCun, Carmack et al.: https://x.com/a16z/status/1720524920596128012
  • Carmack claimed not to have proofread it; both Carmack and Casado have admitted the claim is false: https://x.com/GarrisonLovely/status/1799139346651775361
Ilya Sutskever and Jan Leike resign from OpenAI [updated]
roha · 1y

I assume they can't make a statement, and that their choice of next occupation will be the clearest signal they can and will send to the public.

Ilya Sutskever and Jan Leike resign from OpenAI [updated]
roha · 1y

He has a stance towards risk that is a necessary condition for becoming the CEO of a company like OpenAI, but that stance doesn't give you a high probability of building a safe ASI:

  • https://blog.samaltman.com/what-i-wish-someone-had-told-me
    • "Inaction is a particularly insidious type of risk."
  • https://blog.samaltman.com/how-to-be-successful
    • "Most people overestimate risk and underestimate reward."
  • https://blog.samaltman.com/upside-risk
    • "Instead of downside risk [2], more investors should think about upside risk—not getting to invest in the company that will provide the return everyone is looking for."
[April Fools' Day] Introducing Open Asteroid Impact
roha · 1y

If everyone has their own asteroid impact, Earth will not be displaced, because the momentum vectors will cancel each other out on average*. This is important because it preserves Earth's trajectory equilibrium, which we have known for ages from animals jumping up and down all the time around the globe in their games of survival. If only a few central players get asteroid impacts, it's actually less safe! Safety advocates might actually cause the very outcomes that they fear!

*I have a degree in quantum physics and can derive everything from my model of the universe, including the moral and political imperatives that physics dictates and that most physicists therefore advocate.

[April Fools' Day] Introducing Open Asteroid Impact
roha · 1y

We are decades if not centuries away from developing true asteroid impacts.

[April Fools' Day] Introducing Open Asteroid Impact
roha · 1y

Given all the potential benefits, there is no way we are not going to redirect asteroids to Earth. Everybody will have an abundance of rare elements.

xlr8
