Raghuvar Nadig


Thanks! I should have been clearer that the trajectory toward level 5 (with all human virtue/trust being hackable for instrumental gains) is itself concerning, not just the eventual leap when it gets there.

The next goodwill-inducing paradigm that has outlived its utility seems to be the concept of "AGI":

From here:

Oddly, that could be the key to getting out from under its contract with Microsoft. The contract contains a clause that says that if OpenAI builds artificial general intelligence, or A.G.I. — roughly speaking, a machine that matches the power of the human brain — Microsoft loses access to OpenAI’s technologies.

The clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations. Under the terms of the contract, the OpenAI board could decide when A.G.I. has arrived.

Despite having been founded on the precept of developing AGI, and having structured the company and many major contracts around the idea while never precisely defining it, OpenAI now seems to be deliberately distancing itself from the term, as evidenced here. Notably, Sam's recent vision of the future, "The Intelligence Age", does not mention AGI.

I expect more tweets like this from OpenAI employees in the coming weeks/months, expressing doubts about the notion of AGI, often taking care to say that the motivations behind those doubts are altruistic/epistemic.

I categorically disagree with Eliezer's tweet that "OpenAI fired everyone with a conscience", and all of this might not be egregious as far as corporate sleights-of-hand/dissonance go - but scaled up recursively, e.g. when extended to principles relating to alignment/warning shots/surveillance/misinformation/weapons, this does not bode well.

  • Explain why you're concerned in public.


I'm concerned about OpenAI's behavior in the context of their stated trajectory towards level 5 intelligence - running an organization. Suppose the model for a successful organization rests on dissonance: actions intended to foster goodwill (open research/open source/non-profit/safety-concerned/benefiting all of humanity) whose virtuous paradigms are all instrumental rather than intrinsic, requiring NDAs/financial pressure/lobbying to be whitewashed. Scaling that up with AGI (which would have more intimate and expansive data, greater persuasiveness, more emotional detachment, less moral hesitation) seems clearly problematic.

In the spirit of Situational Awareness, I'm curious how people are parsing some apparent contradictions:

  • OpenAI is explicitly pursuing AGI
  • Most/many people in the field (e.g. Leopold Aschenbrenner, who worked with Ilya Sutskever) presume that (approximately) when AGI is reached, we'll have automated software engineers and ASI will follow very soon
  • SSI is explicitly pursuing straight-shot superintelligence - the announcement starts off by claiming ASI is "within reach"
  • In his departing message from OpenAI, Sutskever said "I’m confident that OpenAI will build AGI that is both safe and beneficial...I am excited for what comes next - a project that is very personally meaningful to me about which I will share details in due time"
  • At the same time, Sam Altman said "I am forever grateful for what he did here and committed to finishing the mission we started together"

Does this point to increased likelihood of a timeline in which somehow OpenAI develops AGI before anyone else, and also SSI develops superintelligence before anyone else?

Does it seem at all likely from the announcement that by "straight-shot" SSI is strongly hinting that it aims to develop superintelligence while somehow sidestepping AGI (which they won't release anyway) and automated software engineers? 

Or is it all obviously just speculative talk/PR, not to be taken too literally, and we don't really need to put much weight on the differences between AGI/ASI for now? If that were the case, the announcements just seem more specific than warranted.

Could you clarify how binding "OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity" is?

Sure! I think a bunch of other answers touch upon this though. 

The idea is that it's not determinism in itself that's causing the demotivation; that's just a narrative your subconscious mind brings forward when faced with a tough task, to protect you from thinking about something that is more difficult to face but often actionable, e.g. "I feel I'm not smart enough", "I think I will fail", "I'm embarrassed about what others will think". By explicitly asking yourself what that 'other' cause is (by phrasing it as above, or perhaps by imagining a stern parent/coach giving you a reality check), you can focus on something that might be very tough, but not literally impossible to solve like the universe being deterministic.

Answer by Raghuvar Nadig

The tool you essentially have in the face of determinism despair is awareness of distributed causality: it is the 'thinking about/sense of' determinism, not determinism itself, that is (or seems to be) causing the despair. A practical exercise I like is asking "If I had to bring myself to face the most 'makes me feel bad about myself' cause of my demotivation, what would it be?". Existential despair often masks some other pertinent but deeply invalidating anxiety.

I'm a former quant now figuring out how to talk to tech people about love (I guess it's telling that I feel a compelling pressure to qualify this). 

Currently reading

https://www.nytimes.com/2023/10/16/science/free-will-sapolsky.html

Open to talking about anything in this ballpark!

Ok, this is me discarding my 'rationalist' hat, and I'm not quite sure of the rules and norms applicable to shortforms, but I simply cannot resist pointing out the sheer poetry of the situation. 

I made a post about unconditional love and it got voted down to the point that I didn't have enough 'karma' to post for a while. I'm an immigrant from India and took Sanskrit for six years - let's just say there is a core epistemic clash in how 'karma' is used on this site[1]. A (very intelligent and kind) person whose id happens to be Christian takes pity and suggests, among other things, a syntactic distancing from the term 'love'.

TMI: I'm married to a practicing Catholic - named Christian.


  1. ^

    Not complaining - I'm out of karma jail now and it's a terrific system. Specifically saying that the essence of 'karma', etymologically speaking, lies in its intangibility and implicit nature. 
