All of Vishrut Arya's Comments + Replies

Isn't Zvi's post an attempt to explain those observations?

3Liron
My #1 and #2 are in a separate video Marc made after the post Zvi referred to, but yeah, they could fall under the "bizarrely poor arguments" Zvi is trying to explain. My #3 and his firm's various statements about Web3 in the last couple of years, like this recent gaslighting, are additional examples of bizarrely poor arguments in an unrelated field. If we don't come in with an a priori belief that Marc is an honest or capable reasoner, there's less confusion for Zvi to explain.
4DirectedEvolution
(3) isn't about AI so I don't think Zvi's model explains that. If we ignore (1) and (2), then the one example we're left with (which may or may not be badly reasoned) isn't good enough evidence to say that somebody "just consistently makes badly-reasoned statements."

Any explanations for why Nick Bostrom has been absent, arguably notably, from recent public alignment conversations (particularly since ChatGPT)?

He's not on this list (yet other FHI members, like Toby Ord, are). He wasn't on the FLI open letter either, but I could understand why he might've avoided endorsing that letter given its much wider scope.

habryka

Almost certainly related to that email controversy from a few months ago. My sense is people have told him (or he has himself decided) to take a step back from public engagement. 

I think I disagree with this, but it's not a totally crazy call, IMO.

Most of the argument can be boiled down to a simple syllogism: the superior intelligence is always in control; as soon as AI is more intelligent than we are, we are no longer in control.

Seems right to me. And it's a helpful distillation.

When we think about Western empires or alien invasions, what makes one side superior is not raw intelligence, but the results of that intelligence compounded over time, in the form of science, technology, infrastructure, and wealth. Similarly, an unaided human is no match for most animals. AI, no matter how inte…
2jasoncrawford
Chess is a simple game and a professional chess player has played it many, many times. The first time a professional plays you is not their “first try” at chess. Acting in the (messy, complicated) real world is different.

There's a somewhat obscure but fairly-compelling-to-me model of psychology which states that people are only happy/okay to the extent that they have some sort of plan, and also expect that plan to succeed.

What's the name of this model? Or can you point to a fuller version of it? It seems right, and I'd like to see it fleshed out.

2Duncan Sabien (Deactivated)
It's Connection Theory, but I don't know if there's any good published material online; it was proprietary to a small group, and I've mostly heard about it filtered through other people.

Hi Matt! On the coordination crux, you say

The first AGIs we construct will be born into a culture already capable of coordinating, and sharing knowledge, making the potential power difference between AGI and humans relatively much smaller than between humans and other animals, at least at first.

but wouldn't an AGI be able to coordinate and share knowledge with humans, because

a) it can impersonate a human online and communicate with them via text and speech, and

b) it'll realize such coordination is vital to accomplish its goals a…

You can get many of the benefits of having one country through mechanisms like free trade agreements, open borders, shared currency zones, etc.

This is key in my opinion.

Duplicates - digital copies as opposed to genetic clones - might not require new training (unless a whole/partial restart/retraining was being done).

Wouldn't new training be strongly adaptive -- if not strictly required -- if the duplicate's environment is substantively different from the environment of its parent?

When combined with self-modification, there could be 'evolution' without 'deaths' of 'individuals' - just continual ship of Theseus processes. (Perhaps stuff like merging as well, which is more complicated…
2Pattern
It's an unusual case, but AlphaGo provides an example of something being removed and retrained and getting better. Outside of that - perhaps. The viability of self-modifying software... I guess we'll see.

For a more intuitive approach, let's imagine an AGI is a human emulation, except that it's immortal/doesn't die of old age. (I.e., maybe the 'software' in some sense doesn't change, but the knowledge continues to accumulate and be integrated in a mind.)

1. Why would such an AI have 'children'?
2. How long do software systems last when compared to people?

Just reasoning by analogy, yes, 'mentoring' makes sense, though maybe in a different form. One person teaching everyone else in the world sounds ridiculous - with AGI, it seems conceivable. Or, in a different direction, imagine if when you forgot something you could just ask your past self.

Overall, I'd say it's not a necessary thing, but for agents like us it seems useful, and so the scenario you describe seems probable, but not guaranteed.

I'll check it out -- thanks Zachary!