All of GravitasGradient's Comments + Replies

Perhaps "agency" is a better term here? In the strict sense of an agent acting in an environment?

And yeah, it seems we have shifted focus away from that.

Thankfully, our natural play instincts have given us a wonderful collection of ready-made training environments: I think the field needs a new challenge in which an agent plays video games while receiving its instructions about what to do only in natural language.
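
A minimal sketch of what the interface for such a benchmark might look like, using an invented toy grid world in place of a real video game; the environment, instruction set, and agent below are placeholders for illustration, not a proposed implementation:

```python
# Toy sketch: an episode is (observation, natural-language instruction) -> actions,
# and reward is only given for doing what the instruction says.
import random

class InstructionGridWorld:
    """Tiny stand-in for a game: reach the tile named in the instruction."""
    def __init__(self, size=5):
        self.size = size
        self.goals = {"red square": (0, 4), "blue square": (4, 0)}

    def reset(self):
        self.pos = (2, 2)
        self.instruction = random.choice(list(self.goals))
        return self.pos, self.instruction  # observation + natural-language task

    def step(self, action):
        dx, dy = {"up": (0, 1), "down": (0, -1),
                  "left": (-1, 0), "right": (1, 0)}[action]
        x, y = self.pos
        self.pos = (min(max(x + dx, 0), self.size - 1),
                    min(max(y + dy, 0), self.size - 1))
        done = self.pos == self.goals[self.instruction]
        return self.pos, float(done), done  # reward only for following the words

class RandomAgent:
    """Placeholder for a model that must ground the instruction in observations."""
    def act(self, observation, instruction):
        return random.choice(["up", "down", "left", "right"])

env, agent = InstructionGridWorld(), RandomAgent()
obs, instruction = env.reset()
for _ in range(50):
    obs, reward, done = env.step(agent.act(obs, instruction))
    if done:
        break
```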

These stories always assume that an AI would be dumb enough to not realise the difference between measuring something and the thing measured.

Every AGI is a drug addict, unaware that its high is a false one.

Why? Just for drama?

paulfchristiano
I think the AI systems in this story have a clear understanding of the difference between the measurement and the thing itself. Are humans similarly like drug addicts, because we'd prefer to experience play and love and friendship and so on even though we understand those things are mediocre approximations to "how many descendants we have"?
dxu

realise the difference between measuring something and the thing measured

What does this cash out to, concretely, in terms of a system's behavior? If I were to put a system in front of you that does "realize the difference between measuring something and the thing measured", what would that system's behavior look like? And once you've answered that, can you describe what mechanism in the system's design would lead to that (aspect of its) behavior?
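
One toy way to make the behavioral question concrete is the gap between optimizing a proxy measurement and optimizing the thing it is supposed to measure. The functions and numbers below are invented purely to illustrate the pattern: an optimizer pointed at the measurement spends nearly its whole budget gaming the sensor, while one pointed at the true quantity does not.

```python
# Toy Goodhart example: the proxy rewards both honest effort and gaming the sensor;
# the true objective only rewards honest effort.
import numpy as np

def true_value(effort, gaming):
    # What we actually care about: real work helps, gaming the metric doesn't.
    return np.sqrt(effort)

def measured_value(effort, gaming):
    # What the sensor reports: real work helps, but so does gaming the sensor.
    return np.sqrt(effort) + 2.0 * gaming

# Split a fixed budget between honest effort and gaming the metric.
budget = 10.0
splits = np.linspace(0.0, budget, 101)
best_for_metric = max(splits, key=lambda g: measured_value(budget - g, g))
best_for_truth = max(splits, key=lambda g: true_value(budget - g, g))

print("budget spent gaming when maximizing the measurement:", best_for_metric)  # ~10
print("budget spent gaming when maximizing the true value: ", best_for_truth)   # 0
```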

Is the predicted cost for GPT-N parameter improvements based on the "classical Transformer" architecture? Recent updates like the Performer should require substantially less compute, and therefore cost less.

Rohin Shah
Yes, in general you want to account for hardware and software improvements. From the original post: From the summary: The $1B - $10B number is meant to include things like the Performer.
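
For context, the reason the Performer changes the cost picture is that standard softmax attention is quadratic in sequence length, while the Performer's FAVOR+ mechanism approximates it with positive random features at linear cost. A simplified numpy sketch of that idea (not the Performer's actual implementation; the sizes and the Gaussian feature matrix here are arbitrary):

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: builds an n x n matrix, O(n^2) in sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def positive_random_features(X, W):
    # FAVOR+-style positive features: phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m),
    # whose expectation over Gaussian W recovers the softmax kernel exp(q . k).
    proj = X @ W.T
    norms = (X ** 2).sum(axis=-1, keepdims=True) / 2
    return np.exp(proj - norms) / np.sqrt(W.shape[0])

def linear_attention(Q, K, V, W):
    # Performer-style attention: never forms the n x n matrix, O(n) in length.
    Qp = positive_random_features(Q / Q.shape[-1] ** 0.25, W)
    Kp = positive_random_features(K / K.shape[-1] ** 0.25, W)
    KV = Kp.T @ V                      # (m, d): size independent of n
    normalizer = Qp @ Kp.sum(axis=0)   # (n,)
    return (Qp @ KV) / normalizer[:, None]

rng = np.random.default_rng(0)
n, d, m = 128, 16, 256
Q, K, V = rng.normal(size=(3, n, d))
W = rng.normal(size=(m, d))
print(np.abs(softmax_attention(Q, K, V) - linear_attention(Q, K, V, W)).mean())
```

The key point is that `Kp.T @ V` has a size independent of the sequence length, so the quadratic attention matrix is never materialized.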

but indeed human utility functions will have to be aggregated in some manner

I do not see why that should be the case. Assuming virtual heavens, why couldn't each individual's personal preferences be fulfilled?

Gurkenglas
1. The universe is finite, and has to be distributed in some manner.
2. Some people prefer interactions with the people alive today to ones with heavenly replicas of them. You might claim that there is no difference, but I say that in the end it's all atoms, all the meaning is made up anyway, and we know exactly why those people would not approve if we described virtual heavens to them, so we shouldn't just do them anyway.
3. Some people care about what other people do in their virtual heavens. You could deontologically tell them to fuck off, but I'd expect the model of dictator lottery + acausal trade to arrive at another solution.
Answer by GravitasGradient

It seems pretty undeniable to me from these examples that GPT-3 can reason to an extent.

However, it can't seem to do it consistently.

Maybe analogous to people with mental and/or brain issues who have times of clarity and times of confusion?

If we can find a way to isolate the pattern of activity in GPT-3 that relates to reasoning, we might be able to enforce that state permanently?
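
A rough sketch of what "isolating and enforcing a pattern of activity" could mean in practice is activation steering: take the difference between average hidden activations on contrasting prompts and add it back during generation. The contrast prompts, layer choice, and scale below are arbitrary illustrations, and GPT-2 stands in for GPT-3, which is not publicly available.

```python
# Sketch of activation steering with a small open model as a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 6  # arbitrary middle layer chosen for illustration

def mean_activation(prompts):
    """Average hidden state at LAYER over a set of prompts."""
    acts = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER].mean(dim=1))  # (1, hidden)
    return torch.cat(acts).mean(dim=0)

# Hypothetical contrast set: prompts where the model "reasons" step by step
# versus prompts where it free-associates. Choosing these well is the hard part.
reasoning = ["Let's think step by step: if A implies B and B implies C, then"]
baseline = ["Here is a random story about a cat:"]
steer = mean_activation(reasoning) - mean_activation(baseline)

def add_steering(module, inputs, output):
    # GPT2Block returns a tuple; output[0] holds the hidden states.
    hidden = output[0] + 4.0 * steer  # the scale is a free parameter
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tokenizer("The answer to the puzzle is", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=30,
                                      pad_token_id=tokenizer.eos_token_id)[0]))
handle.remove()
```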