All of GenXHax0r's Comments + Replies

That's basically what I was alluding to by "brute-forced tried enough possibilities to come up with the answer."  Even if that were the case, the implication is that it is actually constructing a complete multi-token answer in order to "test" that answer against the grammatical and semantic requirements.  If it truly were re-computing the "correct" next token on each successive iteration, I don't see how it could seamlessly merge its individually-generated tokens with the given sentence-end text.

I suppose it's certainly possible the longer response time is just a red herring.  Any thoughts on the actual response (and process to arrive thereon)?

Edit, for clarity: I mean, how would it arrive at a grammatically and semantically correct response if it were only progressing one word at a time, rather than having computed the entire answer in advance and then merely emitting that answer one word at a time?
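
For concreteness, here's roughly what plain one-token-at-a-time (autoregressive) decoding looks like, as a minimal sketch using GPT-2 via Hugging Face as a stand-in for ChatGPT (GPT-2 is far weaker, so don't expect a good sentence, but the mechanics are the same). The thing to notice is that every step re-runs the model over the entire context so far, which includes the required ending given in the prompt:

```python
# Minimal sketch of one-token-at-a-time (greedy) decoding with GPT-2.
# Each step re-runs the model on the full context so far -- prompt plus
# everything generated -- and picks only the single next token.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = ('Write a sentence that ends with "and develop from the negatives." '
          'Sentence: When faced with challenges,')
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):
        logits = model(input_ids).logits           # forward pass over the whole context
        next_id = logits[0, -1].argmax()           # greedy choice of the next token only
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in that loop requires a complete answer to exist anywhere before the final token is chosen; whether that alone is enough to explain the seamless merge with the given ending is exactly what I'm asking about.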

For further clarity: I gave it no guidance tokens, so the only content it had to go on is the sentence it generated on its own... (read more)

Nanda Ale
> I suppose it's certainly possible the longer response time is just a red herring. Any thoughts on the actual response (and process to arrive thereon)?

Just double-checking: I'm assuming all tokens take the same amount of time to predict in regular transformer models, the kind anyone can run on their machine right now? So if ChatGPT varies, it's different? (I'm not technical enough to answer this question, but presumably it's an easy one for anyone who is.)

One simple possibility is that it might be scoring the predicted text. So some questions are fine on the first try, while for others it generates 5 responses and picks the best, or whatever. This is basically what I do personally when using GPT, and you can kind of automate it by asking GPT to criticize its own answers.

FWIW, my anecdotal experience with ChatGPT is that it does seem to take longer to think on more difficult requests. But I'm only going on past experience; I didn't try to test this specifically.
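
Concretely, the "generate several candidates and pick the best" guess would look something like the sketch below, using GPT-2 locally; the candidate scorer here is just the model's own average log-probability, and none of this is a claim about what ChatGPT actually does server-side:

```python
# Sketch of best-of-n reranking: sample several continuations, score each,
# keep the highest-scoring one. The scorer here is the model's own average
# token log-probability; a real system could use anything else.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Life is like a camera:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample n candidate continuations.
candidates = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=20,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)

def avg_logprob(ids):
    """Average log-probability the model assigns to its own tokens."""
    with torch.no_grad():
        logits = model(ids.unsqueeze(0)).logits[0, :-1]
    logprobs = torch.log_softmax(logits, dim=-1)
    picked = logprobs[torch.arange(len(ids) - 1), ids[1:]]
    return picked.mean().item()

best = max(candidates, key=avg_logprob)
print(tokenizer.decode(best, skip_special_tokens=True))
```

If something like this were happening behind the scenes, the extra wall-clock time would mostly come from generating n times as many tokens, even though each individual forward pass costs about the same.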

I think there's more going on than successive-word-prediction. Here's my experiment:

https://imgur.com/hhAwpz6

To save having to pull up the image:

me> tell me the end of some sentence

chatgpt> Sure, here's one: "Life is like a camera, focus on the good times and develop from the negatives."

me> tell me just the end

chatgpt> "focus on the good times and develop from the negatives"

me> ok make a new sentence that ends that way

chatgpt> Here's a new sentence: "When faced with challenges, focus on the good times, and develop from the negatives."

It t... (read more)
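
If someone wanted to test the timing angle directly rather than eyeballing the chat UI, a rough sketch against the API would look like this (assuming the current OpenAI Python SDK, openai >= 1.0, with an OPENAI_API_KEY in the environment; the model name is illustrative):

```python
# Reproduce the experiment against the API and normalize latency by the
# number of generated tokens, to separate "more tokens" from "slower tokens".
import time
from openai import OpenAI

client = OpenAI()
ending = "focus on the good times and develop from the negatives"

start = time.perf_counter()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": f'Write a new sentence that ends with: "{ending}"'}],
)
elapsed = time.perf_counter() - start

text = resp.choices[0].message.content
print(text)
print("ends as requested:", text.strip().rstrip('."').endswith(ending))
print(f"{elapsed:.2f}s total, "
      f"{elapsed / resp.usage.completion_tokens:.3f}s per completion token")
```

If seconds per completion token stay roughly flat across easy and hard requests, the longer waits are probably just more tokens (or server load) rather than extra hidden computation per token; that wouldn't settle the question, but it would at least rule out one explanation.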

Bill Benzon
Interesting. I've had times when it took tens of seconds or even over a minute to respond. And I've had occasions when it didn't respond at all, or responded with an error after having eaten up over a minute. At one point I even considered timing its responses. But it's a public facility fielding who knows how many queries a second, so I don't know quite what to make of response times, even extremely long lags.