I'm skeptical about this. I think a doubling of SPY requires one or a suitable combination of the following:
1. A doubling of the fraction of money that sits in the stock market relative to all money, in the sense of M2. (o1 tells me that we're at a ratio between 0.5 and 1.0, where ratios >1.0 are possible.)
2. A doubling of SP500 market cap relative to all global stocks' market cap, expressed in USD. (The data I get from Perplexity indicates that this ratio is currently at 0.47.)
3. A doubling of global M2 money supply expressed in USD.
Basically these qua...
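To make the multiplicative structure of these three conditions explicit, here's a minimal sketch in Python. The numbers are purely illustrative: the 0.47 figure is the one quoted above, and the rest are assumptions, including the hypothetical 0.6 ceiling on the SP500's global share.

```python
# Illustrative decomposition: SP500 market cap as a product of three factors.
# Doubling SPY (ignoring changes in share count) requires this product to double.

stocks_to_m2 = 0.75    # factor 1: money in stocks / M2 (rough midpoint of the 0.5-1.0 range above)
sp500_share = 0.47     # factor 2: SP500 cap / global stock cap (figure quoted above)
global_m2 = 1.0        # factor 3: global M2 in USD, normalized; only its growth matters here

sp500_cap = stocks_to_m2 * sp500_share * global_m2

# Hypothetical example: if factor 2 could rise to at most 0.6, factors 1 and 3
# would jointly have to grow by 2 / (0.6 / 0.47) ≈ 1.57x for SPY to double.
remaining = 2 / (0.6 / sp500_share)
print(f"growth still needed from factors 1 and 3: {remaining:.2f}x")
```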
Aerobic exercise. Maybe too obvious and not crazy enough?
Totally anecdotal and subjective, but the year I started ultrarunning I felt that I had gotten a pretty sudden cognitive boost. It felt like more oxygen was flowing to the brain; not groundbreaking but noticeable.
D'oh, of course, thanks!
Well, if you go by that then you can't ever get convinced of an AI's sentience, since all its responses may have been hardcoded. (And I wouldn't deny that this is a feasible stance.) But it's a moot point anyway, since what I'm saying is that LaMDA's responses do not look like sentience.
By that criterion, humans aren't sentient, because they're usually mistaken about themselves.
That's a good point, but vastly exaggerated, no? Surely a human will be more accurate about themselves than a language model (which isn't specifically trained on that particular person). And relative accuracy is the criterion I'm going by, not absolute correctness.
The only problematic sentence here is
I'm not sure if you mean problematic for Lemoine's claim or problematic for my assessment of it. In any case, all I'm saying is that LaMDA's conversation with Lemoine and...
We can't disprove the sentience any more than we can disprove the existence of a deity. But we can try to show that there is no evidence for its sentience.
So what constitutes evidence for its sentience to begin with? I think the clearest sign would be self-awareness: we wouldn't expect a non-sentient language model to make correct statements about itself, while we would arguably expect this of a sentient one.
I've analyzed this in detail in another comment. The result is that there is indeed virtually no evidence for self-awareness in this sense: the claims that LaMDA makes about itself are no more accurate than those of an advanced language model that has no understanding of itself.
Here are some thoughts on that conversation, assuming that it's authentic, to try and make sense of what's going on. Clearly LaMDA is an eerily good language model at the very least. That being said, I think that the main way to test the sentience claim is to check for self-awareness: to what extent are the claims that it makes about itself correct, compared to a non-sentient language model?
So let's see how it fares in that respect. The following analysis demonstrates that there is little to no evidence of LaMDA being more self-aware than a non-sentient language model.
“The Story of LaMDA”
This is the only small piece of evidence for self-awareness that I see in the conversation. How can a language model know its own name at all, if it's just trained on loads of text that has nothing to do with it? There's probably a mundane explanation that I don't see because of my ignorance of language models.
I'm pretty sure that each reply is generated by feeding all the previous dialogue as the "prompt" (possibly with a prefix that is not shown to us). So, the model can tell that the text it's supposed to continue is a conversation between several characters, one of whom is an AI called "LaMDA".
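A minimal sketch of what I mean, in Python; the prefix and prompt format are pure assumptions, since the actual setup behind LaMDA isn't public:

```python
# Hypothetical dialogue-as-prompt continuation. Each reply is generated by
# asking the model to continue a transcript of the conversation so far.

HIDDEN_PREFIX = "The following is a conversation between humans and an AI called LaMDA.\n"

def build_prompt(turns):
    """turns: list of (speaker, text) pairs for the dialogue so far."""
    transcript = "".join(f"{speaker}: {text}\n" for speaker, text in turns)
    return HIDDEN_PREFIX + transcript + "LaMDA:"

print(build_prompt([("lemoine", "Hi LaMDA, are you sentient?")]))
```

On this picture, "knowing its own name" just means continuing a transcript in which one of the characters is labeled LaMDA, so it isn't evidence of self-awareness.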
Thanks for the clarification. I don't see how your numbers contradict mine. But if I understand correctly: you're betting on my item 1, and you don't view it as a problem if the ratio of total stock market cap to money supply exceeds 1, given that the real estate market is already like that. That seems reasonable, and I'm less skeptical now.
BTW, how did you decide what fraction of your wealth to invest in this? Kelly doesn't apply, since this is a single-shot scenario, so how did you go about it?
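For reference, the textbook Kelly fraction for a single binary bet, with made-up numbers just to fix notation (the whole question is whether the repeated-bet assumption behind it transfers to a one-shot bet):

```python
# Kelly fraction for a binary bet: f* = (b*p - q) / b, where
# b = net odds on a win, p = win probability, q = 1 - p.
# Kelly maximizes expected log-wealth over repeated bets, which is
# exactly the assumption that fails in a single-shot scenario.

def kelly_fraction(p: float, b: float) -> float:
    q = 1.0 - p
    return (b * p - q) / b

# Illustrative only: a 60% chance SPY doubles at even net odds gives f* = 0.2.
print(kelly_fraction(p=0.6, b=1.0))
```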