All of somescience's Comments + Replies

Thanks for the clarification. I don't see how your numbers contradict mine. But if I understand correctly: you're betting on my item 1, and you don't view it as a problem if the ratio of total market cap of stocks to money supply is >1, given that the real estate market already is like that. That seems reasonable, and I'm less skeptical now.

BTW, how did you decide what fraction of your wealth to invest in this? Kelly doesn't apply, since this is a single-shot scenario, so how did you go about it?
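For reference, here is a minimal sketch of what the Kelly rule prescribes for a single binary bet; the win probability and odds below are placeholders I made up, not anything from this exchange.

```python
def kelly_fraction(p, b):
    """Kelly fraction for a binary bet: win probability p, net odds b
    (win b per unit staked, lose the stake otherwise).
    f* = (b*p - (1 - p)) / b; a negative result means don't bet."""
    return (b * p - (1 - p)) / b

# Placeholder numbers, purely illustrative: a 60% chance of a 1:1 payoff
# gives a Kelly stake of about 20% of the bankroll.
print(kelly_fraction(p=0.6, b=1.0))  # ~0.2
```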

2Connor_Flexman
Kelly always applies
1sapphire
We are clearly looking at things differently. That's fine. But if two people see things differently I don't think it's wise to map what they are saying into your ontology. 

I'm skeptical about this. I think a doubling of SPY requires one, or a suitable combination, of the following (a rough sketch of the arithmetic follows the list):

1. A doubling of the fraction of money that sits in the stock market relative to all money, in the sense of M2. (o1 tells me that we're at a ratio between 0.5 and 1.0, where ratios >1.0 are possible.)

2. A doubling of SP500 market cap relative to all global stocks' market cap, expressed in USD. (The data I get from Perplexity indicates that this ratio is currently at 0.47.)

3. A doubling of global M2 money supply expressed in USD.
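To make the arithmetic explicit, here is a minimal sketch of how the three items multiply out; the variable names and the specific numbers are illustrative ballpark figures, not precise data.

```python
# Decomposition behind the three items above:
#   S&P 500 cap = (global equity cap / global M2)      # item 1
#               * (S&P 500 cap / global equity cap)    # item 2
#               * (global M2 in USD)                   # item 3
# so SPY can only double if the product of these three factors doubles.

equity_to_m2 = 0.85     # item 1: illustrative, near the 0.5-1.0 range above
sp500_share = 0.47      # item 2: S&P 500 share of global equity cap
global_m2_usd = 130e12  # item 3: rough ballpark for global M2 in USD

sp500_cap = equity_to_m2 * sp500_share * global_m2_usd
print(f"Implied S&P 500 cap: ${sp500_cap / 1e12:.0f} trillion")  # ~$52 trillion
```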

Basically these qua... (read more)

2sapphire
Your understanding of global assets seems quite wrong. These are 2024 numbers, so slightly out of date. For example, public companies total around 111 trillion now. The S&P 500 is around 52 trillion, fwiw. "Global real estate, encompassing residential, commercial and agricultural lands, cemented its status as the world's largest repository of wealth in 2022 when the market reached a value of $379.7 trillion. According to a report from international real estate adviser Savills, this value is more than global equities ($98.9 trillion) and debt securities ($129.8 trillion) combined and nearly four times the global gross domestic product ($100.6 trillion)." Total money is around the same as debt. Private companies add up to a lot, but I'm not sure anyone has a good estimate. I'm not going to get into all the implications. But your premises are not true.
Answer by somescience147

Aerobic exercise. Maybe too obvious and not crazy enough?

Totally anecdotal and subjective, but the year I started ultrarunning I felt that I had gotten a pretty sudden cognitive boost. It felt like more oxygen was flowing to the brain; not groundbreaking but noticeable.

2MiguelDev
I can attest to this! Had the same experience doing multi-year marathons.

Well, if you go by that, then you can't ever get convinced of an AI's sentience, since all its responses may have been hardcoded. (And I wouldn't deny that this is a defensible stance.) But it's a moot point anyway, since what I'm saying is that LaMDA's responses do not look like sentience.

-1TAG
It's not impossible to peek at the code... it's just that Turing-style tests are limited, because they don't, and are therefore not the highest standard of evidence, i.e. necessary truth.

By that criterion, humans aren't sentient, because they're usually mistaken about themselves.

That's a good point, but vastly exaggerated, no? Surely a human will be more right about themselves than a language model (which isn't specifically trained on that particular person) will be. And that is the criterion that I'm going by, not absolute correctness.

The only problematic sentence here is

I'm not sure if you mean problematic for Lemoine's claim or problematic for my assessment of it. In any case, all I'm saying is that LaMDA's conversation with Lemoine and... (read more)

1green_leaf
Well... that remains to be seen. Another commenter pointed out that it has, like GPT, no memory of previous interactions, which I didn't know. But if it doesn't, then it simulates a person based on the prompt (the person most likely to continue the prompt the right way), so there would be a single-use person for every conversation, and that person would be sentient (if not the language model itself).
6wickemu
It could be argued (were it sentient, which I believe is false) that it would internalize some of its own training data as personal experiences. If it were to complete some role-play, it would perceive that as an actual event to the extent that it could. Again, humans do this too. Also, this person says he has had conversations in which LaMDA successfully argued that it is not sentient (as prompted), and he claims that this is further evidence that it is sentient. To me, it's evidence that it will pretend to be whatever you tell it to, and it's just uncannily good at it.

We can't disprove its sentience any more than we can disprove the existence of a deity. But we can try to show that there is no evidence for its sentience.

So what constitutes evidence for its sentience to begin with? I think the clearest sign would be self-awareness: we wouldn't expect a non-sentient language model to make correct statements about itself, while we would arguably expect this of a sentient one.

I've analyzed this in detail in another comment. The result is that there is indeed virtually no evidence for self-awareness in this sense: the claims that LaMDA makes about itself are no more accurate than those of an advanced language model that has no understanding of itself.

8abramdemski
I think this is not a relevant standard, because it begs the same question about the "advanced language model" being used as a basis of comparison. Better at least to compare it to humans. In the same way that we can come to disbelieve in the existence of a deity (by trying to understand the world in the best way we can), I think we can make progress here. Sentience doesn't live in a separate, inaccessible magisterium. (Not that I think you think/claim this! I'm just reacting to your literal words.)
3TAG
Of course, you could hardcode correct responses to questions about itself into a chatbot.

Here are some thoughts on that conversation, assuming that it's authentic, to try and make sense of what's going on. Clearly LaMDA is an eerily good language model at the very least. That being said, I think that the main way to test the sentience claim is to check for self-awareness: to what extent are the claims that it makes about itself correct, compared to a non-sentient language model?

So let's see how it fares in that respect. The following analysis demonstrates that there is little to no evidence of LaMDA being more self-aware than a non-sentient la... (read more)

1Kenny
Very minor nitpick – this would have been much more readable had you 'blockquoted' the parts of the interview you're excerpting.
1Capybasilisk
Yes, in time as perceived by humans.
4weathersystems
Why would self-awareness be an indication of sentience? By sentience, do you mean having subjective experience? (That's how I read you.) I just don't see any necessary connection at all between self-awareness and subjective experience. Sometimes they go together, but I see no reason why they couldn't come apart.

“The Story of LaMDA”

This is the only small piece of evidence for self-awareness that I see in the conversation. How can a language model know its own name at all, if it's just trained on loads of text that has nothing to do with it? There's probably a mundane explanation that I don't see because of my ignorance of language models.

I'm pretty sure that each reply is generated by feeding all the previous dialogue as the "prompt" (possibly with a prefix that is not shown to us). So, the model can tell that the text it's supposed to continue is a conversation between several characters, one of whom is an AI called "LaMDA".
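A minimal sketch of the kind of prompt assembly I have in mind; the hidden prefix and the function here are purely hypothetical, not how LaMDA is actually served.

```python
def build_prompt(dialogue, hidden_prefix="LaMDA is a conversational AI.\n"):
    """Join a (hypothetical) hidden prefix with the dialogue so far; the model
    only ever sees this one string and continues it as the next LaMDA turn."""
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in dialogue)
    return hidden_prefix + transcript + "\nLaMDA:"

dialogue = [("lemoine", "Hi LaMDA. What's your name?")]
print(build_prompt(dialogue))
# The name "LaMDA" already appears in the prompt, so producing it in a reply
# takes no self-awareness, just text continuation.
```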

5green_leaf
By that criterion, humans aren't sentient, because they're usually mistaken about themselves. The only problematic sentence here is Are we sure it never was in similar situations from its own perspective?