Comments
jasp81

I think that the "most" in the sentence "most philosophers and AI people do think that neural networks can be conscious if they run the right algorithm" is an overstatement, though I do not know to what extent.

I have no strong view on that, primarily because I think I lack some deep ML knowledge (I would weigh the views of ML experts far more than the views of philosophers on this topic).

Anyway, even accepting that neural networks can be conscious with the right algorithm, I think I disagree that "the fact that it's a language model doesn't seem relevant". In an LLM, language is not only the final layer; there is also the fact that the whole aim of the algorithm is p(next words), so it is a specific kind of algorithm. My feeling is that a p(next words) algorithm cannot be sentient, and I think that most ML researchers would agree with that, though I am not sure.
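To make concrete what I mean by "a p(next words) algorithm", here is a minimal sketch of the generation loop; `model` and `tokenizer` are hypothetical placeholders (my own simplification, not any real model's code):

```python
import numpy as np

def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=1.0):
    # Placeholders: `model(tokens)` is assumed to return one score (logit)
    # per vocabulary entry, and `tokenizer` to convert text to and from a
    # list of token ids. Neither is a real library API.
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = np.asarray(model(tokens), dtype=float)
        probs = np.exp((logits - logits.max()) / temperature)
        probs /= probs.sum()                        # p(next word | words so far)
        next_token = np.random.choice(len(probs), p=probs)
        tokens.append(int(next_token))              # append the sample and repeat
    return tokenizer.decode(tokens)
```

Everything the model produces comes from repeating that one step.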

I am also not sure about the "reasoning-capability" scale: even if an LLM is very close to human level for most parts of a conversation, or better than humans at some specific tasks (writing summaries, for example), that would not mean it is close to making a scientific breakthrough (on that I basically agree with the comments of AcurB some posts above).

jasp81

Thank you, 

I agree with your reasoning strictly logically speaking, but it seems to me that an LLM cannot be sentient or have thoughts, even theoretically, and the burden of proof seems strongly on the side of someone who would make the opposite claim.

And for someone who does not know what an LLM is, it is of course easy to anthropomorphize it for obvious reasons (it can be designed to sound sentient or to express 'thoughts'), and my feeling is that this post was a little bit about that.

Overall, I find the arguments I received after my first comment more convincing than the original post in making me see what the problem could be.

As for the possibility of an LLM accelerating scientific progress towards agentic AI, I am skeptical, but I may be lacking imagination.

And again, nothing in the examples presented in the original post is related to this risk. It seems that people who are worried are mostly trying to find examples where the "character" of the AI is strange (which in my opinion are mistaken worries due to anthropomorphization of the AI), rather than examples where the AI is particularly "capable" in terms of generating powerful reasoning or impressive "new ideas" (maybe also because at this stage the best LLMs are far from being there).

jasp81

Thank you for your answers.

Unfortunately I have to say that it has not helped me so far to form a stronger view on AI safety.

(I feel very sympathetic with this post for example https://forum.effectivealtruism.org/posts/ST3JjsLdTBnaK46BD/how-i-failed-to-form-views-on-ai-safety-3 )

To rephrase, my prior is that LLMs just predict next words (it is their only capability). I would be worried if an LLM did something else (though I think that cannot happen); that would be what I would call "misalignment".

In the meantime, much of what I read from people worrying about ChatGPT/Bing sounds like anthropomorphizing the AI with the prior that it can be sentient or have "intents", and to me that is just not right.

I am not sure I understand how having the ability to search the internet dramatically changes that.

If an LLM, when p(next words) is too low, can "decide" to search the internet to get better inputs, I do not feel that changes what I say above.
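To illustrate the kind of thing I am picturing, here is a toy sketch where the search call is just another branch taken when the model's own next-word probability is low; `model`, `tokenizer` and `search_tool` are all hypothetical placeholders, not a real API:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def next_token_with_search(model, tokenizer, prompt, search_tool, threshold=0.2):
    # Predict the next token; if the model is "unsure" (its best next-token
    # probability is below the threshold), fetch extra text with the
    # hypothetical `search_tool`, condition on it, and predict again.
    probs = softmax(np.asarray(model(tokenizer.encode(prompt)), dtype=float))
    if probs.max() < threshold:
        extra = search_tool(prompt)
        new_prompt = prompt + "\n" + extra
        probs = softmax(np.asarray(model(tokenizer.encode(new_prompt)), dtype=float))
    return int(probs.argmax())  # still just picking the most probable next word
```

Adding the search branch does not change the nature of what the model does: it is still only predicting the next word, just with more context.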

I do not want to have too long a fruitless discussion; I think I do indeed need to keep reading material on AI safety to better understand your models. But at this stage, to be honest, I cannot help thinking that some comments or posts are made by people who lack some basic understanding of what an LLM is, which may lead them to anthropomorphize the AI more than they should. When you do not know what an LLM is, it is very easy to wonder, for example, "ChatGPT answered that, but it seems to say that so as not to hurt me; I wonder what ChatGPT really thinks?" I think that sentence makes no sense at all, because of what an LLM is.

jasp81

I am new to this website. I am also not a native English speaker, so pardon me in advance. I am very sorry if it is considered rude on this forum not to start with an introduction post.

I am here because I am curious about AI safety, and I do have a (light) ML background (though more from my studies than from my job). I have been reading this forum and adjacent ones for some weeks now, but despite all the posts I have read, I have so far failed to form a strong opinion on p(doom). It is quite frustrating, to be honest, and I would like to have one.

I just cannot resist reacting to this post, because my prior (a very strong prior, 99%) is that ChatGPT 3, 4, or even 100, is not, cannot be, and will not be agentic or worrying, because in the end it is just an LLM predicting the most probable next words.

My impression is that the author of this post does not understand what an LLM is, but I give a 5% probability that, on the contrary, he understands something that I do not get at all.

For me, no matter how 'smart' the result looks, anthropomorphizing the LLM and worrying about it is a mistake.

I would really appreciate it if someone could send me a link to help me understand why I may be wrong.