All of jasp81's Comments + Replies

jasp81

I think that the "most" in the sentence "most philosophers and AI people do think that neural networks can be conscious if they run the right algorithm" is an overstatement, though I do not know to what extent.

I have no strong view on that, primarily because I think I lack some deep ML knowledge (I would weigh the views of ML experts far more heavily than the views of philosophers on this topic).

Anyway, even accepting that neural networks can be conscious with the right algorithm, I think I disagree about "the fact that it's a language model doesn't seem ...

gwern
It is probably an overstatement. At least among philosophers in the 2020 Philpapers survey, most of the relevant questions would put that at a large but sub-majority position: 52% embrace physicalism (which is probably an upper bound); 54% say uploading = death; and 39% "Accept or lean towards: future AI systems [can be conscious]". So, it would be very hard to say that 'most philosophers' in this survey would endorse an artificial neural network with an appropriate scale/algorithm being conscious.
Rafael Harth
I know I said the intelligence scale is the crux, but now I think the real crux is what you said here: Can you explain why you believe this? How does the output/training signal restrict the kind of algorithm that generates it?

I feel like if you have novel thoughts, people here would be very interested in those, because most of them think we just don't understand what happens inside the network at all, and that it could totally be an agent. (A mesa optimizer, to use the technical term: an optimizer that appears as a result of gradient descent tweaking the model.) The consciousness thing in particular is perhaps less relevant than functional restrictions.
jasp81

Thank you, 

I agree with your reasoning strictly logically speaking, but it seems to me that an LLM cannot be sentient or have thoughts, even theoretically, and the burden of proof seems to rest strongly on anyone who would make the opposite claim.

And for someone who does not know what an LLM is, it is of course easy to anthropomorphize it, for obvious reasons (it can be designed to sound sentient or to express 'thoughts'), and my feeling is that this post was a little bit about that.

Overall, I find the arguments that I received after my first com...

Rafael Harth
This seems not-obvious -- ChatGPT is a neural network, and most philosophers and AI people do think that neural networks can be conscious if they run the right algorithm. (The fact that it's a language model doesn't seem very relevant here for the same reason as before; it's just a statement about its final layer.)

I think the most important question is about where on a reasoning-capability scale you would put:

1. GPT-2
2. ChatGPT/Bing
3. human-level intelligence

Opinions on this vary widely even between well informed people. E.g., if you think (1) is a 10, (2) an 11, and (3) a 100, you wouldn't be worried. But if it's 10 -> 20 -> 50, that's a different story. I think it's easy to underestimate how different other people's intuitions are from yours. But depending on your intuitions, you could consider the dog thing as an example that Bing is capable of "powerful reasoning".
jasp81

Thank you for your answers.

Unfortunately, I have to say that so far it has not helped me form a stronger view on AI safety.

(I feel very sympathetic with this post for example https://forum.effectivealtruism.org/posts/ST3JjsLdTBnaK46BD/how-i-failed-to-form-views-on-ai-safety-3 )

To rephrase, my prior is that LLMs just predict the next word (it is their only capability). I would be worried if an LLM did something else (though I think that cannot happen); that is what I would call "misalignment".

In the meantime, what I read a lot about people wo...

Rafael Harth
The predicting next token thing is the output channel. Strictly logically speaking, this is independent of agenty-ness of the neural network. You can have anything, from a single rule-based table looking only at the previous token to a superintelligent agent, predicting the next token. I'm not saying ChatGPT has thoughts or is sentient, but I'm saying that it trying to predict the next token doesn't logically preclude either.

If you lock me into a room and give me only a single output channel in which I can give probability distributions over the next token, and only a single input channel in which I can read text, then I will be an agent trying to predict the next token, and I will be sentient and have thoughts.

Plus, the comment you're responding to gave an example of how you can use token prediction specifically to build other AIs. (You responded to the third paragraph, but not the second.)

Also, welcome to the forum!
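To make that concrete, here is a minimal illustrative sketch (hypothetical Python, not a claim about ChatGPT's internals): a trivial lookup table and an arbitrarily complicated agent can both sit behind the exact same next-token interface, so the interface alone tells you nothing about what is inside.

```python
# Illustrative only: two very different "predictors" behind the same
# next-token interface. (Hypothetical classes, nothing from ChatGPT itself.)

class BigramTable:
    """A rule-based table that only ever looks at the previous token."""
    def __init__(self, counts):
        self.counts = counts  # {prev_token: {next_token: count}}

    def next_token_distribution(self, context):
        prev = context[-1] if context else "<s>"
        options = self.counts.get(prev, {"<unk>": 1})
        total = sum(options.values())
        return {tok: c / total for tok, c in options.items()}


class OpaqueAgent:
    """Stand-in for an arbitrarily complex system: it could model the world,
    plan, or pursue goals -- yet it exposes exactly the same output channel."""
    def next_token_distribution(self, context):
        chosen = self._deliberate(context)  # arbitrary internal computation
        return {chosen: 1.0}

    def _deliberate(self, context):
        # Anything at all could happen in here; the interface doesn't constrain it.
        return "the"


def generate(predictor, context, n=3):
    """Greedy decoding loop: identical code works for either predictor."""
    tokens = list(context)
    for _ in range(n):
        dist = predictor.next_token_distribution(tokens)
        tokens.append(max(dist, key=dist.get))
    return tokens


table = BigramTable({"<s>": {"hello": 2, "the": 1}, "hello": {"world": 1}})
print(generate(table, []))                 # ['hello', 'world', '<unk>']
print(generate(OpaqueAgent(), ["hello"]))  # ['hello', 'the', 'the', 'the']
```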
jasp81

I am new to this website. I am also not a native English speaker, so pardon me in advance. I am very sorry if it is considered rude on this forum not to start with a post introducing oneself.

I am here because I am curious about the AI safety thing, and I do have a (light) ML background (though more from my studies than from my job). I have read this forum and adjacent ones for some weeks now, but despite all the posts I have read, I have so far failed to form a strong opinion on p(doom). It is quite frustrating, to be honest, and I would like to have one.

I jus...

Commentmonger
That is exactly what I would think GPT-4 would type. First, before sending a link, is your name Sydney??!
Razied
In addition to what the other comments are saying: if you get strongly superhuman LLMs, you can trivially accelerate scientific progress on agentic forms of AI like Reinforcement Learning by asking them to predict continuations of the most cited AI articles of 2024, 2025, etc. (with the year of publication, citation count, and journal of publication as part of the prompt). Hence, at the very least, superhuman LLMs enable the quick construction of strong agentic AIs.

Second, the people who are building Bing Chat are really looking for ways to make it as agentic as possible: it's already searching the internet, it's going to be integrated into the Edge browser soon, and I'd bet that a significant research effort is going into making it interact with the various APIs available over the internet. All economic and research interests are pushing towards making it as agentic as possible.
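As a purely illustrative sketch of that first point (hypothetical prompt format, made-up metadata, no real API), the idea is just to condition the model on metadata implying a highly cited future paper and ask for the continuation:

```python
# Hypothetical illustration of the "predict future highly cited papers" idea.
# The metadata values below are made up; this is not a real dataset or API.

def future_paper_prompt(year, citations, journal, title_hint):
    """Build a prompt whose most likely continuation is the paper itself."""
    return (
        f"Journal: {journal}\n"
        f"Year of publication: {year}\n"
        f"Citation count: {citations}\n"
        f"Title: {title_hint}\n"
        "Abstract:"
    )

prompt = future_paper_prompt(
    year=2026,
    citations=4000,
    journal="Journal of Machine Learning Research",
    title_hint="Sample-Efficient Reinforcement Learning for Agentic Systems",
)
print(prompt)
# A strongly superhuman next-token predictor asked to continue this prompt
# would be acting as a research accelerator -- the concern described above.
```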
gilch
It's hard to tell exactly where our models differ just from that, sorry. https://www.youtube.com/@RobertMilesAI has some nice short introductions to a lot of the relevant concepts, but even that would take a while to get through, so I'm going to throw out some relevant concepts that have a chance of being the crux.

* Do you know what a mesa optimizer is?
* Are you familiar with the Orthogonality Thesis?
* Instrumental convergence?
* Oracle AIs?
* Narrow vs General intelligence (generality)?
* And how general do you think a large language model is?
dxu
No links, because no one in all of existence currently understands what the heck is going on inside of LLMs—which, of course, is just another way of saying that it's pretty unreasonable to assign a high probability to your personal guesses about what the thing that LLMs do—whether you call that "predicting the most probable next word" or "reasoning about the world"—will or will not scale to. Which, itself, is just a rephrase of the classic rationalist question: what do you think you know, and why do you think you know it? (For what it's worth, by the way, I actually share your intuition that current LLM architectures lack some crucial features that are necessary for "true" general intelligence. But this intuition isn't very strongly held, considering how many times LLM progress has managed to surprise me already.)