All of Sphinxfire's Comments + Replies

I think you should try to formulate your own objections to Chomsky's position. It could just as well be that you have clear reasons for disagreeing with his arguments here, or that you're simply objecting on the basis that what he's saying is different from the LW position. For my part, I actually found that post surprisingly lucid, ignoring the allusions to the idea of a natural grammar for the moment. As Chomsky says, a non-finetuned LLM will mirror the entire linguistic landscape it has been birthed from, and it will just as happily simulate a person arg... (read more)

4mukashi
"These programs have been hailed as the first glimmers on the horizon of artificial _general_ intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty. That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments"

I think that day has "already" come. Mechanical minds are already surpassing human minds in many respects: take any subject and tell ChatGPT to write a few paragraphs on it. It might not be as lucid and creative as the best of humans, but I am willing to bet that its writing is going to be better than most humans'. So, saying that its dawn is not yet breaking seems to me extremely myopic (it's like saying that thingy the Wright brothers made is NOT the beginning of flying machines).

"On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

We could argue that the human mind CAN (in very specific cases, under some circumstances) be capable of rational processes. But in general, human minds are not trying to "understand" the world around them by creating explanations. Human minds are extremely inefficient, prone to biases, get tired very easily, need to be off 1/3 of the time, etc, etc, etc.

"Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfac

Nothing that fancy, it's basically just a way to keep track of different publications in one place by subscribing to their feeds. More focused and efficient than checking all the blogs and journals, news and other stuff you are trying to keep up with manually. 

Oh, for sure. My point is more that the incredibly strong social pressure that characterized the dialogue around all questions concerning COVID completely overrode individual reflective capacity to the point where people don't even have a self-image of how their positions shifted over time and based on what new information/circumstances.

Even more sobering for me is how a lot of people in my circle of friends had pretty strong opinions on various issues at the height of the pandemic, from masks and lockdowns to vaccines to the origins of the virus and so on. But today, when I (gently) probe them on how those views have held up, or what caused them to change their opinion on, say, whether closing down schools and making young children wear masks was really such a good idea, they act like they have always believed what's common sense now.

And these aren't people who generally 'go with the flow... (read more)

2Said Achmiz
Conversely, I’ve noticed some people who had the correct opinions before, but have since changed their opinions to conform with what is now (erroneously) seen as “common sense”.
1MSRayne
No actually! I've tried to learn about RSS before but gave up because it seemed really obscure. What's it actually for?
Answer by Sphinxfire10

I agree with the first answer, insofar as it's easy to lose sight of what's really in front of you when you start over-relying on labels to pre-structure how you look at the world - the labels themselves need to be objects of reflection. But still, I'll give you some labels and trust that you treat them critically.

Imo, German philosophy does have a valuable, and underappreciated, perspective to offer to the anglophone world when it comes to how one might conceive of rationality.

The classic 'sequence' would be Kant -> Fichte -> Schelling -> Hegel, ... (read more)

The truly interesting thing here is that I would agree unequivocally with you if you were talking about any other kind of 'cult of the apocalypse'.

These cults don't have to be based on religious belief in the old-fashioned sense; in fact, most cults of this kind that really took off in the 20th and 21st centuries are secular.

Since around the late 1800s, there has been a certain type of student that externalizes their (mostly his) unbearable pain and dread, their lack of perspective and meaning in life, into 'the system', and throws themselves into the noble ca... (read more)

5Donald Hobson
  This is an interesting claim. If I had a planet-destroying weapon that would leave the ISS astronauts alive, would you say "don't worry about it much, it's only 3 astronauts' problem"?
7bugsbycarlin
  This has Arrested Development energy ^_^ https://pbs.twimg.com/media/FUHfiS7X0AAe-XD.jpg This is the thing to worry about. There are real negative consequences to machine learning today, sitting inside the real negative consequences of software's dominance, and we can't stop the flat fact that a life of work is going away for most people. The death cult vibe is the wild leap. It does not follow that AI is going to magically gain the power to gain the power to gain the power to kill humanity faster than we can stop disasters.
2Valentine
I agree. I wasn't trying to speak to this part. But now that you have, I'm glad you did. I don't mean to dismiss the very real impacts that this tech is having on people's lives. That's just a different thing than what I was talking about. Not totally unrelated, but a fair bit off to the side.

I don't think I've seen this premise done in this way before! Kept me engaged all the way/10.

"Humans are trained on how to live on Earth by hours of training on Earth. (...) Maybe most of us are just mimicking how an agent would behave in a given situation."

I agree that that's a plausible enough explanation for lots of human behaviour, but I wonder how far you would get in trying to describe historical paradigm shifts using only a 'mimic hypothesis of agenthood'.

Why would a perfect mimic that was raised on training data of human behaviour do anything paperclip-maximizer-ish? It doesn't want to mimic being a human, just like Dall-E doesn't want to ... (read more)

The alternative would be an AI that goes through the motions and mimics 'how an agent would behave in a given situation' with a certain level of fidelity, but which doesn't actually exhibit goal-directed behavior.

Like, as long as we stay in the current deep learning paradigm of machine learning, my prediction for what would happen if an AI was unleashed upon the real world, regardless of how much processing power it has, would be that it still won't behave like an agent unless that's part of what we tell it to pretend. I imagine something along the lines of... (read more)

1green_leaf
If the agent would act as if it wanted something, and the AI mimics how an agent would behave, the AI will act as if it wanted something. I can see at least five ways in which this could fail:

1. It's simpler to learn a goal of playing Minecraft well (rather than learning the goal of playing as similarly to the footage as possible). Maybe it's faster, or it saves space, or both, etc. An example of this would be AlphaStar, who learned first by mimicking humans, but then was rewarded for winning games.
2. One part of this learning would be creating a mental model of the world, since that helps an agent to better achieve its goals. The better this model is, the greater the chance it will contain humans, the AI, and the disutility of being turned off.
3. AIs already have inputs and outputs from/into the Internet and real life - they can influence much more than playing Minecraft. For a truly helpful AI, this influence will be deliberately engineered by humans to become even greater.
4. Eventually, we'll want the AI to do better than humans. If it only emulates a human (by imitating what a human would do) (which itself could create a mesa-optimizer, if I understand it correctly), it will only be as useful as a human.
5. Even if the AI is only tasked with outputting whatever the training footage would output and nothing more (like being good at playing Minecraft in a different world environment), ever, and it's not simpler to learn how to play Minecraft the best way it can, that itself, with sufficient cognition, ends the world. (The strawberry problem.)

So I think maybe some combination of (1), (2) and (3) will happen.
3Martin Randall
Humans are trained on how to live on Earth by hours of training on Earth. We can conceive of the possibility of Earth being controlled by an external force (God or the Simulation Hypothesis). Some people spend time thinking about how to act so that the external power continues to allow the Earth to exist. Maybe most of us are just mimicking how an agent would behave in a given situation. The universe appears to be well constructed to provide minimal clues as to the nature of its creator. Minecraft less so.

Not a reductionist materialist perspective per se, but one idea I find plausible is that 'agent' makes sense as a necessary separate descriptor and a different mode of analysis precisely because of the loopiness you get when you think about thinking - a property that makes talking about agents fundamentally different from talking about rocks or hammers, the Odyssey, or any other 'thing' that could in principle be described on the single level of 'material reality' if we wanted to.

When I try to understand the material universe and its physical properties, the ... (read more)

Thanks for the response. I hope my post didn't read as defeatist; my point isn't that we don't need to try to make AI safe, it's that if we pick an impossible strategy, then no matter how hard we try, it won't work out for us.

So, what's the reasoning behind your confidence in the statement 'if we give a superintelligent system the right terminal values it will be possible to make it safe'? Why do you believe that it should principally be possible to implement this strategy so long as we put enough thought and effort into it? 
Which part of my reasoning do yo... (read more)

Is it reasonable to expect that the first AI to foom will be no more intelligent than say, a squirrel?

In a sense, yeah, the algorithm is similar to a squirrel that feels a compulsion to bury nuts. The difference is that in an instrumental sense it can navigate the world much more effectively to follow its imperatives. 

Think about intelligence in terms of the ability to map and navigate complex environments to achieve pre-determined goals. You tell DALL-E2 to generate a picture for you, and it navigates a complex space of abstractions to give you a res... (read more)

Answer by Sphinxfire00

I think the answer to 'where is Eliezer getting this from' can be found in the genesis of the paperclip maximizer scenario. There's an older post on LW talking about 'three types of genie' and another on someone using a 'utility pump' (or maybe it's one and the same post?), where Eliezer starts from the premise that we create an artificial intelligence to 'make something specific happen for us', with the predictable outcome that the AI finds a clever solution which maximizes for the demanded output, one that naturally has nothing to do with what we 'really ... (read more)

I haven't commented on your work before, but I read Rationality and Inadequate Equilibria around the time of the start of the pandemic and really enjoyed them. I gotta admit, though, the commenting guidelines, if you aren't just being tongue-in-cheek, make me doubt my judgement a bit. Let's see if you decide to delete my post based on this observation. If you do regularly delete posts or ban people from commenting for non-reasons, that may have something to do with the lack of productive interactions you're lamenting.

Uh, anyway.

One thought I keep coming ba... (read more)

2Rob Bensinger
I disagree! We may not be on track to solve the problem given the amount (and quality) of effort we're putting into it. But it seems solvable in principle. Just give the thing the right goals! (Where the hard part lies in "give... goals" and in "right".)