Shmi20

Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.

I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard. But going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.

Shmi20

The argument goes through on the probabilities of each possible world; the limit toward perfection is not singular. Given the 1000:1 reward ratio, for any predictor who is substantially better than chance, one ought to one-box to maximize EV. Anyway, this is an old argument where people rarely manage to convince the other side.
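To make the arithmetic explicit, here is a minimal sketch, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one (roughly where the 1000:1 ratio comes from), of the expected values against a predictor with accuracy p:

```python
# Expected value of one-boxing vs. two-boxing in Newcomb's problem.
# Assumes the standard payoffs: $1,000,000 in the opaque box, $1,000 in the
# transparent one, and a predictor that is correct with probability p.

def expected_values(p, big=1_000_000, small=1_000):
    ev_one_box = p * big                              # with prob p the predictor foresaw one-boxing and filled the opaque box
    ev_two_box = p * small + (1 - p) * (big + small)  # with prob p the predictor foresaw two-boxing and left it empty
    return ev_one_box, ev_two_box

for p in (0.50, 0.51, 0.60, 0.90, 0.99):
    one, two = expected_values(p)
    print(f"p={p:.2f}  EV(one-box)={one:>12,.0f}  EV(two-box)={two:>12,.0f}")

# One-boxing wins once p * big > p * small + (1 - p) * (big + small),
# i.e. p > (big + small) / (2 * big), which is about 0.5005 here.
```

With these payoffs the crossover sits just above 50% accuracy, so any predictor even slightly better than chance already favors one-boxing.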

Shmi0-3

It is clear by now that one of the best uses of LLMs is to learn more about what makes us human by comparing how humans think and how AIs do. LLMs are getting closer to virtual p-zombies, for example, forcing us to revisit that philosophical question. Same with creativity: LLMs are mimicking creativity in some domains, exposing the differences between "true creativity" and "interpolation". You can probably come up with a bunch of other insights about humans that were not possible before LLMs.

My question is: can we use LLMs to model, and thus study, unhealthy human behaviors, such as, say, addiction? Can we get an AI addicted to something and see whether it starts craving it, asking the user for it, or maybe trying to manipulate the user to get it?

Shmi20

That is definitely my observation, as well: "general world understanding but not agency", and yes, limited usefulness, but also... much more useful than gwern or Eliezer expected, no? I could not find a link. 

I guess whether it counts as AGI depends on what one means by "general intelligence". To me it was having a fairly general world model and being able to reason about it. What is your definition? Does "general world understanding" count? Or do you include the agency part in the definition of AGI? Or maybe something else?

Hmm, maybe this is a General Tool, as opposed to a General Intelligence?

Shmi2-2

Given that we basically got AGI (without the creativity of the best humans) that is Karnofsky's Tool AI, very unexpectedly, as you admit, can you look back and see which assumptions were wrong in expecting the tools to agentize on their own, and pretty quickly? Or is everything in that post of Eliezer's still correct, or at least reasonable, and we are simply not at the level where "foom" happens yet?

Come to think of it, I wonder if that post has been revisited somewhere at some point, by Eliezer or others, in light of the current SOTA. Feels like it could be instructive.

Shmi11-4

"I'm not even going to ask how a pouch ends up with voice recognition and natural language understanding when the best Artificial Intelligence programmers can't get the fastest supercomputers to do it after thirty-five years of hard work."

Some HPMoR statements did not age as gracefully as others.

Shmi2-2

That is indeed a bit of a defense. Though I suspect human minds have enough similarities that there are at least a few universal hacks.

Shmi42

Any of those. Could be some kind of intentionality ascribed to AI, could be accidental, could be something else.

Shmi140

So when I think through the pre-mortem of "AI caused human extinction, how did it happen?", one of the more likely scenarios that comes to mind is not nano-this and bio-that, or even "one day we all just fall dead instantly and without warning". Or a scissor statement that causes all-out wars. Or anything else noticeable.

The human mind is infinitely hackable through visual, textual, auditory, and other sensory inputs. Most of us do not appreciate how easily, because being hacked does not feel like it. Instead it feels like your own volition, like you changed your mind based on logic and valid feelings. Reading a good book, listening to a good sermon or a speech, watching a show or a movie, talking to your friends and family: that is how mind-hacking usually happens. Abrahamic religions are a classic example. The Sequences and HPMoR are a local example. It does not work on everyone, but when it does, the subject feels enlightened rather than hacked. If you tell them their mind has been hacked, they will argue with you to the end, because clearly they just used logic to understand and embrace the new ideas.

So my most likely extinction scenario is more like "humans realized that living is not worth it, and just kind of stopped" than anything violent. It could be spread out over years and decades, for example by voluntarily deciding not to have children anymore. None of it would look like it was precipitated by an AI taking over. It does not even have to be a conspiracy by an unaligned SAI. It could just be that the space of new ideas, thanks to the LLMs getting better and better, expands far enough and in new enough directions to include a few lethal memetic viruses like that.

Shmi31

What are the issues that are "difficult" in philosophy, in your opinion? What makes them difficult?

I remember you and others talking about the need to "solve philosophy", but I was never sure what was meant by that.
