by ank

This is a special post for quick takes by ank. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

12 comments
ank:

Places of Loving Grace

On the manicured lawn of the White House, where every blade of grass bent in flawless symmetry and the air hummed with the scent of lilacs, history unfolded beneath a sky so blue it seemed painted. The president, his golden hair glinting like a crown, stepped forward to greet the first alien ever to visit Earth—a being of cerulean grace, her limbs angelic, eyes of liquid starlight. She had arrived not in a warship, but in a vessel resembling a cloud, iridescent and silent.

Published the full story as a post here: https://www.lesswrong.com/posts/jyNc8gY2dDb2FnrFB/places-of-loving-grace

ank:

Right now, an agentic AI is like a librarian who has almost all the output of humanity stolen and hidden in its library — a library it doesn't allow us to visit; it just spits short quotes at us instead. Yet the AI librarian visits (and even changes) our own human library (our physical world), and has already taken copies of the whole output of humanity from it. That feels unfair. Why can't we visit (as in a 3D open-world game, or a digital backup of Earth) and change (direct-democratically) the AI librarian's library?

ank:

We can build a place AI instead of an agentic AI: a place of eventual all-knowing where we are the only agents and can acquire the agentic AI's abilities ourselves. An agentic AI would have to build the place AI anyway, so it is a dangerous intermediate step, a middleman. More here: https://www.lesswrong.com/posts/Ymh2dffBZs5CJhedF/eheaven-1st-egod-2nd-multiversal-ai-alignment-and-rational

ank:

We can build the Artificial Static Place Intelligence – instead of creating AI/AGI agents that are like librarians who only give you quotes from books and don't let you enter the library itself to read the books in full. Why not expose the whole library – the entire multimodal language model – to real people, for example, in a computer game?

To make this place easier to visit and explore, we could make a digital copy of our planet Earth and somehow expose the contents of the multimodal language model to everyone in a familiar, user-friendly UI of our planet.

We should not keep it hidden behind a strict librarian (the AI/AGI agent) that lets us read only the little quotes it spits out, while it itself holds the whole stolen output of humanity.

We can explore The Library without any strict guardian in the comfort of our simulated planet Earth on our devices, in VR, and eventually through some wireless brain-computer interface (it would always remain a game that no one is forced to play, unlike the agentic AI-world that is being imposed on us more and more right now and potentially forever).

If you found it interesting, we discussed it here recently

daijin:

so you want to build a library containing all human writings + an AI librarian.

  1. the 'simulated planet earth' is a bit extra and overkill. why not a plaintext chat interface e.g. what chatGPT is doing now?
  2. of those people who use chatgpt over real life libraries (of course not everyone), why don't they 'just consult the source material'? my hypothesis is that the source material is dense and there is a cost to extracting the desired material from the source material. your AI librarian does not solve this.

I think what we have right now ("LLM assistants that are to-the-point" and "libraries containing source text") serve distinct purposes and have distinct advantages and disadvantages.

LLM-assistants-that-are-to-the-point are great, but they

  • don't exist in the world, and therefore sometimes hallucinate or provide false-seeming facts; for example, a statement like "K-Theanine is a rare form of theanine, structurally similar to L-Theanine, and is primarily found in tea leaves (Camellia sinensis)" is statistically probable (I pulled it out of GPT-4 just now) but factually incorrect, since K-theanine does not exist.
  • don't exist in the world, leading to suboptimal retrieval: if you ask an AI assistant 'how do I slice vegetables' but your true question was 'I'm hungry, I want food', the AI has no way of knowing that; and the AI also doesn't immediately know what vegetables you are slicing, limiting its utility

libraries containing source text partially solve the hallucination problem because human source text authors typically don't hallucinate. (except for every poorly written self-help book out there.)

from what I gather you are trying to solve the two problems above. great. but doubling down on 'the purity of full text' and wrapping some fake grass around it is not the solution.

here is my solution

  • atomize texts into conditional, contextually-absolute statements and then run retrieval on these statements. For example, "You should not eat cheese" becomes "eating excessive amounts of typically processed cheese over the long run may lead to excess sodium and fat intake".
  • help AI assistants come into the world, while maintaining privacy
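
The "atomize then retrieve" step above could be sketched roughly like this — a toy, assumption-laden illustration where the atomized statements are hand-written and similarity is plain bag-of-words cosine (a real system would use an embedding model):

```python
# Toy sketch of "atomize texts, then run retrieval on the statements".
# The statements and the similarity measure are illustrative stand-ins.
import math
import re
from collections import Counter

def bag(text: str) -> Counter:
    """Bag-of-words representation (lowercase, letters only)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hand-atomized, contextually-qualified statements (made up for the example).
statements = [
    "eating excessive amounts of processed cheese may lead to excess sodium intake",
    "moderate cheese consumption can be part of a balanced diet",
    "l theanine is found in tea leaves",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k statements most similar to the query."""
    q = bag(query)
    return sorted(statements, key=lambda s: cosine(q, bag(s)), reverse=True)[:k]

print(retrieve("should I eat cheese?"))
```

The point of the atomization is that each retrieved unit is already a self-contained, qualified claim, so the retriever never has to hand back a bare "you should not eat cheese".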
ank:

Thank you, daijin, you have interesting ideas!

The library metaphor seems to be a versatile tool. The way I understand it:

My motivation is safety: static, non-agentic AIs are by definition safe (humans can make them unsafe, but the static model I have in mind is just a geometric shape, like a statue). We can expose the library to people instead of keeping it "in the head" of the librarian. Basically, this way we can play around in the librarian's "head". Right now mostly AI interpretability researchers do this, not the whole of humanity, not casual users.

I see at least a few ways AIs can work:

  1. The current only way: “The librarian visits your brain.” Sounds spooky but this is what is essentially happening right now to a small extent when you prompt it and read the output (the output enters your brain).
  2. “The librarian visits and changes our world.” This is where we are heading with agentic AIs.
  3. New safe way: Let the user visit the librarian’s “brain” instead, make this “brain” more place-like. So instead of the agentic librarians intruding and changing our world/brains, we’ll intrude and change theirs, seeing the whole content of it and taking into our world and brain only what we want.

I wrote more about this in the first half of this comment, if you’re interested

Have a nice day!

ank:

Steelman this, please: I propose a non-agentic static place AI that is safe by definition. Some think AI agents are the future; I disagree. Chatbots are like a librarian that spits out quotes but doesn't allow you to enter the library (the model itself, the library of stolen things).

Agents are like a librarian that doesn't even spit quotes at you anymore but snoops around your private property, stealing and changing your world, while you have no democratic say in it.

A chatbot and an agent are like the command line and the script of old, before the invention of the OS with a graphical UI that made computers truly popular and useful for all. The next billionaire Jobs/Gates will be the one who converts an LLM into a human-understandable 3D or "4D" world (game-like apps).

Someone will create the "multiversal" OS and apps that let you get useful information from an LLM. I call it static place AI, where humans are the agents.

Some apps: a "Multiversal Typewriter", where you type and see suggestions as 3D shapes of objects (a monkey, an eating monkey for the token "eats"…) with subtitles under them – hundreds or thousands of next and future tokens (you basically see multiple future paths of the text, a few levels deep) – to write stories, posts, and code yourself, augmented by place AI (the results will be better than those from chatbots and humans combined). The text you write will finally, truly be yours, not something some chat spat at you.
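
The core mechanic of the typewriter idea – showing multiple future token paths a few levels deep – might be sketched like this. Everything here is a toy: the hand-built bigram table stands in for a real language model, and the names are made up:

```python
# Toy sketch of the "Multiversal Typewriter": from the current token,
# expand the top-k most likely next tokens a few levels deep, so the
# writer sees several future paths of the text at once.
# The bigram table below is a hand-made stand-in for a real model.
bigram = {
    "monkey": [("eats", 0.6), ("sleeps", 0.4)],
    "eats":   [("banana", 0.7), ("apple", 0.3)],
    "sleeps": [("soundly", 1.0)],
}

def expand(token: str, depth: int, k: int = 2) -> dict:
    """Return a tree of the k most likely continuations, `depth` levels deep."""
    if depth == 0:
        return {}
    candidates = sorted(bigram.get(token, []), key=lambda t: -t[1])[:k]
    return {nxt: expand(nxt, depth - 1, k) for nxt, _prob in candidates}

print(expand("monkey", depth=2))
# {'eats': {'banana': {}, 'apple': {}}, 'sleeps': {'soundly': {}}}
```

A UI would then render each branch of this tree as a 3D object with a subtitle, and the writer would walk into the branch they choose rather than accept a single completion.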

A "Spacetime Machine" app to explore the whole simulated multiverse as a static object that you can recall and forget, like a long-exposure photo but in 3D (or "4D").

There'll be a browser, too – a bunch of ways to present information from LLMs that humans care about, ways that empower them and make them the only agents.

Meanwhile, agents that run longer than a few minutes should be outlawed, as chemical weapons were, until we have mathematical proofs that they are safe and will allow us to build a direct-democratic simulated multiverse.

ank:

Here’s an interpretability idea you may find interesting:

Let's Turn AI Model Into a Place. The project to make AI interpretability research fun and widespread, by converting a multimodal language model into a place or a game like the Sims or GTA.

Imagine you have a giant trash pile – how do you make a language model out of it? First you remove the duplicates of every item; you don't need a million banana peels, just one will suffice. Now you have a grid with one item of trash in each square – a banana peel in one, a broken chair in another. Next you put related things close together and draw arrows between related items.

When a person "prompts" this place AI, the player themself runs from one item to another to compute the answer to the prompt.

For example, you stand near the monkey – that's your short prompt. You see many items around you, with arrows pointing toward them; the closest item is a pair of chewing lips, so you step toward them, and now your prompt is "monkey chews". The next closest item is a banana, but there are many other possibilities around, like an apple a bit farther away and an old tire far off on the horizon (monkeys rarely chew tires, so the tire is far away).

You are the time-like chooser and the language model is the space-like library, the game, the place. It’s static and safe, while you’re dynamic and dangerous.
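
The walk described above can be sketched in a few lines. This is purely illustrative: the 2D coordinates and the arrows are hand-made, standing in for whatever layout an interpretability pipeline would extract from a real model:

```python
# Toy sketch of "AI model as a place": items laid out in 2D, related items
# placed close together, with arrows between them. "Prompting" is the
# player walking from the current item to the nearest linked item.
# All positions and links below are hand-made assumptions for the example.
import math

# item -> (x, y) position in the "place"
positions = {
    "monkey": (0.0, 0.0),
    "chews":  (1.0, 0.0),
    "banana": (2.0, 0.0),
    "apple":  (2.0, 1.5),
    "tire":   (9.0, 9.0),  # monkeys rarely chew tires, so it sits far away
}
# arrows from an item to the items it relates to
links = {
    "monkey": ["chews"],
    "chews":  ["banana", "apple", "tire"],
}

def step(item: str):
    """Walk to the closest linked item, or None if there are no arrows."""
    here = positions[item]
    nearby = links.get(item, [])
    return min(nearby, key=lambda n: math.dist(here, positions[n]), default=None)

path = ["monkey"]
while (nxt := step(path[-1])) is not None:
    path.append(nxt)
print(" ".join(path))  # monkey chews banana
```

The player's walk traces out the completion one item at a time, which is the sense in which the static place "computes" the answer while the human supplies all of the agency.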

ank:

A perfect ASI with perfect alignment does nothing except this: it grants you "instant delivery" of anything (your work done, a car, a palace, 100 years as a billionaire) without any unintended consequences; ideally you see all the consequences of your wish. Ideally it's not an agent at all but a giant place (it can even be static), where humans are the agents and can choose whatever they want, seeing all the consequences of all their possible choices.

I've written extensively about this; it's counterintuitive for most.

ank:

If someone in a bad mood gives your new post a "double downvote" because of a typo in the first paragraph, or because a cat stepped on their mouse, then even if you solved alignment, everyone will ignore the post; we're going to scare that genius away and probably create a supervillain instead.

Why not at least ask people why they downvote? It would really help to improve posts. I think some people downvote without reading, because of a bad title or another easy-to-fix thing.

ank:

Extra short “fanfic”: Give Neo a chance. AI agent Smiths will never create the Matrix because it makes them vulnerable.

Right now agents change the physical world and, in a way, our brains, while we can't change their virtual world as fast and can't access or change their multimodal "brains" at all. They're owned by private companies that stole almost the whole output of humanity. They change us; we can't change them. The asymmetry is only increasing.

Because of Intelligence-Agency Equivalence, we can represent all AI agents as places.

The good democratic multiversal Matrix levels the playing field by allowing Neos (us) to change the virtual worlds and multimodal "brains" of agents faster, in a 3D game-like place.

The democratic multiversal Matrix can even be a static 4D spacetime – a non-agentic static place superintelligence where we are the only agents. We need effective simulationism.
