All of AtillaYasar's Comments + Replies

Editability and findability --> higher quality over time

 

Editability

Code being easier to find and easier to edit, for example,

if it's in the same live environment where you're working, or if it's a simple hotkey away, or an alt-tab away to a config file which updates your settings without having to restart,

makes it more likely to be edited, more subject to feedback loop dynamics.
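
For example, a minimal sketch of the "config file which updates your settings without having to restart" pattern (the file name and the reload hook are made up, just to show the shape):

```python
import json, os

CONFIG_PATH = "settings.json"   # hypothetical config file you keep open in another window
_last_mtime = 0.0
settings: dict = {}

def maybe_reload_settings() -> None:
    """Re-read the config only if the file changed since the last check."""
    global _last_mtime, settings
    mtime = os.path.getmtime(CONFIG_PATH)
    if mtime != _last_mtime:
        with open(CONFIG_PATH) as f:
            settings = json.load(f)
        _last_mtime = mtime

# call this once per iteration of the program's main loop,
# so an alt-tab + save takes effect without restarting anything
```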

Same applies to writing, or anything where you have connected objects that influence each other, where the "influencer node" is editable and visible.

configs : program layou

... (read more)

It seems easier to fit an analogy to an image than to generate an image starting with a precise analogy (meaning a detailed prompt).

Maybe because an image is in a higher-dimensional space and you're projecting onto the lower-dimensional space of words.
(take this analogy loosely idk any linear algebra)

Claude: "It's similar to how it's easier to recognize a face than to draw one from memory!"

 

(what led to this:)

I've been trying to create visual analogies of concepts/ideas, in order to compress them, and I noticed how hard it is. It's hard to describe ima... (read more)

On our "interfaces" to the world...

 

Prelude

This is a very exploratory post.

It underlies my thinking in this post: https://www.lesswrong.com/posts/wQ3bBgE8LZEmedFWo/root-node-of-my-posts. It's hard to put into words, but I'll make a first attempt here. I also expressed some of it here, https://www.lesswrong.com/posts/MCBQ5B5TnDn9edAEa/atillayasar-s-shortform?commentId=YNGb5QNYcXzDPMc5z, in the part about deeper mental models / information bottlenecks.

It's also why I've been spending over a year making a GUI that can encompass as much of my activities... (read more)

Just because X describes Y in a high-level abstract way, doesn't mean studying X is the best way of understanding Y.

Often, the best way is to simply study Y, and studying X just makes you sound smarter when talking about Y.

 

pointless abstractions: cybernetics and OODA loop

This is based on my experience trying to learn stuff about cybernetics, in order to understand GUI tool design for personal use, and to understand the feedback loop that roughly looks like, build -> use -> rethink -> let it help you rethink -> rebuild, where me and any LLM in... (read more)

5Viliam
The abstract descriptions are sometimes leaky. Recently I had a short training on "how to solve problems", which assumed discrete steps, something like "define the problem", "collect facts", "find out causes", "propose a fix". Some people seem impressed by seeing four or five bullet points that in theory would allow them to solve any problem, if they only apply them in the right sequence.

To me, it seems like the real world does not follow this "waterfall" model. For example, collecting the facts. Which facts? About what? How much certainty do you need for those facts? If you really tried to collect all facts about a nontrivial system, it would take much more time than you usually have. So you go with some assumptions, like "these are the things that usually break, let's check them first" and "these are the things that are cheap to check, so let's check those, too". And you solve half of the problems like this, quickly. For the other half, you take a step back, check a few more things, etc. And when the things really get complicated, then you start going step by step, verifying one step at a time.

But it would be a mistake to say that you wasted your time by trying to solve the problem before you had all the facts. You did something that works well on average, it's just that the problem we had today was an outlier.

(I am not saying that this is completely useless. Sometimes people make the mistake of going in circles without taking a step back and checking their premises, so it is good to remind them to collect more data. But you need to be adaptive about that, which is something that in my case the high-level description abstracted away.)

Jocko Willink talking about OODA loops, paraphrase

The F-86 wasn't as good at individual actions, but it could transition between them faster than the MiG-15.

Analogous to how end-to-end algorithms, LLM agents, and things optimized for the tech demo are "impressive single actions", but not as good for long-term tasks.

Two tools is a lot more complex than one, not just +1 or *2

When you have two tools, you have to think about their differences, or about specifically what they each let you do, and then pattern-match to your current problem before using one. With one tool, you don't have to understand the shape of its contents at all, because it's the only tool and you already know you want to use it.

 

Concrete example, doing groceries

Let's compare the amount of information you need to remember with 1 vs 2 tools. You want food (task), you're going to get it from a supermar... (read more)

3Dagon
I tend to use N·log(N) (N things, times log(N) overhead) as my initial complexity estimate for coordinating among "things".  It'll, of course, vary widely with specifics, but it's surprising how often it's reasonably useful for thinking about it.
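As a rough sketch of that heuristic (base-2 logs and the floor of 1 are picked arbitrarily, the constant doesn't matter):

```python
import math

def coordination_cost(n: int) -> float:
    """n things, each carrying roughly log(n) coordination overhead (floored at 1)."""
    return max(n * math.log2(n), 1.0)

for n in (1, 2, 5, 10):
    print(n, round(coordination_cost(n), 1))
# 1  -> 1.0
# 2  -> 2.0
# 5  -> 11.6
# 10 -> 33.2   (10x the things, ~33x the coordination overhead)
```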

Twitter doesn't incentivize truth-seeking

Twitter is designed for writing things off the top of your head, and things that others will share or reply to. There are almost no mechanisms to reward good ideas or punish bad ones, none to incentivize consistency in your views, and none for even seeing whether someone updates their beliefs, or whether a comment pointed out that they're wrong.

(The fact that there are comments is really really good, and it's part of what makes Twitter so much better than mainstream media. Community Notes is great to... (read more)

when chatting with an LLM, do you wonder what its purpose is in the responses it gives?  I'm pretty sure it's "predict a plausible next token", but I don't know how I'll know to change my belief.

 

I think "it has a simulated purpose, but whether it has an actual purpose is not relevant for interacting with it".

My intuition is that the token predictor doesn't have a purpose, that it's just answering, "what would this chatbot that I am simulating respond with?"
For the chatbot character (the Simulacrum) it's, "What would a helpful chatbot want in th... (read more)

personality( ground_truth ) --> stated_truth

Or, what someone says is a subset of what they're really thinking, transformed by their personality.
(personality plus intentions, context, emotions, etc.)

(You can tell I'm not a mathematician because I didn't express this in LaTeX xD  but I feel like there's a very elegant linear algebra description where ground truth is a high-dimensional object, and personality transforms it to make it low-dimensional enough to communicate, and changes its "angle" (?) / vibe so it fits their goals better)
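
(A rough numpy sketch of that intuition, with made-up dimensions, just to show the shape of the idea:)

```python
import numpy as np

rng = np.random.default_rng(0)

ground_truth = rng.normal(size=50)         # high-dimensional "what they really think"
personality  = rng.normal(size=(5, 50))    # drops most dimensions on the way out
tilt = np.linalg.qr(rng.normal(size=(5, 5)))[0]  # and rotates what's left toward their goals

stated_truth = tilt @ (personality @ ground_truth)  # low-dimensional "what they actually say"
```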

So, if you know someo... (read more)

3Viliam
Yes, that's how they signal good relations. Also, it keeps the communication channels open, just in case they might want to say something useful in future.
3Dagon
There have been a number of debates (which I can't easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism.  It's both, at different times to different degrees, and often not explicit about what the goals are.

The practical outcome seems spot-on.  With some people you can have the meta-conversation about what they want from an interaction, with most you can't, and you have to make your best guess, which you can refine or change based on their reactions.

Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives?  I'm pretty sure it's "predict a plausible next token", but I don't know how I'll know to change my belief.

Feature-level thinking is natural, yet deeply wrong

In this Shortform I talked about why interactive systems that include AI are more powerful than end-to-end AI-powered algorithms. But it misses something big, a concept that most of my surprising ideas about UI design have actually come from.

The raw data underneath any given UI, and the ability to define functions that "do things at certain points" to that data, are very elementary relative to the richness of that UI and the things that it lets you do.

Something mundane like "article -> list of (index, ... (read more)
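
A guess at the kind of elementary raw data meant here (the (index, paragraph) pairing and the highlight function are my own stand-ins, since the original list is cut off):

```python
# an article becomes a list of (index, paragraph) pairs,
# and a "feature" is just a function applied at certain points of that data

def parse(article: str) -> list[tuple[int, str]]:
    return list(enumerate(article.split("\n\n")))

def highlight(rows: list[tuple[int, str]], indices: set[int]) -> list[str]:
    """Do something at certain points of the data (here: mark paragraphs)."""
    return [f">>> {text}" if i in indices else text for i, text in rows]

rows = parse("intro\n\nmain argument\n\nconclusion")
print("\n\n".join(highlight(rows, {1})))
```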

Cognitive workspaces, not end-to-end algorithms

It's summer 2020 and GPT-3 is superhuman in many ways and probably could take over the world, but only when its alien goo flows through the right substrate, channeled into the right alchemical chambers, then forked and rejoined and reshaped in the right way. But somehow nobody can build this.

 

Robustness via pauseability

Normal agent-systems (like what you design with LangChain) are brittle in that you're never sure if they'll do the job. A more robust system is one where you can stop execut... (read more)
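
A minimal sketch of that pauseable shape (no LangChain, just the control flow; plan_step and execute_step are stand-ins for whatever the agent actually does):

```python
def run_agent(task: str, plan_step, execute_step):
    """Run an agent loop that a human can inspect, redirect, or stop at every step."""
    state = {"task": task, "history": []}
    while True:
        step = plan_step(state)
        if step is None:
            return state                            # the agent thinks it's done
        print(f"about to: {step}")
        answer = input("[enter]=run  e=edit  q=stop > ").strip()
        if answer == "q":
            return state                            # stop execution, keep partial results
        if answer == "e":
            step = input("replacement step > ")     # human overrides the plan
        state["history"].append(execute_step(step, state))
```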

1AtillaYasar
Jocko Willink talking about OODA loops, paraphrase analogous to how end-to-end algorithms, llm agents, and things optimized for the tech demo, are "impressive single actions", but not as good for long term tasks
3AtillaYasar
this shortform: https://www.lesswrong.com/posts/MCBQ5B5TnDn9edAEa/atillayasar-s-shortform?commentId=QCmaJxDHz2fygbLgj was spawned from what was initially a side-note in the above one.

When is philosophy useful?

 

Meta

This post is useful to me because 1) it helped me think more clearly about whether and how exactly philosophy is useful, 2) I can read it later and get those benefits again.

 

The problem

Doing philosophy and reading/writing is often so impractical that people do it just for the sake of doing it. When you write or read a bunch of stuff about X, and get categories and lists and definitions, it feels like you're making progress on X, but are you really?

Joseph Goldstein (meditation teacher) at the beginning of his lecture about M... (read more)

Posts as nodes -- what's beautiful about LessWrong

I'm new to this site as a writer (and new to writing in general), and I read LW's user guide to think more clearly about what kind of articles are expected and about why people are here. Direct quote:

LessWrong is a good place for someone who:

  • values curiosity, learning, self-improvement, figuring out what's actually true (rather than just what you want to be true or just winning arguments)
  • will change their mind or admit they're wrong in response to compelling evidence or argument
  • wants to work collaboratively with
... (read more)

I agree that morality and consensus are in principle not the same -- Nazis or any evil society is an easy counterexample.
(One could argue Nazis did not have the consensus of the entire world, but you can then just imagine a fully evil population.)

But for one, simply rejecting civilization and consensus based on "you have no rigorous definition of them, and also look at Genghis Khan/the Nazis, this proves that governments are evil" is, basically, putting the burden of proof on the side that is arguing for civilization and common-sense morality, which is susp... (read more)

Despite being "into" AI safety for a while, I haven't picked a side. I do believe it's extremely important and deserves more attention, and I believe that AI actually could kill everyone in less than 5 years.

But any effort spent on pinning down one's "p(doom)" is effort not spent on things like: how to actually make AI safe, how AI works, how to approach this problem as a civilization/community, how to think about this problem. And, as was my intention with this article, "how to think about things in general, and how to make philosophical progress".

Memetics and iteration of ideas

 

Meta

Publish or keep as draft?

I kept changing the title and moving the post between draft and public; after reading the description of Shortforms, it seemed super obvious that this is the best format.

Shipping chapters as self-contained posts

Ideally, this will be like an annotated map of many many posts. The more ideas exist as links, the more condensed and holistic this post can be.
It also lets me treat ideas as discrete discussable objects, like this: "I believe X as I argued here <link>, therefore Y <link>, whi... (read more)

But I'm curious what you think is a better word or term for referring to "iteration on ideas", as this is one of the things I'm actively trying to sort out by writing this post.

It's just a pointer to a concept, I'm not relying on the analogy to genes.

So I've been thinking more about this...

I think you completely missed the angle of, civilizational coordination via people updating on the state of the world and on what others are up to.

(To be fair I literally wrote in The Gist, "speed up memetic evolution", lol that's really dumb, also explicitly advocated for "memetic evolution" multiple times throughout)

 

Communication is not exactly "sharing information"

Communication is about making sure you know where you each stand and that you resolve to some equilibrium, not that you tell each other your life ... (read more)

TLDR:
Here's all the ways in which you're right, and thanks for pointing these things out!


At a meta-level, I'm *really* excited by just how much I didn't see your criticism coming. I thought I was thinking carefully, and that iterating on my post with Claude (though it didn't write a single word of it!) was taking out the obvious mistakes, but I missed so much. I have to rethink a lot about my process of writing this.

I strongly agree that I need a *way* more detailed model of what "memetic evolution" looks like, when it's good vs bad, and why, whether there... (read more)

1AtillaYasar
So I've been thinking more about this... I think you completely missed the angle of, civilizational coordination via people updating on the state of the world and on what others are up to. (To be fair I literally wrote in The Gist, "speed up memetic evolution", lol that's really dumb, also explicitly advocated for "memetic evolution" multiple times throughout)

Communication is not exactly "sharing information"

Communication is about making sure you know where you each stand and that you resolve to some equilibrium, not that you tell each other your life story and all the object level knowledge in your head. Isn't this exactly what you're doing when going around telling people "hey guys big labs are literally building gods they don't understand nor control, this is bad and you should know it" ? I should still dig into what that looks like exactly and when it's done well vs badly (for example you don't tell people how exactly OpenAI is building gods, just that they are). I'd argue that if Youtube had a chatbot window embedded in the UI which can talk about contents of a video, this would be a very positive thing, because generally it would increase people's clarity about and ability to parse, contents of videos.

Clarity of ideas is not just "pure memetic evolution"

Think of the type of activity that could be described as "doing good philosophy" and "being a good reader". This process is iterative too: absorb info from world -> share insight/clarified version of info -> get feedback -> iterate again -> affect world state -> repeat. It's still in the class of "unpredictable memetic phenomena", but it's very very different from what happens on the substrate of mindless humans scrolling TikTok, guided by the tentacles of a recommendation algorithm. Even a guy typing something into a comment box, constantly re-reading and re-editing and re-considering, will land on (evolve towards) unpredictable ideas (memes). That's the point!

@Tamsin Leake 

I've written a post about my thoughts related to this, but I haven't gone specifically into whether UI tools help alignment or capabilities more. It kind of touches on "sharing vs keeping secret" in a general way, but not head-on in a way that would let me just write a TLDR here, and not along the threads we started here. Except maybe "broader discussion/sharing/enhanced cognition gives more coordination but risks world-ending discoveries being found before coordination saves us" -- not a direct quote.

But I found it too difficult to think about, an... (read more)

Things I learned/changed my mind about thanks to your reply:

1) Good tools allow experimentation which yields insights that can (unpredictably) lead to big advancements in AI research.
o1 is an example, where basically an insight discovered by someone playing around (Chain Of Thought) made its way into a model's weights 4 (ish?) years later by informing its training.
2) That a capabilities overhang getting resolved can be seen as a type of bad event that is preventable.

 

This is a crux in my opinion:

It is bad for cyborg tools to be broadly available because that

... (read more)
1AtillaYasar
@Tamsin Leake  I've written a post about my thoughts related to this, but I haven't gone specifically into whether UI tools help alignment or capabilities more. It kind of touches on "sharing vs keeping secret" in a general way, but not head-on such that I can just write a tldr here, and not along the threads we started here. Except maybe "broader discussion/sharing/enhanced cognition gives more coordination but risks world-ending discoveries being found before coordination saves us" -- not a direct quote. But I found it too difficult to think about, and it (feeling like I have to reply here first) was blocking me from digging into other subjects and developing my ideas, so I just went on with it. https://www.lesswrong.com/posts/GtZ5NM9nvnddnCGGr/ai-alignment-via-civilizational-cognitive-updates

(edit: thank you for your comment! I genuinely appreciate it.)

"""I think (not sure!) the damage from people/orgs/states going "wow, AI is powerful, I will try to build some" is larger than the upside of people/orgs/states going "wow, AI is powerful, I should be scared of it"."""
^^ Why wouldn't people seeing a cool cyborg tool just lead to more cyborg tools? As opposed to the black boxes that big tech has been building?
I agree that in general, cyborg tools increase hype about the black boxes and will accelerate timelines. But it still reduces discourse lag.... (read more)

6Tamsin Leake
I was making a more general argument that applies mainly to powerful AI but also to all other things that might help one build powerful AI (such as: insights about AI, cyborg tools, etc). These things-that-help have the downside that someone could use them to build powerful but unaligned AI, which is ultimately the thing we want to delay / reduce-the-probability-of. Whether the downside is bad enough that making them public/popular is net bad is the thing that's uncertain, but I lean towards yes, it is net bad.

I believe that:

  • It is bad for cyborg tools to be broadly available because that'll help {people trying to build the kind of AI that'd kill everyone} more than they'll {help people trying to save the world}.
  • It is bad for insights about AI to spread because of the same reason.
  • It is bad for LLM assistants to be broadly available for the same reason.

I don't think I'm particularly relying on that assumption?? I don't understand what sounded like I think this. In any case, I'm not making strict "only X are Y" or "all X are Y" statements; I'm making quantitative "X are disproportionately more Y" statements.

Well, yes. And at that point the world is much more doomed; the world has to be saved ahead of that. To increase the probability that we have time to save the world before people find out, we want to buy time.

I agree it's inevitable, but it can be delayed. Making tools and insights broadly available hastens the bursting of the dam, which is bad; containing them delays the bursting of the dam, which is good.

I laughed, I thought, I learned, I was inspired  :)   fun article!

How to do a meditation practice called "metta", which is usually translated as "loving kindness".

```md
# The main-thing-you-do:
 - kindle emotions of metta, which are in 3 categories:
   + compassion (wishing lack of suffering)
   + wishing well-being  (wanting them to be happy for its own sake)
   + empathetic joy  (feeling their joy as your own)
 - notice those emotions as they arise, and just *watch them* (in a mindfulness-meditation kinda way)
 - repeat

# How to do that:
 - think of someone *for whom i... (read more)

My summary:

What they want:
Build human-like AI (in terms of our minds), as opposed to the black-boxy alien AI that we have today.

Why they want it:
Then our systems, ideas, intuitions, etc., of how to deal with humans and what kind of behaviors to expect, will hold for such AI, and nothing insane and dangerous will happen.
Using that, we can explore and study these systems (and we have a lot to learn from systems at even this level of capability), and then leverage them to solve the harder problems that come with aligning superintelligence.

(in case you want to... (read more)

Is retargetable enough to be deployed to solve many useful problems and not deviate into dangerous behavior, along as it is used by a careful user.

Contains a typo.

along as it is ==> as long as it is

I compressed this article for myself while reading it, by copying bits of text and highlighting parts with colors, and I uploaded screenshots of the result in case it's useful to anyone.

https://imgur.com/a/QJEtrsF 

5Raemon
Neat. That's an interesting technique, thanks for sharing.
Answer by AtillaYasar76

Disagreement.

I disagree with the assumption that AI is "narrow". In a way GPT is more generally intelligent than humans, because of the breadth of knowledge and type of outputs, and it's actually humans who outperform AI (by a lot) at certain narrow tasks.

And assistance can include more than asking a question and receiving an answer. It can be exploratory, with the right interface to a language model.

(Actually my stories are almost always exploratory, where I try random stuff, change the prompt a little, and recursively play around like that, to see what... (read more)

The form at this link <https://docs.google.com/forms/d/e/1FAIpQLSdU5IXFCUlVfwACGKAmoO2DAbh24IQuaRIgd9vgd1X8x5f3EQ/closedform> says "The form Refine Incubator Application is no longer accepting responses.
Try contacting the owner of the form if you think this is a mistake."
so I suggested changing the parts where it says to sign up, to a note about applications not being accepted anymore.

How can I apply?

Unfortunately, applications are closed at the moment.

I’m opening an incubator called Refine for conceptual alignment research in London, which will be hosted by Conjecture. The program is a three-month fully-paid fellowship for helping aspiring independent researchers find, formulate, and get funding for new conceptual alignment research bets, ideas that are promising enough to try out for a few months to see if they have more potential.

(note: applications are currently closed)

1AtillaYasar
The form at this link <https://docs.google.com/forms/d/e/1FAIpQLSdU5IXFCUlVfwACGKAmoO2DAbh24IQuaRIgd9vgd1X8x5f3EQ/closedform> says "The form Refine Incubator Application is no longer accepting responses. Try contacting the owner of the form if you think this is a mistake." so I suggested changing the parts where it says to sign up, to a note about applications not being accepted anymore.