AtillaYasar

Things I learned/changed my mind about thanks to your reply:

1) Good tools allow experimentation which yields insights that can (unpredictably) lead to big advancements in AI research.
o1 is an example, where an insight discovered by someone playing around (Chain of Thought) made its way into a model's weights roughly 4 years later by informing its training.
2) A capabilities overhang getting resolved can be seen as a type of bad event that is preventable.

 

This is a crux in my opinion:

It is bad for cyborg tools to be broadly available, because they'll help {people trying to build the kind of AI that'd kill everyone} more than they'll help {people trying to save the world}.

I need to look more into the specifics of AI research and of alignment work and what kind of help a powerful UI actually provides, and hopefully write a post some day.
(But my intuition is that the fact that cyborg tools help both capabilities and alignment is bad, and that whether I open-source code or not shouldn't hinge on narrowing down this ratio; the ratio should overwhelmingly favor alignment research.)

 

Cheers.

(edit: thank you for your comment! I genuinely appreciate it.)

"""I think (not sure!) the damage from people/orgs/states going "wow, AI is powerful, I will try to build some" is larger than the upside of people/orgs/states going "wow, AI is powerful, I should be scared of it"."""
^^ Why wouldn't people seeing a cool cyborg tool just lead to more cyborg tools? As opposed to the black boxes that big tech has been building?
I agree that, in general, cyborg tools increase hype about the black boxes and will accelerate timelines. But they still reduce discourse lag. And part of what's bad about accelerating timelines is that you don't have time to talk to people and build institutions; reducing discourse lag would help with that.

"""By not informing the public that AI is indeed powerful, awareness of that fact is disproportionately allocated to people who will choose to think hard about it on their own, and thus that knowledge is more likely to be in reasonabler hands (for example they'd also be more likely to think "hmm maybe I shouldn't build unaligned powerful AI")."""
^^ You make 3 assumptions that I disagree with:
1) Only reasonable people who think hard about AI safety will understand the power of cyborgs
2) You imply a cyborg tool is a "powerful unaligned AI". It's not; it's a tool to improve bandwidth and throughput between any existing AI (which remains untouched by cyborg research) and the human
3) That people won't eventually find out. One obvious way is that a weak superintelligence will just build it for them. (I should've made this explicit: I believe the capabilities overhang is temporary, that inevitably "the dam will burst", and that humanity will then face a level of power it was unaware of and didn't get a chance to coordinate against. And again, why assume it would be in the hands of the good guys?)

I laughed, I thought, I learned, I was inspired :) fun article!

How to do a meditation practice called "metta", which is usually translated as "loving kindness".

```md
# The main-thing-you-do:
 - kindle emotions of metta, which are in 3 categories:
   + compassion (wishing lack of suffering)
   + wishing well-being  (wanting them to be happy for its own sake)
   + empathetic joy  (feeling their joy as your own)
 - notice those emotions as they arise, and just *watch them* (in a mindfulness-meditation kinda way)
 - repeat

# How to do that:
 - think of someone *for whom it is easy to feel such emotions*
   + so a pet might be more suited than a romantic partner, because emotions for the latter are more complex. it's about how readily or easily you can feel such emotions
 - kindle emotions by using these 2 methods  (whatever works):
   + imagine them being happy, not being sick, succeeding in life, etc.
     - can do more esoteric imaginations too, like, having a pink cord of love or something connecting to that person's heart from your own, idk i just made this up just now
   + repeat phrases like, "may you be happy", "may you succeed in life", "i hope you get lots of grandkids who love you", "i hope you never get sick or break your arm", etc.  :)
```

here are guided meditation recordings for doing this practice: https://annakaharris.com/friendly-wishes/ 

they're 5-7 minutes and designed for children, so they're easy to follow, but they work for me too. they're not dumbed-down, which maybe is unlikely to begin with, since emotions are just emotions

My summary:

What they want:
Build human-like AI (in terms of our minds), as opposed to the black-boxy alien AI that we have today.

Why they want it:
Then our systems, ideas, intuitions, etc. about how to deal with humans and what kinds of behavior to expect will hold for such AI, and nothing insane and dangerous will happen.
Using that, we can explore and study these systems (and we have a lot to learn from systems at even this level of capability), and then leverage them to solve the harder problems that come with aligning superintelligence.

(in case you want to copy-paste and share this)
Article by Conjecture, from February 25th, 2023.
Title: `Cognitive Emulation: A Naive AI Safety Proposal`
Link: <https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal>

(Note on this comment: I posted (something like) the above on Discord, and am copying it to here because I think it could be useful. Though I don't know if this kind of non-interactive comment is okay.)

Is retargetable enough to be deployed to solve many useful problems and not deviate into dangerous behavior, along as it is used by a careful user.

Contains a typo.

along as it is ==> as long as it is

I compressed this article for myself while reading it, by copying bits of text and highlighting parts with colors, and I uploaded screenshots of the result in case it's useful to anyone.

https://imgur.com/a/QJEtrsF 

Answer by AtillaYasar

Disagreement.

I disagree with the assumption that AI is "narrow". In a way, GPT is more generally intelligent than humans, because of its breadth of knowledge and the types of outputs it can produce, and it's actually humans who outperform AI (by a lot) at certain narrow tasks.

And assistance can include more than asking a question and receiving an answer. It can be exploratory, with the right interface to a language model.

(Actually my stories are almost always exploratory, where I try random stuff, change the prompt a little, and recursively play around like that, to see what the AI will come up with)

"Specific"

Related to the above: in my opinion, thinking of specific tools is the wrong framing. A gun is not a tool to kill a specific person; it kills whoever you point it at. And a language model completes whichever thought or idea you start, effectively reducing the time you need to think.

So the most specific I can get is that I'd make it help me build tooling (and I already have). And the better the tooling, the more "power" the AI can give you (as George Hotz might put it).

For example I've built a simple webpage with ChatGPT despite knowing almost no javascript. What does this mean? It means my scope as a programmer just changed from Python to every language on earth (sort of), and it's the same for any piece of knowledge, since ChatGPT can explain things pretty well. So I can access any concept or piece of understanding much more quickly, and there are lots of things Google searches simply don't work for.

The form at this link <https://docs.google.com/forms/d/e/1FAIpQLSdU5IXFCUlVfwACGKAmoO2DAbh24IQuaRIgd9vgd1X8x5f3EQ/closedform> says "The form Refine Incubator Application is no longer accepting responses.
Try contacting the owner of the form if you think this is a mistake."
So I suggest changing the parts that say to sign up to a note saying applications are no longer accepted.

How can I apply?

Unfortunately, applications are closed at the moment.
