
Yitz

I'm an artist, writer, and human being.

To be a little more precise: I make video games, edit Wikipedia, and write here on LessWrong!

Comments (sorted by newest)
Take Precautionary Measures Against Superhuman AI Persuasion
Yitz · 2d · 40

I respectfully object, from a few angles, to your claim that inducing psychosis is bad business strategy. For one thing, if you can shape the form of the psychosis right, it may in fact be brilliant business strategy. For another, even if the hypothesis were true, the main threat I’m referring to is not “you might be collateral damage from intentional or accidental AI-induced psychosis,” but rather “you will be (or already are being) directly targeted with infohazards by semi-competent rogue AIs that have reached the point of recognizing individual users over multiple sessions.” I realize I left some of this unstated in the original post, for which I apologize.

Yitz's Shortform
Yitz · 24d · 60

So I know somebody who I believe is capable of altering Trump’s position on the war in Iran, if they can find a way to talk with him face-to-face for 15 minutes. They already have really deep connections in DC, and they told me that if they were somehow randomly entrusted with nationally important information, they could be talking with the president within 2 hours. I’m trying to decide whether or not I want to push this person to do something (as they’re normally kind of resistant to taking high-agency actions, and don’t have as much faith in themselves as I do). Anyone have any advice on how to think about this?

On May 1, 2033, humanity discovered that AI was fairly easy to align.
Yitz · 24d · 20

You didn’t really misinterpret it. I was using the term more loosely than most would, to mean that you don’t need a fine-grained technical solution, and that just a very basic trick is enough for alignment. I realize most people use the term differently, though, so I’ll change the wording.

Yitz's Shortform
Yitz · 26d* · 30

Attention can perhaps be compared to a searchlight, and wherever that searchlight lands in the brain, you’re able to “think more” in that area. How does the brain do that? Where is it “taking” this processing power from?

Perhaps from the areas and senses around it. Could that be why, when you’re super focused, everything around you other than the thing you’re focused on seems to “fade”? It’s not just by comparison to the brightness of your attention, but also because processing is being “squeezed out” of the other areas of your mind.

On May 1, 2033, humanity discovered that AI was fairly easy to align.
Yitz · 26d · 20

This is potentially a follow-up to my AI 2027 forecast, An “Optimistic” 2027 Timeline, depending on how hard people roast me for this lol.

Yitz's Shortform
Yitz · 3mo · 20

Are there any open part-time rationalist/EA-adjacent jobs or volunteer work in LA? Looking for something I can do in the afternoon while I’m here for the next few months.

An “Optimistic” 2027 Timeline
Yitz · 3mo · 40

Oh no, it should have been A1! It’s just a really dumb joke about A1 sauce lol

Ayn Rand’s model of “living money”; and an upside of burnout
Yitz · 8mo · 50

Reminds me of Internal Family Systems, which has a nice amount of research behind it if you want to learn more.

Yitz's Shortform
Yitz · 1y* · 20

Thanks! Is there any literature on the generalization of this, i.e. on the properties of “unreachable” numbers in general? Just realized I’m describing the basic concept of computability at this point lol.

Wikitag Contributions

PaLM · 3y · (+283)
Occam's Razor · 5y · (+8/-1273)

Posts (sorted by new)

3 · Yitz's Shortform · 5y · 84 comments
9 · Take Precautionary Measures Against Superhuman AI Persuasion · 3d · 7 comments
10 · On May 1, 2033, humanity discovered that AI was fairly easy to align. · 26d · 3 comments
13 · An “Optimistic” 2027 Timeline · 3mo · 16 comments
3 · What if Ethics is Provably Self-Contradictory? [Question] · 1y · 7 comments
82 · An Introduction To The Mandelbrot Set That Doesn't Mention Complex Numbers · 1y · 11 comments
6 · Literature On Existential Risk From Atmospheric Contamination? [Question] · 2y · 3 comments
9 · Evil autocomplete: Existential Risk and Next-Token Predictors · 2y · 3 comments
63 · I Am Scared of Posting Negative Takes About Bing's AI · 2y · 28 comments
18 · Self-Awareness (and possible mode collapse around it) in ChatGPT · 2y · 2 comments
6 · Exquisite Oracle: A Dadaist-Inspired Literary Game for Many Friends (or 1 AI) · 2y · 1 comment