All of Kronopath's Comments + Replies

The kind of employers that would not be okay with you streaming your work on Twitch are usually also the kind of employers that would not be okay with you hiring randos to sit behind you staring at confidential info on your screen during the work day.

This is really only suitable for people who are entrepreneurs or small business owners with fewer concerns over confidentiality, or who have enough rapport with their employer for them to be okay with this.

I have to admit, I rolled my eyes when I saw that you worked in financial risk management. Not because what you did was stupid—far from it—but because of course this is the kind of cultural environment in which this would work.

If you did this in a job that wasn’t heavily invested in a culture of quantitative risk management, it would likely cause a permanent loss of trust that would be retaliated against in subtle ways. You’d get a reputation as “the guy who plays nasty/tricky games when he doesn’t get his way,” which would make it harder to collaborate with people.

So godspeed, glad it worked for you, but beware applying this in other circumstances and cultures.

Sure, I agree GPT-3 isn't that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: "If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button."

Equally, one could draw a lesson from the true ending: that you do not run the generated code.

Meanwhile, bored tech industry hackers:

“Show HN: Interact with the terminal in plain English using GPT-3”

https://news.ycombinator.com/item?id=34547015
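
To give a sense of how little glue code such a tool actually needs, here is a minimal sketch (not taken from the linked Show HN project) that assumes the older `openai` Python client and an `OPENAI_API_KEY` set in the environment:

```python
# Sketch only: plain English in, shell command out, one keypress to run it.
# Assumes the legacy openai (<1.0) Python client and an API key in the environment.
import subprocess
import openai  # pip install "openai<1.0"

def english_to_shell(request: str) -> str:
    """Ask the model to translate a plain-English request into a single bash command."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Translate this request into a single bash command:\n{request}\nCommand:",
        max_tokens=100,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    cmd = english_to_shell("list the five largest files in this directory")
    print(f"Model suggests: {cmd}")
    # The "press of a button": one keystroke and the generated code runs.
    if input("Run it? [y/N] ") == "y":
        subprocess.run(cmd, shell=True)
```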

2Veedrac
I don't particularly care that people are running GPT-3 code (except inasmuch as it makes ML more profitable), and don't think it helps if we lose focus on what the actual ground-truth concerns are. I want to encourage analysis that gets at deeper similarities than this.

GPT-3 code does not pose an existential risk, and members of the public couldn't stop it being an existential risk if it was by not using it to help run shell commands anyway, because, if nothing else, GPT-3, ChatGPT and Codex are all public. Beyond the fact GPT-3 is specifically not risky in this regard, it'd be a shame if people primarily took away ‘don't run code from neural networks’, rather than something more sensible like ‘the more powerful models get, the more relevant their nth-order consequences become’.

The model in the story used code output because it's an especially convenient tool lying around, but it didn't have to, because there are lots of ways text can influence the world. Code is just particularly quick, accessible, precise, and predictable.

Do we have to convince Yann LeCun? Or do we have to convince governments and the public?

(Though I agree that the word "All" is doing a lot of work in that sentence, and that convincing people of this may be hard. But possibly easier than actually solving the alignment problem?)

A thought: could we already have a case study ready for us?

Governments around the world are talking about regulating tech platforms. Arguably, Facebook's News Feed is an AI system, and the current narrative is that it's causing mass societal harm because it optimizes for clicks/likes/time on Facebook/whatever rather than for human values.

See also:

... (read more)
1[anonymous]
That's how you turn a technical field into a cesspit of social commentary and political virtue signaling. Think less AGI-Overwatch committee or GPU-export ban and more "Big business bad!", "AI racist!", "Human greed the real problem!"
Razied100

All we'd have to do is to convince people that this is actually an AI alignment problem.

That's gonna be really hard: people like Yann LeCun (head of Facebook AI) see these problems as evidence that alignment is actually easy. "See, there was a problem with the algorithm, we noticed it and we fixed it, what are you so worried about? This is just a normal engineering problem to be solved with normal engineering means." Convincing them that this is actually an early manifestation of a fundamental difficulty that becomes deadly at high capability levels will be really hard.

On Wednesday, the lead scientist walks into the lab to discover that the AI has managed to replicate itself several times over, buttons included. The AIs are arranged in pairs, such that each has its robot hand hovering over the button of its partner.

"The AI wasn't supposed to clone itself!" thinks the scientist. "This is bad, I'd better press the stop button on all of these right away!"

At this moment, the robot arms start moving like a swarm of bees, pounding the buttons over and over. If you looked at the network traffic between each computer, you'd see ... (read more)

1tailcalled
I disagree with this, since B isn't "amount of buttons pressed and AIs shut down", but instead "this AI's button got pressed and this AI shut down". There are, as I mentioned, some problems with this utility function too, but it's really supposed to be a stand-in for a more principled impact measure.

Are we sure that OpenAI still believes in "open AI" for its larger, riskier projects? Their recent actions suggest they're more cautious about sharing their AI's source code, and projects like GPT-3 are being "released" via API access only so far. See also this news article that criticizes OpenAI for moving away from its original mission of openness (which it frames as a bad thing).

In fact, you could maybe argue that the availability of OpenAI's APIs acts as a sort of pressure release valve: it allows some people to use their APIs instead of investing in d... (read more)

This is a fair criticism of my criticism.

9Kenny
I'm glad you thought so! Your criticism is very fair too. And I'm generally curious about why people 'bounce off' the "rationalist community".

I'm also mostly a lurker, particularly IRL. And I think a big part of that is the kind of thing you described. But I do want to do better at being open to really trying weird ideas (and in real life too!). (I'm pretty weird to my acquaintances, friends, and family already.)

I've already found this 'trick' pretty useful. I haven't had anyone offer a (radically) honest answer to my asking them for a cheerful price. I suspect that the people I've asked don't fully understand that the question is sincere and shouldn't be answered in the context of 'standard' social norms. And that's too bad! I've asked because I'm serious and sincere about wanting to remove any obstacles (or as many as possible) to us making a particular exchange.
Kronopath*442

To me, this post may very well be a good example of some of the things that make me uncomfortable about the rationalist community, and why I have so far chosen to engage with it very minimally and mostly stay a lurker. At the risk of making a fool of myself, especially since it’s late and I didn’t read the whole post thoroughly (partly because you gave me an excuse not to halfway through), I’m going to try to explain why.

I don’t charge friends for favours, nor would I accept payment if offered. I’m not all that uncomfortable with the idea of “social capital”... (read more)

Kenny*120

I think it's important to keep in mind a few things about this (or any other 'weird' social rule/trick/technology/norm/etc.):

  1. It doesn't have to be used all the time, let alone frequently, often, or even at all!
  2. It doesn't have to replace any other form of trading favors (i.e. exchanging social/friendship capital)!

It seems like you're imagining a world, or even just a single relationship/friendship, where each person is frequently, or always, using cheerful pricing instead of all of the existing social/friendship favor trading forms.

But I'd be surprised... (read more)

I had to double-check the date on this. This was written in 2017? It feels more appropriate to 2020, where both the literal and metaphorical fires have gotten extremely out of hand.

6Raemon
It becomes relevant every year around this time. :) :/ :O