LESSWRONG

Paperclip Minimizer
1228940

Posts


Wikitag Contributions

Comments

The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments
Paperclip Minimizer · 6y · 10

How does this interact with time preference? As stated, an elementary consequence of this theorem is that either lending (and pretty much every other capitalist activity) is unprofitable, or arbitrage is possible.
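The tension the comment gestures at can be made concrete in the standard one-period binomial market. A minimal sketch (my own illustration; the function name and the specific rates and price factors are assumptions, not from the post): the theorem says no arbitrage holds exactly when a risk-neutral probability exists, and a positive lending rate is compatible with that as long as the risky asset can still both beat and trail the bond.

```python
# One-period binomial version of the Fundamental Theorem of Asset Pricing
# (illustrative sketch; parameters are made up). A bond pays 1 + r; a stock
# moves up by factor u or down by factor d. No arbitrage holds iff the
# risk-neutral up-probability q = (1 + r - d) / (u - d) lies in (0, 1),
# i.e. iff d < 1 + r < u.

def risk_neutral_prob(r, u, d):
    """Return the risk-neutral up-probability, or None if arbitrage exists."""
    q = (1 + r - d) / (u - d)
    return q if 0 < q < 1 else None

# Lending at r = 5% coexists with no-arbitrage, because the stock can still
# either outperform (u = 1.2) or underperform (d = 0.9) the bond:
print(risk_neutral_prob(0.05, 1.2, 0.9))   # 0.5: no arbitrage, lending profitable

# If even the stock's worst case beats the bond (d = 1.1 > 1.05), shorting
# the bond to buy the stock is a free lunch, and no q exists:
print(risk_neutral_prob(0.05, 1.2, 1.1))   # None: arbitrage
```

So profitable lending per se does not force arbitrage; arbitrage appears only when some asset dominates the riskless rate in every state.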

Reply
Two Small Experiments on GPT-2
Paperclip Minimizer · 6y · 00

That would be a good argument if it were merely a language model, but if it can answer complicated technical questions (and presumably any other question), then it must have the necessary machinery to model the external world, predict what it would do in such and such circumstances, etc.

[This comment is no longer endorsed by its author]
Reply
Two Small Experiments on GPT-2
Paperclip Minimizer · 6y · 10

My point is, if it can answer complicated technical questions, then it is probably a consequentialist that models itself and its environment.

[This comment is no longer endorsed by its author]
Reply
How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative?
Paperclip Minimizer · 6y · 40

But this leads to a moral philosophy question: are time-discounting rates okay, and is your future self actually less important in the moral calculus than your present self?

Reply
Two Small Experiments on GPT-2
Paperclip Minimizer · 6y · -10

If an AI can answer a complicated technical question, then it evidently has the ability to use resources to further its goal of answering said complicated technical question, else it couldn't answer a complicated technical question.

[This comment is no longer endorsed by its author]
Reply
Blackmail
Paperclip Minimizer · 6y · 50

But don't you need a gears-level model of how blackmail is bad to think about how dystopian a hypothetical legal-blackmail society is?

Reply
Two Small Experiments on GPT-2
Paperclip Minimizer · 6y · 40

There was discussion of tips on how to produce good Moloch content in the /r/slatestarcodex subreddit.

Reply
Two Small Experiments on GPT-2
Paperclip Minimizer · 6y · 30

The world being turned into computronium in order to solve the AI alignment problem would certainly be an ironic end to it.

[This comment is no longer endorsed by its author]
Reply
Implications of GPT-2
Paperclip Minimizer · 6y · 20

My point is that it would be a better idea to use the prompt "What follows is a transcript of a conversation between two people:".

Reply
Blackmail
Paperclip Minimizer · 6y · 50

Note the framing. Not “should blackmail be legal?” but rather “why should blackmail be illegal?” Thinking for five seconds (or minutes) about a hypothetical legal-blackmail society should point to obviously dystopian results. This is not subtle. One could write the young adult novel, but what would even be the point.

Of course, that is not an argument. Not evidence.

What? From a consequentialist point of view, of course it is. If a policy (and "make blackmail legal" is a policy) probably has bad consequences, then it is a bad policy.

Reply
No wikitag contributions to display.
3 · Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang) · 7y · 0
3 · rattumb debate: Are cognitive biases a good thing? · 7y · 0
12 · The Craft And The Codex · 7y · 7
15 · Some Remarks on the Nature of Political Conflict · 7y · 6
7 · spaced repetition & Darwin's golden rule · 7y · 3
13 · Loss aversion is not what you think it is · 7y · 14
3 · How many philosophers accept the orthogonality thesis? Evidence from the PhilPapers survey · 7y · 26
5 · Dissolving Scotsmen · 7y · 11