Gurkenglas

I operate by Crocker's rules.

I try not to make people regret telling me things. So in particular:
- I expect it to be safe to ask me whether your post would give AI labs dangerous ideas.
- If you worry that I'll produce such posts, I'll try to keep your worry from making them more likely, even if I disagree. Not thinking about them will be easier if you don't spell them out in the initial contact.

Comments

Charbel-Raphaël's Shortform
Gurkenglas · 2d · 96

I infer they didn't get "The Most Forbidden Technique". Try again with e.g. "Never train an AI to hide its thoughts."?

jacob_drori's Shortform
Gurkenglas · 6d · 65

What don't LLMs linearly represent?

Stars are a rounding error
Gurkenglas · 6d · 20

Sure, just put everyone in stasis until the batteries are refilled.

Stars are a rounding error
Gurkenglas · 6d · 110

The mass of a black hole is proportional to its radius, not to its volume like with rocks. So you can't make a two-dimensional mesh of black holes, only a one-dimensional mesh.
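
For concreteness, a back-of-envelope scaling (a sketch, assuming only the Schwarzschild relation):

```latex
% Mass within a horizon grows linearly with its radius:
\[
  r_s = \frac{2GM}{c^2} \quad\Rightarrow\quad M \propto r_s .
\]
% A d-dimensional mesh of holes with spacing a and extent R
% contains (R/a)^d holes, so its total mass scales as R^d.
% Staying outside a common horizon requires M(R) <~ c^2 R / 2G:
\[
  M(R) \propto R^{d} \lesssim \frac{c^2 R}{2G}
  \quad\Rightarrow\quad d \le 1 .
\]
```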

To capture dark energy aka the expansion of the universe, take some masses and let them expand apart, there's your potential energy. If you ever capture all of it, you can go collect the matter that has stopped expanding away from you.

The Sadism Spectrum and How to Access It
Gurkenglas · 10d · 30

> Finally, there is a bit of an antistrategy (exemplified by Athens, Riyadh, and Istanbul), which simply doesn’t work. People punish cooperators at high rates and thereby harm themselves and everyone else. Why would anyone think that’s a good idea?

Good question! I am confused that they didn't ask the participants. This could just be a translation effect. When the control questions show that participants don't understand the setup, they get to try again until they succeed... I would try staring at the raw data to come up with a hypothesis, but I don't see it anywhere... I guess I'll send an author an email.

Markets in Democracy: What happens when you can sell your vote?
Gurkenglas · 11d* · 20

My purpose for the first is to keep a raider from stealing back his entry fee as well.

My purpose for the second is to effectively require consensus for any motions to pass, while leaving a way out of deadlock that is neutral in expectation.

Markets in Democracy: What happens when you can sell your vote?
Gurkenglas · 11d · 20

You could have the proceeds from new governance tokens go to the governors instead of the treasury. You could make any motion that doesn't reach consensus split the DAO: if p% were in favor, the company has a p% chance of being split up among those in favor and a (100-p)% chance of being split up among those opposed.
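
A minimal sketch of that split rule (hypothetical names; assumes one vote per token and that "consensus" means no votes against):

```python
import random

def resolve_motion(votes_for: int, votes_against: int) -> str:
    """Resolve a motion under the probabilistic split rule.

    If the motion reaches consensus it simply passes. Otherwise the DAO
    is split: with probability p = votes_for / total among those in
    favor, and with probability 1 - p among those opposed.
    (A sketch; names and rule details are assumptions, not a spec.)
    """
    total = votes_for + votes_against
    if votes_against == 0:
        return "passed"  # consensus: no split needed
    p = votes_for / total
    winners = "in favor" if random.random() < p else "opposed"
    return f"DAO split among those {winners}"

# Example: 60% in favor -> 60% chance the 'in favor' side gets the company.
print(resolve_motion(votes_for=60, votes_against=40))
```

In expectation each side receives a share of the company proportional to its votes, which is what makes this way out of deadlock neutral.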

Markets in Democracy: What happens when you can sell your vote?
Gurkenglas · 11d · 20

You mean, are 100 founders enough that 60% cannot coordinate a raid? You'll have trouble telling whether 60 of the founders are a rich guy in sixty trenchcoats.

Posts

- Gurkenglas's Shortform · 5 karma · 6y · 30 comments
- I'm offering free math consultations! · 83 karma · 9mo · 7 comments
- A Brief Theology of D&D · 24 karma · 4y · 2 comments
- Would you like me to debug your math? · 65 karma · 4y · 16 comments
- Domain Theory and the Prisoner's Dilemma: FairBot · 22 karma · 4y · 5 comments
- Changing the AI race payoff matrix · 7 karma · 5y · 2 comments
- Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda [Ω] · 68 karma · 5y · 11 comments
- Mapping Out Alignment [Ω] · 43 karma · 5y · 0 comments
- What are some good public contribution opportunities? (100$ bounty) [Q] · 18 karma · 5y · 1 comment
- Implications of GPT-2 · 41 karma · 7y · 28 comments

Wikitag Contributions

- Reflective category theory · 3 years ago · (+100)
- Reflective category theory · 3 years ago · (+193/-111)
- Reflective category theory · 3 years ago · (+11/-13)
- Reflective category theory · 3 years ago · (+344/-78)
- Reflective category theory · 3 years ago · (+5)