Garrett Baker

I have signed no contracts or agreements whose existence I cannot mention.

They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.

Metaph. A. 5, 985 b 27–986 a 2.

Sequences

Isolating Vector Additions

Comments (sorted by newest)
Annapurna's Shortform
Garrett Baker · 14h · 20

In order to convince people and make your comments worthwhile to read, you need a better argument than “it is literally happening” (I don’t think anyone misinterpreted you and thought your comment was a metaphor and this was only figuratively happening). You may think people are foolish for not believing you, but nevertheless, they don’t believe you, and you need to make some argument to convince them.

Annapurna's Shortform
Garrett Baker · 16h · 31

The assumption of your argument (that many can’t afford to support children) is at least debated, and is a crux for many; nor is it so obvious that it can simply be assumed true in this discussion. Since you did not argue for it, and instead made the trivial observation that if most people can’t afford to support children, then most people won’t have children regardless of how high status it is, your argument is worthless.

A case for courage, when speaking of AI danger
Garrett Baker · 20h · 20

> And I do think the failure to engage with such arguments and seriously consider them, in situations like these, is a stain on someone's character! But I think it's the sort of ethical failure which a majority of humans will make by default, rather than something indicative of remarkably bad morality.

This also seems locally invalid. Most people in fact don’t make this ethical failure because they don’t work at AI labs, nor do they dedicate their lives to work which has nearly as much power or influence on others as this.

It does seem consistent (and in agreement with commonsense morality) to say that if you are smart enough to locate the levers of power in the world, and you pursue them, then you have a moral responsibility to make sure you use them right if you do get your hands on them; otherwise we will call you evil and grossly irresponsible.

Raemon's Shortform
Garrett Baker · 2d · 115

Stephen apparently found that the LLMs consistently suggest these people post on LessWrong, so insofar as you are extrapolating by normalizing based on the size of the LessWrong userbase (suggested by "that's just on LessWrong"), that seems probably wrong; the toy calculation below illustrates the selection effect.

Edit: I will say though that I do still agree this is worrying, but my model of the situation is much more along the lines of crazies being made more crazy by the agreement machine[1] than something very mysterious going on.


  1. Contrary to the hope many have had that LLMs would make crazies less crazy due to being more patient & better at arguing than regular humans, ime they seem to have a memorized list-of-things-it’s-bad-to-believe which in new chats they will argue against you on, but for beliefs not on that list... ↩︎
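A hedged toy calculation of the selection effect referenced above; all figures here are invented for illustration and are not from the thread:

```latex
% Hypothetical figures, for illustration only. Suppose
%   N_pop = 10^9 internet users, N_LW = 10^5 LessWrong users,
%   and k = 30 observed cases posting on LessWrong.
% Naive extrapolation assumes cases land on LW in proportion to
% userbase size:
\[
  \hat{k}_{\mathrm{naive}}
    = k \cdot \frac{N_{\mathrm{pop}}}{N_{\mathrm{LW}}}
    = 30 \cdot 10^{4} = 3 \times 10^{5}.
\]
% But if LLMs route a fraction r of all such cases to LessWrong,
% the population total is instead
\[
  \hat{k} = \frac{k}{r},
  \qquad r = 0.5 \;\Rightarrow\; \hat{k} = 60,
\]
% roughly four orders of magnitude below the naive estimate.
```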

Raemon's Shortform
Garrett Baker · 2d · 87

> The Google Docs Therapist concept may be underrated, although it has its own privacy and safety issues - should we just bring back Eliza?

Google Docs is not the only text editor.

Kabir Kumar's Shortform
Garrett Baker · 9d · 2 · -17

He has mentioned the phrase a bunch. I haven’t looked through enough of these links to form an opinion, though.

johnswentworth's Shortform
Garrett Baker · 10d · 30

> wary of some kind of meme poisoning

I can think of reasons why some would be wary, and am wary of something which could be called “meme poisoning” myself when I watch movies, but am curious what kind of meme poisoning you have in mind here.

Roman Malov's Shortform
Garrett Baker · 11d · 20

You can destroy others’ value intentionally, but only in extreme circumstances, where you’re not thinking right or have self-destructive tendencies, can you “intentionally” destroy your own value. And then we hardly describe the choices such people make as “intentional”. E.g. the self-destructive person doesn’t “intend” to lose their friends by not paying back borrowed money, and those gambling at the casino, despite not thinking right, can’t be said to “intend” to lose all their money, though they “know” the chances they’ll succeed.

Roman Malov's Shortform
Garrett Baker · 11d · 20

To complete your argument: ‘and therefore the action has some deadweight loss associated with it, meaning it’s destroying value’.

But note that by the same logic any economic activity destroys value, since you are also not homo economicus when you buy ice cream, and there will likely be smarter things you could do with your money, or better deals available. Therefore buying ice cream, or doing anything else, destroys value.

But that is absurd, and we clearly don’t have so broad a definition of “destroying value”. So your argument proves too much.
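To make the reductio concrete, here is a minimal worked example; the numbers are hypothetical, not from the thread. A suboptimal purchase still generates positive surplus, so failing to maximize is an opportunity cost rather than a deadweight loss:

```latex
% Hypothetical numbers, for illustration only (amsmath assumed).
% You value the ice cream at v = 5 and pay p = 3; the best
% alternative purchase would have yielded surplus s* = 4.
\[
  \underbrace{v - p}_{\text{realized surplus}} = 5 - 3 = 2 > 0,
  \qquad
  \underbrace{s^{\ast} - (v - p)}_{\text{foregone surplus}} = 4 - 2 = 2 .
\]
% The trade still creates value (2 > 0); the gap of 2 is an
% opportunity cost, not a deadweight loss, which would require a
% surplus-creating trade that fails to happen at all.
```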

Roman Malov's Shortform
Garrett Baker · 12d · 50

If you are destroying something you own, you presumably value the destruction of that thing more than any other use you have for it and more than any price you could sell it for on the market; so this creates value, in the sense that there is no deadweight loss to the relevant transactions/actions.
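A small numerical sketch of this point, with hypothetical values not taken from the thread: when the owner’s value for destruction exceeds both the market price and their best alternative use, destruction is the surplus-maximizing allocation.

```latex
% Hypothetical values, for illustration only. You own a painting:
%   d = 100  (value to you of destroying it)
%   p = 50   (best market price)
%   u = 30   (value of your best alternative use)
% Destruction is the surplus-maximizing allocation whenever
\[
  d > \max(p, u),
  \qquad 100 > \max(50, 30) = 50,
\]
% so the good goes to its highest-value use and no mutually
% beneficial trade is blocked: zero deadweight loss.
```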

Posts

1 · D0TheMath's Shortform (5y, 232 comments)
67 · What and Why: Developmental Interpretability of Reinforcement Learning (1y, 4 comments)
51 · On Complexity Science (1y, 19 comments)
52 · So You Created a Sociopath - New Book Announcement! (1y, 3 comments)
75 · Announcing Suffering For Good (1y, 5 comments)
40 · Neuroscience and Alignment (1y, 25 comments)
16 · Epoch wise critical periods, and singular learning theory (2y, 1 comment)
24 · A bet on critical periods in neural networks (2y, 1 comment)
27 · When and why should you use the Kelly criterion? (2y, 25 comments)
26 · Singular learning theory and bridging from ML to brain emulations (2y, 16 comments)
61 · My hopes for alignment: Singular learning theory and whole brain emulation (2y, 5 comments)