
This is a special post for quick takes by Neil. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
23 comments
Neil

A functionality I'd like to see on LessWrong: the ability to give quick feedback for a post in the same way you can react to comments (click for image). When you strong-upvote or strong-downvote a post, a little popup menu appears offering you some basic feedback options. The feedback is private and can only be seen by the author. 

I've often found myself drowning in downvotes or upvotes without knowing why. Karma is a one-dimensional measure, and writing public comments is a trivial inconvenience: this is an attempt at a middle ground, and I expect it to make post reception clearer.

See my crude diagrams below.
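To make the shape of the feature concrete, here is a minimal sketch of the flow I'm imagining. All names, options, and signatures are hypothetical, not actual LessWrong internals:

```python
# A minimal sketch of the private-feedback flow described above. Every name,
# option, and signature here is hypothetical, not actual LessWrong internals.
from dataclasses import dataclass
from typing import Optional

# Illustrative options the popup could offer after a strong vote.
FEEDBACK_OPTIONS = ["Well written", "Changed my mind", "Unclear", "Too long"]

@dataclass
class PostFeedback:
    post_id: str
    option: str  # one of FEEDBACK_OPTIONS; no public comment is created

def on_strong_vote(post_id: str, chosen: Optional[str]) -> Optional[PostFeedback]:
    """Called after a strong up/downvote; stores feedback only if the voter picked an option."""
    if chosen not in FEEDBACK_OPTIONS:
        return None  # the popup is skippable, so voting works exactly as before
    return PostFeedback(post_id=post_id, option=chosen)

def can_view(feedback: PostFeedback, viewer_id: str, author_id: str) -> bool:
    """The feedback is private and can only be seen by the post's author."""
    return viewer_id == author_id
```

The whole point is `can_view`: the author gets the signal, and nobody else does.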

I suggested something similar a few months back as a requirement for casting strong votes.

Strong upvote, but I won't tell you why.

Relatedly, in-line private feedback. I saw a really good design for flagging typos here.

Yeah, that's an excellent idea. I often spot typos in posts, but refrain from writing a comment unless I collect like three. Thanks for sharing!

Neil

Bonus song in I have been a good Bing: "Claude's Anguish", a 3-minute death-metal song whose lyrics were written by Claude when prompted with "how does the AI feel?": https://app.suno.ai/song/40fb1218-18fa-434a-a708-1ce1e2051bc2/ (not for the faint of heart)

I hate death metal. This is a great song!

FHI at Oxford
by Nick Bostrom (recently turned into song):

the big creaky wheel
a thousand years to turn

thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent

yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit

like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes

and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue

I'm glad "thought that faster" is the slowest song of the album. Also where's the "Eliezer Yudkowsky" in the "ft. Eliezer Yudkowsky"? I didn't click on it just to see Eliezer's writing turned into song, I came to see Eliezer sing. Missed opportunity. 

Poetry and practicality

I was staring up at the moon a few days ago and thought about how deeply I loved my family, and wished to one day start my own (I'm just over 18 now). It was a nice moment.

Then I whipped out my laptop and felt compelled to get back to work: reading papers for my AI governance course, writing up LW posts, and trading emails with EA France. (These, I believe, are my best shots at increasing everyone's odds of survival.)

It felt almost like sacrilege to wrench myself away from the moon and my wonder. Like I was ruining a moment of poetry and stillwatered peace by slamming against reality and its mundane things again.

But... The reason I wrenched myself away is directly downstream from the spirit that animated me in the first place. Whether I feel the poetry now that I felt then is irrelevant: it's still there, and its value and truth persist. Pulling away from the moon was evidence I cared about my musings enough to act on them.

The poetic is not a separate magisterium from the practical; rather, the practical is a particular facet of the poetic. Feeling "something to protect" in my bones naturally extends to acting on it. In other words, the poetry doesn't just stop. Feel no guilt in pulling away, because you're not really pulling away.

Is Superintelligence by Nick Bostrom outdated?

Quick question, because I don't have enough alignment knowledge to tell: is Superintelligence outdated? It was published nearly 10 years ago and a lot has happened since then. If it is outdated, a new edition of the book might be wise, if only because that would make the book more attractive. Because of the fast-moving nature of the field, I admit to not having read the book: the release date made me hesitate (I figured online resources would be more up-to-date).

Neil

Can we have a black banner for the FHI? It's not a person, but it still seems appropriate imo.

What is magic?

Presumably we call whatever we can't explain "magic" before we understand it, at which point it becomes simply a part of the natural world. This is what many fantasy novels fail to account for: if we actually had magic, we wouldn't call it magic. There are thousands of things in the modern world that would easily meet a 13th-century person's criteria for magic.

So we do have magic; but why doesn't it feel like magic? I think the answer lies in how evenly distributed our magic is. Almost everyone in the world benefits from the magic that is electricity; it's so common that it isn't considered magic, any more than an eye or an opposable thumb is. In fantasy novels, by contrast, magic tends to be concentrated in a single caste of people.

Point being: if everyone were a wizard, we wouldn't call ourselves wizards, because wizards are more magical than the average person by definition. 


Like entropy spreading energy around, technology tends to diffuse until it is more or less evenly distributed, so the worlds of fantasy books are very unlikely to appear in our universe. Magic, as I've loosely defined it here, does not exist, and it is freakishly unlikely to. We can dream, though.

Eliezer Yudkowsky is kind of a god around here, isn't he? 

Would you happen to know what percentage of total upvotes on this website are attributed to his posts? It's impressive how many genuinely good ideas, written in clear form, he had to come up with to reach that level. Cool and everything, but isn't it ultimately proof that LessWrong is still in its fledgling stage (which it may never leave), since it depends so much on the ideas of its founder? I'm not sure how one goes about this, but expanding the LessWrong repertoire in a consequential way seems like a good next step. Perhaps that includes changing the posts in the Library... I don't know.

Anyhow, thanks for this comment; it was great reading!

Eliezer Yudkowsky is kind of a god around here, isn't he?

The Creator God, in fact. LessWrong was founded by him.

All of the Sequences are worth reading.

Right, but if LessWrong is to become larger, it might be a good idea to stop leaving his posts as the default (in the Library, the recommendations on the front page, etc.). I don't doubt that his writing is worth reading, and I'll get to it; I'm just offering an outsider's view on this whole situation, which seems a little stagnant to me in a way.

That last reply of mine, a reply to a reply to a Shortform post I made, can be found after just a little scrolling on the main page of LessWrong. I should be a nobody to the algorithm, yet I'm not. My only point is that LessWrong seems big because it has a lot of posts, but it isn't growing as much as it should be. That may be because the site is too focused on a single set of ideas, and that shoos some people away. I think it's far from being an echo chamber, but it's not as lively as I think it should be.

As I've noted, though, I'm a humble outsider and have no idea what I'm talking about. I'm only writing this because outsider advice is often valuable, since there's no chance of it being trapped in echo-chamber thinking.

I think there is another reason why it doesn't feel like magic, and to find it, we have to look at the element that changed the least: the human body and brain weren't affected by the industrial revolution, and humans are the most important part of any societal shift.

[This comment is no longer endorsed by its author]

What do you mean? What I read is: magic is subjective, and since the human brain hasn't changed in 200,000 years, nothing will ever feel like magic. I'm not sure that's what you meant, though; could you explain?

I'll admit, I didn't actually think all that well here.

I'm still new to this, but I can say I love a culture where there is a button for retracting statements without deleting them. I will most likely have to use it a lot as I progress around here.

I'm working on a non-trivial.org project meant to assess the risk of genome sequences by comparing them to a public list of the most dangerous pathogens we know of. This would be used to assess the risk from both experimental results in e.g. BSL-4 labs and the output of e.g. protein-folding models. The benchmarking would be carried out by an in-house ML model of ours. Two questions for LessWrong (a toy sketch of the comparison step follows the questions):

1. Is there any other project of this kind out there? Do BSL-4 labs/AlphaFold already have models for this? 

2. "Training a model on the most dangerous pathogens in existence" sounds like an idea that could backfire horribly. Can it backfire horribly? 

We can't negotiate with something smarter than us

Superintelligence will outsmart us or it isn't superintelligence. As such, the kind of AI that would truly pose a threat to us is also an AI we cannot negotiate with.

No matter what arguments we make, a superintelligence will have figured them out first. We're like ants trying to appeal to a human: the human can understand pheromones, but we can't understand human language. It's entirely up to the human, and the human's own reasoning, whether we get squashed or not.

Worth reminding yourself of this from time to time, even if it's obvious. 

Counterpoints: 

  1. It may not take a true superintelligence to kill us all, meaning we could perhaps negotiate with a pre-AGI machine
  2. The "we cannot negotiate" part does not take into account the fact that we are the Simulators and thus technically have ultimate power over it