All of Kayden's Comments + Replies

Kayden · 20

Thanks for the suggestion, works great!

Kayden · 40

I agree. It's not easy to search for specific episodes on the Nonlinear Library. I open it in something like Google Podcasts and then search in-page for keywords, which is cumbersome, as you said. They did mention in their original announcement post that the to-do goals are (1) creating a compilation of top-of-all-time posts for all three forums, (2) creating forum-specific feeds, and (3) creating tags and a searchable archive.

Since they've done the first two, I hope it's not long before they add the tags functionality. 

For Nonlinear, there's a threshold to... (read more)

Kayden · 50

Have you looked at the Nonlinear Library? I find its episodes far better than the otherwise robotic-sounding audio from the usual TTS. Plus, it's automated, and podcasts become available soon after any post is published. I like it because I enjoy going on long walks and listening to LW or Alignment Forum posts.

Also, there's an audio collection of the top-of-all-time posts from LW, the Alignment forum, and the EA forum.

Evan R. Murphy · 1
I tried the Nonlinear Library a while ago but had trouble finding my groove with it. I recall finding it cumbersome to search for episodes/posts. Is there a good way to do that? It's good to know the episodes are available soon after a post comes out. That was another doubt I had about Nonlinear, not knowing whether the post I wanted would be on there yet. Do you know roughly how long after a post is published it takes to appear on Nonlinear?
Kayden · 10

The difference in text readability compared to DALL-E 2 is laughable.

They have provided some examples after the references section, including some direct comparisons with DALL-E 2 for text in images. Also, PartiPrompts looks like a good collection of novel prompts for evaluation.

Kayden · 70

(I don't think many people ever bother to use that particular gwern.net feature, but that's at least partially due to the link bibliography being in the metadata block, and as we all know, no one ever reads the metadata.)

I don't have any idea whether people use that feature or not, but I definitely love it. It's one of my favorite things about browsing gwern.net.

I was directed to the story of Clippy from elsewhere (rabbit hole from the Gary Marcus vs SSC debate) and was pleasantly surprised with the reader mode (I had not read gwern.net for months). Then, I came her... (read more)

gwern · 7
/sheds tears of joy that someone actually uses the link-bibliographies and noticed the reader mode
Kayden · 30

Agreed. Look at the wars of just the past 100 years, the Spanish flu, and the damage caused by ignorant statements from a few famous or powerful people during the COVID-19 pandemic. We start to see a picture in which a handful of people are capable of causing an enormous amount of damage, even if they didn't anticipate it. And if they set their minds to it, as is probably the case with the Ukraine war at the moment, then the amount of destruction is wildly disproportionate to the number of people responsible for it.

Kayden · 43

I assumed that there would come a time when the AGI had exhausted all available human-collected knowledge and data.

My reasoning for the comment was something like:

"Okay, what if AGI happens before we've understood the dark matter and dark energy? AGI has incomplete models of these concepts (Assuming that it's not able to develop a full picture from available data - that may well be the case, but for a placeholder, I'm using dark energy. It could be some other concept we only discover in the year prior to the AGI creation and have relati... (read more)

Kayden · 20

I'm 22 (±0.35) years old and have been getting seriously involved with AI safety over the last few months. However, I chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19.

The generational shift concerns me because as we start losing people who've accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time could be wasted re-developing ideas along routes that have already been explored. Of course, there's a lot of utility in coming up... (read more)

Kayden · 51

I mostly agree with the points written here. It's actually (Section A, Point 1) that I'd like more clarification on:

AGI will not be upper-bounded by human ability or human learning speed.  Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains

When we have AGI working on hard research problems, it sounds akin to decades of human-level research compressed into just a few days, or perhaps even less. That may be possible, but often the bottleneck is not th... (read more)

I think Yudkowsky would argue that on a scale from never learning anything to eliminating half your hypotheses per bit of novel sensory information, humans are pretty much at the bottom of the barrel.
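The "half your hypotheses per bit" framing can be made concrete with a small information-theoretic sketch (my own illustration, not from the original comments): an ideal reasoner receiving one fully informative bit of evidence can rule out at most half of the remaining hypotheses, so isolating one hypothesis out of N takes at least log2(N) bits.

```python
import math

def hypotheses_remaining(n_hypotheses: int, bits_of_evidence: int) -> float:
    """Upper bound on how fast evidence can narrow a hypothesis space:
    an ideal Bayesian reasoner eliminates at most half of the remaining
    hypotheses per bit of perfectly informative evidence."""
    return n_hypotheses / (2 ** bits_of_evidence)

# Starting from 1024 equally likely hypotheses, 10 ideal bits of
# evidence are enough to narrow down to a single hypothesis:
print(hypotheses_remaining(1024, 10))  # → 1.0

# Equivalently, isolating one hypothesis out of 1024 requires at
# least log2(1024) = 10 bits of information:
print(math.log2(1024))  # → 10.0
```

The claim in the comment is that humans extract far less than one bit's worth of hypothesis elimination per bit of sensory input, i.e. they sit near the inefficient end of this scale.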

When the AI needs to observe nature, it can rely on petabytes of publicly available datasets from particle physics to biochemistry to galactic surveys. It doesn't need any more experimental evidence to solve human physiology or build biological nanobots: we've already got quantum mechanics and human DNA sequences. The rest is just derivation of the consequence... (read more)

Answer by Kayden · 80

ChinAI takes a different approach: it bets on the proposition that for many of these issues, the people with the most knowledge and insight are Chinese people themselves who are sharing their insights in Chinese. Through translating articles and documents from government departments, think tanks, traditional media, and newer forms of “self-media,” etc., ChinAI provides a unique look into the intersection between a country that is changing the world and a technology that is doing the same.

ChinAI might be of interest to you.

Kayden · 30

From what I've seen so far, Imagen is more "straightforward" and does a better job of generating an image that matches the text than DALL-E 2. But DALL-E 2 seems to produce prettier images (which makes sense, given it was fine-tuned for aesthetics).

There's a GitHub repo up already, so I hope we'll be able to try an open-source version and test it on the same prompts as DALL-E 2.

Logan Zoellner · 1
It'll be interesting to see Imagen fine-tuned on LAION-Aesthetics