All of metachirality's Comments + Replies

I fear that, while it might be a good idea to discourage LSD, it would make things even worse to discourage transitioning.

Highly Advanced Epistemology 101?

Probably doesn't change much, but janus' Claude-generated comment was the first mention of Claude acting like a base model on LW.

It ought to be a top-level post on the EA forum as well.

2habryka
(Someone is welcome to link post, but indeed I am somewhat hoping to avoid posting over there as much, as I find it reliably stressful in mostly unproductive ways) 

Well that's because it's meant to be quantifying over linear equations. and are not meant to be replaced but and are.

i is often used as an index in math, similar to how it is used as an index in for loops.
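For instance, summing x_1 + x_2 + ... + x_n over the index i in math corresponds to the loop below; a minimal Python sketch (the list xs and its values are just illustrative):

```python
# i indexes the elements, just like the subscript i in summation notation.
xs = [2, 4, 6, 8]
total = 0
for i in range(len(xs)):
    total += xs[i]  # xs[i] plays the role of x_i
print(total)  # 20
```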

What would an event optimized for this sort of thing look like?

3Joseph Miller
Unconferences are a thing for this reason

Why not generate it after it's posted publically?

6Raemon
Reasoning is:

* Currently it takes 40-60 seconds to generate jargon (we've experimented with ways of trimming that down but it's gonna be at least 20 seconds)
* I want authors to actually review the content before it goes live.
* Once authors publish the post, I expect very few of them to go back and edit it more.
* If it happens automagically during draft saving, then by the time you get to "publish post", there's a natural step where you look at the autogenerated jargon, check if it seems reasonable, approve the ones you like and then hit "publish"
* Anything that adds friction to this process I expect to dramatically reduce how often authors bother to engage with it.

Aaaa! I'm used to Arial or whatever Windows' default display font is. The larger stroke weight is rather uncomfortable to me.

4habryka
We previously had Calibri for Windows (indeed a very popular Windows system font). Gill Sans (which we now ship to all operating systems) is a quite popular MacOS and iOS system font. I currently think there are some weird rendering issues on Windows, but if that's fixed, my guess is you would get used to it quickly enough. Gill Sans is not a rare font on the internet.

Yarvin was not part of the CCRU. I think Land and Yarvin only became associates post-CCRU.

1yams
updated, thanks!

Maybe make a post on the EA forum?

It seems like if the SCP hypothesis is true, block characters should cause it to act strangely.

5Lao Mein
It does!

'What is \'████████\'?\n\nThis term comes from the Latin for "to know". It'

'What is \'████████\'?\n\n"████████" is a Latin for "I am not",'

Putting it in the middle of code causes it to sometimes spontaneously switch to an SCP story:

' for i in █████.\n\n"I\'m not a scientist!"\n\n- Dr'

' for i in █████,\n\n[REDACTED]\n\n[REDACTED]\n\n[REDACTED] [REDACTED]\n\n[REDACTED]'
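A minimal sketch of how one might reproduce this kind of probe with Hugging Face transformers. The base GPT-2 checkpoint, prompts, and sampling settings here are assumptions for illustration; the thread doesn't say exactly which model or parameters were used.

```python
# Sample completions for prompts containing block characters (█).
# Model choice and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "What is '████████'?\n\n",
    "for i in █████",  # block characters embedded in code-like text
]

for prompt in prompts:
    samples = generator(
        prompt, max_new_tokens=40, do_sample=True, num_return_sequences=2
    )
    for sample in samples:
        print(repr(sample["generated_text"]))
```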

Does it not have any sort of metadata telling you where it comes from?

My only guess is that some of it is probably metal lyrics.

Is this an LLM generation or part of the training data?

2Lao Mein
This is from OpenWebText, a recreation of GPT2 training data. "@#&" [token 48193] occurred in 25 out of 20610 chunks. 24 of these were profanity censors ("Everyone thinks they’re so f@#&ing cool and serious") and only contained a single instance, while the other was the above text (occurring 3299 times!), which was probably used to make the tokenizer, but removed from the training data. I still don't know what the hell it is. I'll post the full text if anyone is interested.
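A rough sketch of the kind of count described above, assuming the chunks are plain-text files on disk and using tiktoken's GPT-2 encoding; the actual tokenizer, chunking, and file layout used here aren't stated.

```python
# Count which chunks contain token id 48193 (identified above as "@#&")
# and how often it appears in each. The directory layout is hypothetical.
import glob
import tiktoken

enc = tiktoken.get_encoding("gpt2")
target = 48193
print(enc.decode([target]))  # what this token id decodes to

chunks_with_token = 0
for path in glob.glob("openwebtext_chunks/*.txt"):  # hypothetical path
    with open(path, encoding="utf-8") as f:
        # disallowed_special=() so literal "<|endoftext|>" text doesn't raise
        ids = enc.encode(f.read(), disallowed_special=())
    count = ids.count(target)
    if count:
        chunks_with_token += 1
        print(path, count)

print("chunks containing token:", chunks_with_token)
```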

I don't see how 3 follows.

1Hudjefa
Yes, I have the same difficulty. However, sources indicate that Socrates/Plato/others didn't brush it aside as inconsequential. I tried googling, but haven't found anything that could be considered a solution.

That's because we don't have the intelligence to exterminate ants (without causing more problems).

On another note, if an artificial superintelligence needed a human for something, it would probably be able to find someone it could convince on the spot, no pre-built religion needed.

1p4rziv4l
*probably. Maybe it'll start looking for people who are pre-aligned. Religion is also a useful single word, which carries the most meaning per bit to a normie. Maybe just enough to make them take it seriously. I believe there is something to be taken seriously about it.

We have nothing to offer. Anything we can do, an artificial superintelligence can do better, with space and energy and atoms we irritatingly take up.

0p4rziv4l
That's pretty pessimistic. I am looking for things I could do to help Superintelligence. Crucially, we won't understand why they need us to do things they ask us to do. Ants take up a lot of space, yet we don't systematically hunt them down; they are pretty orthogonal to our values. We find cats and dogs friendly and worthwhile. However, wolves and sabertooth tigers are gone.

Why would we want to worship AI?

-2p4rziv4l
Because Superintelligence is more powerful than us. If you can't beat them, join them. Maybe Superintelligence will help us terraform Mars if we also perform some favors. Worshipping is a provocative way of saying aligning ourselves with Superintelligence's goals.

I think the thing that actually makes people more rational is thinking of these concepts as principles you can apply to your own life rather than as abstract notions, which is hard to communicate in a Wikipedia page about Dutch books.

1Closed Limelike Curves
Sure, but you gotta start somewhere, and a Wikipedia article would help.

Emmett Shear might also count, but he might merely be rationalist-adjacent.

IMO trying the problem yourself before researching it makes you appreciate what other people have already done even more. It's pretty easy to fall victim to hindsight bias if you haven't experienced the difficulty of actually getting anywhere.

they figure out planting and then rationally collaborate with each other?

I feel like they would end up converging on the same problems that plague human sociality.

I think asociality might prevent the development of altruistic ethics.

Also it's hard to see how an asocial species would develop civilization.

1[anonymous]
Same, but not sure; I was in the process of adding a comment about that: they figure out planting and then rationally collaborate with each other? These might depend on the 'degree of (a)sociality'. It's hard for me to imagine a fully asocial species, though they might exist and I'd be interested to see examples. ChatGPT says..

This reminds me of Moravec's paradox.

You should read Greg Egan's excellent novel Permutation City.

1VictorLJZ
Will do, have heard great things about it!

I think working on safety roles at capabilities orgs is mostly mutually exclusive with a pause, so I don't think this is that remarkable.

Sorta? Usually the idea is that the presence or absence of hardware determines the anthropic probability of being that conscious process; otherwise you would expect to be some random, arbitrary Boltzmann-brain-like conscious process.

Also this is an immediate corollary of the mathematical universe hypothesis, which says our universe is a mathematical structure.

I feel like you're not giving enough credit to Greg Egan since he came up with all the philosophy himself.

2Yair Halberstadt
Possibly, but some of the missteps just feel too big to ignore. Like what on earth is going on in the second half of the book?

Actually, we should hope that LW is very wrong about AI and alignment is easy.

I remember going to a city and seeing someone on the subway loudly threatening nonexistent people. I wasn't scared, I just felt bad that in all likelihood, the world had failed this person through no fault of their own.

I like this format and framing of "90% of what matters" and someone should try doing it with other subjects.

Decision theory/trade reasons

I think this still means MIRI is correct when it comes to the expected value, though.

4ryan_greenblatt
If you're a longtermist, sure. If you just want to survive, not clearly.

The thing that got me was Pause AI trying to form a coalition with people against AI art. I don't really have anything against the idea of a pause, but Pause AI seems a bit simulacrum level 2 for me.

I don't think I'm really looking for something like that, since it doesn't touch on the perception of music as much as it does the reasons why we have it.

3Ben Pace
I did find it and we sent him an email, hope he reads it and joins :)

Sure, I just prefer a native bookmarking function.

I wish I could bookmark comments/shortform posts.

2faul_sname
Yes, that would be cool. Next to the author name of a post or comment, there's a post-date/time element that looks like "1h 🔗". That is a copyable/bookmarkable link.

You can actually use this to do the sleeping beauty experiment IRL and thereby test SIA vs SSA. Unfortunately you can only get results if you're the one being put under.

This sort of raises the question of why we don't observe other companies assassinating whistleblowers.

2lc
Robin Hanson has apparently asked the same thing. It seems like such a bizarre question to me:

* Most people do not have the constitution or agency for criminal murder
* Most companies do not have secrets large enough that assassinations would reduce the size of their problems on expectation
* Most people who work at large companies don't really give a shit if that company gets fined or into legal trouble, and so they don't have the motivation to personally risk anything organizing murders to prevent lawsuits

I think there should be a way to find the highest rated shortform posts.

9habryka
You can! Just go to the all-posts page, sort by year, and the highest-rated shortform posts for each year will be in the Quick Takes section: 2024, 2023, 2022.

I like to phrase it as "the path to simplicity involves a lot of detours." Yes, Newtonian mechanics doesn't account for the orbit of Mercury but it turned out there was an even simpler, more parsimonious theory, general relativity, waiting for us.

We don't actually know if it's GPT 4.5 for sure. It could be an alternative training run that preceded the current version of ChatGPT 4 or even a different model entirely.

2faul_sname
It might be informative to try to figure out when its knowledge cutoff is (right now I can't do so, as it's at its rate limit).

I think it disambiguates by saying it's specifically a crux as in "double crux"

7Arjun Panickssery
If I understand the term "double crux" correctly, to say that something is a double crux is just to say that it is "crucial to our disagreement."

Copied from a reply on lukehmiles' short form:

The hypothesis I would immediately come up with is that less traditionally masculine AMAB people are inclined towards less physical pursuits.

If it is related to IQ, however, this is less plausible, although perhaps some sort of selection effect is happening here.

The hypothesis I would immediately come up with is that less traditionally masculine AMAB people are inclined towards less physical pursuits.

This feels like something Scott Alexander could've written about, and it has the same revelatory quality.

I assume OP thought that there was some specific place in the training data the LLM was replicating.

5gwern
Indeed, and my point is that that seems entirely probable. He asked for a dictionary definition of words like 'cat' for children, and those absolutely exist online and are easy to find, and I gave an example of one for 'cat'. (And my secondary point was that ironically, you might argue that GPT is generalizing and not memorizing... because its definition is so bad compared to an actual Internet-corpus definition for children, and is bad in that instantly-recognizable ChatGPTese condescending talking-down bureaucrat smarm way. No human would ever define 'cat' for 11yos like that. If it was 'just memorizing', the definitions would be better.)
1Bill Benzon
I was assuming lots of places widely spread. What I was curious about was a specific connection in the available data between the terms I used in my prompts and the levels of language. gwern's comment satisfies that concern.