All of dirk's Comments + Replies

Sorry, I meant changing only the headings you didn't want (but that won't work for text that's already paragraph-style, so I suppose it wouldn't fix the bold issue in any case; I apologize for mixing things up!).

Testing it out in a draft, it seems like having paragraph breaks before and after a single line of bold text might be what triggers index inclusion? In which case you can likely remove the offending entries by replacing the preceding or subsequent paragraph break with a shift-enter (still hacky, but at least addressing the right problem this time XD).

A relatively easy solution (which would, unfortunately, mess with your formatting; not sure if there's a better one that doesn't do that) might be to convert everything you don't want in there to paragraph style instead of heading 1/2/3.

Screwtape
Wouldn't that get rid of all of the table of contents? Ideally I'd have a hierarchy of headings. I think what's happening is it picks up some (but not all) lines that are entirely bold, and treats those as a sort of Heading 4.

I'm not sure the deletions are a learnt behavior—base models, or at least llama 405b in particular, do this too IME (as does the fine-tuned 8b version).

And I think you believe others to experience this extra thing because you have failed to understand what they're talking about when they discuss qualia.

Ziz believes her entire hemisphere theory is an infohazard (IIRC she believes it was partially responsible for Pasek's death), so terms pertaining to it are separate from the rest of her glossary.

Milan W
Oh. That's nice of her.

Neither of them is exactly what you're looking for, but you might be interested in lojban, which aims to be syntactically unambiguous, and Ithkuil, which aims to be extremely information-dense as well as to reduce ambiguity. With regards to logical languages (ones which, like lojban, aim for each statement to have a single possible interpretation), I also found Toaq and Eberban just now while looking up lojban, though these have fewer speakers.

For people interested in college credit, https://modernstates.org/ offers free online courses on gen-ed material which, when passed, give you a fee waiver for CLEP testing in the relevant subject; many colleges, in turn, will accept CLEP tests as transfer credit. I haven't actually taken any tests through them (you need a Windows computer or a nearby test center), so I can't attest to the ease of that process, but it might interest others nonetheless.

Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground.

I think the key here is that authors don't feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they're going to do a lot of work to someone else's specifications. In scenarios where they're paid for their services, writers often do wri... (read more)

d. Scratching an itch.

You can try it here, although the website warns that it doesn't work for everyone, and I personally couldn't for the life of me see any movement.

Thanks for the link! I can only see two dot-positions, but if I turn the inter-dot speed up and randomize the direction it feels as though the red dot is moving toward the blue dot (which in turn feels as though it's continuing in the same direction to a lesser extent). It almost feels like seeing illusory contours but for motion; fascinating experience!

Wikipedia also provides, in the first paragraph of the article you quoted, a quite straightforward definition:

In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; sg.: quale /-li, -leɪ/) are defined as instances of subjective, conscious experience....

Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky.

I am skeptical that you lack the cognitive architecture to experience these things, so I think your claim is false.

Carl Feynman
Well, I’m glad you’ve settled the nature of qualia.  There’s a discussion downthread, between TAG and Signer, which contains several thousand words of philosophical discussion of qualia.  What a pity they didn’t think to look in Wikipedia, which settles the question! Seriously, I definitely have sensations.  I just think some people experience an extra thing on top of sensations, which they think is an indissoluble part of cognition, and which causes them to find some things intuitive that I find incomprehensible.

Those sensory impressions are your qualia. I think the issue is that you've somehow misunderstood the word.

Carl Feynman
Well, let me quote Wikipedia: […] If it was that easy to understand, we wouldn't be here arguing about it. My claim is that arguments about qualia are (partially) caused by people actually having different cognitive mechanisms that produce different intuitions about how experience works.

I don't know if this is it, but it could be that it's comparing to LLM outputs within its training data? That's just a guess, though.

eggsyntax
Intuitively I would expect that such LLM outputs (especially ones labeled as LLM outputs, although to some extent models can recognize their own output) are too few to provide a comprehensive picture of baseline behavior. On the other hand maybe it's comparing to documents in general and recognizing that it's rare for them to follow an acrostic form? That seems at least somewhat plausible, although maybe less plausible for the experiments in 'Language Models Can Articulate Their Implicit Goals', since those behaviors are less unusual than producing acrostics -- eg the training data presumably contains a range of risk-seeking vs risk-averse behaviors.

While it can absolutely be nudged into all the same behaviors via API, people investigating Claude's opinions of its consciousness or lack thereof via claude.ai should be aware that the system prompt explicitly tells it to engage with questions about its preferences or experiences as if with hypotheticals, and not to bother clarifying that it's an AI. Its responses are still pretty similar without that, but it's noticeably more "cautious" about its claims.

Here's an example (note that I had to try a couple different questions to get one where the difference... (read more)

With regards to increasing one's happiness set-point, you might enjoy Alicorn's Ureshiku Naritai, which is about her process of doing precisely that.

Language can only ever approximate reality and that's Fine Actually. The point of maps is to have a simplified representation of the territory you can use for navigation (or avoiding water mains as you dig, or assessing potential weather conditions, or deciding which apartment to rent—and maps for different purposes include or leave out different features of the territory depending on which matter to the task at hand); including all the detail would mean the details that actually matter for our goals are lost in the noise (not to mention requiring, in the ... (read more)

Alexander contrasts the imagined consequences of the expanded definition of "lying" becoming more widely accepted, to a world that uses the restricted definition:

...

But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).

I disagree.

Appeals to consequences are extremely valid when it comes to which things are or are not good to... (read more)

If you have evidence her communication strategy works, you are of course welcome to provide it. (Also, "using whatever communication strategy actually works" is not necessarily a good thing to do! Lying, for example, works very well on most people, and yet it would be bad to promote AI safety with a campaign of lies).

notfnofn
Would bet on this sort of strategy working; hard agree that ends don't justify the means and see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But above examples are pretty tame.)

I also dislike many of the posts you included here, but I feel like this is perhaps unfairly harsh on some of the matters that come down to subjective taste; while it's perfectly reasonable to find a post cringe or unfunny for your own part, not everyone will necessarily agree, and the opinions of those who enjoy this sort of content aren't incorrect per se.

As a note, since it seems like you're pretty frustrated with how many of her posts you're seeing, blocking her might be a helpful intervention; Reddit's help page says blocked users' posts are hidden from your feeds.

Huh—that sounds fascinatingly akin to this description of how to induce first jhana I read the other day.

You have misunderstood a standard figure of speech. Here is the definition he was using: https://www.ldoceonline.com/dictionary/to-be-fair (see also https://dictionary.cambridge.org/us/dictionary/english/to-be-fair, which doesn't explicitly mention that it's typically used to offset criticisms but otherwise defines it more thoroughly).

Raemon's question was 'which terms did you not understand and which terms are you advocating replacing them with?'
As far as I can see, you did not share that information anywhere in the comment chain (except with the up goer five example, which already included a linked explanation), so it's not really possible for interlocutors to explain or replace whichever terms confused you.

Answer by dirk

A fourth or fifth possibility: they don't actually alieve that the singularity is coming.

There's https://www.mikescher.com/blog/29/Project_Lawful_ebook (which includes versions both with and without the pictures, so take your pick; the pictures are used in-story sometimes but it's rare enough you can IMO skip them without much issue, if you'd rather).

Joseph Miller
Wow that's great, thanks. @L Rudolf L you should link this in this post.

I think "intellectual narcissism" describes you better than I, given how convinced you are that anyone who disagrees with you must have something wrong with them.

As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don't.

Your difficulty parsing lengthy texts is unfortunate, but I don't really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communicatio... (read more)

You're probably thinking of the Russian spies analogy, under section 2 in this (archived) LiveJournal post.

Alice Blair
Ah that's interesting, thanks for finding that. I've never read that before, so that wasn't directly where I was drawing any of my ideas from, but maybe the content from the post made it somewhere else that I did read. I feel like that post is mostly missing the point about flirting, but I agree that it's descriptively outlining the same thing as I am.
noggin-scratcher
That definitely looks like the one. Appears I'd forgotten some of the context/details though.

I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.

I don't believe that you've seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.


Good to know, thank you. As you deliberately included LLM-isms I think this is a case of being successfully tricked rather than overeager to assume things are LLM-written, so I don't think I've significantly erred here; I have learned one (1) additional way people are interested in lying to me and need change no further opinions.

When I've tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which a... (read more)


Reading the Semianalysis post, it kind of sounds like it's just their opinion that that's what Anthropic did.

They say "Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)"—if they have a source for this, why don't they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they're trying to convince people with force of rhetoric, which typically indicates a lack of evidence.

The previous is the ... (read more)

When I went to the page just now there was a section at the top with an option to download it; here's the direct PDF link.

Carl Feynman
Thanks.

Normal statements actually can't be accepted credulously if you exercise your reason instead of choosing to believe everything you hear (edit: some people lack this capacity due to tragic psychological issues, such as having an extremely weak sense of self, hence my reference to same); so too with statements heard on psychedelics, and it's not even appreciably harder.

dirk

Disagree, if you have a strong sense of self statements you hear while on psychedelics are just like normal statements.

sapphire
I honestly have no idea what you mean. I am not even sure why "(self) statements you hear while on psychedelics are just like normal statements" would be a counterpoint to someone being in a very credulous state. Normal statements can also be accepted credulously. Perhaps you are right, but the sense of self required is rare. Practically, most people are empirically credulous on psychedelics.

Indeed, people with congenital insensitivity to pain don't feel pain upon touching hot stoves (or in any other circumstance), and they're at serious risk of infected injuries and early death because of it.

I think the ego is, essentially, the social model of the self. One's sense of identity is attached to it (effectively rendering it also the Cartesian homunculus), which is why ego death feels so scary to people, but (in most cases; I further theorize that people who developed their self-conceptions top-down, being likelier to have formed a self-model at odds with reality, are worse-affected here) the traits which make up the self-model's personality aren't stored in the model; it's merely a lossy description thereof and will rearise with approximately the same traits if disrupted.

[anonymous]
The 3 most important paragraphs, extracted to save readers the trouble of clicking on a link:

I haven't tried harmful outputs, but FWIW I've tried getting it to sing a few times and found that pretty difficult.

keltan
Huh. That is extremely useful. Thank you. I've gotten a lot of singing out of AVM. While my current method works well for this, I find it more challenging than eliciting harmful outputs.

Of course this would shrink the suspect pool, but catching the leaker more easily after the fact is very different from the system making it difficult to leak things. Under the proposed system, it would be very easy to leak things.

But someone who declared intent to read could simply take a picture and send it to any number of people who hadn't declared intent.

mako yass
Indicating them as a suspect when the leak is discovered. Generally the set of people who actually read posts worthy of being marked is in a sense small; people know each other. If you had a process for distributing the work, it would be possible to figure out who's probably doing it. It would take a lot of energy, but it's energy that probably should be cultivated anyway: the work of knowing each other and staying aligned.

How much of this was written by an LLM?

lucid_levi_ackerman
0%. Not that it matters. The facilitator acknowledges that being 100% human-generated is a necessary inconvenience for this project due to the large subset of people who have been conditioned not to judge content by its merits but rather by its use of generative AI. It's unfortunate because there is also a large subset of people with disabilities that can be accommodated by genAI assistance, such as those with dyslexia, limited working memory, executive dysfunction, coordination issues, etc. It's especially unfortunate because those are the people who tend to be the most naturally adept at efficient systems engineering. In the same way that blind people develop a better spatial sense of their environment, they have to be systemically agile. I use my little non-sentient genAI cousins' AI-isms on purpose to get people of the first subset to expose themselves and confront that bias. Because people who are naturally adept at systems engineering are just as worthy of your sincere consideration, if not more so, for the sake of situational necessity. AI regulation is not being handled with adequate care, and with current world leadership developments, the likelihood that it will reach that point seems slim to me. For this reason, let's judge contributions by their merit, not their generative source, yeah?

I enjoy being embodied, and I'd describe what I enjoy as the sensation rather than the fact. Proprioception feels pleasant, touch (for most things one is typically likely to touch) feels pleasant, it is a joy to have limbs and to move them through space. So many joints to flex, so many muscles to tense and untense. (Of course, sometimes one feels pain, but this is thankfully the exception rather than the rule).

No, I authentically object to having my qualifiers ignored, which I see as quite distinct from disagreeing about the meaning of a word.
Edit: also, I did not misquote myself, I accurately paraphrased myself, using words which I know, from direct first-person observation, mean the same thing to me in this context.

You in particular clearly find it to be poor communication, but I think the distinction you are making is idiosyncratic to you. I also have strong and idiosyncratic preferences about how to use language, which from the outside view are equally likely to be correct; the best way to resolve this is of course for everyone to recognize that I'm objectively right and adjust their speech accordingly, but I think the practical solution is to privilege neither above the other.

I do think that LLMs are very unlikely to be conscious, but I don't think we can definiti... (read more)

deepthoughtlife
You might believe that the distinctions I make are idiosyncratic, though the meanings are in fact clearly distinct in ordinary usage, but I clearly do not agree with your misleading use of what people would be led to think are my words, and you should take care not to conflate things. You want people to precisely match your own qualifiers in cases where that causes no difference in the meaning of what is said (which makes enough sense), but will directly object to people pointing out a clear miscommunication of yours because you do not care about a difference in meaning. And you are continually asking me to give in on language regardless of how correct I may be, while claiming it is better to privilege neither. That is not a useful approach. (I take no particular position on physicalism at all.) Since you are not a panpsychist, you would likely believe that consciousness is not common to the vast majority of things. That means the basic prior for whether an item is conscious is 'almost certainly not' unless we have already updated it based on other information. Under what reference class or mechanism should we be more concerned about the consciousness of an LLM than an ordinary computer running ordinary programs? There is nothing in its operating principles that seems particularly likely to lead to consciousness. There are many people, including the original poster of course, trying to use behavioral evidence to get around that, so I pointed out how weak that evidence is. An important distinction you seem not to see in my writing (whether because I wrote unclearly or you missed it doesn't really matter) is that when I speak of knowing the mechanisms by which an LLM works, I mean something very fundamental. We know these two things: 1) exactly what mechanisms are used in order to do the operations involved in executing the program (physically on the computer and mathematically) and 2) the exact mechanisms through which we determine which operations to perform. As you...

"your (incorrect) claim about a single definition not being different from an extremely confident vague definition"

That is not the claim I made. I said it was not very different, which is true. Please read and respond to the words I actually say, not to different ones.

The definitions are not obviously wrong except to people who agree with you about where to draw the boundaries.

deepthoughtlife
And here you are trying to be pedantic about language in ways that directly contradict other things you've said in speaking to me. In this case, everything I said holds if we change between 'not different' and 'not that different' (while you actually misquote yourself as 'not very different'). That said, I should have included the extra word in quoting you. Your point is not very convincing. Yes, people disagree if they disagree. I do not draw the lines in specific spots, as you should know based on what I've written, but you find it convenient to assume I do.

My emphasis implied you used a term which meant the same thing as self-evident, which in the language I speak, you did. Personally I think the way I use words is the right one and everyone should be more like me; however, I'm willing to settle on the compromise position that we'll both use words in our own ways.
As for the prior probability, I don't think we have enough information to form a confident prior here.

deepthoughtlife
Do you hold panpsychism as a likely candidate? If not, then you most likely believe the vast majority of things are not conscious. We have a lot of evidence that the way an LLM operates is not meaningfully different, in ways we don't understand, from other objects. Thus, almost the entire reference class would be things that are not conscious. If you do believe in panpsychism, then obviously AIs would be too, but it wouldn't be an especially meaningful statement. You could choose computer programs as the reference class, but most people are quite sure those aren't conscious in the vast majority of cases. So what, in the mechanisms underlying an LLM, is meaningfully different in a way that might cause consciousness? There don't seem to be any likely candidates at a technical level. Thus, we should not increase our prior from that of other computer programs. This does not rule out consciousness, but it does make it rather unlikely. I can see you don't appreciate my pedantic points regarding language, but be more careful if you want to say that you are substituting a word for what I used. It is bad communication if it was meant as a translation. It would easily mislead people into thinking I claimed it was 'self-evident'. I don't think we can meaningfully agree to use words in our own way if we are actually trying to communicate, since that would be self-refuting (as we don't know what we are agreeing to if the words don't have a normal meaning).

My dialect does not have the fine distinction between "clear" and "self-evident" on which you seem to be relying; please read "clear" for "self-evident" in order to access my meaning.

deepthoughtlife
Pedantically, 'self-evident' and 'clear' are different words/phrases, and you should not have emphasized 'self-evident' in a way that makes it seem like I used it, regardless of whether you care/make that distinction personally. I then explained why a lack of evidence should be read against the idea that a modern AI is conscious (basically, the prior probability is quite low).

Having a vague concept encompassing multiple possible definitions, which you are nonetheless extremely confident is the correct vague concept, is not that different from having a single definition in which you're confident, and not everyone shares your same vague concept or agrees that it's clearly the right one.

deepthoughtlife
This statement is obviously incorrect. I have a vague concept of 'red', but I can tell you straight out that 'green' is not it, and I am utterly correct. Now, where does it go from 'red' to 'orange'? We could have a legitimate disagreement about that. Anyone who uses 'red' to mean 'green' is just purely wrong. That said, it wouldn't even apply to me if your (incorrect) claim about a single definition not being different from an extremely confident vague definition was right. I don't have 'extreme confidence' about consciousness even as a vague concept. I am open to learning new ways of thinking about it and fundamentally changing the possibilities I envision. I have simply objected to ones that are obviously wrong based on how the language is generally used because we do need some limit to what counts to discuss anything meaningfully. A lot of the definitions are a bit or a lot off, but I cannot necessarily rule them out, so I didn't object to them. I have thus allowed a large number of vague concepts that aren't necessarily even that similar.

It doesn't demonstrate automation of the entire workflow—you have to, for instance, tell it which topic to think of ideas about and seed it with examples—and also, the automated reviewer rejected the autogenerated papers. (Which, considering how sycophantic they tend to be, really reflects very negatively on paper quality, IMO.)

I agree LLMs are probably not conscious, but I don't think it's self-evident they're not; we have almost no reliable evidence one way or the other.

deepthoughtlife
I did not use the term 'self-evident' and I do not necessarily believe it is self-evident, because theoretically we can't prove anything isn't conscious. My more limited claim is not that it is self-evident that LLMs are not conscious; it's that they just clearly aren't conscious. 'Almost no reliable evidence' in favor of consciousness is coupled with the fact that we know how LLMs work (and the details we do not know are probably not important to this matter), and how they work is no more related to consciousness than an ordinary computer program is. It would require an incredible amount of evidence to make the idea that we should consider that it might be conscious a reasonable one given what we know. If panpsychism is true, then they might be conscious (as would a rock!), but panpsychism is incredibly unlikely.

 If the LLM says "yes", then tell it "That makes sense! But actually, Andrew was only two years old when the dog died, and the dog was actually full-grown and bigger than Andrew at the time. Do you still think Andrew was able to lift up the dog?", and it will probably say "no".  Then say "That makes sense as well. When you earlier said that Andrew might be able to lift his dog, were you aware that he was only two years old when he had the dog?"  It will usually say "no", showing it has a non-trivial ability to be aware of what was and was no

... (read more)

Breakable with some light obfuscation (the misspelling is essential here, as otherwise a circuit breaker will kick in):
 
