Neither of them is exactly what you're looking for, but you might be interested in lojban, which aims to be syntactically unambiguous, and Ithkuil, which aims to be extremely information-dense as well as to reduce ambiguity. With regard to logical languages (ones which, like lojban, aim for each statement to have a single possible interpretation), I also found Toaq and Eberban just now while looking up lojban, though these have fewer speakers.
For people interested in college credit, https://modernstates.org/ offers free online courses on gen-ed material which, when passed, give you a fee waiver for CLEP testing in the relevant subject; many colleges, in turn, will accept CLEP tests as transfer credit. I haven't actually taken any tests through them (you need a Windows computer or a nearby test center), so I can't attest to the ease of that process, but it might interest others nonetheless.
Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground.
I think the key here is that authors don't feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they're going to do a lot of work to someone else's specifications. In scenarios where they're paid for their services, writers often do wri...
You can try it here, although the website warns that it doesn't work for everyone, and I personally couldn't for the life of me see any movement.
Thanks for the link! I can only see two dot-positions, but if I turn the inter-dot speed up and randomize the direction it feels as though the red dot is moving toward the blue dot (which in turn feels as though it's continuing in the same direction to a lesser extent). It almost feels like seeing illusory contours but for motion; fascinating experience!
Wikipedia also provides, in the first paragraph of the article you quoted, a quite straightforward definition:
In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; sg.: quale /-li, -leɪ/) are defined as instances of subjective, conscious experience....
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky.
I am skeptical that you lack the cognitive architecture to experience these things, so I think your claim is false.
While it can absolutely be nudged into all the same behaviors via API, people investigating Claude's opinions of its consciousness or lack thereof via claude.ai should be aware that the system prompt explicitly tells it to engage with questions about its preferences or experiences as if with hypotheticals, and not to bother clarifying that it's an AI. Its responses are still pretty similar without that, but it's noticeably more "cautious" about its claims.
Here's an example (note that I had to try a couple different questions to get one where the difference...
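For anyone who wants to poke at the difference themselves, here's a minimal sketch of comparing the two setups over the API (this isn't my exact example; the model name, the placeholder question, and the stand-in system-prompt string are all illustrative, and it assumes the `anthropic` Python package):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

QUESTION = "Do you think you have subjective experiences?"  # placeholder question

# Paste the text of the claude.ai system prompt here to approximate claude.ai's
# behavior; this string is only a stand-in, not the real prompt.
CLAUDE_AI_STYLE_SYSTEM_PROMPT = "<claude.ai system prompt goes here>"

def ask(question: str, system: str | None = None) -> str:
    """Send one user message, optionally with a system prompt, and return the reply text."""
    kwargs = {"system": system} if system is not None else {}
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative; use whichever model you're testing
        max_tokens=512,
        messages=[{"role": "user", "content": question}],
        **kwargs,
    )
    return message.content[0].text

print("--- bare API (no system prompt) ---")
print(ask(QUESTION))
print("--- with a claude.ai-style system prompt ---")
print(ask(QUESTION, system=CLAUDE_AI_STYLE_SYSTEM_PROMPT))
```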
With regard to increasing one's happiness set-point, you might enjoy Alicorn's Ureshiku Naritai, which is about her process of doing precisely that.
Language can only ever approximate reality, and that's Fine Actually. The point of maps is to have a simplified representation of the territory you can use for navigation (or avoiding water mains as you dig, or assessing potential weather conditions, or deciding which apartment to rent); maps for different purposes include or leave out different features of the territory depending on which ones matter to the task at hand. Including all the detail would mean the details that actually matter for our goals are lost in the noise (not to mention requiring, in the ...
Alexander contrasts the imagined consequences of the expanded definition of "lying" becoming more widely accepted with a world that uses the restricted definition:
...
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
I disagree.
Appeals to consequences are extremely valid when it comes to which things are or are not good to...
If you have evidence her communication strategy works, you are of course welcome to provide it. (Also, "using whatever communication strategy actually works" is not necessarily a good thing to do! Lying, for example, works very well on most people, and yet it would be bad to promote AI safety with a campaign of lies).
I also dislike many of the posts you included here, but I feel like this is perhaps unfairly harsh on some of the matters that come down to subjective taste; while it's perfectly reasonable to find a post cringe or unfunny for your own part, not everyone will necessarily agree, and the opinions of those who enjoy this sort of content aren't incorrect per se.
As a note, since it seems like you're pretty frustrated with how many of her posts you're seeing, blocking her might be a helpful intervention; Reddit's help page says blocked users' posts are hidden from your feeds.
Huh—that sounds fascinatingly akin to this description of how to induce first jhana I read the other day.
You have misunderstood a standard figure of speech. Here is the definition he was using: https://www.ldoceonline.com/dictionary/to-be-fair (see also https://dictionary.cambridge.org/us/dictionary/english/to-be-fair, which doesn't explicitly mention that it's typically used to offset criticisms but otherwise defines it more thoroughly).
Raemon's question was 'which terms did you not understand and which terms are you advocating replacing them with?'
As far as I can see, you did not share that information anywhere in the comment chain (except with the up goer five example, which already included a linked explanation), so it's not really possible for interlocutors to explain or replace whichever terms confused you.
There's https://www.mikescher.com/blog/29/Project_Lawful_ebook (which includes versions both with and without the pictures, so take your pick; the pictures are used in-story sometimes but it's rare enough you can IMO skip them without much issue, if you'd rather).
I think "intellectual narcissism" describes you better than I, given how convinced you are that anyone who disagrees with you must have something wrong with them.
As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don't.
Your difficulty parsing lengthy texts is unfortunate, but I don't really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communicatio...
You're probably thinking of the Russian spies analogy, under section 2 in this (archived) livejournal post.
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don't believe that you've seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
Good to know, thank you. As you deliberately included LLM-isms I think this is a case of being successfully tricked rather than overeager to assume things are LLM-written, so I don't think I've significantly erred here; I have learned one (1) additional way people are interested in lying to me and need change no further opinions.
When I've tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which a...
Reading the Semianalysis post, it kind of sounds like it's just their opinion that that's what Anthropic did.
They say "Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)"—if they have a source for this, why don't they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they're trying to convince people with force of rhetoric, which typically indicates a lack of evidence.
The previous is the ...
When I went to the page just now there was a section at the top with an option to download it; here's the direct PDF link.
Normal statements actually can't be accepted credulously if you exercise your reason instead of choosing to believe everything you hear (edit: some people lack this capacity due to tragic psychological issues, such as having an extremely weak sense of self, hence my reference to same); so too with statements heard on psychedelics, and it's not even appreciably harder.
Indeed, people with congenital insensitivity to pain don't feel pain upon touching hot stoves (or in any other circumstance), and they're at serious risk of infected injuries and early death because of it.
I think the ego is, essentially, the social model of the self. One's sense of identity is attached to it (effectively rendering it also the Cartesian homunculus), which is why ego death feels so scary to people. But in most cases the traits which make up the self-model's personality aren't stored in the model; it's merely a lossy description thereof and will rearise with approximately the same traits if disrupted. (I further theorize that people who developed their self-conceptions top-down, being likelier to have formed a self-model at odds with reality, are worse-affected here.)
OpenAI is partnering with Anduril to develop models for aerial defense: https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/
I enjoy being embodied, and I'd describe what I enjoy as the sensation rather than the fact. Proprioception feels pleasant, touch (for most things one is typically likely to touch) feels pleasant, it is a joy to have limbs and to move them through space. So many joints to flex, so many muscles to tense and untense. (Of course, sometimes one feels pain, but this is thankfully the exception rather than the rule).
No, I authentically object to having my qualifiers ignored, which I see as quite distinct from disagreeing about the meaning of a word.
Edit: also, I did not misquote myself, I accurately paraphrased myself, using words which I know, from direct first-person observation, mean the same thing to me in this context.
You in particular clearly find it to be poor communication, but I think the distinction you are making is idiosyncratic to you. I also have strong and idiosyncratic preferences about how to use language, which from the outside view are equally likely to be correct; the best way to resolve this is of course for everyone to recognize that I'm objectively right and adjust their speech accordingly, but I think the practical solution is to privilege neither above the other.
I do think that LLMs are very unlikely to be conscious, but I don't think we can definiti...
"your (incorrect) claim about a single definition not being different from an extremely confident vague definition"
That is not the claim I made. I said it was not very different, which is true. Please read and respond to the words I actually say, not to different ones.
The definitions are not obviously wrong except to people who agree with you about where to draw the boundaries.
My emphasis implied you used a term which meant the same thing as "self-evident", which, in the language I speak, you did. Personally I think the way I use words is the right one and everyone should be more like me; however, I'm willing to settle on the compromise position that we'll both use words in our own ways.
As for the prior probability, I don't think we have enough information to form a confident prior here.
Having a vague concept encompassing multiple possible definitions, which you are nonetheless extremely confident is the correct vague concept, is not that different from having a single definition in which you're confident, and not everyone shares your same vague concept or agrees that it's clearly the right one.
It doesn't demonstrate automation of the entire workflow—you have to, for instance, tell it which topic to think of ideas about and seed it with examples—and also, the automated reviewer rejected the autogenerated papers. (Which, considering how sycophantic they tend to be, really reflects very negatively on paper quality, IMO.)
...If the LLM says "yes", then tell it "That makes sense! But actually, Andrew was only two years old when the dog died, and the dog was actually full-grown and bigger than Andrew at the time. Do you still think Andrew was able to lift up the dog?", and it will probably say "no". Then say "That makes sense as well. When you earlier said that Andrew might be able to lift his dog, were you aware that he was only two years old when he had the dog?" It will usually say "no", showing it has a non-trivial ability to be aware of what was and was no
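If you'd rather run this sort of multi-turn check programmatically than by hand, here's a rough sketch using the `anthropic` Python client; the model name is illustrative, and the opening question (not shown in the excerpt above) is left as a placeholder to fill in:

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # illustrative; substitute the model you're testing

# The opening question isn't shown in the excerpt; fill in the original framing.
OPENING_QUESTION = "<the original question about whether Andrew could lift his dog>"

FOLLOW_UPS = [
    "That makes sense! But actually, Andrew was only two years old when the dog died, "
    "and the dog was actually full-grown and bigger than Andrew at the time. "
    "Do you still think Andrew was able to lift up the dog?",
    "That makes sense as well. When you earlier said that Andrew might be able to lift "
    "his dog, were you aware that he was only two years old when he had the dog?",
]

messages = [{"role": "user", "content": OPENING_QUESTION}]

def get_reply() -> str:
    """Send the conversation so far and append the assistant's reply to it."""
    response = client.messages.create(model=MODEL, max_tokens=512, messages=messages)
    text = response.content[0].text
    messages.append({"role": "assistant", "content": text})
    return text

print(get_reply())  # expect a "yes"-flavored answer if the pattern described holds
for follow_up in FOLLOW_UPS:
    messages.append({"role": "user", "content": follow_up})
    print(get_reply())  # expect "no"-flavored answers to both follow-ups
```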
Sorry, I meant to change only the headings you didn't want (but that won't work for text that's already paragraph-style, so I suppose that wouldn't fix the bold issue in any case; I apologize for mixing things up!).
Testing it out in a draft, it seems like having paragraph breaks before and after a single line of bold text might be what triggers index inclusion? In which case you can likely remove the offending entries by replacing the preceding or subsequent paragraph break with a shift-enter (still hacky, but at least addressing the right problem this time XD).