jbash

There is an option for readers to hide names. It's in the account preferences. The names don't show up unless you roll over them. I use it to supplement my long-cultivated habit of always trying to read the content before the author name on every site[1].

As for anonymous posts, I don't agree with your blanket dismissal. I've seen them work against groupthink on some forums (while often at the same time increasing the number of low-value posts you have to wade through). Admittedly Less Wrong doesn't seem to have too much of a groupthink problem[2]. Anyway, there could always be an option for readers to hide anonymous posts.


  1. Actually, I'm not sure I had to cultivate it. Back in the days of Usenet, I had to learn to look at posters' names at all to begin with. I do not think that I am normal in this.

  2. ... which actually surprises me because at least some people do seem to buy into the "karma" gamification.

jbash

Stretching your mouth wide is part of the fun!

jbash

If you're going to do something that huge, why not put the cars underground? I suppose it would be more expensive, but adding any extensive tunnel system at all to an existing built-up area seems likely to be prohibitively expensive, tremendously disruptive, and, at least until the other two are fixed, politically impossible. So why not go for the more attractive impossibility?

jbash

Why so small? If you’re going to offer wall mounts and charge $1000, why not a TV-sized device that is also actually a television, or at least a full computer monitor? What makes this not want to simply be a Macintosh? I don’t fully ‘get it.’

You don't necessarily have a TV-sized area of wall available to mount your thermostat control, near where you most often find yourself wanting to change your thermostat setting. Nor do you necessarily want giant obtrusive screens all over the place.

And you don't often want to have to navigate a huge tree of menus on a general-purpose computer to adjust the music that's playing.

jbash

“Aren’t we going to miss meaning?”

I've yet to hear anybody who brings this up explain, comprehensibly, what this "meaning" they're worried about actually is. Honestly I'm about 95 percent convinced that nobody using the word actually has any real idea what it means to them, and more like 99 percent sure that no two of them agree.

jbash

I seem to have gotten a "Why?" on this.

The reason is that checking things yourself is a really, really basic, essential standard of discourse[1]. Errors propagate, and the only way to avoid them propagating is not to propagate them.

If this was created using some standard LLM UI, it would have come with some boilerplate "don't use this without checking it" warning[2]. But it was used without checking it... with another "don't use without checking" warning. By whatever logic allows that, the next person should be able to use the material, including quoting or summarizing it, without checking either, so long as they include their own warning. The warnings should be able to keep propagating forever.

... but the real consequences of that are a game of telephone:

  1. An error can get propagated until somebody forgets the warning, or just plain doesn't feel like including the warning, and then you have false claims of fact circulating with no warning at all. Or the warning deteriorates into "sources claim that", or "there are rumors that", or something equally vague that can't be checked.
  2. Even if the warning doesn't get lost or removed, tracing back to sources gets harder with each step in the chain.
  3. Many readers will end up remembering whatever they took out of the material, including that it came from a "careful" source (because, hey, they were careful to remind you to check up on them)... but forget that they were told it hadn't been checked, or underestimate the importance of that.
  4. If multiple people propagate an error, people start seeing it in more than one "independent" source, which really makes them start to think it must be true. It can become "common knowledge", at least in some circles, and those circles can be surprisingly large.

That pollution of common knowledge is the big problem.

The pollution tends to be even worse because the factoid or quote will often get "simplified", or "summarized", or stripped of context, or "punched up" at each step. That mutation is itself exacerbated by people not checking references, because if you do check references, you'll at least often end up mutating the version from a step or two back, instead of building even higher on top of the latest round of errors.
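To put rough numbers on that compounding, here's a toy Monte Carlo sketch of my own. The per-step probabilities are invented purely for illustration, not measured from anywhere:

```python
import random

def warning_survival(chain_length: int,
                     p_keep_warning: float = 0.8,
                     p_no_mutation: float = 0.8,
                     trials: int = 100_000) -> float:
    """Estimate how often a claim reaches the end of a quote chain
    with its "unchecked" warning intact AND its wording unmutated.
    All probabilities here are made-up illustration values."""
    survived = 0
    for _ in range(trials):
        warned, mutated = True, False
        for _ in range(chain_length):
            if random.random() > p_keep_warning:
                warned = False   # warning dropped or watered down
            if random.random() > p_no_mutation:
                mutated = True   # claim "simplified" or "punched up"
        if warned and not mutated:
            survived += 1
    return survived / trials

for n in (1, 3, 5, 10):
    print(f"chain length {n}: {warning_survival(n):.3f}")
```

Even granting a generous 80 percent chance per step of both keeping the warning and leaving the wording alone, only about one percent of claims survive a ten-step chain intact (0.8^20 ≈ 0.012).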

All of this is especially likely to happen when "personalities" or politics are involved. And even more likely to happen when people feel a sense of urgency about "getting this out there as soon as possible". Everybody in the chain is going to feel that same sense of urgency.

I have seen situations like that created very intentionally in certain political debates (on multiple different topics, all unrelated to anything Less Wrong generally cares about). You get deep chains of references that don't quite support what they're claimed to support, spawning "widely known facts" that eventually, if you do the work, turn out to be exaggerations of admitted wild guesses from people who really didn't have any information at all. People will even intentionally add links to the chain to give others plausible deniability. I don't think there's anything intentional here, but there's a reason that some people do it intentionally. It works. And you can get away with it if the local culture isn't demanding rigorous care and checking up at every step.

You can also see this sort of thing as an attempt to claim social prestige for a minimal contribution. After all, it would have been possible to just post the link, or post the link and suggest that everybody get their AI to summarize it. But the main issue is that spreading unverified rumors causes widespread epistemic harm.


  1. The standard for the reader should still be "don't be sure the references support this unless you check them", which means that when the reader becomes a writer, that reader/writer should not only have checked their own references, but also the references of their references, before publishing anything.

  2. Perhaps excusable since nobody actually knows how to make the LLM get it right reliably.

jbash

I used AI assistance to generate this, which might have introduced errors.

That line resulted in a strong downvote and, honestly, outright anger on my part.

Check the original source to make sure it's accurate before you quote it: https://www.courtlistener.com/docket/69013420/musk-v-altman/

If other people have to check it before they quote it, why is it OK for you not to check it before you post it?

jbash

Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems.

So race to the brink and hope you can actually stop when you get there?

Once the most powerful nations have signed this treaty, it is in their interest to verify each others’ compliance, and to make sure uncontrollable AI is not built elsewhere, either.

How, exactly?

jbash

Non-causal decision theories are not necessary for A.G.I. design.

I'll call that and raise you "No decision theory of any kind, causal or otherwise, will either play any important explicit role in, or have any important architectural effect over, the actual design of either the first AGI(s), or any subsequent AGI(s) that aren't specifically intended to make the point that it's possible to use decision theory".

jbash

Computer security, to prevent powerful third parties from stealing model weights and using them in bad ways.

By far the most important risk isn't that they'll steal them. It's that they will be fully authorized to misuse them. No security measure can prevent that.
