All of Blueberry's Comments + Replies

I don't see it as sneering at all.

I'm not sure what you mean by "senpai noticed me" but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.

The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart - a lot of stuff on AI risk does not mention him.

I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.

Would you prefer this?

https://www.vice.com/en/article/4a33gj/ai-controlled-dr... (read more)

2rotatingpaguro
See https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer

I like Metz. I'd rather have EY, but that won't happen.

This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.

Blueberry
0-18

This is completely false, as well as irrelevant.

  • he did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not by any means doxxing; it's news reporting. News is important, and news has a right to reveal the full names of public figures.

  • this didn't happen, because Scott got the NYT to wait until he was ready before doing so.

  • the article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced.

  • none of this is relevant, and i

... (read more)

Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don't believe this is possible.

1Orion Anderson
"Writing code" feels underspecified here. I think it is clear that LLM's will be (perhaps already are) superhuman at writing some types of code for some purposes in certain contexts. What line are you trying to assert will not be crossed when you say you don't think it's possible for them to be superhuman at writing code?

oh wow, thanks!

She didn't get the 5D thing - it's not that the messengers live in five dimensions; they were just sending two-dimensional pictures of a three-dimensional world.

LLMs' answers to factual questions are not trustworthy; they often hallucinate.

Also, I was obviously asking you for your views, since you wrote the comment.

Sorry, 2007 was a typo. I'm not sure how to interpret the ironic comment about asking an LLM, though.

5Max H
It was not meant as irony or a joke. Both questions you asked are simple factual questions that you could have answered quickly on your own using an LLM or a traditional search engine.
Blueberry
-1-2

OTOH, if you sent back Attention is all you need

What is so great about that 2007 paper?

People didn't necessarily have a use for all the extra compute

Can you please explain the bizarre use of the word "compute" here? Is this a typo? "Compute" is a verb; the noun form would be "computing" or "computing power."

4Max H
The paper is from 2017, not 2007. It's one of the foundational papers that kicked off the current wave of transformer-based AI. The use of compute as a noun is pretty standard; see e.g. this post: Algorithmic Improvement Is Probably Faster Than Scaling Now. (Haven't checked, but ChatGPT or Bing could probably have answered both of these questions for you.)
Blueberry
1-14

Yudkowsky makes a few major mistakes that are clearly visible now, like being dismissive of dumb, scaled, connectionist architectures

I don't think that's a mistake at all. Sure, they've given us impressive commercial products, but no progress towards AGI, so the dismissiveness is completely justified.

3Veedrac
This doesn't feel like a constructive way to engage with the zeitgeist here. Obviously Yudkowsky plus most people here disagree with you on this. As such, if you want to engage productively on this point, you should find a place better set up to discuss whether NNs uninformatively dead-end. Two such places are the open thread or a new post where you lay out your basic argument.

there would be no way to glue these two LLMs together to build an English-to-Japanese translator such that training the "glue" takes <1% of the comput[ing] used to train the independent models?

Correct. They're two entirely different models. There's no way they could interoperate without massive computing and building a new model.

(Aside: was that a typo, or did you intend to say "compute" instead of "computing power"?)

6faul_sname
It historically has been shown that one can interpolate between a vision model and a language model[1]. And, more recently, it has been shown that yes, you can use a fancy transformer to map between intermediate representations in your image and text models, but you don't have to do that: in fact it works fine[2] to just use your frozen image encoder, then a linear mapping (!), then your text decoder. I personally expect a similar phenomenon if you use the first half of an English-only pretrained language model and the second half of a Japanese-only pretrained language model -- you might not literally be able to use a linear mapping as above, but I expect you could use a quite cheap mapping. That said, I am not aware of anyone who has actually attempted the thing, so I could be wrong that the result from [2] will generalize that far.

Yeah, I did mean "computing power" there. I think it's just a weird way that people in my industry use words.[3]

1. ^ Example: DeepMind's Flamingo, which demonstrated that it was possible at all to take a pretrained language model and a pretrained vision model, glue them together into a multimodal model, and get SOTA results on a number of benchmarks. See also this paper, also out of DeepMind.
2. ^ Per Linearly Mapping from Image to Text Space.
3. ^ For example, see this HN discussion about it. See also the "compute" section of this post, which talks about things that are "compute-bound" rather than "bounded on the amount of available computing power". Why waste time use lot word when few word do trick?
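To make the glue idea concrete, here is a minimal sketch in PyTorch-style Python of the frozen-encoder, trained-linear-map, frozen-decoder setup described above. The dimensions, model interfaces, and the inputs_embeds keyword are illustrative assumptions, not the actual configuration from the cited papers:

    import torch
    import torch.nn as nn

    # Illustrative widths -- real values depend on the specific pretrained models.
    IMAGE_EMBED_DIM = 768
    TEXT_EMBED_DIM = 1024

    class LinearGlue(nn.Module):
        """Glue a frozen image encoder to a frozen text decoder with one linear map.

        Only `proj` is trained, so training the "glue" costs a tiny fraction
        of the compute used to train the two base models.
        """
        def __init__(self, image_encoder: nn.Module, text_decoder: nn.Module):
            super().__init__()
            self.image_encoder = image_encoder
            self.text_decoder = text_decoder
            # Freeze both pretrained models.
            for p in self.image_encoder.parameters():
                p.requires_grad = False
            for p in self.text_decoder.parameters():
                p.requires_grad = False
            # The only trainable component: a linear projection between
            # the two models' embedding spaces.
            self.proj = nn.Linear(IMAGE_EMBED_DIM, TEXT_EMBED_DIM)

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():
                embeds = self.image_encoder(images)  # (batch, seq, IMAGE_EMBED_DIM)
            prefix = self.proj(embeds)               # (batch, seq, TEXT_EMBED_DIM)
            # Hand the projected embeddings to the decoder as a soft prompt.
            # (`inputs_embeds` follows the Hugging Face convention; other
            # decoders expose embedding inputs differently.)
            return self.text_decoder(inputs_embeds=prefix)

Training such a model updates only the single weight matrix in proj; whether so cheap a mapping also suffices for the English-to-Japanese case is exactly the open question flagged above.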

I don't see why fitting a static and subhuman mind into consumer hardware from 2023 means that Yudkowsky doesn't lose points for saying you can fit a learning (implied) and human-level mind into consumer hardware from 2008.

Because one has nothing to do with the other. LLMs are getting bigger and bigger, but that says nothing about whether a mind designed algorithmically could fit on consumer hardware.

Yeah, one example is the view that AGI won't happen, either because it's just too hard and humanity won't devote sufficient resources to it, or because we recognize it will kill us all.

I really disagree with this article. It's basically just saying that you drank the LLM Kool-Aid. LLMs are massively overhyped. GPT-x is not the way to AGI.

This article could have been written a dozen years ago. A dozen years ago, people were saying the same thing: "we've given up on the Good Old-Fashioned AI / Douglas Hofstadter approach of writing algorithms and trying to find insights! it doesn't give us commercial products, whereas the statistical / neural network stuff does!"

And our response was the same as it is today. GOFAI is hard. No one expected t... (read more)

81a3orn
For what it's worth, I'm at least somewhat an LLM-plateau-ist -- on balance at least somewhat dubious we get AGI from models in which 99% of compute is spent on next-word prediction in big LLMs. I really think nostalgebraist's take has merit, and the last few months have made me think it has more merit. Yann LeCun's "LLMs are an off-ramp to AGI" might come back to show his foresight. Etc., etc. But it isn't just LLM progress which has hinged on big quantities of compute. Everything in deep learning -- ResNets, vision Transformers, speech-to-text, text-to-speech, AlphaGo, EfficientZero, Dota5, VPT, and so on -- has used more and more compute. I think at least some of this deep learning stuff is an important step to human-like intelligence, which is why I think this is good evidence against Yudkowsky. If you think none of the DL stuff is a step, then you can indeed maintain that compute doesn't matter, of course, and that I am horribly wrong. But if you think the DL stuff is an important step, it becomes more difficult to maintain.

I don't agree with that. Neutral-genie stories are important because they demonstrate the importance of getting your wish right. As yet, deep learning hasn't taken us to AGI, and it may never, and even if it does, we may still be able to make them want particular things or give them particular orders or preferences.

Here's a great AI fable from the Air Force:

[This is] a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation ... "We've never run that exp

... (read more)

Can you give me an example of a NLP "program" that influences someone, or link me to a source that discusses this more specifically? I'm interested but, as I said, skeptical, and looking for more specifics.

1ChristianKl
In this case, I doubt there's writing that gets to the heart of the issue and is accessible to people without an NLP or hypnosis background. I'm also from Germany, so a lot of the sources from which I actually learned are German. As far as programming and complexity go, there's a nice chart of what's taught in a 3-day workshop with nested loops: http://nlpportal.org/nlpedia/images/a/a8/Salesloop.pdf If you generally want an introduction to hypnosis, I recommend "Monsters and Magical Sticks: There Is No Such Thing as Hypnosis" by Steven Heller.

I'd guess it was more likely to be emotional stuff relating to living with people who once had such control over you. I can't stand living at my parents' for very long either... it's just stressful and emotionally draining.

What pragmatist said. Even if you can't break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden how did your friend understand it?

0ChristianKl
The basic framework is using nested loops and metaphors. If an AGI, for example, wanted to get someone to let it out of its cage, it could tell an emotionally charged story about some animal named Fred, part of which is that it's very important that a human released that animal from its cage. If the AGI later speaks about Fred, it brings up the positive-feeling concept of releasing things from cages. That increases the chances of the listener then releasing the AGI. This alone won't be enough, but over time it's possible to build up a lot of emotionally charged metaphors and then chain them together so they work in concert. In practice, getting this to work isn't easy.
2ChristianKl
I don't have a recording of the event, so I can't break it down to a level where I can explain it step by step. Even if I did, I think it would take some background in hypnosis or NLP to follow a detailed explanation. Human minds often don't do what we would intuitively assume they would do, and unlearning to trust all those learned ideas about what's supposed to happen isn't easy. If you think that attacks generally happen in a way that you can easily understand by reading an explanation, then you're ignoring most of the powerful attacks.

Sorry, I couldn't tell what was a quote and what wasn't.

Polyamory is usually defined as honest nonmonogamy. In other words, any time someone is dating two people openly, that's poly. It's how many humans naturally behave. It doesn't require exposure to US poly communities, or any community in general for that matter.

0gwern
Many humans behave in a serially monogamous manner - which is not poly. Many humans behave in a covertly polygamous manner - which is not poly. Whether there is very much left after that which matches US polyamory, I wouldn't know...

As you discuss in the dropbox link, this is a pretty massive selection bias. I'd suggest that this invalidates any statement made on the basis of these studies about "poly people," since most poly people seem not to be included. People all over the world are poly, in every country, of every race and class.

It's as if we did a study of "rationalists" and only included people on LW, ignoring any scientists, policy makers, or evidence-based medical researchers, simply because they didn't use the term "rationalist."

You state:

Whi

... (read more)
0gwern
I think that remains to be seen, unless one is quietly defining away polyamory as a dull negation of monogamy. I didn't state that; Klesse did. Between you and Klesse, I know who I will put more weight on.

I know that the antidepressant Wellbutrin, which is a stimulant, has been associated with a small amount of weight loss over a few months, though I'm not sure whether the loss has been shown to persist longer. That's an off-label use, though.

I'd guess that any stimulant would show short-term weight loss. Is there some reason it wouldn't persist long-term?

2CronoDAS
There are a lot of drugs that people develop tolerances to when used over long periods of time (the body's various feedback mechanisms recalibrate themselves to compensate for the drug's presence), but I can't say with any authority that this applies to mild stimulant use and weight loss.

How is Mormonism attractive? You don't even get multiple wives anymore. And most people think you're crazy.

What about a small amount of mild stimulant use?

0CronoDAS
I dunno. The FDA did approve a couple of drugs this year, but they might only be intended for short-term use.

Why would you not want to be someone who wears a cloak often? And whatever those reasons are, why wouldn't they prevent you from wearing a cloak after you buy it?

0AdeleneDawner
If it follows the pattern of the vest and the cane, I'll want to wear it All The Time, whether that's a good idea for signaling and aesthetic reasons or not - and I'm not sure it would be a good idea on either of those counts, but sensory considerations often trump those when it comes to things that I actually own and have experienced and gotten used to at all. In other words: Right now I'm physically comfortable not wearing a cloak. If I get it and it's as awesome along the physically-comfortable axis as I expect it will be, then I will quickly become the kind of person who is not physically comfortable when not wearing a cloak, and if it's socially unacceptable to wear a cloak, or socially unacceptable to wear a cloak with my vest that I'm now uncomfortable when I'm not wearing, then that change could be a problem. (For values of 'socially unacceptable' that include 'changes how people react to me in ways that are sufficiently bad'.) If I could predict what peoples' reaction to me-wearing-a-cloak would be without actually wearing a cloak to find out, this would be less of a problem, but as of right now I don't know that they'd react acceptably.

it's very, very likely there's life outside our solar system, but I don't have any evidence of it

If there's no evidence of it (circumstantial evidence included), what makes you think it's very likely?

1MugaSofer
I assume they meant direct evidence. Its truth or falsity does not affect us; we must extrapolate from our knowledge of the structure of the universe.

Poly groups tend to be well-educated well-paid white people

I'm baffled by this. Are you saying most studies tend to be done on this group? Do you mean in the US? Are you referring to groups who call themselves poly, or the general practice of honest nonmonogamy?

0gwern
Yes, yes, and the former.

Are you polysaturated yet? Most people seem to find 2-3 to be the practical limit.

7Alicorn
I don't see very much of the two boyfriends who don't live in my house, so no. (They have other girlfriends to keep them occupied.)

I actually never was asked to say the Pledge in any US school I went to, and I've never even seen it said. I'm pretty sure this is limited to some parts of the country and is no longer as universal as it may have been once. If someone did go to one such school, they and their parents would have the option of simply not saying the Pledge, transferring to a different school (I doubt private or religious schools say it), or homeschooling/unschooling.

2AdeleneDawner
As another datapoint: at the first high school I went to, the pledge was announced over the loudspeaker but students weren't required to recite it (though we were required to stand respectfully, and most everybody still did the salute even if they didn't recite); at the second, it was theoretically required for any student who didn't have a religious exemption note.

I have a funny story about the second situation, too. I'd been one of the ones who didn't say the pledge, before I moved, and decided that I wasn't going to change that unless they made me. The result was that the other students in my homeroom class stopped saying it, too - first the ones nearest me, then the ones next to them, and so on across the room. I happened to have a desk in one corner of the room, and by the end of the year a handful of students in the other corner were the only ones still saying the pledge, and they generally shouted it, raucously or sarcastically depending on their mood. (Makes a pretty interesting complement to the Asch conformity test, come to think of it.)

For what it's worth, I've never seen it said in any of the US schools I've attended. It's not universal.

I've played that game, using various shaped blocks that the Customer has to assemble in a specific pattern. It's great.

There's also the variation with an Intermediary, and the Expert and Customer can only communicate with the Intermediary, who moves back and forth between rooms and can't write anything down.

Precommitting is useful in many situations, one being where you want to make sure you do something in the future when you know something might change your mind. In Cialdini's "Influence," for instance, he discusses how publicly saying "I am not going to smoke another cigarette" helps with quitting smoking.

If you think you might change your mind, then surely you would want to have the freedom to do so?

The whole point is that I want to remove that freedom. I don't want the option of changing my mind.

Another classic example is the general who burned his ships upon landing so there would be no option to retreat, to make his soldiers fight harder.

If you know what you're doing, the PhD example is no more than a five-minute process - I've walked people through worse things in about that time.

Please elaborate!

4jimmy
I "cheated" a bit, in that I had them spend ~15-20 minutes with a chat bot that taught them some skills for getting in touch with those parts of their mind. Actually working through the problem was a few minutes of text chat that basically pointed out that there was no magic option and that they needed to let go of the problem emotions. All the real magic was in putting them in the state of mind to shut up and listen. I talk about it a bit here
Blueberry
-20

I don't want to eat anything steaklike unless it came from a real, mooing cow. I don't care how it's killed.

I'm worried I'm overestimating my resistance to advertising, so I'm hereby precommitting to this in writing.

8ArisKatsaris
Why?
9Dmytry
I think you overestimate how much you'll care about this post in a few years. On top of this - see, you are acting in a mildly self-destructive manner. The vat-grown steak could be considerably safer, or taste better, but you pre-commit anyway without even tasting it. Clearly this pre-commitment not to maximize utility in the future is a net expected loss of utility. That's the issue. Evil is generally self-destructive; the more evil, the more self-destructive it is. I believe that's in part because it is hard to define "self" in such a way that the evil only hurts others like you but not self, future self, parts of self, etc. That's just not easy to define, and not easy to process. Take an extreme example: psychopaths. They are very self-destructive. They do things on the spur of the moment at the expense of their future selves - a perfectly rational selfish action, as the future selves are to some extent different people - but not effective for an agent. There is not much more reason to care about your future self than to care about anyone else.
3TraderJoe
[comment deleted]

Would "servant" not otherwise be justified?

1Nornagest
It's fairly benign, but looks a little archaic -- not so archaic that it'd have to be medieval, though. The rest of the phrasing is fairly modern, or I'd probably have assumed it was a quote from anywhere from the Enlightenment up to the Edwardian period. It has the ring of something a Victorian aphorist might say.

That's a good reminder but I'm not sure how it applies here.

0Eugine_Nier
A quote that calls the holder of a potentially wrong belief a "skeptic" rather than a "believer" is more useful since it makes you more likely to identify with him.
Blueberry
160

It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.

More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.

0TraderJoe
[comment deleted]
8Eugine_Nier
Indeed, I posted this quote partially out of annoyance at a certain type of analysis I kept seeing in the MoR threads. Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y. After all, thinking like this about real life will quickly turn one into a tin-foil-hat-wearing conspiracy theorist.
3Ezekiel
Depends on the infinity. Ordinal infinities change when you add one to them. If we're restricting ourselves to actual published fiction, I present Cory Doctorow's Someone Comes to Town, Someone Leaves Town. The protagonist's parents are a mountain and a washing machine, it gets weirder from there, and the whole thing is played completely straight.
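(A worked example of that ordinal fact, standard ordinal arithmetic rather than anything from the thread: $\omega + 1 > \omega$, since appending one new greatest element to the naturals yields a strictly larger order type, whereas $1 + \omega = \omega$, since prepending one element just relabels the naturals.)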

Yes, "sweet" is a great description. Why, how would you describe it?

Also judging from his other quotes I'm pretty sure that's not what he meant...

I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.

The point of rationality isn't to better argue against beliefs you consider wrong but to change your existing beliefs to be more correct.

Thanks... I'm still going through the most recent Callahan novels. Jake Stonebender does kinda have a temper.

Downvoted and upvoted the counterbalance (which for some reason was at -1 already; someone didn't follow your instructions). You're surprised people like power?

0duckduckMOO
No, it's the specific description of the feeling that surprised me: "sweet." Edit: and thanks for helping me out.

I suspect you overestimate how much most people like cows...

3Dmytry
You'll like the cows when there's vat-grown steak and we make the ads for it. :) (I do some CGI for advertising.)

There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.

No doubt at all? I'd put money on this being wrong. Why would it be outlawed?

Including the problems like 'how to have a steak without killing a cow'.

I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"

5Dmytry
There are various laws on the treatment of animals already - ineffective and poorly adhered to, but they exist. The still more important problem is how we make the most profit. Once there's a notable grown-in-a-vat steak industry, you can be sure that the ethics of killing cows will be explained to you via fairly effective advertising - especially if vat-grown steak costs somewhat more and consequently brings better income for the same percentage markup.

I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?

7TheOtherDave
LW's standards for rationality quotes vary, but in any case this one can be read as endorsing letting perceived evidence override pre-existing beliefs, if one ignores the standard connotations of "skeptic" and "missionary".

Ok, but... wouldn't the same objection apply to virtually any action/adventure movie or novel? Kick Ass, all the Die Hard movies, anything Tarantino, James Bond, Robert Ludlum's Bourne Identity novels and movies, et cetera. They all have similar violent scenes.

0katydee
I can't think of any point in Die Hard where John McClane kills prisoners in cold blood (in fact, there are two times where he almost dies because he tries to arrest terrorists instead of just shooting them). And I do consider all such scenes objectionable-- for instance, in Serenity, when Zny fubbgf gur fheivivat Nyyvnapr thl sebz gur fuvc gung qrfgeblrq Obbx'f frggyrzrag, or when gur Bcrengvir fnlf ur vf hanezrq, fb Zny whfg chyyf n tha naq fubbgf uvz, I had the same squicky reaction.

Jura gurl'er nobhg gb encr Wraavsre? Ur qrfreirf gung naq vg'f frys-qrsrafr

0katydee
Ertneqyrff bs jurgure fbzrbar "qrfreirf vg," zheqrevat pncgvirf va tehrfbzr naq rkpehpvngvat znaaref vf orlbaq gur cnyr. Gung'f nyfb abg frys-qrsrafr ol nal fgnaqneq gung V xabj bs, fvapr gur crefba va dhrfgvba jnf nyernql haqre gurve pbageby.