I like Metz. I'd rather have EY, but that won't happen.
This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.
This is completely false, as well as irrelevant.
He did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not by any means doxxing; it's news reporting. News is important, and news organizations have a right to reveal the full names of public figures.
This didn't happen, because Scott got the NYT to wait until he was ready before doing so.
The article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced.
None of this is relevant, and I
Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don't believe this is possible.
oh wow, thanks!
She didn't get the 5D thing - it's not that the messengers live in five dimensions; they were just sending two-dimensional pictures of a three-dimensional world.
LLMs' answers on factual questions are not trustworthy; they are often hallucinatory.
Also, I was obviously asking you for your views, since you wrote the comment.
Sorry, 2007 was a typo. I'm not sure how to interpret the ironic comment about asking an LLM, though.
OTOH, if you sent back "Attention Is All You Need"
What is so great about that 2007 paper?
People didn't necessarily have a use for all the extra compute
Can you please explain the bizarre use of the word "compute" here? Is this a typo? "compute" is a verb. The noun form would be "computing" or "computing power."
Yudkowsky makes a few major mistakes that are clearly visible now, like being dismissive of dumb, scaled, connectionist architectures
I don't think that's a mistake at all. Sure, they've given us impressive commercial products, but no progress towards AGI, so the dismissiveness is completely justified.
Maybe you're an LLM.
there would be no way to glue these two LLMs together to build an English-to-Japanese translator such that training the "glue" takes <1% of the comput[ing] used to train the independent models?
Correct. They're two entirely different models. There's no way they could interoperate without massive computing and building a new model.
(Aside: was that a typo, or did you intend to say "compute" instead of "computing power"?)
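For concreteness, here is roughly the kind of "glue" the parent comment seems to be imagining: two frozen, independently trained models joined by a small trained projection. This is a hypothetical sketch only - the model names, dimensions, and training recipe are all invented for illustration, and whether training just this piece would actually yield a working translator is exactly the point in dispute above.

```python
# Hypothetical sketch of the proposed "glue": two frozen LLMs joined by a
# small trained projection. Names and dimensions are invented for illustration.
import torch
import torch.nn as nn

class Glue(nn.Module):
    """Projects hidden states from a frozen English LM into the input
    embedding space of a frozen Japanese LM. Only this layer trains."""
    def __init__(self, d_en: int, d_ja: int):
        super().__init__()
        self.proj = nn.Linear(d_en, d_ja)

    def forward(self, en_hidden: torch.Tensor) -> torch.Tensor:
        return self.proj(en_hidden)

glue = Glue(d_en=4096, d_ja=4096)  # dimensions are assumptions
opt = torch.optim.AdamW(glue.parameters(), lr=1e-4)

# Schematic training step, assuming en_lm and ja_lm are pretrained and frozen:
#   h = en_lm.encode(english_sentence)           # frozen English model
#   soft_prompt = glue(h)                        # trained (<1% of total params)
#   loss = ja_lm.lm_loss(soft_prompt, japanese_translation)  # frozen Japanese model
#   loss.backward(); opt.step(); opt.zero_grad()
```

Whether a single learned projection is enough to bridge two unrelated embedding spaces is an empirical question, and the disagreement in this thread is over precisely that.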
I don't see why fitting a static and subhuman mind into consumer hardware from 2023 means that Yudkowsky doesn't lose points for saying you can fit a learning (implied) and human-level mind into consumer hardware from 2008.
Because one has nothing to do with the other. LLMs are getting bigger and bigger, but that says nothing about whether a mind designed algorithmically could fit on consumer hardware.
Yeah, one example is the view that AGI won't happen, either because it's just too hard and humanity won't devote sufficient resources to it, or because we recognize it would kill us all and decide not to build it.
I really disagree with this article. It's basically just saying that you drank the LLM Kool-Aid. LLMs are massively overhyped. GPT-x is not the way to AGI.
This article could have been written a dozen years ago. A dozen years ago, people were saying the same thing: "We've given up on the Good Old-Fashioned AI / Douglas Hofstadter approach of writing algorithms and trying to find insights! It doesn't give us commercial products, whereas the statistical / neural network stuff does!"
And our response was the same as it is today. GOFAI is hard. No one expected t...
I don't agree with that. Neutral-genie stories are important because they demonstrate the importance of getting your wish right. As yet, deep learning hasn't taken us to AGI, and it may never; even if it does, we may still be able to make the resulting AGIs want particular things or give them particular orders or preferences.
Here's a great AI fable from the Air Force:
...[This is] a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation ... "We've never run that exp
Can you give me an example of an NLP "program" that influences someone, or link me to a source that discusses this more specifically? I'm interested but, as I said, skeptical, and looking for more specifics.
I'd guess it was more likely to be emotional stuff relating to living with people who once had such control over you. I can't stand living at my parents' for very long either... it's just stressful and emotionally draining.
What pragmatist said. Even if you can't break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden, how did your friend understand it?
How did the attack happen? I'm skeptical.
Sorry, I couldn't tell what was a quote and what wasn't.
Polyamory is usually defined as honest nonmonogamy. In other words, any time someone is dating two people openly, that's poly. It's how many humans naturally behave. It doesn't require exposure to US poly communities, or any community in general for that matter.
As you discuss in the dropbox link, this is a pretty massive selection bias. I'd suggest that this invalidates any statement made on the basis of these studies about "poly people," since most poly people seem not to be included. People all over the world are poly, in every country, of every race and class.
It's as if we did a study of "rationalists" and only included people on LW, ignoring any scientists, policy makers, or evidence-based medical researchers, simply because they didn't use the term "rationalist."
You state:
...Whi
I know that the antidepressant Wellbutrin, which is a stimulant, has been associated with a small amount of weight loss over a few months, though I'm not sure whether this has been shown to persist longer. That's an off-label use, though.
I'd guess that any stimulant would produce weight loss in the short term. Is there some reason this wouldn't last long-term?
How is Mormonism attractive? You don't even get multiple wives anymore. And most people think you're crazy.
What about a small amount of mild stimulant use?
Why would you not want to be someone who wears a cloak often? And whatever those reasons are, why wouldn't they prevent you from wearing a cloak after you buy it?
it's very, very likely there's life outside our solar system, but I don't have any evidence of it
If there's no evidence of it (circumstantial evidence included), what makes you think it's very likely?
Poly groups tend to be well-educated well-paid white people
I'm baffled by this. Are you saying most studies tend to be done on this group? Do you mean in the US? Are you referring to groups who call themselves poly, or the general practice of honest nonmonogamy?
Are you polysaturated yet? Most people seem to find 2-3 to be the practical limit.
I was actually never asked to say the Pledge in any US school I went to, and I've never even seen it said. I'm pretty sure this is limited to some parts of the country and is no longer as universal as it may once have been. If someone did go to one such school, they and their parents would have the option of simply not saying the Pledge, transferring to a different school (I doubt private or religious schools say it), or homeschooling/unschooling.
For what it's worth, I've never seen it said in any of the US schools I've attended. It's not universal.
I've played that game, using various shaped blocks that the Customer has to assemble in a specific pattern. It's great.
There's also the variation with an Intermediary, and the Expert and Customer can only communicate with the Intermediary, who moves back and forth between rooms and can't write anything down.
Precommitting is useful in many situations, one being where you want to make sure you do something in the future when you know something might change your mind. In Cialdini's "Influence," for instance, he discusses how saying in public "I am not going to smoke another cigarette" is helpful in quitting smoking.
If you think you might change your mind, then surely you would want to have the freedom to do so?
The whole point is that I want to remove that freedom. I don't want the option of changing my mind.
Another classic example is the general who burned his ships upon landing so there would be no option to retreat, to make his soldiers fight harder.
If you know what you're doing, the PhD example is no more than a five-minute process - I've walked people through worse things in about that time.
Please elaborate!
I don't want to eat anything steaklike unless it came from a real, mooing cow. I don't care how it's killed.
I'm worried I'm overestimating my resistance to advertising, so I'm hereby precommitting to this in writing.
Would "servant" not otherwise be justified?
That's a good reminder but I'm not sure how it applies here.
It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.
More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.
Yes, "sweet" is a great description. Why, how would you describe it?
Awesome! Good to know.
I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.
The point of rationality isn't to better argue against beliefs you consider wrong but to change your existing beliefs to be more correct.
Thanks... I'm still going through the most recent Callahan novels. Jake Stonebender does kinda have a temper.
Downvoted and upvoted the counterbalance (which for some reason was at -1 already; someone didn't follow your instructions). You're surprised people like power?
I suspect you overestimate how much most people like cows...
There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.
No doubt at all? I'd put money on this being wrong. Why would it be outlawed?
Including the problems like 'how to have a steak without killing a cow'.
I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"
I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?
Ok, but... wouldn't the same objection apply to virtually any action/adventure movie or novel? Kick Ass, all the Die Hard movies, anything Tarantino, James Bond, Robert Ludlum's Bourne Identity novels and movies, et cetera. They all have similar violent scenes.
When they're about to rape Jennifer? He deserves that, and it's self-defense.
I don't see it as sneering at all.
I'm not sure what you mean by "senpai noticed me," but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.
The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart - a lot of stuff on AI risk does not mention him.
I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.
Would you prefer this?
https://www.vice.com/en/article/4a33gj/ai-controlled-dr...