Quick Takes

State tracking could be the next reasoning-tier breakthrough in frontier model capabilities, and I believe there is strong evidence for this.

State space models already power the fastest available voice models, such as Cartesia's Sonic (time-to-first-audio advertised as under 40ms). SSMs such as Mamba, RWKV, and Titans have outperformed transformers in some research settings.
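
To make the mechanism behind these claims concrete, here is a minimal sketch (Python/NumPy, with made-up matrices and sizes, not any particular production model) of the linear recurrence at the heart of an SSM layer: a fixed-size hidden state is carried forward across the whole sequence, which is exactly the property that makes state tracking a natural fit.

```python
import numpy as np

# Minimal discrete-time state-space recurrence (illustrative only):
#   h_t = A @ h_{t-1} + B @ x_t
#   y_t = C @ h_t
# Real architectures in this family (e.g. Mamba) add structured,
# input-dependent parameters and parallel-scan tricks; the constant-size
# recurrent state is the point here.
rng = np.random.default_rng(0)
d_state, d_in, d_out, seq_len = 16, 8, 8, 32

A = 0.9 * np.eye(d_state)                 # decay-like state transition (assumed)
B = 0.1 * rng.normal(size=(d_state, d_in))
C = 0.1 * rng.normal(size=(d_out, d_state))

x = rng.normal(size=(seq_len, d_in))      # input sequence
h = np.zeros(d_state)                     # the "state" being tracked
outputs = []
for t in range(seq_len):
    h = A @ h + B @ x[t]                  # O(1) memory per step, unlike attention
    outputs.append(C @ h)

print(np.stack(outputs).shape)            # (32, 8)
```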

Flagship LLMs are also bad at state tracking, even with RL for summarization. Forcing an explicit... (read more)

Elizabeth

You will always oversample from the most annoying members of a class.

This is inspired by recent arguments on twitter about how vegans and poly people "always" bring up those facts. I contend that it's simultaneously true that most vegans and poly people are not judgmental, and that it doesn't matter, because those aren't the ones people remember. Omnivores don't notice the 9 vegans who quietly ordered an unsatisfying salad, only the vegan who brought up factory farming conditions at the table. Vegans who just want to abstain from animal products remember the omniv... (read more)

Ben Pace
Not that this is directly relevant to your thesis comparing different groups today; but I do assume that Judaism had a massive evangelical period in its early growth (e.g. 2,000 years ago) that let it get so big that it could afford to pivot to being less evangelical today.
DirectedEvolution
Most of the critical comments I see on HN involve accusing LW of being a cult, being too stupid to realize people can't be fully rational, or being incredibly arrogant and overconfident about analysis based on ass-numbers and ill-researched personal opinion. I don't see that much engagement with LW arguments around AI specifically.
Linch

On Twitter at least, a fair number of the cult allegations seem to come from people (honestly fairly cult-ish themselves) who don't like what LW people say about AI, at least in the threads I'm likely to follow. But I defer to your greater HN expertise!

leogao

everyone is a few hops away from everyone else. this applies in both directions: when I meet random people they always have some weak connection to other people I know, but also when I think of a collection of people as a cluster, most specific pairs of people within that cluster barely know each other except through other people in the cluster.

It’s worth noting that, though it’s true that for a sufficiently large cluster most pairs of people are not strongly connected, they are significantly more likely to be connected than in a random graph. This is the high clustering coefficient property of small-world graphs like the social graph.
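
A quick illustration of that property (a minimal sketch using networkx with arbitrary parameters, not real social-network data): a small-world graph has a much higher average clustering coefficient than a random graph with a comparable average degree.

```python
import networkx as nx

n, k, p = 1000, 10, 0.1
small_world = nx.watts_strogatz_graph(n, k, p, seed=0)       # ring lattice with rewiring
random_graph = nx.gnp_random_graph(n, k / (n - 1), seed=0)   # similar average degree

print("small-world clustering:", round(nx.average_clustering(small_world), 3))
print("random-graph clustering:", round(nx.average_clustering(random_graph), 3))
# Roughly ~0.49 vs ~0.01 for these parameters: your friends' friends are
# disproportionately likely to know each other, even though most pairs in a
# large cluster are still only connected through intermediaries.
```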

xAI's safety team is 3 people.

leogao
I want to defend interp as a reasonable thing for one to do for superintelligence alignment, to the extent that one believes there is any object level work of value to do right now. (maybe there isn't, and everyone should go do field building or something. no strong takes rn.) I've become more pessimistic about the weird alignment theory over time and I think it's doomed just like how most theory work in ML is doomed (and at least ML theorists can test their theories against real NNs, if they so choose! alignment theory has no AGI to test against.) I don't really buy that interp (specifically ambitious mechinterp, the project of fully understanding exactly how neural networks work down to the last gear) has been that useful for capabilities insights to date. fmpov, the process that produces useful capabilities insights generally operates at a different level of abstraction than mechinterp operates at. I can't talk about current examples for obvious reasons but I can talk about historical ones. with Chinchilla, it fixes a mistake in the Kaplan paper token budget methodology that's obvious in hindsight; momentum and LR decay, which have been around for decades, are based on intuitive arguments from classic convex optimization; transformers came about by reasoning about the shape and trajectory of computers and trying to parallelize things as much as possible. also, a lot of stuff Just Works and nobody knows why. one analogy that comes to mind is if your goal is to make your country's economy go well, it certainly can't hurt to become really good friends with a random subset of the population to understand everything they do. you'll learn things about how they respond to price changes or whether they'd be more efficient with better healthcare or whatever. but it's probably a much much higher priority for you to understand how economies respond to the interest rate, or tariffs, or job programs, or so on, and you want to think of people as crowds of homo economicus wit

My current view is that alignment theory, if it's the good stuff, should work on deep learning as soon as it comes out; if it doesn't, it's not likely to be useful later unless it helps produce stuff that does work on deep learning. Wentworth, Ngo, and Causal Incentives are the main threads that already seem to have achieved this somewhat. SLT and DEC seem potentially relevant.

I'll think about your argument for mechinterp. If it's true that the ratio isn't as catastrophic as I expect it to turn out to be, I do agree that making microscope AI work would be incredible in allowing for empiricism to finally properly inform rich and specific theory.

Cole Wyeth
This seems reasonable. Personally, I’m not that worried about capabilities increases from mech interp; I simply don’t expect it to work very well.
lemonhope

Long have I searched for an intuitive name for motte & bailey that I wouldn't have to explain too much in conversation. I might have finally found it. The "I was merely saying fallacy". Verb: merelysay. Noun: merelysayism. Example: "You said you could cure cancer and now you're merelysaying you help the body fight colon cancer only."

Drake Morrison
I would guess something like historical momentum is the reason people keep using it. Nicholas Shackel coined the term in 2005, then it got popularized in 2014 by SSC. 20 years is a long time for people to be using the term.
sjadler

20 years is a long time, sure, but I don’t think that would be a good argument for keeping it! (I understand you’re likely just describing, not justifying.)

Motte & bailey has a major disadvantage of “nobody who hears it for the first time has any understanding of what it means”

Even as someone who knows the concept, I’m still not even 100% positive that motte and bailey do in fact mean “overclaim and retreat” respectively

People are welcome to use the terms they want, of course. But I’d think there would need to be a big difference between M&B and some simpler name to justify using M&B.

sjadler
"Overclaim and retreat" also seems better than motte & bailey imo

Multiple times have I seen an argument like this:

Imagine a fully materialistic universe strictly following some laws, which are such that no agent from inside the universe is able to fully comprehend them...

(https://www.lesswrong.com/posts/YTmNCEkqvF7ZrnvoR/zombies-substance-dualist-zombies?commentId=iQfr65fKr5nSFriCs)

I wonder if that is possible? For computables, it is always possible to construct a quine (standing for the agent) with arbitrary embedded contents (for the rest of the universe/laws/etc), and it wouldn't even be that large - it only needs to... (read more)

JBlack

All you need is a bounded universe with laws having complexity greater than can be embedded within that bound, and that premise holds.

You can even have a universe containing agents with unbounded complexity, but laws with infinite complexity describing a universe that only permits agents with finite complexity at any given time.

Lun

Someone has posted about a personal case of vision deterioration after taking Lumina, along with a proposed mechanism of action. I learned about Lumina on LessWrong a few years back, so I'm sharing this link.

https://substack.com/home/post/p-168042147

For the past several months I have been slowly losing my vision, and I may be able to trace it back to taking the Lumina Probiotic. Or rather, one of its byproducts that isn’t listed in the advertising

I don't know enough about this to make an informed judgement on the accuracy of the proposed mechanism. 


Will they help me test my own mouth to determine whether it's even in the realm of possibility? Me, a complete nobody who came out of nowhere with a well written hit piece full of plausible but hopefully completely wrong conjecture about the most horrifying thing that could happen?

I think it's very likely that they'd at least want to talk to you. If they couldn't rule out your proposed theory, I'd guess they're already equipped to test for it and probably would want to given that a lot of their own people are using the product.

dirk
In a previous instance when someone suggested Lumina was unsafe, Lumina threatened to sue them for libel. (Extremely bad behavior, which I condemn). Based on this, I suspect contacting the company would not go well. My condolences re: your vision deterioration; I hope you're able to find a solution.
garloid64
Elaborate. I can't find any information on substance diffusion from the oral mucosa to cross-reference with the concentration of formate that could be expected from 10^??? bacteria living in the crevices between your teeth and gums. It would make me feel a lot better to be wrong, since the differential diagnosis is significantly less grim with slow formate poisoning removed. I'd throw down that hundred bucks just for the reassurance, even suffering as I am with medical bills. From this. And the bullet? I choose a BB, to shoot just your eye out. It's only fair :^) (I wouldn't actually, it's just a bit of dark humor for you)

If the singularity occurs over two years, as opposed to two weeks, then I expect most people will be bored throughout much of it, including me. This is because I don't think one can feel excited for more than a couple weeks. Maybe this is chemical.

Nonetheless, these would be the two most important years in human history. If you ordered all the days in human history by importance/'craziness', then most of the top-ranked days would fall within these two years.

So there will be a disconnect between the objective reality and how much excitement I feel.

Thane Ruthenis
Not necessarily. If humans don't die or end up depowered in the first few weeks of it, it might instead be a continuous high-intensity stress state, because you'll need to be paying attention 24/7 to constant world-upturning developments, frantically figuring out what process/trend/entity you should be hitching your wagon to in order to not be drowned by the ever-rising tide, with the correct choice dynamically changing at an ever-increasing pace. "Not being depowered" would actually make the Singularity experience massively worse in the short term, precisely because you'll be constantly getting access to new tools and opportunities, and it'd be on you to frantically figure out how to make good use of them. The relevant reference class is probably something like "being a high-frequency trader." This is pretty close to what I expect a "slow" takeoff to feel like, yep.

This comment has been tumbling around in my head for a few days now. It seems to be both true and bad. Is there any hope at all that the Singularity could be a pleasant event to live through?

ACCount
Wartime is often described as "months of boredom punctuated by moments of terror". The moments where your life is on the line and seconds feel like hours are few and far in between. If they weren't, you wouldn't last long. 

"Changing Planes" by Ursula LeGuin is worth a read if you're looking for a book that's got interesting alignment ideas (specifically what to do with power, not how to get it), while simultaneously being extremely chill. It might actually be the only chill book that I (with a fair degree of license) consider alignment relevant.

To anyone currently going through the fun of NeurIPS rebuttals for the first time, some advice:

Firstly, if you're feeling down about reviews, remember that peer review has been officially shown to be a ridiculous random number generator in an RCT - half of spotlight papers are rejected by another review committee! Don't tie your self-worth to whether the roulette wheel landed on black or red. If their critiques don't make sense, they often don't (and were plausibly written by an LLM). And if they do make sense (and remember to control for your defensiveness), the... (read more)

I deserved all the smoke sent my way this time lol. Next time!

My writing is sloppy. Can anyone please suggest any resources where I can get feedback on my writing, or personalized instructions that will improve my processes to make me a better writer?

In the meantime I'll try to adopt this "one simple trick": each time I write a piece, I will read it aloud to myself. If it is "tough on the ear" or I stumble while sight-reading it, I will edit the offending section until it is neither.

Also, I'll continue to get LLMs to summarize the points in a given piece. If there's something I feel is missing in its summary or ... (read more)


I'm no writer or editor, but you could email me. I check my email every few days: lemonhope@fastmail.com

CstineSublime
Good question, no, no one advised me to use this technique but I use it as a last resort. I frequently feel that I am misunderstood in communication. Often people's replies to me sound like replies to totally different conversations or statements/questions than the one I just made. If an LLM seems to imply the focus is different or overemphasizes something I didn't see as significant, then I see no reason to believe that isn't indicative that humans will be dragged away by that too.
Loki zen
It may well be. It's been my observation that what distracts/confuses them doesn't necessarily line up with what confuses humans, but it might still be better than your guess if you think your guess is pretty bad

The 50M H100-equivalent compute by 2030 figure tweeted by Musk is on trend (assuming a 2028 slowdown) and might cost about $300bn in total (for the training systems built in 2025-2030 for one AI company, including the buildings and power infrastructure).

If the current trend of compute scaling continues to 2028, there will be 160x more compute per training system than the 100K H100s of 2024. It will require 5 GW of power and cost about $140bn in compute hardware and an additional $60bn in buildings, power, and cooling infrastructure[1].
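
Restating the implied growth rate as a back-of-envelope check (a minimal sketch; the per-year figure is just the average implied by the numbers above, not an independent forecast):

```python
# Implied scaling behind "160x more compute per training system by 2028".
base_year, target_year = 2024, 2028
compute_multiple = 160                      # vs. the 100K-H100 systems of 2024

annual_growth = compute_multiple ** (1 / (target_year - base_year))
print(f"implied growth: ~{annual_growth:.2f}x per year")     # ~3.56x/year

hardware_bn, infra_bn = 140, 60             # hardware; buildings/power/cooling
print(f"2028 training system: ~${hardware_bn + infra_bn}bn, ~5 GW")
```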

However, if the slowdown... (read more)

anaguma
By power, do you mean the cost of electrical equipment etc.? The cost of the energy itself is relatively small. The average price of electricity in the US is $0.13/kWh, which is $36.11/GJ. So even if you had a 5 GW datacenter running continuously for a year, the energy cost is only $5.7bn.
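
Spelling that arithmetic out (a minimal sanity-check sketch of the figures above):

```python
# Energy cost of a 5 GW datacenter running continuously for a year
# at the average US retail electricity price.
price_per_kwh = 0.13                      # USD
power_gw = 5
hours_per_year = 8766                     # 365.25 days

energy_kwh = power_gw * 1e6 * hours_per_year         # GW -> kW, times hours
print(f"energy cost: ~${energy_kwh * price_per_kwh / 1e9:.1f}bn/year")   # ~5.7

# Unit conversion check: 1 kWh = 3.6e-3 GJ
print(f"price: ~${price_per_kwh / 3.6e-3:.2f}/GJ")   # ~36.11
```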

The power infrastructure that might need to be built includes gas generators or power plants, substations, and whatever the buildings themselves need. Generators are apparently added even when not on-paper strictly necessary, as backup power. They are also faster to set up than GW-scale grid interconnection, so they could be important for these sudden giant factories where nobody is quite sure 4 years in advance that they will actually be built at a given scale.

Datacenter infrastructure friction and cost will probably both smooth out the slowdown and disappear as a funding co... (read more)

For the last approx. 3.5 years, I’ve been splitting my time between my emotional coaching practice and working for a local startup. I’m still doing the coaching, but I felt like it was time to move on from the startup, which left me with the question of what to do with the freed-up time and reduced money.

Over the years, people have told me things like “you should have a Patreon” or have otherwise wanted to support my writing. Historically, I’ve had various personal challenges with writing regularly, but now I decided to take another shot at it. I... (read more)

kaiwilliams
Cool! PSA: If you ever want to start a Patreon specifically (rather than through Substack), it may be worth making the page in the next week or so, before the default cut goes from 8% to 10%. Source

Thanks for the hint! I did consider a dual Substack/Patreon approach earlier but decided I couldn't be bothered with the cross-posting. I'll consider if it'd be worth publishing a page soon just so I can reserve myself a cheaper rate for the future.

lc

Much like how all crashes involving self-driving cars get widely publicized, regardless of rarity, for a while people will probably overhype instances of AIs destroying production databases or mismanaging accounting, even after those catastrophes become less common than human mistakes.


"Metalhead" from Black Mirror is a relevant contemporary art piece.

I for one find Spot spooky as hell. I would go as far as to say that I have heard others express discomfort toward Boston Dynamics demo videos. 

Also, sentry guns and UAVs seem like strong examples of extant scary robots. Maybe see also autonomousweaponswatch.org.

Garrett Baker
People are definitely afraid of these robots.
[comment deleted]
Screwtape

Every so often I see the following:

Adam: Does anyone know how to X?
Bella: I asked ChatGPT, and it said you Y then Z.
Adam: Urgh, I could ask ChatGPT myself, why bother to speak up if you don't have anything else to add?

I'm sort of sympathetic to Adam- I too know how to ask ChatGPT things myself- but I think he's making a mistake. Partially because prompting good answers is a bit of a skill (one that's becoming easier and easier as time goes on, but still a bit of a skill.) Mostly because I'm not sure if he's reinforcing people to not answer with LLM answers... (read more)


Possible justifications for Bella's response in slightly different hypotheticals.

  1. Maybe X is a good fit for an LLM. So Adam could have asked an LLM himself. Bella is politely reinforcing the behavior of checking with an LLM before asking a human.
  2. Maybe Adam doesn't have a subscription to a good LLM, or is away from keyboard, or doesn't know how to use them well. Not relevant here, from Adam's response, but Bella might not know that.
  3. Maybe Adam is asking the question for the secondary purpose of building social bonds. Then Bella's response achieves that objective. Compare giving Adam flowers, does he say "Urgh, I could buy flowers myself"?
CstineSublime
I think this is part of a broader problem about asking questions and is not limited to LLMs. The broader topic I've been thinking about a lot recently is "How to ask for help?". The better way to ask for help often involves being specific and targeted about who you ask for help. In this example Adam is casting a wide net; he's not asking a domain expert on X how to do X. Casting a wide net is always going to get a lot of attempts at helpful answers from people who know nothing about X. The helpful-but-clueless to expert ratio will often increase drastically the more esoteric X is. It's probably pretty easy to find someone credible who knows how to cook a half-decent Spaghetti Bolognese, but a Moussaka, which is slightly more esoteric, is going to be a bit harder. I am one of only two people in my very broad face-to-face friendship group that has ever written code in GLSL, and I'm not very good at it, so if a third friend wanted to learn about GLSL I probably wouldn't be a good person to ask. I believe people like Bella are genuine in their desire and intention to help. I also sympathize with Adam's plight, but I think he is the problem. I sympathize because, for example, I don't know anything about the legal structures for startup financing in my company. I wouldn't even know if this is something that I should talk to an accountant or a lawyer about. So I understand Adam's plight: not even knowing where to begin asking how to do X necessitates casting a wide net: going to general online communities, posting to social media, asking friends if they "know someone who knows someone who knows how to X". And then you're bound to catch a lot of Bellas in that net: people genuinely trying to help, but maybe also too enthusiastic to rush in for their participation trophy by asking ChatGPT. And the less said about people who when you ask for recommendations online give you a title of a book without any explanation about why it is relevant, why it is good, or how
cousin_it
Which behavior he's reinforcing is not up to him, it depends on the learner as well. Let's take an analogy. Alice tells Bob "don't steal", and Bob interprets it as "don't get caught stealing". Who's in the wrong here? Bob, of course. He's the one choosing to ignore the intent of the request, like a misaligned AI. Same for people who misinterpret "don't post AI slop" as "get better at passing off AI slop as human". How such people can become genuinely aligned is a good question, and I'm not sure it can be done reliably with reinforcement, because all reinforcement has this kind of problem.
Lao Mein

I previously made a post that hypothesized that a combination of the extra oral ethanol from Lumina and genetic aldehyde deficiency may lead to increased oral cancer risks in certain populations. It has been cited in a recent post about Lumina potentially causing blindness in humans.

I've found that hypothesis less and less plausible ever since I published that post. I still think it is theoretically possible in a small proportion (extremely bad oral hygiene) of the aldehyde-deficient population, but even then it is very unlikely to raise the oral cancer incidence... (read more)

Yeah, my idea is just based on physical proximity. There's no way systemic concentrations would be enough, plus the E. coli in the gut produce way more formate in total given the much larger surface area... yet I can't ignore that my mouth is directly below my eyes. I'm totally willing to bet on it, though I don't know how you'd judge something like this. Formate optic neuropathy doesn't necessarily have specific signs, though in the two case reports it does follow a progressive course and then suddenly get much worse. Is it just based on whether I end up ... (read more)

Raemon

Every now and then I'm like "smart phones are killing America / the world, what can I do about that?". 

Where I mean: "Ubiquitous smart phones mean most people are interacting with websites in a fairly short-attention-span, less info-dense-centric way. Not only that, but because websites must have a good mobile version, you probably want your website to be mobile-first or at least heavily mobile-optimized, and that means it's hard to build features that only really work when users have a large amount of screen space."

I'd like some technological solution... (read more)

Sinclair Chen
shortform video has some epistemic benefits. you get a chance to see the body language and emotional affect of people, which transfers much more information and makes it harder to just flat out lie. more importantly, everpresent access to twitter allows me to quickly iterate on my ideas and get instant feedback on every insane thought that flows through my head. this is not a path i recommend for most people. but it is the path i've chosen.
Raemon

I might separately criticize shortform video and twitter (sure, they definitely have benefits, I just think they also have major costs, and if we can alleviate the costs we should. This doesn't have to mean banning shortform and twitter). 

But, I think that's (mostly) a different topic than the OP.

The question here is not "is it good that you can post on twitter?", it's "is it good that you can post on the version of twitter that was brought into being by most people using small screens?" (Or, more accurately: is it good that we're in the world where small-screen twitter is a dominant force shaping humanity, as opposed to an ecosystem where a less-small-screen-oriented social media app is more dominant?)

Garrett Baker
Ok, I guess I got confused by your calling it a "Hard Problem".

I remember a hygienist at the dentist once telling me that toothpaste isn't a huge deal and that it's the mechanical friction of the toothbrush that provides most of the value. Since being told that, after a meal, I often wet my toothbrush with water and brush for 10 seconds or so.

I just researched it some more and from what I understand, after eating, food debris that remains on your teeth forms a sort of biofilm. Once the biofilm is formed you need those traditional 2 minute long tooth brushing sessions to break it down and remove it. But it takes 30+ mi... (read more)

Sinclair Chen
isn't this what toothpicks are traditionally for? sometimes i just run my fingernail through my teeth, scrape all the outward surfaces and slide it in between the teeth

My understanding is that toothpicks are for scraping the area in between teeth, not the surface of the tooth itself.

Adam Zerner
That's a good call out about acidic food. I remember hearing that too and so don't brush after eating something pretty acidic. Also because my teeth are sensitive to acid and it hurts when I brush after eating something pretty acidic. For the general case, this excerpt from the article sounded like it was indicating that you should brush after eating.
sam

I am confused about why this post on the ethics of eating honey is so heavily downvoted.

It sparked a bunch of interesting discussion in the comments (e.g. this comment by Habryka and the resulting arguments on how to weight non-human animal experiences)

It resulted in at least one interesting top-level rebuttal post.

I assume it led indirectly to this interesting short post also about how to weight non-human experiences. (This might not have been downstream of the honey post, but it's a weird coincidence if it isn't.)

I think the original post certainly had flaws,... (read more)

Mitchell_Porter
You say in another comment that you're going around claiming to detect LLM use in many places. I found the reasons that you gave in the case of BB, bizarrely mundane. You linked to another analysis of yours as an example of hidden LLM use, so I went to check it out. You have more evidence in the case of Alex Kesin, *maybe* even a preponderance of evidence. But there really are two hypotheses to consider, even in that case. One is that Kesin is a writer who naturally writes that way, and whose use of ChatGPT is limited to copying links without trimming them. The other is that Kesin's workflow does include the use of ChatGPT in composition or editing, and that this gave rise to certain telltale stylistic features.   The essay in question ("Don't Eat Honey") contains, by my count, two such sneers, one asserting that Donald Trump is stupid, the other asserting that Curtis Yarvin is boring. Do you not think that we could, for example, go back to the corpus of Bush-era American-college-student writings and find similar attacks on Bush administration figures, inserted into essays that are not about politics?  I am a bit worried about how fatally seductive I could find a debate about this topic to be. Clearly LLM use is widespread, and its signs can be subtle. Developing a precise taxonomy of the ways in which LLMs can be part of the writing process; developing a knowledge of "blatant signs" of LLM use and a sense for the subtle signs too; debating whether something is a false positive; learning how to analyze the innumerable aspects of the genuinely human corpus that have a bearing on these probabilistic judgments... It would be empowering to achieve sophistication on this topic, but I don't know if I can spare the time to achieve that. 
gwern
It is in fact a mundane topic, because you are surrounded by AI slop and people relying heavily on ChatGPT writing, making it a mundane everyday observation infiltrating even the heights of wordcel culture (I've now started seeing blatant ChatGPTisms in the New Yorker and New York Times), which is why you are wrong to bend over backwards to require extraordinary evidence for what have become ordinary claims (and also why your tangents and evasions are so striking). So, I am again going to ignore those, and will ask you again - you were sure that BB was not using ChatGPT, despite the linguistic tells and commonness of it: I am still waiting for an answer here.

Let me first try to convey how this conversation appears from my perspective. I don't think I've ever debated directly with you about anything, but I have an impression of you as doing solid work in the areas of your interest. 

Then, I run across you alleging that BB is using AI to write some of his articles. This catches my attention because I do keep an eye on BB's work. Furthermore, your reason for supposing that he is using AI seems bizarre to me - you think his (very occasional) "sneering" is too "dumb and cliche" to be the work of human hands. Le... (read more)

Act utilitarians choose actions estimated to increase total happiness. Rule utilitarians follow rules estimated to increase total happiness (e.g. not lying). But you can have the best of both: act utilitarianism where rules are instead treated as moral priors. For example, having a strong prior that killing someone is bad, but which can be overridden in extreme circumstances (e.g. if killing the person ends WWII).

These priors make act utilitarianism more safeguarded against bad assessments. They are grounded in Bayesianism (moral priors are updated the sam... (read more)
