Comments
LWLW

What makes you confident that AI progress has stagnated at OpenAI? I understand if you don't have time to explain, but which metrics have stagnated over the past year?

LWLW

What if Trump is channeling his inner Doctor Strange and is crashing the economy in order to slow AI progress and buy time for alignment? Eliezer calls for an AI pause; Trump MAKES an AI pause. I rest my case that Trump is the most important figure in the history of AI alignment.

LWLW

This is an uncharitable interpretation, but "good at increasingly long tasks which require no real cleverness" seems economically valuable without leading to what I think of as superintelligence.

LWLW

How does this account for the difficulty of the tasks? AFAIK even reasoning models still struggle with matrix reasoning, and most matrix puzzles (even difficult ones) can be solved in 15-30 seconds, occasionally 4-5 minutes for sufficiently challenging ones. Even in those cases you usually figure out what to look for in the first 30-60 seconds and spend the rest of the time on drudge work.

So current agents might be capable of the one-minute task "write a hello world program" while not being capable of the one-minute task "solve the final puzzle on Mensa DK."


And if that's the case, then agents might become capable of routine long-horizon tasks (whatever that means) while still being incapable of more OOD achievements like writing "Attention Is All You Need."

What am I missing?

LWLW

Oh, I was actually hoping you'd reply! I may have hallucinated the exact quote I mentioned, but here is something from Ulam: "Ulam on physical intuition and visualization," on Steve Hsu's blog. I might also have hallucinated the claim about Poincaré being tested by Binet; that may just be an urban legend I didn't verify. You can find Poincaré's struggles with coordination and dexterity in "Men of Mathematics," though that's much less extreme than the story I passed on. I am confident in Tao's preference for analysis over visualization; if you have the time, look up "Terence Tao" on Gwern's website.


I'm not very familiar with the field of neuroscience, but it seems to me that we're probably pretty far from being able to provide a satisfactory answer to these questions. Is that true given your understanding of where the field is? What sorts of techniques or technology would we need to develop before we could start answering these questions?

LWLW

From what I understand, JVN, Poincaré, and Terence Tao all had/have issues with perceptual intuition/mental visualization. JVN had “the physical intuition of a doorknob,” Poincaré was tested by Binet and had extremely poor perceptual abilities, and Tao (at least as a child) mentioned finding mental rotation tasks “hard.” 

I also fit a (much less extreme) version of this pattern, which is why I'm interested in this in the first place. I am (relatively) good at visual pattern recognition and math, but I have aphantasia and only average visual working memory. I felt insecure about this for a while, but seeing that much more intelligent people than me had a similar (but more extreme) cognitive profile made me feel better.

Does anybody have a satisfactory explanation for this profile beyond a simplistic appeal to "tradeoffs"?


Edit: Some claims about JVN/Poincaré may have been hallucinated, but they are at least somewhat grounded in reality. See my reply to Steven.

LWLW

This is why I don't really buy anybody who claims an IQ above 160. Effectively all tested IQs over 160 likely came from a childhood test or a test normed with SD 20, and there is an extremely high probability that the person has since regressed substantially toward the mean. Even for a test like the WAIS that claims to measure up to 160 with SD 15, the norms start to look really questionable once you go much past 140.
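To make that concrete, here is a minimal sketch of the two calculations underlying this point: how much rarer a 160 is under SD-15 norms than under SD-20 norms, and how classical test theory predicts regression toward the mean. The 0.9 reliability is an illustrative number, not a claim about any particular test:

```python
from scipy.stats import norm

def rarity(iq, mean=100, sd=15):
    """Return the 1-in-N rarity of scoring at or above `iq`."""
    p = norm.sf((iq - mean) / sd)  # upper-tail probability
    return 1 / p

print(f"IQ 160, SD 15: 1 in {rarity(160, sd=15):,.0f}")  # ~1 in 31,600
print(f"IQ 160, SD 20: 1 in {rarity(160, sd=20):,.0f}")  # ~1 in 740

def expected_true_score(observed, reliability, mean=100):
    """Classical test theory: the expected true score regresses
    toward the population mean in proportion to (1 - reliability)."""
    return mean + reliability * (observed - mean)

# Even with a generous reliability of 0.9, an observed 160 implies
# an expected true score of only 154.
print(expected_true_score(160, reliability=0.9))
```

So a "160" under SD-20 norms is roughly 40 times more common than one under SD-15 norms, and extreme observed scores are expected to shrink on retest even before considering childhood-norm issues.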

I know one person who tested at 152 on the WISC at ~11, and one who hit the WAIS-III ceiling of 155 at 21. Both were high-achieving, but not exceptionally so. Someone fixated on IQ might call this cope, but they really were fairly normal people who didn't seem to be on a higher plane of existence. The biggest functional difference between them and people with more average IQs was better job prospects; both also had a lot of emotional problems and didn't seem particularly happy.

LWLW

This just boils down to "humans aren't aligned," and that fact is why this would never work, but I still think it's worth bringing up. Why are you required to get a license to drive but not to have children? I don't mean this literally; I'm referring to how casually much of society treats the decision to have children. Bringing someone into existence is vastly higher stakes than driving a car.

I'm sure this isn't implementable, but parents should at least be screened for personality disorders before they're allowed to have children. Sure, that's a slippery slope, and sure, many of the most powerful people just want workers to furnish their quality of life regardless of the workers' QOL. But bringing a child into the world whom you can't properly care for can lead to a lifetime of avoidable suffering.


I was just reading about "genomic liberty," and the idea that parents would choose to make their kids' IQ lower than it could be, and that some would even choose for their children to share their disabilities, is completely ridiculous. It just made me think, "those people shouldn't have the liberty of being parents." Bringing another life into existence is not a casual decision like where you work or live, and the obligation should be to the children, not the parents.

LWLW

How far along is the development of autonomous underwater drones in America? I've read statements by American military officials about wanting to turn the Taiwan Strait into a drone-infested death trap, and I read someone (not an expert) who said that China is racing against time to invade before autonomous underwater drones take off. Is that true? Are they on track?

LWLW

MuZero doesn't seem categorically different from AlphaZero. It has to do a bit more work at the beginning, but if you don't get any reward for breaking the rules, you will learn not to break the rules. If MuZero is continuously learning, then so is AlphaZero. Also, the games used were still computationally simple, OOMs simpler than an open-world game, let alone a true world model. AFAIK MuZero doesn't work on open-ended, open-world games, and AlphaStar never got to superhuman performance at human speed either. A sketch of where the "bit more work" lives is below.
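To locate that difference precisely, here is a minimal sketch in schematic Python (the function and network names are mine, not either paper's API) of what changes inside the tree search: AlphaZero expands nodes with the true simulator, while MuZero substitutes a learned dynamics network.

```python
# Schematic contrast between the two search expansions; illustrative only.

def alphazero_expand(state, action, env_rules, net):
    # AlphaZero: the exact game rules supply the next state and reward,
    # so the network only has to learn policy and value, never transitions.
    next_state, reward = env_rules.step(state, action)
    policy, value = net.predict(next_state)
    return next_state, reward, policy, value

def muzero_expand(hidden, action, dynamics_net, prediction_net):
    # MuZero: a learned dynamics network replaces the simulator inside
    # search; everything below the root is imagined in latent space.
    next_hidden, reward = dynamics_net(hidden, action)
    policy, value = prediction_net(next_hidden)
    return next_hidden, reward, policy, value
```

On this view the architectures differ only in where the transition and reward functions come from, which is why learning them from a rule-bound, computationally simple game looks more like extra bootstrapping work than a categorical leap.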
