All of interstice's Comments + Replies

Wearing a suit in an inappropriate context is like wearing a fedora. It says "I am socially clueless enough to do random inappropriate things"

This is far too broadly stated. The actual message people will take away from an unexpected suit is verrrrry context-dependent, depending on (among other things) who the suit-wearer is, who the people observing are, how the suit-wearer carries himself, the particular situation the suit is worn in, etc. Judging from the post, it sounds like those things create an overall favorable impression for lsusr? (It's hard to tell from just a post, of course, but still.)

4jmh
There's a Korean expression that basically means "the look is right" or "the look fits," which seems in line with your comment. The same outfit, hat, shoes, glasses, jacket, or even car creates a different image in others' heads depending on the person. A different message gets sent. So if the overall point of the post is about signaling, then I suspect it is very important to consider the device one chooses to send messages like this. In other words, yes, breaking some social/cultural standards to make certain points is fine, but thought needs to be put into just how appropriately your chosen device/method "fits" you; that will probably have a fairly large impact on your success. I suspect that holds just as well if you're looking at some type of "polarizing" action as a mechanism for breaking the ice and providing some filtering for making new acquaintances and future good friends.

Yeah, I started wearing a suit in specific contexts after many months of careful consideration. It's not random at all. Everything about it is carefully considered, from the number of buttons on my jacket to the color of my shoes.

I mostly wear it around artists. Artists basically never wear suits where I live, but they really appreciate them because ① artists are particularly sensitive to aesthetic fundamentals and ② artists like creative clothing.

But I still have a problem with the post's tone because if you really internalized that "you" are the player, then your reaction to the informational content should be like "I'm a beyond-'human' uncontrollable force, BOOYEAH!!", not "I'm a beyond-human uncontrollable force, ewww tentacles😣"

Goodness maximizing as undefined without an arbitrary choice of values

By "(non-socially-constructed) Goodness" I mean the goodness of a state of affairs as it actually seems to that particular person really-deep-down. Which can have both selfish -- perhaps "arbitrary" from a certain perspective -- and non-selfish components.

2Noosphere89
Maybe the crux is I'm more skeptical of the "particular person really deep down" part of ethics.

I changed my mind about this, I actually think "lovecraftian horror" might be somewhat better than "monkey" as a mental image, but maybe "(non-socially-constructed)-Goodness-Maximizing AGI" or "void from which things spontaneously arise" or "the voice of God" could be even better?

2interstice
But I still have a problem with the post's tone because if you really internalized that "you" are the player, then your reaction to the informational content should be like "I'm a beyond-'human' uncontrollable force, BOOYEAH!!", not "I'm a beyond-human uncontrollable force, ewww tentacles😣"
2Noosphere89
IMO, I don't particularly like this framing of the player as a (non-socially-constructed)-Goodness-Maximizing AGI", if only because I view Goodness maximizing as undefined without an arbitrary choice of values, and the implied value misalignment is not what I'd call a goodness maximizer (for the reason that it's simply unaligned to what the character values). I like the voice of God/Lovecraftian horror metaphors better.

He doesn't only talk about properties but also what people actually are according to our best physical theories, which is continuous wavefunctions -- of which there are only beth-1.
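For reference, the counting claim follows from a standard argument (a sketch, assuming "continuous wavefunction" means a continuous function from a separable space such as R^{3n} to the complex numbers):

```latex
% A continuous function on a separable space is determined by its values on a
% countable dense subset (e.g. the rational points), so
\[
  \bigl|\,C(\mathbb{R}^{3n},\mathbb{C})\,\bigr|
  \;\le\; \bigl|\,\mathbb{C}^{\mathbb{Q}^{3n}}\,\bigr|
  \;=\; \bigl(2^{\aleph_0}\bigr)^{\aleph_0}
  \;=\; 2^{\aleph_0}
  \;=\; \beth_1 ,
\]
% while the constant functions alone already give \beth_1 of them,
% so there are exactly \beth_1 continuous wavefunctions.
```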

1omnizoid
But not all possible people are continuous wavefunctions! 

Sadly my perception is that there are some lesswrongers who reflexively downvote anything they perceive as "weird", sometimes without thinking the content through very carefully -- especially if it contradicts site orthodoxy in an unapologetic manner.

1notfnofn
That's incredible. But how do they profit? They say they don't profit on middle eastern war markets, so they must be profiting elsewhere somehow

Liked the post btw!

Also

√1 = 2

√1 = ±1

1Recurrented
ah yeah

The question is how we should extrapolate, and in particular if we should extrapolate faster than experts currently predict. You would need to show that Willow represents unusually fast progress relative to expert predictions. It's not enough to say that it seems very impressive.

1G
I extrapolate faster, because experts were wrong about AGI "after 2050" and they were wrong about predicting explosive growth of Bitcoin. In general they are usually too conservative, so odds are experts will be wrong about quantum supremacy as well.

I don't see how your first bullet point is much evidence for the second, unless you have reason to believe that the Willow chip has a level of performance much greater than experts predicted at this point in time.

1G
I meant extrapolating developments in the future.

I think the basic reason that it's hard to make an interesting QCA using this definition is that it's hard to make a reversible CA. Reversible cellular automata are typically made using block-partitioning or a second-order method. The (classical) laws of physics also seem to have a flavor more similar to these than a GoL-style CA, in that they have independent position and velocity coordinates which each determine the time evolution of the other.
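For illustration, a minimal sketch of the second-order construction in Python (the local rule and lattice size here are arbitrary choices; any Boolean function of the current-time neighborhood yields a reversible automaton this way):

```python
import numpy as np

def step_second_order(curr, prev, local_rule):
    """One forward step of a second-order (Fredkin-style) reversible CA:
    new = local_rule(neighborhood of curr) XOR prev. This is invertible no
    matter what local_rule is, since prev = local_rule(...) XOR new."""
    left, right = np.roll(curr, 1), np.roll(curr, -1)
    return local_rule(left, curr, right) ^ prev

def xor3(left, center, right):
    # Arbitrary example rule; reversibility comes from the second-order
    # XOR above, not from any property of this function.
    return left ^ center ^ right

rng = np.random.default_rng(0)
x_prev = rng.integers(0, 2, 64)
x_curr = rng.integers(0, 2, 64)

# Run 100 steps forward, then 100 steps backward: the initial pair is
# recovered exactly, demonstrating reversibility.
a, b = x_prev.copy(), x_curr.copy()
for _ in range(100):
    a, b = b, step_second_order(b, a, xor3)
for _ in range(100):
    a, b = step_second_order(a, b, xor3), a
assert np.array_equal(a, x_prev) and np.array_equal(b, x_curr)
```

The position/velocity analogy maps onto the two time slices: the pair (previous state, current state) plays the role of independent coordinates that each determine the other's evolution.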

Yeah I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there's way more high-quality knowledge in those fields. Although "just dive in to AI" seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short so ¯\_(ツ)_/¯

People asked for a citation so here's one: https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%2520and%2520scientific%2520genius.pdf&ved=2ahUKEwiJjr7b8O-JAxUVOFkFHfrHBMEQFnoECD0QAQ&sqi=2&usg=AOvVaw0HF9-Ta_IR74M8df7Av6Qe

Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein's annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newt... (read more)

2lemonhope
Einstein started doing research a few years before he actually had his miracle year. If he started at 26, he might have never found anything. He went to physics school at 17 or 18. You can't go to "AI safety school" at that age, but if you have funding then you can start learning on your own. It's harder to learn than (eg) learning to code, but not impossibly hard. I am not opposed to funding 25 or 30 or 35 or 40 year olds, but I expect that the most successful people got started in their field (or a very similar one) as a teenager. I wouldn't expect funding an 18-year-old to pay off in less than 4 years. Sorry for being unclear on this in original post.

It sounds pretty implausible to me; intellectual productivity is usually at its peak from the mid-20s to mid-30s (for high fluid-intelligence fields like math and physics).

2interstice
People asked for a citation so here's one: https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%2520and%2520scientific%2520genius.pdf&ved=2ahUKEwiJjr7b8O-JAxUVOFkFHfrHBMEQFnoECD0QAQ&sqi=2&usg=AOvVaw0HF9-Ta_IR74M8df7Av6Qe Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein's annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newton invented calculus at 24. Hmmm I guess this makes it seem more like early 20s - 30. Either way 25 is definitely in peak range, and 18 typically too young (although people have made great discoveries by 18, like Galois. But he likely would have been more productive later had he lived past 20)

Confused as to why this is so heavily downvoted.

6Richard_Kennaway
Personally, I don’t care about the shrimp. At all. The anthropomorphising is absurd, and saying “but even if the suffering were much less it would still be huge” is either a basic error of thinking or dark arts. Anchor the reader by picking a huge number, then back off and point out that it’s still huge. How about epsilon? Is that still huge? How about zero? Anyone can pluck figures out of the air, dictated by whatever will support their bottom line. I see that already one person has let this article mug his brain for $1000. His loss, though he think it gain.

These emails and others can be found in document 32 here.

4Nisan
check out exhibit 13...

but it seems that even on LW people think winning on a noisy N=1 sample is proof of rationality

It's not proof of a high degree of rationality, but it is evidence against being an "idiot" as you said. Especially since the election isn't merely a binary yes/no outcome: we can observe that there was a huge Republican blowout exceeding most forecasts (and in fact freddi bet a lot on the Republican pop vote too at worse odds, as well as some random states, which gives a larger update). This should increase our credence that predicting a Republican win was rational.... (read more)

4Alexander Gietelink Oldenziel
Okay fair enough "rich idiot" was meant more tongue-in-cheek - that's not what I intended. 

Looks likely that tonight is going to be a massive transfer of wealth from "sharps" (among other people) to him. Post hoc and all, but I think if somebody is raking in huge wins while making "stupid" decisions it's worth considering whether they're actually so stupid after all.

>>  'a massive transfer of wealth from "sharps" '. 

no. That's exactly the point. 

1. There might not be any real sharps (= traders with access to real private, arbitrageable information who consistently take risk-neutral bets on it) in this market at all.

This is because a) this might simply be a noisy, high-entropy source that is inherently difficult to predict, hence there is little arbitrageable information, and/or b) sharps have not been sufficiently incentivized.

2. The transfer of wealth is actually disappointing because Theo th... (read more)

That's why I said: "In expectation", "win or lose"

That the coinflip came out one way rather than another doesn't prove the guy had actual inside knowledge. He bought a large part of the shares at crazy odds because his market impact moved the price so much.

But yes, he could be a sharp in sheep's clothing. I doubt it, but who knows. EDIT: I calculated the implied private odds this guy would have to have if he were a rational Kelly bettor. Suffice it to say these private odds seem unrealistic for election betting.
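For concreteness, a minimal sketch of that kind of calculation, assuming a simple binary contract and a full-Kelly bettor (the numbers below are purely illustrative, not the actual position):

```python
def kelly_implied_probability(stake_fraction: float, price: float) -> float:
    """Belief a full-Kelly bettor would need in order to stake this fraction
    of bankroll on a binary contract costing `price` and paying out 1.

    The Kelly fraction is f* = (p - price) / (1 - price),
    so the implied private probability is p = price + f*(1 - price).
    """
    return price + stake_fraction * (1 - price)

# Illustrative: staking 30% of one's bankroll at a 60% market price
# only makes Kelly sense if you believe the true probability is ~72%.
print(kelly_implied_probability(0.30, 0.60))  # 0.72
```

The larger the fraction of bankroll someone is willing to stake at given odds, the more extreme their implied private probability has to be for the bet to be Kelly-rational.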

Point is that the winners contribute epistemics and the losers contribute money. The real winner is society [if the questions are about socially-relevant topics].

Good post, it's underappreciated that a society of ideally rational people wouldn't have unsubsidized, real-money prediction markets.

unless you've actually got other people being wrong even in light of the new actors' information

Of course in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It's not a ... (read more)

That's probably the one I was thinking of.

I know of only two people who anticipated something like what we are seeing far ahead of time; Hans Moravec and Jan Leike

I didn't know about Jan's AI timelines. Shane Legg also had some decently early predictions of AI around 2030 (~2007 was the earliest I knew about).

6Alexander Gietelink Oldenziel
Oh no uh-oh I think I might have confused Shane Legg with Jan Leike

Some beliefs can be worse or better at predicting what we observe, this is not the same thing as popularity.

3James Camacho
That assumes the law of non-contradiction. I could hold the belief that everything will happen in the future, and my prediction will be right every time. Alternatively, I can adjust my memory of a prediction to be exactly what I experience now. Also, predicting the future only seems useful insofar as it lets the belief propagate better. The more rational and patient the hosts are, the more useful this skill becomes. But, if you're thrown into a short-run game (say ~80yrs) that's already at an evolutionary equilibrium, combining this skill with the law of non-contradiction (i.e. only holding consistent beliefs) may get you killed.

Far enough in the future, ancient brain scans would be fascinating antique artifacts, like rare archaeological finds today. I think people would be interested in reviving you on that basis alone (assuming there are people-like things with some power in the future).

I like the decluttering. I think the title should be smaller and have less white space above it. I also think it would be better if the ToC were just faded a lot until mouseover; the sudden appearance/disappearance feels too abrupt.

4habryka
I think making things faint enough that the relatively small margin between the main body text and the ToC wouldn't become bothersome during reading isn't really feasible. In general, because people's screen contrast and color calibration differ quite a lot, you don't have much wiggle room at the lower levels of opacity without accidentally shipping completely different experiences to different users. I think it's plausible we want to adjust the whitespace below the title, but I think you really need this much space above the title to avoid it looking cluttered together with the tags on smaller screens. On larger screens there is enough distance between the title and the top right corner, but things end up much harder to parse when the tags extend into the space right above the title, and that margin isn't big enough.

No I don't think so because people could just airgap the GPUs.

Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example the GPU virus would probably be worked around soon after deployment, via airgapping GPUs, developing software countermeasures, or just resetting infected GPUs.

1Michael Soareverix
Is it possible to develop specialized (narrow) AI that surpasses every human at infecting/destroying GPU systems, but won't wipe us out? LLM-powered Stuxnet would be an example. Bacteria aren't smarter than humans, but they are still very dangerous. It seems like a digital counterpart could target GPUs and so prevent AGI. (Obviously, I'm not advocating for this in particular, since it would mean the end of the internet and I like the internet. It seems likely, however, that there are pivotal acts possible by narrow AI that prevent AGI without actually being AGI.)

This discussion is a nice illustration of why x-riskers are definitely more power-seeking than the average activist group. Just like Eskimos proverbially have 50 words for snow, AI-risk-reducers need at least 50 terms for "taking over the world" to demarcate the range of possible scenarios. ;)

Nice overview, I agree but I think the 2016-2021 plan could still arguably be described as "obtain god-like AI and use it to take over the world"(admittedly with some rhetorical exaggeration, but like, not that much)

5Eli Tyre
I think it's pretty important that the 2016 to 2021 plan was explicitly aiming to avoid unleashing godlike power. "The minimal amount of power to do a thing which is otherwise impossible", not "as much omnipotence as is allowed by physics". And similarly, the 2016 to 2021 plan did not entail optimizing the world except with regard to what is necessary to prevent dangerous AGIs. These are both in contrast to the earlier 2004 to 2016 plan. So the rhetorical exaggeration confuses things. MIRI actually did have a plan that, in my view, is well characterized as (eventually) taking over the world, without exaggeration. That's apt to get lost if we describe a "toned down" plan as "taking over the world" just because it involves taking powerful, potentially aggressive action.

I would be happy to take bets here about what people would say.

Sure, I DM'd you.

I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president

Yeah, but it's not just the old MIRI views, but those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview e.g. regarding the competence of the rest of the world. I get the pretty strong impression that "a small group of people with overwhelming hard power" was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.

8habryka
Some things that feel incongruent with this:
* Eliezer talks a lot in the Arbital article on CEV about how useful it is to have a visibly neutral alignment target
* Right now Eliezer is pursuing a strategy which does not meaningfully empower him at all (just halting AGI progress)
* Eliezer complains a lot about various people using AI alignment as a guise for mostly just achieving their personal objectives (in particular, the standard AI censorship stuff being thrown into the same bucket)
* Lots of conversations I've had with MIRI employees

I would be happy to take bets here about what people would say.

I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.

But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer's worldview, I don't think it would make much sense for them to give the AI to the US government (considered incompetent) or to AI labs (negligently reckless).

5Eli Tyre
Here is a video of Eliezer, first hosted on Vimeo in 2011. I don't know when it was recorded. [Anyone know if there's a way to embed the video in the comment, so people don't have to click out to watch it?] He states explicitly: [...] And later in the video he says: [...]
7habryka
I agree that very old MIRI (explicitly disavowed by present MIRI and mostly modeled as "one guy in a basement somewhere") looked a bit more like this, but I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president. I don't think it has zero value in forecasting the future, but going and reading someone's high-school political science essay, and inferring they would endorse that position in the modern day, is extremely dubious. My model of them would definitely think very hard about the signaling and coordination problems that come with people trying to build an AGI themselves, and then act on those. I think Eliezer's worldview here would totally output actions that include very legible precommitments about what the AI system would be used for, and would absolutely definitely not include the ability of whoever builds AGI to just take over the world with it. Eliezer has written a lot about this stuff and clearly takes considerations like that extremely seriously.

It wasn't specified but I think they strongly implied it would be that or something equivalently coercive. The "melting GPUs" plan was explicitly not a pivotal act but rather something with the required level of difficulty, and it was implied that the actual pivotal act would be something further outside the political Overton window. When you consider the ways "melting GPUs" would be insufficient a plan like this is the natural conclusion.

doesn't require replacing existing governments

I don't think you would need to replace existing governments. Just bl... (read more)

Just block all AI projects and maintain your ability to continue doing so in the future via maintaining military supremacy.

That to me is a very very non-central case of "take over the world", if it is one at all.

This is about "what would people think when they hear that description" and I could be wrong, but I expect "the plan is to take over the world" summary would lead people to expect "replace governments" level of interference, not "coerce/trade to ensure this specific policy" - and there's a really really big difference between the two.

"Taking over" something does not imply that you are going to use your authority in a tyrannical fashion. People can obtain control over organizations and places and govern with a light or even barely-existent touch, it happens all the time.

Would you accept "they plan to use extremely powerful AI to institute a minimalist, AI-enabled world government focused on preventing the development of other AI systems" as a summary? Like sure, "they want to take over the world" as a gist of that does have a bit of an editorial slant, but not that much of one. I think ... (read more)

8Ruby
No. Because I don't think that was specified or is necessary for a pivotal act. You could leave all existing government structures intact and simply create an invincible system that causes any GPU farm larger than a certain size to melt. Or something akin to that that doesn't require replacing existing governments, but is a quite narrow intervention.

Are you saying that the AIS movement is more power-seeking than the environmentalist movement, which spent $30M+ [...]

I think that AIS lobbying is likely to have more consequential and enduring effects on the world than environmental lobbying regardless of the absolute size in body count or amount of money, so yes.

"MIRI default plan" was "to do math in hope that some of this math will turn out to be useful".

I mean yeah, that is a better description of their publicly-known day-to-day actions, but intention also matters. They settled on math after it became clea... (read more)

4quetzal_rainbow
The point of the OP is not about effects, it's about AIS being visibly more power-seeking than other movements and causing backlash in response to visible activity.

Are you sure [...] et cetera are less power-seeking than AI Safety community?

Until recently the MIRI default plan was basically "obtain god-like AI and use it to take over the world" ("pivotal act"); it's hard to get more power-seeking than that. Other wings of the community have been more circumspect but also more active in things like founding AI labs, influencing government policy, etc., to the tune of many billions of dollars' worth of total influence. Not saying this is necessarily wrong but it does seem empirically clear that AI-risk-avoiders are mo... (read more)

My understanding of the MIRI plan was "have a controllable, safe AI that's just powerful enough to take some action that prevents anyone else from building a more powerful and more dangerous AI". I wouldn't call that God-like or an intention to take over the world. The go-to [acknowledged as not that plausible] example is "melt all the GPUs". Your description feels grossly inaccurate.

5quetzal_rainbow
Are you saying that the AIS movement is more power-seeking than the environmentalist movement, which spent $30M+ on lobbying in 2023 alone and has political parties in 90 countries, in five of them part of the ruling coalition? For comparison, this Politico piece, written with a maximally negative attitude, mentions around $2M of AIS lobbying. It's like saying "NASA's default plan is to spread the light of consciousness across the stars", which is kinda technically true, but in reality NASA's actions are not as cool as this phrase implies. The "MIRI default plan" was "to do math in hope that some of this math will turn out to be useful".

Makes sense. But I think the OP is using the term to mean something different than you do (centrally, math and puzzle solving).

Hmm, but don't puzzle games and math fit those criteria pretty well? (I guess if you're really trying hard at either there's more legitimate contact with reality?) What would you consider a central example of a nerdy interest?

6romeostevensit
Imaginal worlds, escapism. Video games, tabletop gaming, fantasy movies and books, comics and anime, collecting things, model building or mechanically intricate things.

I wonder if "brains" of the sort that are useful for math and programming are neccessarily all that helpful here. I think intuition-guided trial and error might work better. That's been my experience dealing with chronic-illness type stuff.

2riceissa
I think I used to implicitly believe this too. I gravitate much more to math/programming than biology, and had a really hard time getting myself interested in biology/health stuff. But having been forced to learn more biology/health stuff, I seem to be able to ask questions that I don't see other people asking, and thinking thoughts that not many others are thinking, so now I think the kinds of thinking used in math/programming generalize and would be quite helpful in solving mysterious chronic illnesses. Separately, I used to mostly agree with the Elizabeth post you linked, but the biggest "win" I've had so far with my own chronic illness has had the opposite lesson, where careful thinking and learning allowed me to improve my breathing problem. Of course, I still try a bunch of random things based on intuition. But I have a sense that having a good mechanistic model of the underlying physiology will lead to the biggest cures.

I think she meant he was looking for epistemic authority figures to defer to more broadly, even if it wasn't because he thought they were better at math than him.

2Raemon
This was also my read (and, while I don't have links on hand and might be misremembering, I think he has other Twitter threads that basically state this explicitly)

Some advanced meditators report that they do perceive experience as being basically discrete, flickering in and out of existence at a very high frequency (which is why it might appear continuous without sufficient attention). See e.g. https://www.mctb.org/mctb2/table-of-contents/part-i-the-fundamentals/5-the-three-characteristics/

Tangentially related: some advanced meditators report that their sense that perception has a center vanishes at a certain point along the meditative path, and this is associated with a reduction in suffering.

performance gap of trans women over women

The post is about the performance gap of trans women over men, not women.

0kromem
It implicitly does compare trans women to other women in talking about the performance similarity between men and women: "Why aren't males way smarter than females on average? Males have ~13% higher cortical neuron density and 11% heavier brains (implying 1.11^(2/3) − 1 ≈ 7% more area?). One might expect males to have mean IQ far above females then, but instead the means and medians are similar" So OP is saying "look, women and men are the same, but trans women are exceptional." I'm saying that identifying the exceptionality of trans women ignores the environmental disadvantage other women experience, such that the earlier claims of unexceptional performance of women (which, as I quoted, get an explicit mention from a presumption of assumed likelihood of male competency based on what's effectively phrenology) are reflecting a disadvantaged sample vs. trans women. My point is that if you accounted for environmental factors, the data would potentially show female exceptionality across the board, and the key reason trans women end up being an outlier against both men and other women is that they avoid the early educational disadvantage other women experience.

I don't know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps??). It's much easier to infer that it's likely some third factor than to know exactly what third factor it is. I actually think most of the evidence in this very post supports the 3rd-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; that most of the ostensible damage occurs... (read more)

3lemonhope
They seemed low-T during high school though! Yeah could be a third factor though. Maybe you are right.

I buy that trans women are smart but I doubt "testosterone makes you dumber" is the explanation, more likely some 3rd factor raises IQ and lowers testosterone.

3lemonhope
Like what exactly? That seems unlikely to me. I suppose we will have results from the ongoing gender transitions soon.

I think using the universal prior again is more natural. It's simpler to use the same complexity metric for everything; it's more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be roughly 2^-(K(world) + K(claw)), i.e. determined by the sum of their Kolmogorov complexities; and the universal prior dominates the inverse-square measure but the converse doesn't hold.

If you want to pick out locations within some particular computation, you can just use the universal prior again, applied to indices to parts of the computation.
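For what it's worth, one way to make the dominance claim precise (a sketch, writing m for the universal prior over indices n and K for prefix Kolmogorov complexity, with multiplicative constants c, c', c'' left implicit):

```latex
% The universal prior dominates the inverse-square measure:
\[
  m(n) \;\ge\; c \cdot 2^{-K(n)} \;\ge\; \frac{c'}{n^2}
  \qquad \text{since } K(n) \le 2\log n + O(1),
\]
% but the converse fails: on simple sparse indices the universal prior is far
% heavier, e.g. for n = 2^k,
\[
  m(2^k) \;\ge\; c \cdot 2^{-K(2^k)} \;\ge\; \frac{c''}{k\,(\log k)^2}
  \;\gg\; 2^{-2k} \;=\; \frac{1}{n^2},
\]
% so no constant multiple of 1/n^2 upper-bounds m(n).
```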

2Tamsin Leake
What you propose, ≈ "weigh indices by Kolmogorov complexity", is indeed a way to go about picking indices, but "weigh indices by one over their square" feels a lot more natural to me; a lot simpler than invoking the universal prior twice.