All of hairyfigment's Comments + Replies

I don't see how any of it can be right. Getting one algorithm to output Spongebob wouldn't cause the SI to watch Spongebob; even a less silly claim in that vein would still be false. The Platonic agent would know the plan wouldn't work, and thus wouldn't do it.

Since no individual Platonic agent could do anything meaningful alone, and they plainly can't communicate with each other, they can only coordinate by means of reflective decision theory. That's fine, we'll just assume that's the obvious way for intelligent minds to behave. But then the SI works the ... (read more)

https://arxiv.org/abs/1712.05812

It's directly about inverse reinforcement learning, but that should be strictly stronger than RLHF. Seems incumbent on those who disagree to explain why throwing away information here would be enough of a normative assumption (contrary to every story about wishes).

>this always helps in the short term,

You seem to have 'proven' that evolution would use that exact method if it could, since evolution never looks forward and always must build on prior adaptations which provided immediate gain. By the same token, of course, evolution doesn't have any knowledge, but if "knowledge" corresponds to any simple changes it could make, then that will obviously happen.

Well that's disturbing in a different way. How often do they lose a significant fraction of their savings, though? How many are unvaccinated, which isn't the same as loudly complaining about the shot's supposed risks? The apparent lack of Flat Earthers could point to them actually expecting reality to conform to their words, and having a limit on the silliness of the claims they'll believe. But if they aren't losing real money, that could point to it being a game (or a cost of belonging).

4ErioirE
I think they are genuinely unvaccinated. They believe (or profess to believe) in tons of quack medicine but AFAIK they don't spend loads of money on it. If they had a health emergency they'd still go to an ER, so they're not completely in denial of modern medicine.

The answer might be unhelpful due to selection bias, but I'm curious to learn your view of QAnon. Would you say it works like a fandom for people who think they aren't allowed to read or watch fiction? I get the strong sense that half the appeal - aside from the fun of bearing false witness - is getting to invent your own version of how the conspiracy works. (In particular, the pseudoscientific FNAF-esque idea at the heart of it isn't meant to be believed, but to inspire exegesis like that on the Kessel Run.) This would be called fanfic or "fanwank" if they admitted it was based on a fictional setting. Is there something vital you think I'm missing?

1ErioirE
To clarify, I was allowed to read fiction[1], just not on Sundays. Although my mom did disapprove of Harry Potter for a long while because 'something something glorifies occult beliefs something something'. A couple of my own hypotheses, to take with a grain of salt:
* One big part of the problem is the tendency of some to vastly underestimate how difficult it is to cover up anything that a lot of people know. Also a lack of fact checking. (My friend/relative/trusted neighbor told me this, therefore it must be true.)
* I think QAnon theories appeal to much of the same crowd as cults. If someone is willing to believe <small niche group> has secret knowledge that has ~~failed peer review~~ been suppressed by <Big Government/Forces of Ambiguous Evil>, they are more likely to accept the plausibility of other claims with similar appeal. So 9/11 conspiracy people are more likely to also believe that vaccines cause autism, or that <snake oil/homeopathy/fad treatment of the week> cures cancer but Big Pharma is keeping it secret, etc. I wonder if there's any good data tracking the relative frequency of this sort of thing? In a similar vein, Utah has more MLM schemes per capita than any other state.[2]
At least nobody I know believes in Flat Earth... as far as I know.
1. ^ As long as it didn't have objectionable content, like anything remotely sexual.
2. ^ https://kutv.com/news/local/follow-the-profit-how-mormon-culture-made-utah-a-hotbed-for-multi-level-marketers

There have, in fact, been numerous objections to genetically engineered plants and by implication everything in the second category. You might not realize how much the public is/was wary of engineered biology, on the grounds that nobody understood how it worked in terms of exact internal details. The reply that sort of convinced people - though it clearly didn't calm every fear about new biotech - wasn't that we understood it in a sense. It was that humanity had been genetically engineering plants via cultivation for literal millennia, so empirical facts allowed us to rule out many potential dangers.

5Steven Byrnes
Oh sorry, my intention was to refer to non-GMO plant cultivars. There do exist issues with non-GMO plant cultivars, like them getting less nutritious, or occasionally being toxic, but to my knowledge the general public has never gotten riled up about any aspect of non-GMO plant breeding, for better or worse. Like you said, we’ve been doing that for millennia. (This comment is not secretly arguing some point about AI, just chatting.)

>Note that it requires the assumption that consciousness is material

Plainly not, assuming this is the same David J. Chalmers.

-5Štěpán Los

This would make more sense if LLMs were directly selected for predicting preferences, which they aren't. (RLHF tries to bridge the gap, but this apparently breaks GPT's ability to play chess - though I'll grant the surprise here is that it works at all.) LLMs are primarily selected to predict human text or speech. Now, I'm happy to assume that if we gave humans a D&D-style boost to all mental abilities, each of us would create a coherent set of preferences from our inconsistent desires, which vary and may conflict at a given time even within an individ... (read more)

The classification heading "philosophy," never mind the idea of meta-philosophy, wouldn't exist if Aristotle hadn't tutored Alexander the Great. It's an arbitrary concept which implicitly assumes we should follow the aristocratic-Greek method of sitting around talking (or perhaps giving speeches to the Assembly in Athens). Moreover, people smarter than either of us have tried this dead-end method for a long time with little progress. Decision theory makes for a better framework than Kant's ideas; you've made progress not because you're smarter than Kant, b... (read more)

Oddly enough, not all historians are total bigots, and my impression is that the anti-Archipelago version of the argument existed in academic scholarship - perhaps not in the public discourse - long before JD. E.g. McNeill published a book about fragmentation in 1982, whereas GG&S came out in 1997.

9agp
I did not intend to imply that historians were writing racist explanations for why Europe was able to colonize most of the world - sorry if that is how it came across! Instead, I believe those views were common among mainstream society. Part of that is because there had not been a cohesive, insightful, and popular alternate explanation.

McNeill is indeed one of the few historians who were investigating this question - and unfortunately I haven't read any of his work. However, I don't think that Jared Diamond was just repeating McNeill's argument, because the back of my copy of Guns, Germs, and Steel has this excerpt from a review that McNeill gave the book:

I dug up the full review online here. There's certainly lots of criticism in the review - particularly of that epilogue. But also pay attention to how much McNeill praises Diamond for the new ideas he brings forward. The tone of this review is radically different from those reddit threads.

The modern online discourse about Diamond has amplified all of the criticisms from early reviews like McNeill's, but entirely removed all of the praise. One of the reddit threads compared Diamond to a student faking a chemistry experiment - I certainly don't think that McNeill had the same perspective! McNeill seems to have an honest disagreement with Diamond; he doesn't think that he's a fraud.

Reading those reddit threads can definitely make someone develop a heuristic "to not believe any analysis that Diamond presents, since there's a significant probability that it's misleading". But I think that's a shame, because Diamond has lots of unique, well-praised insights that are missing from the discussion in those threads.

Perhaps you could see my point better in the context of Marxist economics? Do you know what I mean when I say that the labor theory of value doesn't make any new predictions, relative to the theory of supply and demand? We seldom have any reason to adopt a theory if it fails to explain anything new, and its predictive power in fact seems inferior to that of a rival theory. That's why the actual historians here are focusing on details which you consider "not central" - because, to the actual scholars, Diamond is in fact cherry-picking topics which can't provide any good reason to adopt his thesis. His focus is kind of the problem.

5agp
Ah yes, that comparison makes sense. The prologue to Guns, Germs, and Steel outlines what Diamond sees as the most common explanations for the differences between peoples, and then uses the rest of the book to show why they are wrong and to offer a different explanation. These explanations are still somewhat common today, and I believe that they were much more common in 1997 when the book was published. Even in the comments section on this post there is a suggestion that the Tasmanians' technological regression was caused by biology - a population bottleneck causing inbreeding (I'm not saying that argument is the same as the 'Darwinian' one, just that it is also an explanation stemming from biological differences).

Guns, Germs, and Steel kicked off a genre of discussion that attempted to explain why Europe took over the world without assuming biological superiority. It seems like at the time, Diamond was explaining something new.

>The first chapter that's most commonly criticized is the epilogue - where Diamond puts forth a potential argument for why Europe, and not China, was the major colonial power.  This argument is not central to the thesis of the book in any way,

It is, though, because that's a much harder question to answer. Historians think they can explain why no American civilization conquered Europe, and why the reverse was more likely, without appeal to Diamond's thesis. This renders it scientifically useless, and leaves us without any clear reason to believe it,... (read more)

2agp
Devereaux’s quote there is similar to the argument that Diamond puts forward in the epilogue of his book. Diamond argues that the geography of Europe, with lots of mountains and peninsulas, encouraged the formation of lots of smaller countries, while the geography of China encouraged one large empire. So while one emperor could end Zheng He’s voyages, Europe’s geography encouraged the countries to compete and experiment. Columbus was Italian after all, but had to go to the competing kingdom of Spain to fund his voyage.

I agree that that is a harder question! Diamond doesn’t devote a ton of space to it, however; the book focuses on Eurasia compared to the Americas/Africa/Oceania, and not really on Europe vs other parts of Eurasia. My point in bringing it up is not to say that Diamond is necessarily correct; it’s just that if you read Diamond’s critics you might think that half of the book is about Pizarro and half is about Zheng He - when it’s actually mostly about things like the types of grass that cows eat. I’m just trying to show that we can trust Diamond not to cherry-pick evidence when writing this article about Tasmania.

I do see selves, or personal identity, as closely related to goals or values. (Specifically, I think the concept of a self would have zero content if we removed everything based on preferences or values; roughly 100% of humans who've ever thought about the nature of identity have said it's more like a value statement than a physical fact.) However, I don't think we can identify the two. Evolution is technically an optimization process, and yet has no discernible self. We have no reason to think it's actually impossible for a 'smarter' optimization process... (read more)

So, what does LotR teach us about AI alignment? I thought I knew what you meant until near the end, but I actually can't extract any clear meaning from your last points. Have you considered stating your thesis in plain English?

3Jeffrey Heninger
The Lord of the Rings tells us that the hobbit’s simple notion of goodness is more effective at resisting the influence of a hostile artificial intelligence than the more complicated ethical systems of the Wise. The miscellaneous quotes at the end are not directly connected to the thesis statement.

You left out, 'People naively thinking they can put this discussion to bed by legally requiring disclosure,' though politicians would likely know they can't stop conspiracy theorists just by proving there's no conspiracy.

2Adele Lopez
I was not trying to be comprehensive, but yes that is a plausible possibility.

Just as humans find it useful to kill a great many bacteria, an AGI would want to stop humans from e.g. creating a new, hostile AGI. In fact, it's hard to imagine an alternative which doesn't require a lot of work, because we know that in any large enough group of humans, one of us will take the worst possible action. As we are now, even if we tried to make a deal to protect the AI's interests, we'd likely be unable to stop someone from breaking it.

I like to use the silly example of an AI transcending this plane of existence, as long as everyone understand... (read more)

1Gesild Muka
My assumption is it’s difficult to design superintelligence and humans will either hit a limit in the resources and energy use that go into keeping it running or lose control of those resources as it reaches AGI. My other assumption then is an intelligence that can last forever and think and act at 1,000,000 times human speed will find non-disruptive ways to continue its existence. There may be some collateral damage to humans but the universe is full of resources so existential threat doesn’t seem apparent (and there are other stars and planets, wouldn’t it be just as easy to wipe out humans as to go somewhere else?). The idea that a superintelligence would want to prevent humans from building another (or many) to rival the first is compelling but I think once a level of intelligence is reached the actions and motivations of mere mortals becomes irrelevant to them (I could change my mind on this last idea, haven’t thought about it as much). This is not to say that AI isn’t potentially dangerous or that it shouldn’t be regulated (it should imo), just that existential risk from SI doesn’t seem apparent. Maybe we disagree on how a superintelligence would interact with reality (or how a superintelligence would present?). I can’t imagine that something that alien would worry or care much about humans. Our extreme inferiority will either be our doom or salvation.

Have you actually seen orthonormal's sequence on this exact argument? My intuitions say the "Martha" AI described therein, which imitates "Mary," would in fact have qualia; this suffices to prove that our intuitions are unreliable (unless you can convincingly argue that some intuitions are more equal than others.) Moreover, it suggests a credible answer to your question: integration is necessary in order to "understand experience" because we're talking about a kind of "understanding" which necessarily stems from the internal workings of the system, specifi... (read more)

2TAG
Yes. Obviously, both arguments rely on intuition. I don't think intuitions are 100% reliable. I do think we are stuck with them. I have been addressing the people who have the expected response to Mary's Room... I can't do much about the rest.

I think that sort of objection just pushes the problem back. If "integration" is a fully physical and objective process, and if Mary is truly a superscientist, then Mary will fully understand how her subject "integrated" their sense experience, and won't be surprised by experiencing red.

The obvious reply would be that ML now seems likely to produce AGI, perhaps alongside minor new discoveries, in a fairly short time. (That at least is what EY now seems to assert.) Now, the grandparent goes far beyond that, and I don't think I agree with most of the additions. However, the importance of ML sadly seems well-supported.

Hesitant to bet while sick, but I'll offer max bet $20k at 25:1.

1RatsWrongAboutUAP
Double the odds and I will accept immediately. Otherwise I might accept in the next few days, depending on whether I get more offers. I have reached out to others now, and I expect that when it's confirmed that I really am giving out money, more offers will come in.

The basic definition of evidence is more important than you may think. You need to start by asking what different models predict. Related: it is often easier to show how improbable the evidence is according to the scientific model, than to get any numbers at all out of your alternative theory.
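A minimal formal statement of that point (the standard Bayesian odds form; added here for reference, not part of the original comment):

$$\underbrace{\frac{P(M_1\mid E)}{P(M_2\mid E)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(E\mid M_1)}{P(E\mid M_2)}}_{\text{likelihood ratio}} \;\times\; \underbrace{\frac{P(M_1)}{P(M_2)}}_{\text{prior odds}}$$

An observation $E$ counts as evidence favoring model $M_1$ over $M_2$ exactly when $M_1$ assigned $E$ the higher probability; a theory that yields no number at all for $P(E\mid M)$ can't even enter the comparison.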

>Instead it just means that Bob shouldn't rely on his company doing the fastest and easiest thing and having it turn out fine. Instead Bob should expect to make sacrifices, either burning down a technical lead or operating in (or helping create) a regulatory environment where the fastest and easiest option isn't allowed.

The above feels so bizarre that I wonder if you're trying to reach Elon Musk personally. If so, just reach out to him. If we assume there's no self-reference paradox involved, we can safely reject your proposed alternatives as obviously ... (read more)

3paulfchristiano
There are many industries where it is illegal to do things in the fastest or easiest way. I'm not exactly sure what you are saying here.

See, that makes it sound like my initial response to the OP was basically right, and you don't understand the argument being made here. At least one Western reading of these new guidelines was that, if they meant anything, then the bureaucratic obstacle they posed for AGI would greatly reduce the threat thereof. This wouldn't matter if people were happy to show initiative - but if everyone involved thinks volunteering is stupid, then whose job is it to make sure the official rules against a competitive AI project won't stop it from going forward? What does that person reliably get for doing the job?

9Lao Mein
Volunteering to work extra hard at your job and break things is highly valued (and rewarded). Volunteering at your local charity is childishly naive. If your labor/time/intelligence was truly worth anything, you wouldn't be giving it away for free.

All of that makes sense except the inclusion of "EA," which sounds backwards. I highly doubt Chinese people object to the idea of doing good for the community, so why would they object to helping people do more good, according to our best knowledge?

9Lao Mein
Yes. We hold volunteering in contempt.

I note in passing that the elephant brain is not only much larger, but also has many more neurons than any human brain. Since I've no reason to believe the elephant brain is maximally efficient, making the same claim for our brains should require much more evidence than I'm seeing.

gilch1812

That's if you're counting the cerebellum, which doesn't seem to contribute much to intelligence, but is important for controlling the complicated musculature of a trunk and large body.

By cortical neuron count, humans have about 18 billion, while elephants have less than 6 billion, comparable to a chimpanzee. (source)

Elephants are undeniably intelligent as animals go, but not at human level.

Even blue whales barely approach human level by cortical neuron count, although some cetaceans (notably orcas) exceed it.

What are you trying to argue for? I'm getting stuck on the oversimplified interpretation you give for the quote. In the real world, smart people such as Leibniz raised objections to Newton's mechanics at the time, objections which sound vaguely Einsteinian and not dependent on lots of data. The "principle of sufficient reason" is about internal properties of the theory, similar to Einstein's argument for each theory of relativity. (Leibniz's argument could also be given a more Bayesian formulation, saying that if absolute position in space is meaningful, t... (read more)

Out of curiosity, what do you plan to do when people keep bringing up Penrose?

Pretty sure that doesn't begin to address the reasons why a paranoid dictator might invade Taiwan, and indeed would undo a lot of hard work spent signaling that the US would defend Taiwan without committing us to nuclear war.

Pretty sure this is my last comment, because what you just quoted about soundness is, in fact, a direct consequence of Löb's Theorem. For any proposition P, Löb's Theorem says that □(□P→P)→□P. Let P be a statement already disproven, e.g. "2+2=5", so we already had □¬P. If the theory also proved the soundness claim □P→P, the theorem would give us □P, and thus □(¬P & P), which is what inconsistency means. Again, it seemed like you understood this earlier.
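Spelled out step by step (standard provability-logic reasoning, nothing beyond what's stated above):

$$\begin{aligned}
&\vdash \Box P \to P &&\text{(assumed soundness, with } P = \text{“}2+2=5\text{”)}\\
&\vdash \Box(\Box P \to P) &&\text{(necessitation)}\\
&\vdash \Box(\Box P \to P) \to \Box P &&\text{(Löb's Theorem)}\\
&\vdash \Box P,\ \text{hence } \vdash P &&\text{(modus ponens, twice)}\\
&\vdash \neg P &&\text{(P was already disproven)}
\end{aligned}$$

The theory ends up proving both P and ¬P, i.e. it is inconsistent.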

https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem

A consistent formal system can't fully define truth for its own language. It can give more limited definitions for the truth of some statement, but often this is best accomplished by just repeating the statement in question. (That idea is also due to Tarski: 'snow is white' is true if and only if snow is white.) You could loosely say (very loosely!) that a claim, in order to mean anything, needs to point to its own definition of what it would mean for that claim to be true. Any more general defin... (read more)
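For reference, the formal statement (standard, not specific to this thread): for any consistent theory $T$ that can encode its own syntax, there is no formula $\mathrm{True}(x)$ in $T$'s language satisfying the full T-schema

$$T \vdash \mathrm{True}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi \quad \text{for every sentence } \varphi,$$

because the diagonal lemma would then yield a liar sentence $L$ with $T \vdash L \leftrightarrow \neg\mathrm{True}(\ulcorner L\urcorner)$, giving a contradiction. Truth definitions restricted to limited classes of sentences survive; the fully general one doesn't.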

-5Thoth Hermes

Here's some more:

>A majority (55%) of Americans are now worried at least somewhat that artificially intelligent machines could one day pose a risk to the human race’s existence. This marks a reversal from Monmouth’s 2015 poll, when a smaller number (44%) was worried and a majority (55%) was not.

https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/

The first part of the parent comment is perfectly true for a specific statement - obviously not for all P - and in fact this was the initial idea which inspired the theorem. (For the normal encoding, "This statement is provable within PA," is in fact true for this exact reason.) The rest of your comment suggests you need to more carefully distinguish between a few concepts:

  1. what PA actually proves
  2. what PA would prove if it assumed □P→P for all claims P
  3. what is actually true, which (we believe) includes category 1 but emphatically not 2.
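One way to make the gap between 2 and 3 precise (my formalization, not part of the original exchange): any theory that proved the reflection schema for its own provability predicate would prove everything, since Löb's Theorem turns each instance into a proof of the sentence itself:

$$\text{if } \vdash_T \Box_T P \to P \ \text{ for all } P, \text{ then } \vdash_T P \ \text{ for all } P.$$

Category 2 would thus be the set of all sentences of the language, which no consistent notion of truth can contain.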
1Thoth Hermes
I think the key here is that our theorem-prover or "mathematical system" is capable of considering statements to be "true" within itself, in the sense that if it believes it has proven something, well, it considers at least that to be true. It's got to pick something to believe in; in this case, that if it has written a proof of something, that thing has been proven. It has truth on that level, at least.

Consider that if we tabooed the use of the word "true" and used instead "has a proof" as a proxy for it, we don't necessarily get ourselves out of the problem. We basically are forced to do this no matter what, anyway. We sometimes take this to mean that "has a proof" means "could be true, maybe even is mostly really true, but all we know for sure is that we haven't run into any big snags yet, but we could." Metaphysically, outside-of-the-system-currently-being-used truth?

I think the Sequences are saying something more strongly negative than even Gödel's Theorems are usually taken to mean. They are saying that even if you just decide to use "my system thinks it has proved it, and believes that's good enough to act on", you'll run into trouble sooner than if you hesitated to act on anything you think you've already proved.

This may be what I was thinking of, though the data is more ambiguous or self-contradictory: https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe

3gwd
Thanks for these, I'll take a look. After your challenge, I tried to think of where my impression came from. I've had a number of conversations with relatives on Facebook (including my aunt, who is in her 60s) about whether GPT "knows" things; but it turns out so far I've only had one conversation about the potential of an AI apocalypse (with my sister, who started programming 5 years ago). So I'll reduce confidence in my assessment re what "people on the street" think, and try to look for more information.

Re HackerNews -- one of the tricky things about "taking the temperature" on a forum like that is that you only see the people who post, not the people who are only reading; and unlike here, you only see the scores for your own comments, not those of others. It seems like what I said about alignment did make some connection, based on the up-votes I got; I have no idea how many upvotes the dissenters got, so I have no idea if lots of people agreed with them, or if they were the handful of lone objectors in a sea of people who agreed with me.

I'll look for the one that asked about the threat to humanity, and broke down responses by race and gender. In the meantime, here's a poll showing general unease and bipartisan willingness to legally restrict the use of AI: https://web.archive.org/web/20180109060531/http://www.pewinternet.org/2017/10/04/automation-in-everyday-life/

Plus:

>A SurveyMonkey poll on AI conducted for USA TODAY also had overtones of concern, with 73% of respondents saying they would prefer if AI was limited in the rollout of newer tech so that it doesn’t become a threat to huma

... (read more)

>The average person on the street is even further away from this I think.

This contradicts the existing polls, which appear to say that everyone outside of your subculture is much more concerned about AGI killing everyone. It looks like if it came to a vote, delaying AGI in some vague way would win by a landslide, and even Eliezer's proposal might win easily.

3gwd
Can you give a reference?  A quick Google search didn't turn anything like that up.

It would've been even better for this to happen long before the year of the prediction mentioned in this old blog-post, but this is better than nothing.

>Because the United Nations is a body chiefly concerned with enforcing international treaties, I imagine it would be incentivized to support arguments in favor of increasing its own scope and powers.

You imagine falsely, because your premise is false. Not only is the UN not a body; its actions are largely controlled by a "Security Council" of powerful nations which try to serve their own interests (modulo hypotheticals about one of their governments being captured by a mad dog) and have no desire to serve the interests of the UN as such. This is mostly by design. We created the UN to prevent world wars, hence it can't act on its own to start a world war.

-5Thoth Hermes

I don't know that I follow. The question, here and in the context of Löb's Theorem, is about hypothetical proofs. Do you trust yourself enough to say that, in the hypothetical where you experience a proof that eating babies is mandatory, that would mean eating babies is mandatory?

1Thoth Hermes
"Experiencing a proof that eating babies is mandatory" could only amount to something like being force-fed or otherwise some kind of impossible to resist scenario. "Experiencing a proof" of any proposition consists of the sequence of events or observed inputs leading to one to conclude a proposition is true. If you are firmly in the against baby eating camp, then presumably you've got ample proof already that it is not mandatory, and always generally possible to resist anyone trying to make you eat them. I think you're talking about the hypothetical world in which both baby eating and not baby eating seem correct, such as where one hypothetically reads a proof that seems correct, and urges one to eat babies, whilst it simultaneously seeming very wrong according to our normal intuitions like it normally does. That hypothetical world is inconsistent by assumption, though. I'm not talking about that. I think we live in the world in which our sense that baby eating is very wrong means a proof of that can be constructed.

I don't even understand why you're writing □(□P→P)→□P→P, unless you're describing what the theory can prove. The last step, □P→P, isn't valid in general; that's the point of the theorem! If you're describing what formally follows from full self-trust, from the assumption that □P→P for every P, then yes, the theory proves lots of false claims, one of which is a denial that it can prove 2+2=5.

1Thoth Hermes
□(□P→P)→□P→P makes sense to me. Just let P = □P→P.  In other words, P = "If this statement has proof, it is true."  Consider that "If I can write a proof of P, then P has been proven" is simply true.  Therefore, I have written a proof of the statement "If I can write a proof of P, then P has been proven", which means it has been proven. Let that statement = P. So I have that □(□P→P). Since this statement is also true, we have that □P→P.  On the whole of it, we'd certainly want □P to have something to do with P, no? Why wouldn't we expect □P to tell us anything at all about P?

If you're asking how we get to a formal self-contradiction, replace "False" with a statement the theory already disproves, like "0 does not equal 0." I don't understand your conclusion above; the contradiction comes because the theory already proves ¬False and now proves False, so the claim about "¬□False" seems like a distraction.

1Thoth Hermes
Löb's theorem: □(□P→P)→□P

Informally, this means that □P→P is something we'd like to believe is true. If I can prove P, that must mean P is true. It would be good for this to be true. I might also like to believe that P is provable. Informally, maybe "P" is the best we can hope for, the claim that P is true. If my system can prove P, that's about as close as I can get to believing P. So, □(□P→P)→□P→P.

Worst case scenario: Let's "sanity check" by replacing P with False. If P is false, then I'd expect not to be able to prove it.

□(□F→F)→□F→F
□(¬□F)→¬□F

If replacing P with False was enough to break Löb's theorem, then Löb's theorem would be false. PA can handle False.

Do you also believe that if you could "prove" eating babies was morally required, eating babies would be morally required? PA obviously believes Löb's theorem itself, and indeed proves the soundness of all its actual proofs, which is what I said above. What PA doesn't trust is hypothetical proofs.

1Thoth Hermes
No, I would have to think that eating babies was morally required first. If not, then I could not prove it.
Answer by hairyfigment20

How do you interpret "soundness"? It's being used to mean that a proof of X implies X, for any statement X in the formal language of the theory. And yes, Löb's Theorem directly shows that PA cannot prove its own soundness for any set of statements save a subset of its own theorems.
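In symbols, that last claim is Löb's Theorem plus one triviality (my gloss, for reference):

$$\vdash_{PA} \Box P \to P \quad\Longleftrightarrow\quad \vdash_{PA} P$$

Right to left is trivial, since a theorem is implied by anything; left to right is Löb's Theorem. So the soundness instances PA can prove are exactly those for sentences it already proves.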

1Thoth Hermes
I consider myself to be a reasoner at least as powerful as PA, since I am using myself plus PA plus Löb's Theorem to reason about systems at least as powerful as myself (which encompasses everything I've thus far described). I consider myself "sound" if I believe myself to be trustworthy / reliable enough to believe what I believe Löb's Theorem says, which is something to do with self-trust and the ability to believe that certain propositions are true, as long as I can prove that proving them true means that they are true.

Go ahead and test the prediction from the start of that thread, if you like, and verify that random people on the street will often deny the existence of the other two types. (The prediction also says not everyone will deny the same two.) You already know that NTs - asked to imagine maximal, perfect goodness - will imagine someone who gets upset about having the chance to save humanity by suffering for a few days, but who will do it anyway if Omega tells him it can't be avoided.

1MSRayne
Oh god, that not only describes Jesus but also many main characters of epic fantasy stories etc. The whole reluctant hero bullshit. I was always like, who in their right mind wouldn't want to be the hero? Interesting point though!

It sure sounds like you think outsiders would typically have the "common sense" to avoid Ziz. What do you think such an outsider would make of this comment?

4Ben Pace
I think mostly somewhat confused? Though I've never met her, from her writing and things others have told me, I expect LaSota seems much more visibly out-of-it and threatening than e.g. Michael, whom I have met and who didn't seem socially alarming or unpredictable in the way where you might be scared of a sudden physical altercation.
6tailcalled
Scott Alexander seems to have withdrawn some of his critiques of Michael Vassar.

There's this guy Michael Vassar who strikes me - from afar - as a failed cult leader, and Ziz as a disciple of his who took some followers in a different direction. Even before this new information, I thought her faith sounded like a breakaway sect of the Church of Asmodeus.

Michael Vassar was one of the inspirations for Eliezer's Professor Quirrell, but otherwise seems to have little influence.

At the risk of this looking too much like me fighting a strawman...

Cults may have a tendency to interact and pick up adaptations from each other, but it seems wrong to operate on the assumption that they're all derivatives of one ancestral "proto-cult" or whatever. Cult leaders are not literal vampires, where you only become a cult leader by getting bit by a previous cult leader or whatever.

It's a cultural attractor, and a cult is a social technology simple enough that it can be spontaneously re-derived. But cults can sometimes pick up or swap beliefs &... (read more)

Ben Pace*1013

I heard that LaSota ('ziz') and Michael interacted but I am sort of under the impression she was always kind of violent and bizarre before that, so I'm not putting much of this bizarreness down to Michael. Certainly interested in evidence about this (here or in DM).

While it's arguably good for you to understand the confusion which led to it, you might want to actually just look up Solomonoff Induction now.

4Adam Zerner
Solomonoff Induction has always felt a little intimidating but I see how it's relevant so yeah, I'll check it out at some point.

>Occam's razor. Is it saying anything other than P(A) >= P(A & B)?

Yes, this is the same as the argument for (the abstract importance of) Solomonoff Induction. (Though I guess you might not find it convincing.)

We have an intuitive sense that it's simpler to say the world keeps existing when you turn your back on it. Likewise, it's an intuitively simpler theory to say the laws of physics will continue to hold indefinitely, than to say the laws will hold up until February 12, 2023 at midnight Greenwich Mean Time. The law of probability which you cit... (read more)
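For anyone who wants the formal version (the standard Solomonoff prior; added here for reference):

$$M(x) \;=\; \sum_{p\;:\;U(p)\text{ starts with }x} 2^{-|p|}$$

where $U$ is a universal prefix machine and $|p|$ is the length of program $p$ in bits. "The laws of physics hold indefinitely" compresses to a shorter program than "the laws hold until February 12, 2023 at midnight GMT," because the latter must also encode the date; the simpler hypothesis therefore gets exponentially more prior weight.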

2Adam Zerner
I'm not familiar with Solomonoff induction or minimum message length stuff either, sorry. My first thought when I read the "world keeps existing" and "laws of physics keep holding" examples was to think that the conjunction rule covers it. I.e. something like P(these are the laws of physics) > P(these are the laws of physics & 2/12/23 and onwards things will be different). But I guess that begs the question of why. Why is the B a new condition rather than the default assumption? And I guess Occam's razor is saying something like "Yes, it should be a new condition. The default assumption should be that the laws keep working."

I'm struggling to understand how this generalizes though. Does it say that we should always assume that things will keep working the way they have been working? I feel like that wouldn't make sense. For the laws of physics we have evidence pointing towards them continuing to work. They worked yesterday, the day before that, the day before that, etc. But for new phenomena where we don't have that evidence, I don't see why we would make that assumption.

This probably isn't a good example, but let's say that ice cream was just recently invented and you are working as the first person taking orders. Your shop offers the flavors of chocolate, vanilla and strawberry. The first customer orders chocolate. Maybe Occam's razor is saying that it's simpler to assume that the next person will also order chocolate, and that people in general order chocolate? I think it's more likely that they order chocolate than the other flavors, but we don't have enough evidence to say that it's significantly more likely.

And so more generally, what makes sense to me is that how much we assume things will keep working the way they have been depends on the situation. Sometimes we have a lot of evidence that they will, and so we feel confident that they will continue to. Other times we don't. I have a feeling at least some of what I said is wrong/misguided though.
-1TAG
But intuitions are just subjective flapdoodle, aren't they? ;-)

Except, if you Read The Manual, you might conclude that in fact those people also can't understand you exist.

1MSRayne
Lol this entire thread that you've linked to is "why neurotypicals are bad, except I'm not going to admit that they're bad and I'll keep protesting devoutly that they're not bad even though I haven't said a single actually positive thing about them yet."
4RobertM
I see no mention of risk of death by ethnicity in the post.  Do you want to clarify your accusation?

Well, current events seem to have confirmed that China couldn't keep restrictions in place indefinitely, and the fact that they even tried - together with the cost of stopping - suggest that it would've been a really good idea to protect their people using the best vaccine. China could presumably have just stuck it in everyone by force of law. What am I missing here?

3Lao Mein
If you ever find out, I would love to know. I haven't heard a good explanation thus far.