All of Jayson_Virissimo's Comments + Replies

It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that "changes to one’s beliefs should generally also be probabilistic, rather than total", but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.

4dxu
(9) is a values thing, not a beliefs thing per se. (I.e., it's not an epistemic claim.) (11) is one of those claims that is probabilistic in principle (and which can therefore be updated via evidence), but for which the evidence in practice is so one-sided that arriving at the correct answer is basically usable as a sort of FizzBuzz test for rationality: if you can't get the right answer on super-easy mode, you're probably not a good fit.

My prediction is that a child giving such population-level arguments when asked why they are out by themselves is much less likely to result in being left alone (presumably, the goal) than saying their parents said it's okay, and so would show lower levels of instrumental rationality rather than demonstrating more agency.

1Dzoldzaya
I presume the stated goal of schooling your child in this way is to set the grown-up's mind at ease, rather than ensuring the child is left alone (which is probably the default outcome), and I expect both responses would suffice for this instrumental purpose.

There's nothing unjustified about appealing to your parents' authority. Parents are legally responsible for their children: they have literal (not epistemic) authority over them, although it's not absolute.

2Dzoldzaya
Technically true, but it's a very unagentic way for a five-year-old to respond to something they should have the capability to justify through argument.

I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus' model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes, later Kepler-inspired corrected versions of Copernicus' model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.

...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...

What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?

2Max H
A bit of anecdotal impressions, yes, but mainly I just think that in humans being smart, conscientious, reflective, etc. enough to be the brightest researcher at a big AI lab is actually pretty correlated with being Good (and also, that once you actually solve the technical problems, it doesn't take that much Goodness to do the right thing for the collective and not just yourself). Or, another way of looking at it, I find Scott Aaronson's perspective convincing when it is applied to humans. I just don't think it will apply at all to the first kinds of AIs that people are actually likely to build, for technical reasons.

I don't have a solution to this, but I have a question that might rule in or out an important class of solutions.

The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that's $150 billion. There are about 2 million people in Gaza.

If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatim... (read more)
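For concreteness, here is a quick sketch of the arithmetic implied above, using only the figures already given in the comment (an illustration, not new data):

```python
# Back-of-the-envelope: how many people does $150B relocate at
# $37,500 per emigrant plus $37,500 per host country, per person?
total_budget = 75e9 + 75e9         # US share + assumed equal EU share
population = 2_000_000             # approximate population of Gaza

cost_per_person = 37_500 + 37_500  # emigrant payment + host-country payment
people_covered = total_budget / cost_per_person

print(people_covered)              # 2,000,000: the entire population
```

Under these figures the $150 billion covers exactly the entire 2 million population, which is presumably why $37,500 is the natural split.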

Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.

Any idea if something similar is being done to cater to economists (or other social scientists)?

3Raemon
It occurs to me to be curious if @Zvi has thoughts on how to put stuff in terms Tyler Cowen would understand. (I'm not sure what Cowen wants. I'm personally kinda skeptical of people needing things in special formats rather than just generally going off on incredulity. But, it occurs to me Zvi's recent twitter poll of steps along-the-way to AI doom could be converted into, like, a guesstimate model)

Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:

...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested

... (read more)
3Daniel Kokotajlo
Has Tyler Cowen heard of Ajeya Cotra's Bio Anchors model, the takeoffspeeds.com model by Tom Davidson, Roodman's model of the singularity, or, for that matter, the earlier automation models by Robin Hanson? All of them seem to be the sort of thing he wants; I'm surprised he hasn't heard of them. Or maybe he has and thinks they don't count for some reason? I would be curious to know why.
2Charlie Steiner
There's been a reasonable amount of modeling work done in the context of managing money. E.g. https://forum.effectivealtruism.org/posts/Ne8ZS6iJJp7EpzztP/the-optimal-timing-of-spending-on-agi-safety-work-why-we This is probably the sort of thing Tyler would want but wouldn't know how to find.
2Raemon
For the case of David Chalmers, I think that's explicitly what Robby was going for in this post: https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin 

IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.

2zchuang
He posted a request on Twitter to talk to people who feel strongly here.
4Stephen Fowler
For anyone who wasn't aware, both Ng and LeCun have strongly indicated that they don't believe existential risks from AI are a priority. Summary here. You can also check out Yann's Twitter. Ng believes the problem is "50 years" down the track, and Yann believes that many concerns AI safety researchers have are not legitimate. Both of them view talk about existential risks as distracting and believe we should address problems that can be seen to harm people in today's world.

Consider the following rhetorical question:

Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?

Do we expect the answer to this to be any different for vegans than for AI-risk worriers?

2ArisC
Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)

Does that mean the current administration is finally taking AGI risk seriously or does that mean they aren't taking it seriously?

IIRC, he says that in Intuition Pumps and Other Tools for Thinking.

6Ben Pace
You're right. Thanks!

I noticed that Meta (Facebook) isn't mentioned as being a participant. Is that because they weren't asked to participate or because they were asked but declined?

1RomanS
Also no Tesla, in spite of:
* Tesla's ~4 million AI-powered wheeled robots on the road
* Elon being one of the most prominent people pushing for AI regulations
* Elon himself claiming that the Tesla AI is among the smartest AIs (which makes sense, given the complexity of the task, and how Teslas are solving it)

Maybe Meta and Tesla will join later. If not, perhaps there is some political conflict in play.

...there is hardly any mention about memorization on either LessWrong or EA Forum.

I'm curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for "Anki" specifically is currently returning ~800+ comments.

1Alvin Ånestrand
My apologies: when I started on the post I searched for the word "memorization", and there were not many results. I forgot to change the statement when I realised there were more posts than I first thought. Although I still think there is too little discussion about memorization, perhaps with the exception of spaced repetition. Thank you for pointing out the error.
1[comment deleted]

FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every AI-related course that was available if they hadn't already done so.

When I worked for a police department a decade ago, we used Zebra, not Zulu, for Z, but our phonetic alphabet started with Adam, Baker, Charles, etc...

Strictly speaking it is a (conditional) "call for violence", but we often reserve that phrase for atypical or extreme cases rather than the normal tools of international relations. It is no more a "call for violence" than treaties banning the use of chemical weapons (which the mainstream is okay with), for example.

It's a call for preemptive war; or rather, it's a call to establish unprecedented norms that would likely lead to a preemptive war if other nations don't like the terms of the agreement. I think advocating a preemptive war is well-described as "a call for violence" even if it's common for mainstream people to make such calls. For example, I think calling for an invasion of Iraq in 2003 was unambiguously a call for violence, even though it was done under the justification of preemptive self-defense.

Yeah, this comment seems technically true but misleading with regards to how people actually use words.

It is advocating that we treat it as the same class of treaty as nuclear treaties, and yes, that involves violence, but "calls for violence" just means something else.

If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.

How many LessWrong users are there? What is the base rate for cult formation? Shouldn't we answer these questions before speculating about what "should be done"?
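To make the suggested sanity check concrete, here is a minimal sketch; every number below is made up for illustration, since none are supplied above:

```python
# Hypothetical base-rate check: is one alarming case surprising,
# given the size of the community? All inputs are invented.
n_members = 100_000    # assumed number of LessWrong users
base_rate = 1e-4       # assumed rate of cult-like group formation per member

expected_cases = n_members * base_rate
print(expected_cases)  # 10.0 under these invented numbers
```

If the expected count under a plausible base rate already exceeds the observed count, the observation carries little evidence that anything community-specific is going on.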

3Noosphere89
Yeah, I'll probably edit the post to remove takeaways and to talk about base rates.

Virtue ethics says to decide on rules ahead of time.

This may be where our understandings of these ethical views diverge. I deny that virtue ethicists are typically in the position to decide on the rules (ahead of time or otherwise). If what counts as a virtue isn't strictly objective, then it is at least intersubjective, and is therefore not something that can be decided on by an individual (at least not unilaterally). It is absurd to think to yourself "maybe good knives are dull" or "maybe good people are dishonest and cowardly", and when you do think such though... (read more)

Another interesting case study:


"Phineas Gage was an American railroad construction foreman remembered for his improbable survival of an accident in which a large iron rod was driven completely through his head, destroying much of his brain's left frontal lobe, and for that injury's reported effects on his personality and behavior over the remaining 12 years of his life..."

3mad
Apparently the extent to which Phineas was affected by the injury is exaggerated, see: https://skeptoid.com/episodes/4744

Assuming humans can't be "aligned", then it would also make sense to allocate resources in an attempt to prevent one of them from becoming much more powerful than all of the rest of us.

We (and I mostly mean the US, where I'm located) seem to design our culture and our government in an incredibly convoluted, haphazard and error-prone way. No thought is given to the long-run consequences or the stability of our political decisions.

It's interesting to me that it looks that way to you, given that the architects of the American system (James Madison, John Jay, etc...) were explicitly attempting to achieve a kind of "defense in depth" (e.g. separation of powers between the branches, federalism with independent states, decentralized militia sys... (read more)

1blackstampede

If "rationalist" is a taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.

5Said Achmiz
One man’s modus tollens is another’s modus ponens—I happen to think that the term “effective altruist” is problematic for exactly this reason.

Did nobody make the claim that 'guy who claims he wants free speech will restrict speech instead'?

I interpreted the following as saying just that:

Free speech good but endangered by this man who wants free speech.

4Adele Lopez
No, because that point is for the case where he does want free speech, just that there are other factors that might interfere with that. This point covers the case where he doesn't actually want free speech (i.e. wants it for me but not thee).

Would you agree with a person that told you that human testimony is not sufficient grounds for the belief in a natural event (say, that your friend was attacked by another, but there were no witnesses and it left no marks) because humans are not perfect, etc...? 

If not, might that indicate the rest of your argument only holds in the case where the prior probability of miracles is extremely low (and potentially misses the crux of the disagreement between yourself and miracle-believing people)?

0Yitz
No, I would not agree with a person that told me that human testimony is not sufficient grounds for the belief in a natural event.  There are many things that we believe based on human testimony that have not been proven beyond a shadow of a doubt. For example, we take the testimony of our friends and family about their lives as sufficient grounds for belief. We also take the testimony of experts in various fields as sufficient grounds for belief. In both of these cases, we trust that the person is telling the truth to the best of their knowledge and we do not require perfect certainty. The same is true for testimony about supernatural events. We can never be certain that any particular event is a miracle, but we can weigh the evidence and decide whether or not it is sufficient to believe that a miracle has occurred.

Every industry has downsides. Some industries have much larger downsides for some kinds of people. If you personally think the tradeoffs are such that overall you prefer to stay in finance, then by analogy perhaps others who are like you would as well. 

Deontological and virtue-ethical frameworks have lots of resources for explaining why one shouldn't lie, but from a purely (naively) consequentialist perspective, encouraging people to enter your industry despite its problems would be wrong only if it would leave them worse off overall compared to their next best alternative. Does it?

This is the form I expect answers to "why do you believe x"-type questions to take. Thanks.

Note: That interfax.ru link doesn't seem to work from North American or European IP addresses, but you can view a snapshot on the Wayback Machine here.

On March 4th Putin's troops shelled Zaporizhzhia nuclear power plant in Enerhodar city.

Why do you believe that?

4ViktoriaMalyasova
Because there is a video of the shelling. Russian media confirms that there was fighting and a fire, though they blame the fire on Ukrainians. I guess it could be Ukrainians shelling their own NPP, but it seems much more likely that Russian troops did it because they were the ones attacking, so they were more likely to have artillery in place and Ukrainians would have to fire at their own people to shell it.

I think that if LessWrong wants to be less wrong, then questions like "why do you believe that?" should not be downvoted.

As for the question itself, I know next to nothing about the situation at this NPP, but just from priors I'd give 70% that if someone shelled it, it was the Russian army.

1) It is easier to shoot at an NPP if you don't know what you're shooting at. The Russian army is much more likely to mistake this target for something else.

2) p(Russian government lies that it wasn't them | it was them) > p(Ukrainian government lies it wasn't them | it was them) ... (read more)
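To spell out the update sketched in point 2) in odds form (my formalization; no actual numbers are assigned above): let $R$ = "the Russian army shelled the NPP", $U$ = "the Ukrainian army did", and $E$ = "each government blames the other". Assuming the innocent side accuses truthfully,

```latex
\frac{P(R \mid E)}{P(U \mid E)}
= \frac{P(E \mid R)}{P(E \mid U)} \cdot \frac{P(R)}{P(U)}
\approx \frac{P(\text{Russia lies when guilty})}{P(\text{Ukraine lies when guilty})} \cdot \frac{P(R)}{P(U)},
```

so if Russia is more likely to lie when guilty than Ukraine is, the mutual accusations themselves shift the odds toward $R$.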

Care to specify over what time horizon you expect(ed) it to fold?

5Arcayer
No. I'm going to judge my prediction by the number of deaths, not days (or weeks, or months; years would really mess with my idea of what's going on).

Insignificant: Less than 20,000. If single battles in the American Civil War are larger than this entire conflict, then the sides must not have been fighting very hard.

Total War: I would normally say millions, but I expect that the original prediction did not actually mean that, so I'll say the other side was right if it's over a hundred thousand, and more right than me above 50,000. Of course, I'm also wrong if Russia surrenders.

There's a lot of fog of war right now. I think anyone who's changed their mind about the events in Ukraine based on new data is being silly. Hopefully we'll have real data, and not just war propaganda, in the not too distant future. Russia says it's winning easily, but is taking its time to avoid civilian casualties. Ukraine has a paradoxical stance where it's winning easily, but if Germany (or X) doesn't give it (Something) (Right Now) it'll cave instantly. There are pretty much no neutral observers. I sort of expected more and clearer information; I think that was a mistake on my part. Ukraine and Russia are both incredibly untrustworthy, so I shouldn't have based that part of my expectations on typical wars.

In general I'd like for the facts to speak for themselves, and would like to avoid debating definitions too heavily. I'm displeased that I'm turning a simple and symmetric single-sentence statement into several paragraphs of text, but I think people are updating way too strongly on either the wrong evidence or on unreliable evidence that should be ignored.

I've personally known many people who have had serious medical problems that sure looked clearly like vaccine reactions.

I don't consider it a "serious medical problem", but I attempted to report (via the phone number on the paperwork given to me by the person that administered the shot at Walgreens) my 48-hour-long migraine + ~4-day-long high blood pressure (as measured by my Omron home blood pressure monitor) after getting a Pfizer booster. I was told they don't need me to fill anything out because those are already known side effects.

Searching Google for ... (read more)

1tslarm
If you don't mind sharing, what vaccine(s) did you have for your previous doses, and what side effects did they cause? And had you had a previous covid infection?  (I'm wondering on behalf of a family member who is prone to awful migraines, and who I think has high blood pressure. Sorry about your symptoms by the way, I know a two-day migraine can be much more unpleasant than most people probably realise.)
7tivelen
I searched the CDC's Vaccine Adverse Event Reporting System (VAERS) and there are 474 reported cases of abnormal blood pressure following COVID-19 vaccination. Looking further in the Google search, I found a study (n = 113) which indicated increased risk of high blood pressure after vaccination, especially after previous infection. Plainly, not everyone in the healthcare system is on the same page about side effects. I'd err on the side of the Walgreens person you talked to being more accurate, given that high blood pressure is a known side effect. Not known by that Nebraska Medicine doctor, apparently.
1tivelen
Who exactly told you that?

PredictionBook now has a basic tagging functionality. Props to CFAR and Bellroy for supporting me in getting the feature added.

Are we assuming affirming A-theory is indicative of science illiteracy because it is incompatible with special relativity or for some other reason?

4Rob Bensinger
Basically, though with two wrinkles:

* By "science illiteracy" I mean something like 'having relatively little familiarity with or interest in scientific knowledge and the scientific style of thinking', not 'happens to not be familiar with special relativity'. E.g., I don't dock anyone points for saying they're agnostic about A-theory -- no one can be an expert on every topic. But you can at least avoid voting in favor of A-theory, without first doing enough basic research to run into the special relativity issue.
* Some versions of the A-theory might technically be compatible with special relativity. E.g., maybe you received a divine revelation to the effect of 'there's a secret Correct Frame of Reference, and you alone among humans (not, e.g., your brother Fred who lives on Mars) live your life in that Frame'. The concern is less 'these two claims are literally inconsistent', more 'reasonable mentally stable people should not think they have Bayesian evidence that their Present Moment is metaphysically special and unique'.
* (Or, if they've literally just never heard of relativity in spite of being a faculty member at a university, they should refrain from weighing in on the A-theory vs. B-theory debate until they've done some googling.)

For reference, here are the raw data from when LWers took the survey in 2012 and here is the associated post from which it was extracted.

This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020;... (read more)

Actually, several of the chapters of this book are very likely completely wrong and the rest are on shakier foundations than I believed 9 years ago (similar to other works of social psychology that accurately reported typical expert views at the time). See here for further elaboration.

I'm on the fence about recommending this book now, but please read skeptically if you do choose to read it.

I agree with your point that there are at least two distinct ways to interpret the non-central fallacy, and also with the OP's point that while ad hominem arguments are technically invalid, they can be of high inductive strength in some circumstances. I'm mostly critiquing Scott's choice of examples for introducing the non-central fallacy, since mixing it with other fallacious forms of reasoning makes it harder to see what the non-central part is contributing to the mistake being made. For this reason, I prefer the theft example.

I think the Martin Luther King scenario is a particularly bad example for explaining the non-central fallacy, because it depends on a conjunction of fallacies, rather than isolating the non-central part. The inference from (1) MLK does/doesn't fit some category with negative emotional valence, to (2) his ideas are bad, just is the ad hominem fallacy (which is distinct from the non-central fallacy). The truth (or falsity) of Bloch's theorem is logically independent of whether or not André Bloch was a murderer (which he was).

5DirectedEvolution
But the point of the OP is that fallacies are inferential, not logical: in social contexts, we often use a leader's social reputation as a proxy for whether their ideas are any good, and this may be inferentially reasonable without being deductively logical.

Does this add you to an email list where discussion is happening, or merely put you on a map so that others in the area can reach out to you on an ad hoc basis?

1Brendan Long
I think it just puts you on the map.

I asked around about this on the ##hplusroadmap irc channel:

15:59 < Jayson_Virissimo> Yeah, sorry. Was much more interested in the claim about peptide sourcing specifically. 
16:00 < Jayson_Virissimo> Is that 4-5 weeks duration normal? How flexible is it, if at all? 
16:01 < yashgaroth> some of them might offer expedited service, though I've never had cause to find out when ordering peptides and am not bothered to check...and it'd save you a week or two at most 
16:02 < Jayson_Virissimo> What would you guess as to the ma

... (read more)

Are there any English language sources where I could learn more about the legal issues surrounding human experimentation in Russia such as the one you mentioned?

What explains the 4-5 week delivery time for special lab peptide synthesis?

1eillasti
I don't know; this is simply what they state for all peptides, and it is how long it actually took them to deliver.

Mati_Roy makes the case for Phoenix here.

Full Disclosure: I'm in Phoenix.

A similar "measure function is non-normalizable" argument is made at length in McGrew, T., McGrew, L., & Vestrup, E. (2001). Probabilities and the Fine-Tuning Argument: A Sceptical View. Mind, 110(440), 1027-1037.

I've been working on an interactive flash card app to supplement classical homeschooling called Boethius. It uses a spaced-repetition algorithm to economize on the student's time and currently has exercises for (Latin) grammar, arithmetic, and astronomy.

Let me know what you think!
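For the curious, a minimal sketch of a classic SM-2-style update (the family of algorithms Anki descends from; not necessarily what Boethius itself uses -- its constants below are SM-2's published defaults):

```python
def sm2_update(quality, reps, interval, ease):
    """One SM-2-style review step.

    quality:  0-5 self-rated recall quality
    reps:     consecutive successful reviews so far
    interval: current inter-review interval, in days
    ease:     ease factor (SM-2 starts cards at 2.5)
    Returns the updated (reps, interval, ease).
    """
    if quality < 3:
        # Failed recall: restart the card, but keep its ease factor.
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Standard SM-2 ease adjustment, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease
```

Each successful review multiplies the interval by the ease factor, so per-card review load falls off roughly geometrically over time; that is where the time savings come from.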

Do you happen to know where he discusses this idea?

2romeostevensit
https://medium.com/@steven_gibson/science-and-sanity-and-alfred-korzybski-a25ad01e1bad

Suspended Reason: you may find this philosophy poll of LWers from 8 years ago interesting. The poll results no longer render (as of the 2.0 reboot of LW), but the raw data can be found in this git repo.

1Suspended Reason
Thank you! I'd seen the poll but not the repo.