All of Anonymous's Comments + Replies

If you get into fashion, there is a whole range of expression with suits. With the right cut and materials you can wear a suit that looks great, as suits ought to, yet is clearly casual and even in Japan would never be perceived as "for work". It's an expensive hobby, but if you're already doing this, you might as well get into it.

Anonymous287

A quite widespread experience right now among normal people is having their boss tell them to use AI tools in stupid ways that don't currently work, and then being held somewhat responsible for the failures. (For example: your boss heard about a study saying AI increased productivity by 40% among one group of consultants, so he's buying you a ChatGPT Plus subscription and increasing all your KPI targets by 40%.)

On the one hand, this produces very strong anti-AI sentiment. People are just sick of it. If "Office Space" were made now, Bill Lumbergh would be t... (read more)

I don't think this is a sufficiently complete way of looking at things. It could make sense when the problem was thought to be "replication crisis via p-hacking" but it turns out things are worse than this.

  • The research methodology in biology doesn't necessarily have room for statistical funny business, but there are all these cases of influential Science/Nature papers that had fraud via Photoshopped images.
  • Gino and Ariely's papers might have been statistically impeccable; the problem is they were just making up data points.
  • There is fraud in experimental physics and a
... (read more)

I think the biggest difference is that this will mean more people, with a wider range of personality types, socially interacting in a more arm's-length/professionalized way, according to the social norms of academia.

Especially in CS, you can be accepted among academics as a legitimate researcher even without a formal degree, but it would require being able and willing to follow these existing social norms.

And in order to welcome and integrate new AI safety researchers from academia, the existing AI safety scene would have to make some spaces to facilitate this style of interaction, rather than the existing informal/intense/low-social-distance style.

This community is doing way better than it has any right to for a bunch of contrarian weirdos with below-average social skills. It's actually astounding.

The US government and broader military-industrial complex are taking existential AI risk somewhat seriously. The head of the RAND Corporation is an existential risk guy who used to work for FHI.

Apparently the Prime Minister of the UK and various European institutions are concerned as well.

There are x-risk-concerned people at most top universities for AI research and within many of the top commercial l... (read more)

This is the case for me as well, and I don't remember when it developed. I have a timeline that starts with the present day on the right, and goes left and slightly up. It gets blurry around 500 BC. I can somewhat zoom in and recenter it if I'm thinking about individual historical periods. I can roughly place some historical events in the correct spots on the timeline, but since I have never needed to formally memorize many historical dates, this is very rough.

You might be interested in reading about experiences in the broad category of synesthesia, and of... (read more)

1Jacob G-W
That's really interesting! Did you ever use Anki or a spaced-repetition app? I wonder if the feeling happens because the brain gets rewarded for having a certain representation? Or did it just appear out of nowhere?

Normal, standard causal decision theory is probably it. You can make a case that people sometimes intuitively use evidential decision theory ("Do it. You'll be glad you did.") but if asked to spell out their decision making process, most would probably describe causal decision theory.

1Throwaway2367
People also sometimes use FDT: "Don't throw that particular piece of trash onto the road! If everyone did that we would live among trash heaps!" Of course, throwing away one piece of trash would not directly (mostly) cause others to throw away their trash; the reasoning uses the subjunctive dependence between one's action and others' actions, mediated through human morality, and compares the desirability of the possible future states.

Fandom people on Tumblr, AO3, etc. really responded to The Last Jedi (because it was targeted to them). Huge phenomenon. There are now bestselling romance novels that started life as TLJ fanfiction. Everything worked just like it does for the Marvel movies, very profitably.

However there was an additional group of Star Wars superfans outside of fandom, who wanted something very different, hence the backlash. This group is somewhat more male and conservative, and then everything polarized on social media so this somehow became a real culture war issue.... (read more)

5localdeity
Wow, Rian Johnson actually has a Tumblr account.  That statement is plausible.  And explains a decent amount.

Does that mean revenue for Disney?  I googled and it looks like you mean "The Love Hypothesis", which is being adapted by Netflix.  Though I doubt Disney anticipated that particular result in any case. Remember that the ultimate question here is whether what Disney did made business sense, knowing what they knew at the time.

"An additional group of Star Wars superfans", as in, the group of people that were fans of Star Wars, buying Star Wars toys and games and attending Star Wars Celebration, since before Tumblr was created (2007)?  Their preexisting repeat customer group, in other words?  (I haven't been able to find e.g. statistics on what percentage of Star Wars Celebration attendees were male, but I'd be surprised if, as of 2016, it were less than 80%, and 90% would not surprise me.  I expect similar numbers for "people who've seen more than one Star Wars movie", "people who have bought a Star Wars video game", etc.)

You seem to be saying that Disney treated that preexisting customer group as an afterthought, instead targeting the Tumblr/AO3/etc. fandom group.  (In fact, as I say, TLJ looks to be somewhat actively hostile to the first group—having characters criticize them by proxy for liking classic Star Wars stuff.)  I'm not saying that's an incorrect description of what they did, but, given what I expect the revenue numbers from the two groups were at the time TLJ was being created... I think this can be accurately described as "the decisionmakers for TLJ [most importantly Rian Johnson, but also any higher-ups who didn't countermand him] were acting in a way that any profit-maximizer in their position should have recognized as expected-to-lose-profit".  Which was to be demonstrated.

So, for franchises with pre-existing fanbases... is the recommendation to go full woke, cater to the Tumblr fandom, and alienate some portion of the pre-existing fanb
Answer by Anonymous10

As far as running a media company goes, fandom is extremely profitable, increasingly so in an age where enormous sci-fi/fantasy franchises drive everything. And there's been huge overlap between fandom communities and social justice politics for a long time.

It's definitely in Disney's interest to appeal to Marvel superfans who write fanfiction and cosplay and buy tons of merchandise, and those people tend to also be supporters of social justice politics.

Like, nothing is being forced on this audience -- there are large numbers of people who get sincerely ex... (read more)

7Valentine
I guess this is the part that's not so clear to me. I see lots of people like this. I also see lots of people who are groaning about being repeatedly lectured and about their characters and franchises getting deconstructed. It's hard for me to find a vantage point that doesn't bubble me in one sphere or the other in a way that makes one side look overwhelmingly larger than the other. So I just can't tell what the actual demographics are here. But the revealed behavior of these companies gives me the impression that they do find it crystal clear. That's what I find a bit bewildering.

The “canonical” rankings that CS academics care about would be csrankings.org (also not without problems but the least bad).

2LawrenceC
That list seems really off to me - I don't think UCSD and UMich should rank above both Stanford and Berkeley.  EDIT: This is probably because csrankings.org calculates rankings based on a normalized count of unweighted faculty publications, as opposed to weighting pubs by impact.  I think this list is the least bad of any I've seen so far: https://drafty.cs.brown.edu/csopenrankings/

The KataGo paper says of its training, "Self-play games used Tromp-Taylor rules modified to not require capturing stones within pass-alive territory".

It sounds to me like this is the same scoring system as used in the adversarial attack paper, but I don't know enough about Go to be sure.

5MathiasKB
No, the KataGo paper explicitly states at the start of page 4: "Self play games used Tromp-Taylor rules [21] modified to not require capturing stones within pass-alive territory" Had KataGo been trained on unmodified Tromp-Taylor rules, the attack would not have worked. The attack only works because the authors are having KataGo play under a different ruleset than it was trained on. If I have the details right, I am honestly very confused about what the authors are trying to prove with this paper. Given their Twitter announcement claimed that the rulesets were the same, my best guess is simply that it was an oversight on their part. (EDIT: this modification doesn't matter, the authors are right, I am wrong. See my comment below)
2ChristianKl
No. KataGo loses in their examples because it doesn't capture stones within pass-alive territory. Its training rules are modified so it doesn't need to do that.
Answer by Anonymous20

The Sprawl trilogy by William Gibson (starting with Neuromancer) is basically about this, and is a classic for a reason. It's not exactly hard sci-fi though.

If you don’t signal in the expected way then you are, if not being dishonest, at least misleading people; in many cases not signaling is the less honest option.

Everyone knows your job application is written to puff you up, and they price it in. If you don’t have the correct amount of puffery, you’re misleading people into thinking you’re worse than you are.

It’s a bad way to communicate and a bad race-to-the-bottom equilibrium but not actually dishonest.

You can write “Dear X” on a letter to a person you don’t know. People used to sign off letters “Your obedient servant”. It evolves for weird signaling reasons but is not taken literally.

"Systems that would adapt their policy if their actions would influence the world in a different way"

Does the teacup pass this test? It doesn't necessarily seem like it.

We might want to model the system as "Heat bath of Air -> teacup -> Socrates' tea". The teacup "listens to" the temperature of the air on its outside, and according to some equation transmits some heat to the inside. In turn the tea listens to this transmitted heat and determines its temperature.
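To make "some equation" concrete with one illustrative choice (Newton's law of cooling; this is my example, not something stated in the original comment): the tea's temperature relaxes toward the outside air temperature at a rate set by the cup,

$$\frac{dT_{\text{tea}}}{dt} = k\,\bigl(T_{\text{air}} - T_{\text{tea}}\bigr), \qquad k > 0,$$

and a counterfactual just re-runs this trajectory with a different value of $T_{\text{air}}$.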

You can consider the counterfactual world where the air is cold instead of hot. Or the cou... (read more)

3Jiro
This kind of description depends completely on how you characterize things. If the policy is "transmit heat according to physics" the policy doesn't change. If the policy is "get hotter" this policy changes to "get colder". It's the same thing, described differently.

There are industry places that will, at least as stated, take you seriously with no PhD as long as you have some publications (many job postings don't require a PhD or say "or equivalent research experience"), and it's unusual but not unheard of for people to do this.

The thing is, a PhD program is a reliable way to build a research track record. And you don't see too many PhD dropouts who want to be scientists, because if you've got a research track record, the extra cost of just finishing your dissertation and graduating is pretty low.

People sometimes seem to act like unsolved problems are exasperating, aesthetically offensive, or somehow unappealing, so they have no choice but to roll up their sleeves and try to help fix them, because it's just so irritating to see the problem go unsolved. So one can do purely altruistic stuff, but with this selfish posture (which also shifts focus away from motivation and psychology) it won't trip the hypocrisy alarms. It may also genuinely be a better attitude to cultivate, if it helps deflate one's ego a little bit -- I'm not quite sure.

0Lone Pine
TBH this is how I feel about the alignment problem.

A lot of the AI risk arguments seem to come mixed together with assumptions about a particular type of utilitarianism, and with a very particular transhumanist aesthetic about the future (nanotech, von Neumann probes, Dyson spheres, tiling the universe with matter in fixed configurations, simulated minds, etc.).

I find these things (especially the transhumanist stuff) not very convincing relative to the confidence people seem to express about them, but they also don't seem essential to the problem of AI risk. Is there a minimal version of the AI risk arguments that is disentangled from these things?

1DaemonicSigil
I'd say AI ruin only relies on consequentialism. What consequentialism means is that you have a utility function, and you're trying to maximize the expected value of your utility function. There are theorems to the effect that if you don't behave as though you are maximizing the expected value of some particular utility function, then you are being stupid in some way. Utilitarianism is a particular case of consequentialism where your utility function is equal to the average happiness of everyone in the world. "The greatest good for the greatest number." Utilitarianism is not relevant to AI ruin because without solving alignment first, the AI is not going to care about "goodness". The von Neumann probes aren't important to the AI ruin picture either: Humanity would be doomed, probes or no probes. The probes are just a grim reminder that screwing up AI won't only kill all humans, it will also kill all the aliens unlucky enough to be living too close to us.
5Kaj_Sotala
There's this, which doesn't seem to depend on utilitarian or transhumanist arguments:
1DeLesley Hutchins
I ended up writing a short story about this, which involves no nanotech.  :-)   https://www.lesswrong.com/posts/LtdbPZxLuYktYhveL/a-plausible-story-about-ai-risk
3lc
Yes. I'm one of those transhumanist people, but you can talk about AI risk completely separately from that. Trying to write up something that compiles the other arguments.

Most academic research work is done by grad students, and grad students need incremental, legible wins to put on their CV so they can prove they are capable of doing research. This has to happen pretty fast. An ML grad student who hasn't contributed to any top conference papers by their second or third year in grad school might get pulled aside for a talk about their future.

Ideally you want a topic where you can go from zero to paper in less than a year, with multiple opportunities for follow-up work. Get a few such projects going and you have a very strong... (read more)

“Does the disease heavily affect career-age people (age 25-65), or frequently leave survivors with lasting disability?”

This is rightly ticked off as “No”, but I think it morally counts as “Yes” if there is more danger to young children. That’s scarier in itself, and from COVID it seems people are also more likely to accept very extreme NPIs to protect children, meaning there might well be a large economic impact.

Historically, scientists would use anagrams to do this. Galileo famously said "Smaismrmilmepoetaleumibunenugttauiras". Later he revealed that it could be unscrambled into "Altissimum planetam tergeminum observavi" which per Wikipedia is Latin for "I have observed the most distant planet to have a triple form", establishing his priority in discovering the rings of Saturn.

Obviously hashing and salting is better, nowadays.
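A minimal sketch of the modern version, in Python with only the standard library (the function names and exact construction are illustrative, not from the original comment): publish the salted hash of your claim now, keep the claim and salt private, and reveal both later to prove priority.

```python
import hashlib
import secrets

def commit(claim: str) -> tuple[str, str]:
    """Publish the returned digest now; keep the claim and salt private."""
    salt = secrets.token_hex(16)  # random salt stops brute-forcing short or guessable claims
    digest = hashlib.sha256((salt + claim).encode("utf-8")).hexdigest()
    return digest, salt

def verify(claim: str, salt: str, digest: str) -> bool:
    """Anyone can later check that the revealed claim matches the published digest."""
    return hashlib.sha256((salt + claim).encode("utf-8")).hexdigest() == digest

digest, salt = commit("I have observed the most distant planet to have a triple form")
print(digest)  # this string is what you would post publicly, Galileo-style
assert verify("I have observed the most distant planet to have a triple form", salt, digest)
```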

From my limited knowledge, that's definitely one of the purposes Ruism/Confucianism was put to -- especially once the civil service exams were instituted.

In one way, "philosophy of the establishment" seems mostly correct to me, as the Mengzi seemingly makes a core assumption that the current social order is legitimate. But it mostly isn't making excuses for that social order (as philosophy and social science often does), it's challenging rulers to live up to an ideal and serve the people. At one point, Mengzi says that any king who "mutilates benevolence" ... (read more)

Schizophrenia is the wrong metaphor here -- it's not the same disease as split personalities (i.e. dissociative identity disorder). I think it would be clearer and more accurate to rewrite that paragraph without it. I don't intend this as an attack or harsh criticism, it's just that I have decided to be a pedant about this point whenever I encounter it, as I think it would be good for the general public to develop a more accurate and realistic understanding of schizophrenia.

8Viliam
Good point. In addition to that, using human diseases as a metaphor for AI misalignment is misleading, because it kinda implies that the default option is health; we only need to find and eliminate the potential causes of imbalance, and health will happen naturally. But the very problem with AI is that there is no such thing as a natural good outcome. A perfectly healthy paperclip maximizer is still a disaster for humanity.

Rubin's framework says basically: suppose all our observations are in a big data table. Now consider the counterfactual observations that didn't happen (e.g. people in the control group getting the treatment) -- these are called "potential outcomes" -- and treat them like missing cells in the data table. Then causal inference is just filling in the potential outcomes using missing-data imputation techniques, although to be valid these require some assumptions about conditional independence.

Pearl's framework and Rubin's are isomorphic in the sense that any set of ... (read more)
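To make the "missing data" picture concrete, here is a small hypothetical sketch in Python (numpy/pandas; the simulated data, the +2 effect, and the simple group-mean comparison are all illustrative, and the comparison is only valid because treatment is randomized, which supplies the conditional-independence assumption mentioned above):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000

# Each unit has two potential outcomes: Y0 (untreated) and Y1 (treated).
Y0 = rng.normal(loc=0.0, scale=1.0, size=n)
Y1 = Y0 + 2.0                      # true average treatment effect of +2
T = rng.integers(0, 2, size=n)     # randomized treatment assignment

# In the observed data table, the counterfactual cell for each unit is missing.
df = pd.DataFrame({
    "T": T,
    "Y0": np.where(T == 0, Y0, np.nan),   # Y0 unobserved for treated units
    "Y1": np.where(T == 1, Y1, np.nan),   # Y1 unobserved for control units
})

# Under randomization, "imputing" the missing cells with group means is enough
# to recover the average treatment effect.
ate_hat = df["Y1"].mean() - df["Y0"].mean()
print(f"estimated ATE: {ate_hat:.2f} (true value: 2.00)")
```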

1Alexander Gietelink Oldenziel
Aha, I will have to ponder on this for a while. Thanks a lot!

Some reading on this:

https://csss.uw.edu/files/working-papers/2013/wp128.pdf

http://proceedings.mlr.press/v89/malinsky19b/malinsky19b.pdf

https://arxiv.org/pdf/2008.06017.pdf

---

From my experience it pays to learn how to think about causal inference like Pearl (graphs, structural equations), and also how to think about causal inference like Rubin (random variables, missing data).  Some insights only arise from a synthesis of those two views.

Pearl is a giant in the field, but it is worth remembering that he's unusual in another way (compared to a typical ... (read more)

I will give a potted history of Pearl's discovery as I understand it.

In the late 70s/early 80s, people wanted to deal with uncertainty in logic-based AI. The obvious thing to use is probability, but naively representing the full joint distribution over many variables and updating it to compute a posterior is exponentially expensive in the number of variables.

Pearl wanted to come up with a good data structure for doing computations over probability distributions in less-than-exponential time.

He introduced the idea of Bayesian networks in his paper Reverend Bayes On Inference Engines where he represents factorized probability distributions usin... (read more)
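The core object is simple to state (this is standard Bayesian-network material, not something recovered from the truncated text above): the joint distribution factorizes into one conditional per variable given its parents in the graph, so representation and inference costs scale with the largest conditional table rather than with the full joint:

$$P(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{pa}(x_i)\bigr)$$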

8Jacy Reese Anthis
I think another important part of Pearl's journey was that during his transition from Bayesian networks to causal inference, he was very frustrated with the correlational turn in early 1900s statistics. Because causality is so philosophically fraught and often intractable, statisticians shifted to regressions and other acausal models. Pearl sees that as throwing out the baby (important causal questions and answers) with the bathwater (messy empirics and a lack of mathematical language for causality, which is why he coined the do operator). Pearl discusses this at length in The Book of Why, particularly the Chapter 2 sections on "Galton and the Abandoned Quest" and "Pearson: The Wrath of the Zealot." My guess is that Pearl's frustration with statisticians' focus on correlation was immediate upon getting to know the field, but I don't think he's publicly said how his frustration began.
5Alexander Gietelink Oldenziel
Is Rubin's work actually the same as Pearl's?? Please tell me more. That's not the impression I got from reading Pearl's Causality. If so, it seems like a major omission of scholarship.