If you get into fashion, there is a whole range of expression available with suits. With the right cut and materials, you can wear a suit that looks great, as suits ought to, yet is clearly casual and even in Japan would never be perceived as "for work". It's an expensive hobby, but if you're already doing this, you might as well get into it.

A quite widespread experience among normal people right now is having their boss tell them to use AI tools in stupid ways that don't currently work, and then being held partly responsible for the failures. (For example: your boss heard about a study saying AI increased productivity by 40% among one group of consultants, so he's buying you a ChatGPT Plus subscription and raising all your KPI targets by 40%.)

On the one hand, this produces very strong anti-AI sentiment; people are just sick of it. If "Office Space" were made now, Bill Lumbergh would be talking about "AI transformation" and "agents" all the time. That sentiment is politically useful if you're advocating about x-risk.

On the other hand, it means that if you talk about how fast AI capabilities are growing, you get an instant negative reaction, because you sound like their delusional boss. At the same time, they are worried about AI taking their jobs as it gets better.

This isn't a very internally consistent set of beliefs, but I could summarize what I've heard as something like this:

"AI doesn't really work, it's all a big scam, but it gives the appearance of working well enough that corporations will use it as an excuse to cut costs, lay people off, and lower quality to increase their profits. The economy is a rigged game anyway, and the same people that own the corporations are all invested in AI, so it won't be allowed to fail, we will just live in a world of slop."

I don't think this is a sufficiently complete way of looking at things. It could make sense when the problem was thought to be "replication crisis via p-hacking", but it turns out things are worse than that:

  • Research methodology in biology doesn't necessarily leave room for statistical funny business, yet there are all these cases of influential Science/Nature papers whose fraud was committed via Photoshopped images.
  • Gino's and Ariely's papers may have been statistically impeccable; the problem is that they were simply making up data points.
  • There is fraud in experimental physics and the applied sciences too, from time to time.

I don't know much about what opportunities there are for bad research practices in the humanities. The only thing I can think of is citing a source that doesn't say what is claimed. This seems like a particular risk when history or historical claims are involved, or when a humanist wants to refer to the scientific literature. The spectacular claim that Victorian doctors treated "hysteria" using vibrators turns out to have resulted from something like this.

Outside cases like that, I think the humanities are mostly "safe" like math in that they just need some kind of internal consistency, whether that is presenting a sound argument, or a set of concepts and descriptions that people find to be harmonious or fruitful.

I think the biggest difference is that this will mean more people, with a wider range of personality types, interacting socially in a more arms-length, professionalized way, according to the social norms of academia.

Especially in CS, you can be accepted among academics as a legitimate researcher even without a formal degree, but it would require being able and willing to follow these existing social norms.

And in order to welcome and integrate new AI safety researchers from academia, the existing AI safety scene would have to create some spaces that facilitate this style of interaction, rather than its current informal/intense/low-social-distance style.

This community is doing way better than it has any right to for a bunch of contrarian weirdos with below-average social skills. It's actually astounding.

The US government and the broader military-industrial complex are taking existential AI risk somewhat seriously. The head of the RAND Corporation is an existential-risk guy who used to work for FHI.

Apparently the Prime Minister of the UK and various European institutions are concerned as well.

There are x-risk-concerned people at most top universities for AI research and within many of the top commercial labs.

In my experience "normies" are mostly open to simple, robust arguments that AI could be very dangerous if sufficiently capable, so I think the outreach has been sufficiently good on that front.

There is a much more specific set of arguments about advanced AI (exotic decision theories, theories of agency and preferences, computationalism about consciousness) that are harder to explain and defend than the basic AI risk case, and so would rhetorically weaken it. But people who like these ideas get very excited about them. Thus I think having a lot more popular materials by LessWrong-ish people would do more harm than good, so avoiding this was a good move, whether intentional or not. (On the other hand, if you think these ideas are absolutely crucial considerations without which sensible discussion is impossible, then it is not good.)

This is the case for me as well, and I don't remember when it developed. I have a timeline that starts with the present day on the right and goes left and slightly up. It gets blurry around 500 BC. I can somewhat zoom in and recenter it if I'm thinking about individual historical periods. I can place some historical events in roughly the correct spots on the timeline, but since I have never needed to formally memorize many historical dates, this is very approximate.

You might be interested in reading about experiences in the broad category of synesthesia, and about the really fascinating history of "memory palace" techniques. Also in the linguistic details of how different languages talk about the past and future spatially (e.g. in English the past is behind and the future is ahead; in Chinese, the past is above and the future is below).

Normal, standard causal decision theory is probably it. You can make a case that people sometimes intuitively use evidential decision theory ("Do it. You'll be glad you did."), but if asked to spell out their decision-making process, most would probably describe causal decision theory.
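For anyone who wants the contrast spelled out, here is a minimal sketch of one common formalization (not from the comment above; notation and the handling of outcomes vary by author), writing the causal version with Pearl-style do-notation:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Evidential decision theory: score an act by the expected utility of outcomes,
% weighting each outcome by its probability conditional on the act being taken.
\[
V_{\mathrm{EDT}}(a) \;=\; \sum_{o} P(o \mid a)\,U(o)
\]
% Causal decision theory: weight each outcome instead by its probability under
% an intervention that brings the act about (Pearl's do-operator).
\[
V_{\mathrm{CDT}}(a) \;=\; \sum_{o} P\bigl(o \mid \operatorname{do}(a)\bigr)\,U(o)
\]
% The two agree whenever the act is evidence only about its own causal effects;
% they come apart in Newcomb-style cases where the act correlates with
% something it does not cause.
\end{document}

The "Do it. You'll be glad you did." intuition is the evidential one: you evaluate the outcome conditional on learning that you did it.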

Fandom people on Tumblr, AO3, etc. really responded to The Last Jedi (because it was targeted at them). Huge phenomenon. There are now bestselling romance novels that started life as TLJ fanfiction. Everything worked just like it does for the Marvel movies, very profitably.

However, there was an additional group of Star Wars superfans outside of fandom who wanted something very different, hence the backlash. This group is somewhat more male and conservative, and then everything polarized on social media, so this somehow became a real culture-war issue. Of course, Disney did not like the backlash and tried to make the third movie more palatable to this group.

That kind of fan doesn't organically exist for much outside of Star Wars, though. For most properties, you only get superfans from this network of fan communities, which skews towards social justice. And for any new genre story without a pre-existing fanbase, there's an opportunity to get fandom people excited about it, which is very valuable.

As far as running a media company goes, fandom is extremely profitable, increasingly so in an age where enormous sci-fi/fantasy franchises drive everything. And there's been huge overlap between fandom communities and social justice politics for a long time.

It's definitely in Disney's interest to appeal to Marvel superfans who write fanfiction and cosplay and buy tons of merchandise, and those people tend to also be supporters of social justice politics.

Like, nothing is being forced on this audience; there are large numbers of people who get sincerely excited when a new character is introduced who, for the first time, gives representation to some minority group, or something like that.

As with so many businesses, each superfan is worth quite a few of the normies who might be put off by this. I think that's the main explanation.

The “canonical” rankings that CS academics care about would be csrankings.org (also not without problems but the least bad).
