LESSWRONG
Richard Korzekwa

Former physicist, current worry-about-AI-ist.
Previously at AI Impacts

Comments

If I imagine that I am immune to advertising, what am I probably missing?
Richard Korzekwa · 7d

It is sometimes good to avoid coming across as really weird or culturally out of touch, and ads can give you some signal on what's normal and culturally relevant right now. If you're picking up drinks for a 4th of July party, Bud Light would be very culturally on-brand, Corona would be fine but a bit less on-brand, and mulled wine would be kinda weird. And I think you can pick this sort of thing up from advertising.

Also, it might be helpful to know roughly what group membership you or other people might be signalling by using a particular product. For example, I drive a Subaru. Subaru has, for a long time, marketed to (what appears to me to be) people who are a bit younger, vote Democrat, and spend time in the mountains. This is in contrast to, say, Ram trucks, which are marketed to (what looks to me like) people who vote Republican. If I'm in a context where people who don't know me very well see my car, I am now aware that they might be biased toward thinking I vote Democrat or spend time outdoors. (FWIW, I did a low-effort search for which states have the strongest Subaru sales and it is indeed states with mountains and states with people who vote Democrat.)

Before LLM Psychosis, There Was Yes-Man Psychosis
Richard Korzekwa · 18d

Recently I've been wondering what this dynamic does to the yes-men. If someone is strongly incentivized to agree with whatever nonsense their boss is excited about that week, and then goes on Twitter or national TV to repeat that nonsense, it can't be good for seeing the world accurately.

Banning Said Achmiz (and broader thoughts on moderation)
Richard Korzekwa · 19d

Sometimes what makes a crime "harder to catch" is the risk of false positives. If you don't consider someone to have "been caught" unless your confidence that they did the crime is very high, then, so long as you're calibrated, your false positive rate is very low. But holding off on punishment in cases where you do not have very high confidence might mean that, for most instances where someone commits the crime, they are not punished.
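
To make the trade-off concrete, here is a toy sketch in Python. The numbers and the confidence distribution are made up, and treating the confidence estimates as calibrated is an assumption; nothing here comes from the comment itself.

```python
# Hypothetical sketch: a high confidence bar keeps false positives rare,
# but lets most actual offenses go unpunished.

# Assumed distribution: for true offenses, how confident the judge ends up being
# that the person did it (confidence level -> fraction of true offenses).
confidence_given_guilty = {0.99: 0.20, 0.80: 0.30, 0.50: 0.30, 0.20: 0.20}

threshold = 0.95  # someone only counts as "caught" above this confidence

# Share of true offenses that clear the bar and get punished.
punished_share = sum(
    frac for conf, frac in confidence_given_guilty.items() if conf >= threshold
)

# If the confidence estimates are calibrated, then among punished cases the
# false-positive rate is at most 1 - threshold (here, at most 5%).
max_false_positive_rate = 1 - threshold

print(f"true offenses punished: {punished_share:.0%}")                       # 20%
print(f"false positives among punished: <= {max_false_positive_rate:.0%}")   # <= 5%
```

With these made-up numbers, only a fifth of true offenses are ever punished, even though at most one in twenty punishments lands on an innocent person.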

Yudkowsky on "Don't use p(doom)"
Richard Korzekwa · 21d

If you want someone to compress and communicate their views on the future, whether they anticipate everyone will be dead within a few decades because of AI seems like a pretty important thing to know. And it's natural to find your way from that to asking for a probability. But I think that shortcut isn't actually helpful, and it's more productive to just ask something like "Do you anticipate that, because of AI, everyone will be dead within the next few decades?". Someone can still give a probability if they want, but it's more natural to give a less precise answer like "probably not" or a conditional answer like "I dunno, depends on whether <thing happens>" or to avoid the framing like "well, I don't think we're literally going to die, but".

Banning Said Achmiz (and broader thoughts on moderation)
Richard Korzekwa · 21d

He says, under the section titled "So what options do I have if I disagree with this decision?":

But beyond [leaving LW, trying to get him fired, etc], there is no higher appeals process. At some point I will declare that the decision is made, and stands, and I don't have time to argue it further, and this is where I stand on the decision this post is about.

Arjun Panickssery's Shortform
Richard Korzekwa · 22d

Yeah, seems like it fails mainly on 1, though I think that depends on whether you accept the meaning of "could not have done otherwise" implied by 2/3. But if you accept a meaning that makes 1 true (or, at least, less obviously false), then the argument is no longer valid.

Arjun Panickssery's Shortform
Richard Korzekwa · 22d

This seems closely related to an argument I vaguely remember from a philosophy class:

  1. A person is not morally culpable of something if they could not have done otherwise
  2. If determinism is true, there is only one thing a person could do
  3. If there is only one thing a person could do, they could not have done otherwise
  4. If determinism is true, whatever someone does, they are not morally culpable
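
Read with a single, consistent meaning of "could not have done otherwise", the form of the argument is valid: (4) follows from (1)-(3). Below is a minimal propositional sketch of that form in Lean; the formalization and the proposition names are my own, not from the original argument.

```lean
-- Hypothetical formalization: each English premise becomes an implication
-- between bare propositions, with "could not have done otherwise" read
-- the same way throughout.
theorem determinism_argument
    (Determinism OnlyOneAction CouldDoOtherwise MorallyCulpable : Prop)
    (p1 : ¬CouldDoOtherwise → ¬MorallyCulpable)   -- premise 1
    (p2 : Determinism → OnlyOneAction)            -- premise 2
    (p3 : OnlyOneAction → ¬CouldDoOtherwise)      -- premise 3
    : Determinism → ¬MorallyCulpable :=           -- conclusion 4
  fun det => p1 (p3 (p2 det))
```

The proof is just composition: determinism gives "only one thing a person could do", which gives "could not have done otherwise", which together with premise 1 gives non-culpability.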
Jimrandomh's Shortform
Richard Korzekwa · 23d

Seems reasonable.

Possibly I'm behind on the state of things, but I wouldn't put too much trust in a model's self-report on how things like routing work.

Interiors can be more fun
Richard Korzekwa · 1mo

Of course many ways of making a room more fun are idiosyncratic to a particular theme, concept, or space.

I think fun is often idiosyncratic to particular people as well, and this is one reason why fun design is not more common, at least for spaces shared by lots of people. For me, at least, 'fun' spaces are higher variance than more conventional spaces. Many do indeed seem fun, but sometimes my response is "this is unusual and clearly made for someone who isn't me".

But maybe this is mostly a skill issue. The Epic campus looks consistently fun to me, for example.

The Problem
Richard Korzekwa · 1mo

AI Impacts looked into this question, and IMO "typically within 10 years, often within just a few years" is a reasonable characterization. https://wiki.aiimpacts.org/speed_of_ai_transition/range_of_human_performance/the_range_of_human_intelligence

I also have data for a few other technologies (not just AI) doing things that humans do, which I can dig up if anyone's curious. They're typically much slower to cross the range of human performance, but so was most progress prior to AI, so I dunno what you want to infer from that.

Posts

You will crash your car in front of my house within the next week · 80 karma · 5mo · 6 comments
AI Impacts Quarterly Newsletter, Apr-Jun 2023 · 6 karma · 2y · 0 comments
What we’ve learned so far from our technological temptations project · 15 karma · 2y · 4 comments
A policy guaranteed to increase AI timelines · 46 karma · 2y · 1 comment
How popular is ChatGPT? Part 2: slower growth than Pokémon GO · 42 karma · 3y · 4 comments
Product safety is a poor model for AI governance · 36 karma · 3y · 0 comments
Observed patterns around major technological advancements · 44 karma · 4y · 15 comments
Why indoor lighting is hard to get right and how to fix it · 212 karma · 5y · 54 comments
A simple device for indoor air management · 50 karma · 5y · 10 comments
Description vs simulated prediction · 26 karma · 5y · 0 comments