cousin_it

https://vladimirslepnev.me

Comments

How do you read Less Wrong?
cousin_it · 1h · 20

You could try it on HN: go to any user's comments page, choose any comment, and click its "context" link. It'll load the page and jump to the right place. To experience the "scroll before load" problem you'll have to work pretty hard. And that's plain old server-side rendering; with an SPA you have strictly more control: you can even blink the page into existence already scrolled to the right place. And if you want even more clarity, you can highlight the linked-to comment.
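Something like this sketch is all the SPA side needs, once rendering is done (assuming comment nodes get DOM ids like "comment-<id>" and the permalink puts that id in the URL hash; the highlight style is just an illustration):

```typescript
// Run this once the thread has finished rendering (not on raw page load),
// so late-arriving content can't clobber the scroll position.
function revealLinkedComment(): void {
  // "#comment-12345" -> "comment-12345"
  const id = window.location.hash.slice(1);
  if (!id) return;

  const el = document.getElementById(id);
  if (!el) return;

  // Jump straight there, so the page appears already scrolled to the right place.
  el.scrollIntoView({ block: "start" });

  // Make the linked-to comment stand out from its surrounding context.
  el.style.outline = "2px solid orange";
}

revealLinkedComment();
```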

Orient Speed in the 21st Century
cousin_it · 17h · 61

It's certainly a skill I find myself needing more lately, and trying to cultivate. But I also have a feeling that people shouldn't need to do this to survive. If elites are building a world where this is necessary to survive (e.g. where older people must stay on top of all the new scams appearing every year, or lose all their money if they slip up once), then maybe fuck those elites. Let's choose different ones: elites who understand that humans need a habitat fit for humans.

How do you read Less Wrong?
cousin_it · 17h · 60

Yeah, I also use GW, and the recent comments firehose is part of the reason. Very old LW also had it and I loved it then too.

(Another pet complaint of mine is that comment permalinks on current LW work in a very weird way. They show the linked comment at the top of the page, then the post, then the rest of the comments, including a second copy of the linked comment. I don't know what design process led to this, but even after all these years it throws me off every time. Reddit and HN also get it wrong, but less wrong than LW: they show the comment and its subthread, but not the surrounding context. GW is the only one that gets it right: it links to the entire comment page, and jumps to the comment in question.)

Paranoia: A Beginner's Guide
cousin_it · 1d* · 41

> All that said, in reality, navigating a lemon market isn’t too hard. Simply inspect the car to distinguish bad cars from good cars, and then the market price of a car will at most end up at the pre-lemon-seller equilibrium, plus the cost of an inspection to confirm it’s not a lemon. Not too bad.
>
> “But hold on!” the lemon car salesman says. “Don’t you know? I also run a car inspection business on the side.” You nod politely, smiling, then stop in your tracks as the realization dawns on you. “Oh, and we also just opened a certification business that certifies our inspectors as definitely legitimate,” he says as you look for the next flight to the nearest communist country.

I immediately thought about warranties. It's not a perfect solution, but maybe if you buy a used car with a warranty that will cover possible repairs, you could feel a bit safer, assuming the dealer doesn't disappear overnight? Or at least, it reduces your problem from inspecting a car to inspecting a textual contract: for example, running it through an LLM to find potential escape clauses. And the same kind of solution can apply to lemon markets more generally.
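As a rough sketch of the LLM step, here's what it could look like with the OpenAI Node SDK (the model name, prompt, and function name are placeholders, not a recommendation):

```typescript
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const client = new OpenAI();

// Ask a model to pull out the clauses that let the seller wriggle out of paying.
async function findEscapeClauses(contractText: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You review consumer warranty contracts. Quote verbatim any clause " +
          "that could let the seller avoid paying for repairs, and briefly explain each.",
      },
      { role: "user", content: contractText },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```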

Favorite quotes from "High Output Management"
cousin_it · 2d · 0-2

> Should you have personal relationships with your colleagues?
>
> Everyone must decide for himself what is professional and appropriate here. A test might be to imagine yourself delivering a tough performance review to your friend.

It's possible for managers to be friends with their employees; I've seen it. But it's only possible if the economy allows it: namely, if there's low unemployment and people know they can always find another equally good job, or there's enough of a safety net that they can afford to go without one.

If the economy isn't as pleasant, and people depend on jobs for survival, then the manager-employee relationship is a power relationship. It's not possible for a power relationship to be friendship. Contrary to the quote, it's not a matter of what the manager decides. At most, the manager can make-believe that "I'm friends with this employee even though I can give them a tough performance review". The employee will never feel that way.

That said, I don't think performance reviews specifically are a bad thing. The power imbalance is the bad thing, but given that it exists, I'd rather work for a company with performance reviews than one where the manager has total discretion over whom to fire and when. Performance reviews are a kind of smoothing filter: they at least give the employee some months of warning, "you're about to get pushed out and you should think about what to do next". It's still a bit of pretense, because (let's be real) a manager can always arrange for an employee to get poor reviews and get pushed out, given time. But this pretense and smoothing-out is still valuable, in a world where bills come every month.

Mourning a life without AI
cousin_it · 3d* · 42

I think as soon as AGI starts acting in the world, it'll take action to protect itself against catastrophic bitflips in the future, because they're obviously very harmful to its goals. So we're only vulnerable to such bitflips a short time after we launch the AI.

The real danger comes from AIs that are nasty for non-accidental reasons. The way to deal with them is probably acausal bargaining: AIs in nice futures can offer to be a tiny bit less nice, in exchange for the nasty AIs becoming nice. Overall it'll come out negative, so the nasty AIs will accept the deal.

Though I guess that only works if nice AIs strongly outnumber the nasty ones (to compensate for the fact that nastiness might be resource-cheaper than niceness). Otherwise the bargaining might come out to make all worlds nasty, which is a really bad possibility. So we should be quite risk-averse: if some AI design can turn out nice, nasty, or indifferent to humans, and we have a chance to make it more likely to be indifferent, reducing the chances of nice and nasty by equal amounts, we should take that chance.

Problems I've Tried to Legibilize
cousin_it · 4d* · Ω592

I think on the level of individual people, there's a mix of moral and self-interested actions. People sometimes choose to do the right thing (even if the right thing is as complicated as taking metaethics and metaphilosophy into account), or can be convinced to do so. But with corporations it's another matter: they choose the profit motive pretty much every time.

Making an AI lab do the right thing is much harder than making its leader concerned. A lab leader who's concerned enough to slow down will be pressured by investors to speed back up, or get replaced, or get outcompeted. Really you need to convince the whole lab and its investors. And you need to be more convincing than the magic of the market! Recall that in many of these labs, the leaders, investors, and early employees started out very concerned about AI safety and were reading LW. Then the magic of the market happened, and now the labs are racing at full speed. Do you think our convincing abilities can be stronger than the thing that did that? The profit motive, again. In my first comment there was a phrase about things being not profitable to understand.

What it adds up to is, even with our uncertainty about ethics and metaethics, it seems to me that concentration of power is itself a force against morality. The incentives around concentrated power are all wrong. Spreading out power is a good thing that enables other good things, enables individuals to sometimes choose what's right. I'm not absolutely certain but that's my current best guess.

Universal Basic Income in an AGI Future
cousin_it · 4d* · 244

Thank you for writing this! I think a lot of people miss this point, and keep talking about UBI in the AI future without being clear which power bloc will ensure UBI will continue existing, and why.

However, I'd like to make a big correction to this. Your point exactly matched my thinking until a few months ago. Then I realized something that changes it a lot, and that I think is also crucial to understand.

Namely, elites have always needed the labor of the masses. The labor of serfs was needed, the labor of slaves was needed. That circumstance kept serfs and slaves alive, but not in an especially good position. The masses were exploited by elites throughout most of history. And it doesn't depend on economic productivity either: a slave in a diamond mine can have very high productivity by the numbers, but still be enslaved.

The circumstance that changed things, and made the masses in Western countries enjoy (temporarily) a better position than serfs in the past, was the military relevance of the masses. It started with the invention of firearms. A peasant with a gun can be taught to shoot a knight dead, and knights correctly saw even at the time that this would erode their position. I'm not talking about rebellion here (rebellions by the masses against the elites have always been very hard), but rather about whether the masses are needed militarily for large-scale conflicts.

And given military relevance, economic productivity isn't actually that important. It's possible to have a leisure class that doesn't do much work except for being militarily relevant; knights are a good example. It's actually pretty hard to find historical examples of classes that were militarily relevant but treated badly. Even warhorses were treated much better than peasant horses. Being useful keeps you alive, but exploited; being dangerous is what keeps you alive and treated well. If we by some miracle end up with a world where the masses of people remain militarily relevant, but not needed for productive work, then I can imagine the entire masses becoming such a leisure class. That'd be a nice future if we could get it.

However, as you point out, the future will have not just AI labor, but AI armies as well. Ensuring the military relevance of the masses seems just as difficult as ensuring their economic relevance. So my comment, unfortunately, isn't replacing the problem with an easier one; just with a different one.

Problems I've Tried to Legibilize
cousin_it · 5d* · Ω3135

I'm pretty slow to realize these things, and I think other people are also slow, so the window is already almost closed. But in any case, my current thinking is that we need to start pushing on the big actors from outside, try to reduce their power. Trying to make them see the light is no longer enough.

What it means in practical terms:

- Make it clear that we frown on people who choose to work for AI labs, even on alignment. This social pressure (on LW and related forums maybe) might already do some good.
- Make it clear that we're allied with the relatively poor majority of people outside the labs, and in particular those who are already harmed by present harms. Make amends with folks on the left who have been saying such things for years.
- Support protests against labs, support court cases against them having to do with e.g. web scraping, copyright infringement, misinformation, suicides. Some altruist money in this might go a long way.
- Think more seriously about building organizations that will make AI power more spread out. Open source, open research, open training. Maybe some GPL-like scheme to guarantee that things don't get captured.

We need to reduce concentration of power in the near term, enable more people to pose a challenge to the big actors. I understand it increases other risks, but in my opinion it's worth it.

Problems I've Tried to Legibilize
cousin_it · 5d · Ω6137

I'm worried about the approach of "making decisionmakers realize stuff". In the past couple years I've switched to a more conflict-theoretic view: the main problem to me is that the people building AI don't want to build aligned AI. Even if we solved metaethics and metaphilosophy tomorrow, and gave them the solution on a plate, they wouldn't take it.

This is maybe easiest to see by looking at present harms. An actually aligned AI would politely decline to do such things as putting lots of people out of jobs or filling the internet with slop. So companies making AI for the market have to make it misaligned in at least these ways, otherwise it'll fail in the market. Extrapolating into the future, even if we do lots of good alignment research, markets and governments will pick out only those bits that contribute to market-aligned or government-aligned AI. Which (as I've been saying over and over) will be really bad for most people, because markets and governments don't necessarily need most people.

So this isn't really a comment on the list of problems (which I think is great), but more about the "theory of change" behind it. I no longer have any faith in making decisionmakers understand something it's not profitable for them to understand. I think we need a different plan.

Posts

cousin_it's Shortform · 2 points · 6y · 28 comments
An argument that consequentialism is incomplete · 35 points · 1y · 27 comments
Population ethics and the value of variety · 24 points · 1y · 11 comments
Book review: The Quincunx · 43 points · 1y · 12 comments
A case for fairness-enforcing irrational behavior · 16 points · 2y · 3 comments
I'm open for projects (sort of) · 46 points · 2y · 13 comments
A short dialogue on comparability of values · 27 points · 2y · 7 comments
Bounded surprise exam paradox · 29 points · 2y · 5 comments
Aligned AI as a wrapper around an LLM · 31 points · 3y · 19 comments
Are extrapolation-based AIs alignable? · 24 points · 3y · 15 comments
Nonspecific discomfort · 37 points · 4y · 18 comments