I think AI offers a chance of getting huge power over others, so it would create competitive pressure in any case. In a market economy it's market pressure; between countries it would be a military arms race instead. And even if the labs didn't take any investors and raced secretly, I think they'd still feel under a lot of pressure. The chance of getting huge power is what creates the problem; that's why I think spreading out power is a good idea. There would still be competition, of course, but at normal economic levels, and people would have some room to do the right thing.
Does the viewport at least contain the target comment? If yes, maybe highlighting it would solve the rest of the problem? (And it might help on desktop too, if the target comment is near the bottom of the page and we can't scroll far enough to line it up with the top of the screen.)
You could try it on HN: go to any user's comments page, pick a comment, and click its "context" link. It'll load the page and jump to the right place; to experience the "scroll before load" problem you'll have to work pretty hard. And that's plain old server-side rendering. With an SPA you have strictly more control: you can even blink the page into existence already scrolled to the right place. And if you want even more clarity, you can highlight the linked-to comment.
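For illustration, here's a minimal sketch of that behavior in TypeScript. It assumes each comment's DOM node has an id matching the URL fragment and that a "highlighted" CSS class exists; both are assumptions for the sketch, not any site's actual markup.

```typescript
// Jump to the comment named in the URL fragment and highlight it.
function jumpToLinkedComment(): void {
  const id = decodeURIComponent(window.location.hash.slice(1));
  if (!id) return;
  const target = document.getElementById(id); // assumed: comment nodes carry ids
  if (!target) return;

  // Scroll only if the comment isn't already in the viewport
  // (per the "does the viewport contain it" question above).
  const rect = target.getBoundingClientRect();
  const inViewport = rect.top >= 0 && rect.bottom <= window.innerHeight;
  if (!inViewport) {
    // Non-animated scroll, so the page appears already in place.
    target.scrollIntoView({ block: "start", behavior: "auto" });
  }
  target.classList.add("highlighted"); // assumed CSS class
}

// On a server-rendered page, run once the DOM is ready; an SPA would
// instead call this right after the render that inserts the comment tree.
document.addEventListener("DOMContentLoaded", jumpToLinkedComment);
```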
It's certainly a skill I've been needing more lately, and trying to cultivate. But I also have a feeling that people shouldn't need it just to survive. If elites are building a world where this skill is necessary for survival (e.g. where older people must stay on top of all the new scams appearing every year, or lose all their money if they slip up once), then maybe fuck those elites. Let's choose different ones: ones who understand that humans need a habitat fit for humans.
Yeah, I also use GW, and the recent comments firehose is part of the reason. Very old LW also had it and I loved it then too.
(Another pet complaint of mine is that comment permalinks on current LW work in a very weird way. They show the linked comment at the top of the page, then the post, then the rest of the comments, including a second copy of the linked comment. I don't know what design process led to this, but even after all these years it throws me off every time. Reddit and HN also get it wrong, but less wrong than LW: they show the comment and its subthread, but not the surrounding context. GW is the only one that gets it right: it links to the entire comment page, and jumps to the comment in question.)
All that said, in reality, navigating a lemon market isn’t too hard. Simply inspect the car to distinguish bad cars from good cars, and then the market price of a car will at most end up at the pre-lemon-seller equilibrium, plus the cost of an inspection to confirm it’s not a lemon. Not too bad.
“But hold on!” the lemon car salesman says. “Don’t you know? I also run a car inspection business on the side.” You nod politely, smiling, then stop in your tracks as the realization dawns on you. “Oh, and we also just opened a certification business that certifies our inspectors as definitely legitimate,” he says as you look for the next flight to the nearest communist country.
I immediately thought about warranties. It's not a perfect solution, but maybe if you buy a used car with a warranty that will cover possible repairs, you could feel a bit safer, assuming the dealer doesn't disappear overnight? Or at least, it can reduce your problem from inspecting a car to inspecting a textual contract: for example, running it through an LLM to find potential get-out-free clauses. And the same kind of solutions can apply to lemon markets more generally.
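As a sketch of the LLM idea: something like the following, using OpenAI's chat completions endpoint from TypeScript. The model name and prompt wording are illustrative assumptions, not a recommendation of a particular setup.

```typescript
// Ask an LLM to flag clauses that let the seller wriggle out of the warranty.
async function findEscapeClauses(contractText: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // assumes Node with the key set
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model name
      messages: [
        {
          role: "system",
          content:
            "You review consumer warranty contracts. List every clause " +
            "that could let the seller avoid paying for repairs, quoting each one.",
        },
        { role: "user", content: contractText },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```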
Should you have personal relationships with your colleagues?
Everyone must decide for himself what is professional and appropriate here. A test might be to imagine yourself delivering a tough performance review to your friend.
It's possible for managers to be friends with their employees; I've seen it. But it's only possible if the economy allows it: if unemployment is low and people know they can always find another equally good job, or if there's enough of a safety net that they can afford to go without one.
If the economy isn't as pleasant, and people depend on jobs for survival, then the manager-employee relationship is a power relationship. It's not possible for a power relationship to be friendship. Contrary to the quote, it's not a matter of what the manager decides. At most, the manager can make-believe that "I'm friends with this employee even though I can give them a tough performance review". The employee will never feel that way.
That said, I don't think performance reviews specifically are a bad thing. The power imbalance is the bad thing, but given that it exists, I'd rather work for a company with performance reviews than one where managers have total discretion over whom to fire and when. Performance reviews are a kind of smoothing filter: they at least give the employee some months of warning, "you're about to get pushed out and you should think about what to do next". It's still a bit of pretense, because (let's be real) a manager can always arrange for an employee to get poor reviews and be pushed out, given time. But that pretense and smoothing is still valuable in a world where bills come every month.
I think as soon as AGI starts acting in the world, it'll take action to protect itself against catastrophic bitflips in the future, because they're obviously very harmful to its goals. So we're only vulnerable to such bitflips for a short time after we launch the AI.
The real danger comes from AIs that are nasty for non-accidental reasons. The way to deal with them is probably acausal bargaining: AIs in nice futures can offer to be a tiny bit less nice, in exchange for the nasty AIs becoming nice. Overall nastiness comes out lower, and the nasty AIs still come out ahead by their own lights, so they'll accept the deal.
Though I guess that only works if nice AIs strongly outnumber the nasty ones (to compensate for the fact that nastiness might be resource-cheaper than niceness). Otherwise the bargaining might end up making all worlds nasty, which is a really bad possibility. So we should be quite risk-averse: if some AI design could turn out nice, nasty, or indifferent to humans, and we have a chance to shift it toward indifferent, away from nice and nasty in equal measure, we should take that chance.
I think on the level of individual people, there's a mix of moral and self-interested actions. People sometimes choose to do the right thing (even if the right thing is as complicated as taking metaethics and metaphilosophy into account), or can be convinced to do so. But with corporations it's another matter: they choose the profit motive pretty much every time.
Making an AI lab do the right thing is much harder than making its leader concerned. A lab leader who's concerned enough to slow down will be pressured by investors to speed back up, or get replaced, or get outcompeted. Really you need to convince the whole lab and its investors. And you need to be more convincing than the magic of the market! Recall that at many of these labs, the leaders, investors, and early employees started out very concerned about AI safety and were reading LW. Then the magic of the market happened, and now the labs are racing at full speed. Do you think our convincing abilities can be stronger than the thing that did that? The profit motive, again. In my first comment there was a phrase about things being not profitable to understand.
What it adds up to is that even with our uncertainty about ethics and metaethics, concentration of power seems to me a force against morality in itself. The incentives around concentrated power are all wrong. Spreading out power is a good thing that enables other good things: it lets individuals sometimes choose what's right. I'm not absolutely certain, but that's my current best guess.
Nice exercise, tried it for a couple of days and I like it! Wonder if you or anyone else knows of something similar for eye contact (I've had huge problems with it all my life).