All of lc's Comments + Replies

lc166

The two guys from Epoch on the recent Dwarkesh Patel podcast repeatedly made the argument that we shouldn't fear AI catastrophe, because even if our successor AIs wanted to pave our cities with datacenters, they would negotiate a treaty with us instead of killing us. It's a ridiculous argument for many reasons, but one of them is that they use abstract game-theoretic and economic terms to hide nasty implementation details.

3artemium
I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale during our history? Guess those poor animals were bad negotiators and haven't read the fine print. /s
4Dagon
Ah, yes - bargaining solutions that ignore or hide a significant underlying power disparity are rampant in wishful-thinking academic circles, and irrelevant in real life.  That's the context I was missing; my confusion is resolved.  Thanks!
3Steven Byrnes
Yeah I said that to Matt Barnett 4 months ago here. For example, one man's "avoiding conflict by reaching negotiated settlement" may be another man's "acceding to extortion". Evidently I did not convince him. Shrug.
lc66

Formatting is off for most of this post.

lc3433

"Treaties" and "settlements" between two parties can be arbitrarily bad. "We'll kill you quickly and painlessly" is a settlement.

7Dagon
I'm missing some context here.  Is this not obvious, and well-supported by the vast majority of "treaties" between Europeans and natives in the 16th through 19th centuries?  For legal settlements, it's generally between the extremes that each party would prefer, but it's not always the case that this range doesn't include "quite bad", even if not completely arbitrary. "We'll kill you quickly and painlessly" isn't actually arbitrarily bad, it's only quite bad.  There are possibly worse outcomes available if no agreement was reached.
lc52

The doubling down is delusional but I think you're simplifying the failure of projection a bit. The inability of markets and forecasters to predict Trump's second term is quite interesting. A lot of different models of politics failed.

lc119

The indexes above seem to be concerned only with state restrictions on speech. But even if they weren't, I would be surprised if the private situation was any better in the UK than it is here.

lc*5739

Very hard to take an index about "freedom of expression" seriously when the United States, a country with constitutional guarantees of freedom of speech and freedom of the press, is ranked lower than the United Kingdom, which prosecutes hundreds of people for political speech every year.

4Ben
The index by Reporters Without Borders is primarily about whether a newspaper or reporter can say something without consequences or interference. Things like a competitive media environment seem to be part of the index (The USA's scorecard says "media ownership is highly concentrated, and many of the companies buying American media outlets appear to prioritize profits over public interest journalism"). It's an important thing, but it's not the same thing you are talking about. The second one, from Our World in Data, ultimately comes from this (https://www.v-dem.net/documents/56/methodology.pdf). Their measure includes things like corruption, whether political power or influence is concentrated into a smaller group, and effective checks and balances on the use of executive power. It sounds like they should have called it a "democratic health index" or something like that instead of a "freedom-of-expression index". The last one is just a survey of what people think should be allowed.
4Kamil
It might be the case that the US has higher protections for freedom of expression, in terms of spoken or written text. I would certainly agree that some of the restrictions in central Europe are rather onerous. However, as someone who has lived in central Europe, I would say that central Europe allows for considerably higher freedom of political expression for the average person where it counts: at the polls. We have various voting systems that favor a plurality of parties instead of first-past-the-post systems like in the UK or the US. Therefore I would call the statement from the OP by Richard Ngo, "unresponsiveness to public opinion that we see today in England, France, and Germany," factually incorrect.
Guive212

I agree with your broader point, but it's actually more than 10,000 people per year

3Viliam
On the other hand, people from the United States are often the first to tell you that "freedom of speech" is not a general aspiration for making the world a better place, but merely a specific amendment to their constitution, which importantly only applies to censorship done directly by the government... therefore it does not apply to censorship by companies or mobs or universities or whatever. (As an extreme example, from this perspective, a country without an official government would count as 100% free speech, even if criticizing the local warlord gets you predictably tortured to death; as long as the local warlord is not considered a "government" because of some technicality.)
lc10

The future is hard to predict, especially under constrained cognition.

lc110

AGI is still 30 years away - but also, we're going to fully automate the economy, TAM 80 trillion

lc112

I find it a little suspicious that the recent OpenAI model releases became leaders on the MASK dataset (when previous iterations didn't seem to trend better at all), but I'm hopeful this represents deeper alignment successes and not simple tuning on the same or a similar set of tasks.

8Vladimir_Nesov
My first impression of o3 (as available via Chatbot Arena) is that when I'm showing it my AI scaling analysis comments (such as this and this), it responds with confident unhinged speculation teeming with hallucinations, compared to the other recent models that usually respond with bland rephrasings that get almost everything correctly with a few minor hallucinations or reasonable misconceptions carrying over from their outdated knowledge. Don't know yet if it's specific to speculative/forecasting discussions, but it doesn't look good (for faithfulness of arguments) when combined with good performance on benchmarks. Possibly stream of consciousness style data is useful to write down within long reasoning traces and can add up to normality for questions with a short final answer, but results in spurious details within confabulated summarized arguments for that answer (outside the hidden reasoning trace) that aren't measured by hallucination benchmarks and so allowed to get worse. Though in the o3 System Card hallucination rate also significantly increased compared to o1 (Section 3.3).
lc20

Works now for me

lc8-1

If anybody here knows someone from CAIS, they need to set up their non-www domain name. Going to https://safe.ai shows a GitHub landing page.

1Alice Blair
Clicking on the link on mobile Chrome sends me to the correct website. How do you replicate this? In the meantime I've passed this along and it should make it to the right people in CAIS by sometime today.
lc20

Interesting, but non-sequitur. That is, either you believe that interest rates will predictably increase and there's free money on the table, and you should just say so, or not, and this anecdote doesn't seem to be relevant (similarly, I made money buying NVDA around that time, but I don't think that proves anything).

I am saying so! The market is definitely not pricing in AGI; doesn't matter if it comes in 2028, or 2035, or 2040. Though interest rates are a pretty bad way to arb this; I would just buy call options on the Nasdaq.

Perhaps, but shouldn't LLMs

... (read more)
2Cole Wyeth
Hmm well at least you're consistent. Certainly I can see why you expect them to become more useful, I still feel like there's some circularity here. Do you expect the current paradigm to continue advancing because LLM agents are somewhat useful now (as you said, for things like coding)? Unless that effect is currently negligible (and will undergo a sharp transition at some point) it seems we should expect it to already be reflected in the exponential growth rate claimed by METR.
lc*1815

The outside view, insofar as that is a well-defined thing...

It's not really a well-defined thing, which is why the standard on this site is to taboo those words and just explain what your lines of evidence are, or the motivation for any special priors if you have them.

If AGI were arriving in 2030, the outside view says interest rates would be very high (I'm not particularly knowledgeable about this and might have the details wrong but see the analysis here, I believe the situation is still similar), and less confidently I think the S&P's value would pr

... (read more)
2Cole Wyeth
Thanks for engaging in detail with my post. It seems there were a few failures of communication that are worth clarifying. I thought it was clear that I'm not confident in any outside view prediction of AGI timelines, from various statements/phrasings here (including the sentence you're quoting, which questions the well-definedness of "the outside view") and the fact that the central focus of the post is disputing an outside view argument. Apparently I did not communicate this clearly, because many commenters have objected to my vague references to possible outside views as if I were treating them as solid evidence, when in fact they aren't really a load bearing part of my argument here. Possibly the problem is that I don't think anyone has a good inside view either! But in fact I am just "radically" uncertain about AGI timelines - my uncertainty is ~in the exponent.

Still, I find your response a little ironic since this site is practically the only place I've seen the term "outside view" used. It does seem to be less common over the last year or two, since this post which you're probably referring to.

Interesting, but non-sequitur. That is, either you believe that interest rates will predictably increase and there's free money on the table, and you should just say so, or not, and this anecdote doesn't seem to be relevant (similarly, I made money buying NVDA around that time, but I don't think that proves anything).

Perhaps, but shouldn't LLMs already be speeding up AI progress? And if so, shouldn't that already be reflected in METR's plot? Are you predicting superexponential growth here? It seems to me that progress has been slowing for the last couple of years. If this trend continues, progress will stall.
lc42

I don't predict a superintelligent singleton (having fused with the other AIs) would need to design a bioweapon or otherwise explicitly kill everyone. I expect it to simply transition into using more efficient tools than humans, and transfer the existing humans into hyperdomestication programs

+1, this is clearly a lot more likely than the alignment process missing humans entirely IMO

lc*44

Now, one could reasonably counter-argue that the yin strategy delivers value somewhere else, besides just e.g. "probability of a date". Maybe it's a useful filter for some sort of guy...

I feel like you know this is the case and I'm wondering why you're even asking the question. Of course it's a filter; the entire mating process is. Women like confidence, and taking these mixed signals as a sign of attraction is itself a sign of confidence. Walking up to a guy and asking him to have sex immediately would also be more "efficient" by some deranged standards, but the point of flirting is that you get to signal social grace, selectivity, and a willingness to walk away from the interaction.

lc*142

Close friend of mine, a regular software engineer, recently threw tens of thousands of dollars - a sizable chunk of his yearly salary - at futures contracts on some absurd theory about the Japanese Yen. Over the last few weeks, he coinflipped his money into half a million dollars. Everyone who knows him was begging him to pull out and use the money to buy a house or something. But of course yesterday he sold his futures contracts and bought into 0DTE Nasdaq options on another theory, and literally lost everything he put in and then some. I'm not sure but I... (read more)

lc*1510

Typically I operationalize "employable as a software engineer" as being capable of completing tasks like:

  • "Fix this error we're getting on BetterStack."
  • "Move our Redis cache from DigitalOcean to AWS."
  • "Add and implement a cancellation feature for ZeroPath scans."
  • "Add the results of this evaluation to our internal benchmark."

These are pretty representative examples of the kinds of tasks your median software engineer will be getting and resolving on a day to day basis.

No chatbot or chatbot wrapper can complete tasks like these for an engineering team at... (read more)

lc*50

They strengthen chip export restrictions, order OpenBrain to further restrict its internet connections, and use extreme measures to secure algorithmic progress, like wiretapping OpenBrain employees—this catches the last remaining Chinese spy

Wiretapping? That's it? Was this spy calling Xi from his home phone? xD

lc11936

As a newly minted +100 strong upvote, I think the current karma economy accurately reflects how my opinion should be weighted

1faul_sname
As a newly-minted +1 strong upvote, I disagree, though I feel that this change reflects the level of care and attention to detail that I expect out of EA.
lc40

I have Become Stronger

lc470

My strong upvotes are now giving +1 and my regular upvotes give +2.

I notice that although the loot box is gone, the unusually strong votes that people made yesterday persist.

My strong upvotes are giving +61 :shrug:

4Phiwip
My strong downvotes are giving +1, which is a little confusing.
habryka343

The lootbox giveth and the lootbox taketh.

lc112

Just edited the post because I think the way it was phrased kind of exaggerated the difficulties we've been having applying the newer models. 3.7 was better, as I mentioned to Daniel, just underwhelming and not as big a leap as 3.6, and certainly not as big as 3.5.

lc*120

If you plot a line, does it plateau or does it get to professional human level (i.e. reliably doing all the things you are trying to get it to do as well as a professional human would)?

It plateaus before professional human level, both in a macro sense (comparing what ZeroPath can do vs. human pentesters) and in a micro sense (comparing the individual tasks ZeroPath does when it's analyzing code). At least, the errors the models make are not ones I would expect a professional to make; I haven't actually hired a bunch of pentesters and asked them to do the s... (read more)

7gwern
Still seems like potentially valuable information to know: how much does small-model smell cost you? What happens if you ablate reasoning? If it is factual knowledge and GPT-4.5 performs much better, then that tells you things like 'maybe finetuning is more useful than we think', etc. If you are already set up to benchmark all these OA models, then a datapoint from GPT-4.5 should be quite easy and just a matter of a small amount of chump change in comparison to the insight, like a few hundred bucks.
lc100

We use different models for different tasks for cost reasons. The primary workhorse model today is 3.7 sonnet, whose improvement over 3.6 sonnet was smaller than 3.6's improvement over 3.5 sonnet. When taking the job of this workhorse model, o3-mini and the rest of the recent o-series models were strictly worse than 3.6.

Thanks. OK, so the models are still getting better, it's just that the rate of improvement has slowed and seems smaller than the rate of improvement on benchmarks? If you plot a line, does it plateau or does it get to professional human level (i.e. reliably doing all the things you are trying to get it to do as well as a professional human would)?

What about 4.5? Is it as good as 3.7 Sonnet but you don't use it for cost reasons? Or is it actually worse?

lc*71

I haven't read the METR paper in full, but from the examples given I'm worried the tests might be biased in favor of an agent with no capacity for long term memory, or at least not hitting the thresholds where context limitations become a problem:

 

For instance, task #3 here is at the limit of current AI capabilities (takes an hour). But it's also something that could plausibly be done with very little context; if the AI just puts all of the example files in its context window it might be able to write the rest of the decoder from scratch. It might not... (read more)

1Xodarap
This seems plausible to me but I could also imagine the opposite being true: my working memory is way smaller than the context window of most models. LLMs would destroy me at a task which "merely" required you to memorize 100k tokens and not do any reasoning; I would do comparatively better at a project which was fairly small but required a bunch of different steps.
1Petropolitan
Not just long context in general (that can be partially mitigated with RAG or even BM25/tf-idf search), but also nearly 100% factual accuracy on it, as I argued last week
lc5-5

There was a type of guy circa 2021 that basically said that gpt-3 etc. was cool, but we should be cautious about assuming everything was going to change, because the context limitation was a key bottleneck that might never be overcome. That guy's take was briefly "discredited" in subsequent years when LLM companies increased context lengths to 100k, 200k tokens.

I think that was premature. The context limitations (in particular the lack of an equivalent to human long term memory) are the key deficit of current LLMs and we haven't really seen much improvement at all.

5ryan_greenblatt
Why do you think they haven't seen much improvement? I agree that the "effective" high-understanding context length--as in, what the model can really process more deeply than just doing retrieval on small chunks--is substantially less than 200k tokens. But, why do you think this hasn't improved much? I think our best proxy for this is how well AIs do on long agentic tasks and this appears to be improving pretty fast. This isn't a very isolated proxy as it includes other aspects of capabilities, but performance on these tasks does seem like it should be upper bounded by effective usage of long context, at least to the extent that effective usage of long context is a bottleneck for anything.
lc109

If AI executives really are as bullish as they say they are on progress, then why are they willing to raise money anywhere in the ballpark of current valuations?

The story is that they need the capital to build the models that they think will do that.

lc*41

Moral intuitions are odd. The current government's gutting of the AI safety summit is upsetting, but somehow less upsetting to my hindbrain than its order to drop the corruption charges against a mayor. I guess the AI safety thing is worse in practice but less shocking in terms of abstract conduct violations.

6Viliam
Specific things -- especially people -- feel more real than abstract threats? Also, maybe instead of actual damage the intuitions reflect the perceived intention / integrity of the actor? Like, it is more plausible that there is an honest misunderstanding / difference of opinions on AI safety, than about protecting a corrupt mayor. Such intuition may make sense in general (even if not specifically in the case of AI safety), because misunderstandings can possibly be addressed by a dialog, but it doesn't make much sense to talk to people participating in corruption -- they are perfectly aware of what they are doing.
1ank
Yep, we chose to build digital "god" instead of building digital heaven. The second is relatively trivial to do safely; the first is only possible to do safely after building the second.
lc*40

It helps, but this could be solved with increased affection for your children specifically, so I don't think it's the actual motivation for the trait.

The core is probably several things, but note that this bias is also part of a larger package of traits that makes someone less disagreeable. I'm guessing that the same selection effects that made men more disagreeable than women are also probably partly responsible for this gender difference.

lc*40

I suspect that the psychopath's theory of mind is not "other people are generally nicer than me", but "other people are generally stupid, or too weak to risk fighting with me".

That is true, and it is indeed a bias, but it doesn't change the fact that their assessment of whether others are going to hurt them seems basically well calibrated. The anecdata that needs to be explained is why nice people do not seem to be able to tell when others are going to take advantage of them, but mean people do. The post's offered reason is that generous impressions of ... (read more)

lc50

This post is about a suspected cognitive bias and why I think it came to be. It's not trying to justify any behavior, as far as I can tell, unless you think the sentiment "people are pretty awful" justifies bad behavior in and of itself.

The game theory is mostly an extended metaphor rather than a serious model. Humans are complicated.

4sapphire
I don't agree on the size of the bias. I think most people on LessWrong are biased the other way.

Also it sort of does justify the behavior? Consider idk 'should we race to achieve AI dominance before China does'. Well I think starting such an arms race is bad behavior. But if I thought China was almost certainly actually going to secretly race ahead, then enslave or kill us, it would justify the race. Treating people as worse than they are is a common and serious form of bad behavior.

In general, if you "defect" because you thought the other party would, that is quite sketchy. But what if proof comes out they really were about to defect on you? In that case I cannot really blame you. The behavior is only bad if the other party was likely to not defect! There are other somewhat more speculative cases. If someone actually would rob you if your positions were switched, it does seem less wrong for you to rob them? Idk how strong this justification is but it seems non trivial. In my opinion there isn't really a good practical ethics approach except "avoid interacting with people you consider to be of low character".
lc13-5

Elon already has all of the money in the world. I think he and his employees are ideologically driven, and as far as I can tell they're making sensible decisions given their stated goals of reducing unnecessary spend/sprawl. I seriously doubt they're going to use this access to either raid the treasury or turn it into a personal fiefdom. It's possible that in their haste they're introducing security risks, but I also think the tendency of media outlets and their sources will be to exaggerate those security risks. I'd be happy to start a prediction market about this if a regular feels very differently.

If Trump himself was spearheading this effort I would be more worried.

5Martin Randall
I do see some security risk. Although Trump isn't spearheading the effort I expect he will have access to the results.
lc150

Anthropic has a bug bounty for jailbreaks: https://hackerone.com/constitutional-classifiers?type=team

If you can figure out how to get the model to give detailed answers to a certain set of questions, you get a 10k prize. If you can find a universal jailbreak for all the questions, you get 20k.

lc*53

Yeah, one possible answer is "don't do anything weird, ever". That is the safe way, on average. No one will bother writing a story about you, because no one would bother reading it.

You laugh, but I really think a group norm of "think for yourself, question the outside world, don't be afraid to be weird" is part of the reason why all of these groups exist. Doing those things is ultimately a luxury for the well-adjusted and intelligent. If you tell people over and over to question social norms, some of those people will turn out to be crazy and conclude crime and violence are acceptable.

I don't know if there's anything to do about that, but it is a thing.

lc*81

So, to be clear, everyone you can think of has been mentioned in previous articles or alerts about Zizians so far? Because I have only been on the periphery of rationalist events for the last several years, but in 2023 I can remember sending this[1] post about rationalist crazies into the San Antonio LW groupchat. A trans woman named Chase Carter, who doesn't generally attend our meetups, began to argue with me that Ziz (who gets mentioned in the article as an example) was subject to a "disinformation campaign" by rationalists, her goals were actually... (read more)

Hi Dean (?)! If you have any pressing questions in this vein (or heck, any other vein for that matter) re: me, you've always been welcome to ask me in a DM or in the group chat you mentioned. Which I am still in. I'd be down to schedule a zoom call even. I'm an open book. Thanks for your concern (I think?).

....I know someone named Chase Novinha? I don't think it's the same person, though.

Edit: Confirmed same person, slimepriestess has said they are "safe and accounted for," and are one of the cofounders of its alignment company.

5AprilSR
Yeah, I haven't heard of this person, though it's possible someone I know knows them—that definitely sounds like the kind of person someone should be trying to check in on to me. I think there are a lot of people out there who will be willing to tell the Ziz sympathetic side of the story. (I mean, I would if asked, though "X did little wrong" seems pretty insane for most people involved and especially for Ziz). Like, I think there's a certain sort of left anarchismish person who is just, going to be very inclined to take the broke crazy trans women's side as much as it's possible to do so. It doesn't seem possible or even necessarily desirable to track every person with a take like that... whereas with people very very into Zizianism, it seems like important information.

I don't know exactly what update should be drawn from the fact that people I know were collectively 3 for 3 on having known the people who showed up in recent incidents, information on the topic hasn't generally been shared freely enough for me to have a whole picture.

Edit: To be thorough about the "everyone I can think of" part, there is this tweet I saw, and you could argue @Slimepriestess hasn't technically been mentioned in articles or alerts. I don't really believe either of these people to be dangerous (more confident on Slimepriestess I don't know much about that Twitter user) but they have explicitly described themselves as Zizian even after recent events, so.

I'll also explicitly specify that I'm not really inclined to like, list every person I know who is half-fluent in hemisphere nonsense or who has read Sinceriously, there are rather a lot of people who've done those things. It's stuff like, under what situations does this person endorse violence, do they do ideological purity tests for who they'll be friends with, do they explicitly call themselves a Zizian, and especially whether they've started being more reclusive lately.
lc*288

I know you're not endorsing the quoted claim, but just to make this extra explicit: running terrorist organizations is illegal, so this is the type of thing you would also say if Ziz was leading a terrorist organization, and you didn't want to see her arrested.

lc*120

Why did 2 killings happen within the span of one week?

According to law enforcement, the two people involved in the shootout received weapons and munitions from Jamie Zajko, and one of them also applied for a marriage certificate with the person who killed Curtis Lind. Additionally, I think it's safe to say from all of their preparations that they were preparing to commit violent acts.

So my best guess is that:

  • Teresa Youngblut and/or Felix Bauckholt were co-conspirators with the other people committing violent crimes
  • They were preparing to commit fur
... (read more)
8AnonymousAcquaintance
I believe it was Teresa Youngblut, not Ophelia (Felix), who first pulled a gun and opened fire. Only after the shooting between Teresa and the border patrol started did Ophelia (Felix) pull a gun.
lc62

I think an accident that caused a million deaths would do it.

2AnthonyC
I hope it would, but I actually think it would depend on who or what killed whom, how, and whether it was really an accident at all. If an American-made AI hacked the DOD and nuked Milan because someone asked it to find a way to get the 2026 Olympics moved, then I agree, we would probably get a pushback against race incentives. If a Chinese-made AI killed millions in Taiwan in an effort to create an opportunity for China to seize control, that could possibly *accelerate* race dynamics.

I think this post is quite good, and gives a heuristic important to modeling the world. If you skipped it because of title + author, you probably have the wrong impression of its contents and should give it a skim. Its main problem is what's left unsaid.

Some people in the comments reply to it that other people self-deceive, yes, but you should assume good faith. I say - why not assume the truth, and then do what's prosocial anyways?

2Noosphere89
One reason to assume good faith is to stop yourself from justifying your own hidden motives in a conflict: https://www.lesswrong.com/posts/e4GBj6jxRZcsHFSvP/assume-bad-faith#Sc6RqbDpurX6hJ8pY
lc*4-1

You're probably right, I don't actually know many/haven't actually interacted personally with many trans people. But also, I'm not really talking about the Zizians in particular here, or the possibility of getting physically harmed? It just seems like being trans is like taking LSD, in that it makes a person ex-ante much more likely to be someone who I've heard of having a notoriously bizarre mental breakdown that resulted in negative consequences for the people they've associated themselves with.

4metachirality
I fear that, while it might be a good idea to discourage LSD, it would make things even worse to discourage transitioning.
eukaryote4916

I think this is a horrible thing to say. The murderers are associated with each other; that gives you much more information than just knowing that someone is trans or not. There are many, many stellar trans rationalists. I'm thinking you maybe are thinking of the standout dramatic cases you've heard of and don't know a lot of trans people to provide a baseline. 

This is the craziest shit I have ever read on LessWrong, and I am mildly surprised at how little it is talked about. I get that it's very close to home for a lot of people, and that it's probably not relevant to either rationality as a discipline or the far future. But like, multiple unsolved murders by someone involved in the community is something that I would feel compelled to write about, if I didn't get the vague impression that it'd be defecting in some way.

lc20

Most of the time when people publicly debate "textualism" vs. "intentionalism" it smacks to me of a bunch of sophistry to achieve the policy objectives of the textualist. Even if you tried to interpret English statements like computer code, which seems like a really poor way to govern, the argument that gets put forth by the guy who wants to extend the interstate commerce clause to growing weed or whatever is almost always ridiculous on its own merits.

The 14th amendment debate is unique, though, in that the letter of the amendment goes one way, and the sta... (read more)

2JBlack
I don't see how they're "the exact opposite way". The usual rules of English grammar make this a statement that those who are born in the United States but belong to families of accredited diplomatic personnel are foreigners, i.e. aliens. Perhaps you read the statement disjunctively as "foreigners, [or] aliens, [or those] who belong [...]"? That would require inserting extra words to maintain correct grammatical structure, and also be a circular reference since the statement is intended to define those who are considered citizens and those who are considered non-citizens (i.e. foreigners, aliens).
lc*80

What’s reality? I don’t know. When my bird was looking at my computer monitor I thought, ‘That bird has no idea what he’s looking at.’ And yet what does the bird do? Does he panic? No, he can’t really panic, he just does the best he can. Is he able to live in a world where he’s so ignorant? Well, he doesn’t really have a choice. The bird is okay even though he doesn’t understand the world. You’re that bird looking at the monitor, and you’re thinking to yourself, ‘I can figure this out.’ Maybe you have some bird ideas. Maybe that’s the best you can do

lc*232

Sarcasm is when we make statements we don't mean, expecting the other person to infer from context that we meant the opposite. It's a way of pointing out how unlikely it would be for you to mean what you said, by saying it.

There are two ways to evoke sarcasm; first by making your statement unlikely in context, and second by using "sarcasm voice", i.e. picking tones and verbiage that explicitly signal sarcasm. The sarcasm that people consider grating is usually the kind that relies on the second category of signals, rather than the first. It becomes more fu... (read more)

lc20

Just reproduced it; all I have to do is subscribe to a bunch of people and this happens and the site becomes unusable:

2RobertM
Well, that's unfortunate.  That feature isn't super polished and isn't currently in the active development path, but will try to see if it's something obvious.  (In the meantime, would recommend subscribing to fewer people, or seeing if the issue persists in Chrome.  Other people on the team are subscribed to 100-200 people without obvious issues.)