All of brummli's Comments + Replies

Thanks for disclosing.
I feel this should be part of this kind of post. Though not knowing exactly what I was reading beforehand was helpful in its own way.
 

4frontier64
It seems the state of the art for generating GPT-3 speech is to generate multiple responses until you have a good one and cherry-pick it. I'm not sure whether including a disclaimer explaining that process is still helpful. Yes, there's a sizable number of people who don't know about that process or who don't automatically assume it's being used, but I'm not sure how big that number is anymore. I don't think lsusr should explain GPT-3 or link to an OpenAI blog post every time he uses it, as that's clearly a waste of time, even though there are still plenty of people who don't know. So where do we draw the line?

For me, every time I see someone say they've generated text with GPT-3, I automatically assume it's a cherry-picked response unless they say something to the contrary. I know from experience that the only way to get consistently good responses out of GPT-3 is to cherry-pick. I estimate that a lot of people on LW are in the same boat.
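The cherry-picking workflow described here amounts to a simple sample-and-select loop. A minimal sketch, with a hypothetical `generate` function standing in for an actual GPT-3 API call, and a numeric score standing in for a human's judgment of each sample:

```python
import random

def generate(prompt, seed):
    """Hypothetical stand-in for a GPT-3 call; returns (text, quality).

    The quality score here is just a pseudo-random number derived from the
    prompt and seed, mimicking the variance in sample quality.
    """
    rng = random.Random(hash((prompt, seed)))
    return f"response-{seed}", rng.random()

def cherry_pick(prompt, n_samples=10):
    """Sample several completions and keep the one the judge scores highest."""
    candidates = [generate(prompt, seed) for seed in range(n_samples)]
    best_text, best_score = max(candidates, key=lambda pair: pair[1])
    return best_text, best_score

text, score = cherry_pick("Write a coherent essay.", n_samples=10)
print(text, round(score, 2))
```

The point of the sketch is only that the published text is a maximum over many draws, not a typical draw — which is exactly why an undisclosed cherry-pick overstates the model's per-sample coherence.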
Vitor360

Hard disagree. I like to know what it is I'm reading. I got the strange feeling that this text was way more powerful/cogent than what I thought GPT-3 was capable of, and I feel very misled that one of the crippling defects of GPT-3 (its inability to maintain long-term coherence) was in fact being papered over by human intervention.

Not knowing beforehand sure did help me train my bullshit detector, though.

Answer by brummli10

A pretty good trigger for me is whenever I ask myself: "Is that plausible?" 

1Teerth Aloke
Yes
brummli*30

How would that app work? In what way is it similar? I'm failing to see the part worth emulating in my example.

I will definitely read this. I've been trying to find these kinds of preferences in myself for some time. 

3johnswentworth
I mean I've thought about it in a similar way - i.e. there's a state where users are basically "in" the app and use it regularly, and a state where they're not, similar to how people are "in" a group or not. And there's an activation energy required to go in either direction - e.g. an app might have an onboarding flow that's a pain in the ass, or leaving it might require finding a substitute and moving your content over. Though the activation energy to leave an app is a lot lower than the activation energy to leave a sect.
Answer by brummli*10

This makes me think of two kinds of moderates. It is not literally about conformity, but we can find a good criterion for conformity vs. independent thinking in there: look at their opinion spectrum and see whether it is too smooth and/or short-tailed to have come mostly from one person.
I'd guess you can try to find accidental members for most groups. 

  • Look at their spectrum of ideas. They should scatter differently around the group ideal.
  • They probably have opinions about things other than the socially enforced ones, and might not really hold opinions where they are enforced.
Answer by brummli40
  • Polar caps and glaciers. Albedo change sets a high barrier for new growth when gone.
  • When getting to know a sect you can be in any social relation to it. At some point you will settle into one of two very distinct states. Entering isn't too hard; exiting has a very high 'activation energy'.
  • Acquired taste preferences (coffee or tea) seem to be bistable. (I'd guess many habits are.)
3johnswentworth
Answer by brummli60

It is harder than expected not to recycle known instances. I had to avoid physics and markets entirely to feel like I was finding examples rather than remembering them.

  • There are a lot of somewhat cyclic stochastic processes that I would call stable equilibria. My whiteboard tends to have about the same fraction of free space most of the time. Sketching something or erasing something is a fluctuation; changing my usage habits could make a long-term difference.
  • The density of social events in my calendar is surprisingly constant. Less is boring, more is exhausting.
3johnswentworth
One nice thing about this exercise is that it gets harder the more you've used the concept in specific contexts (like physics or economics). Well done.

I regularly come up blank when thinking about whether something is useful to share.
Useful content tends to teach me a model or enable me to build one.

  • Unexpectedly useful content extends a model the writer didn't know the reader had, or fills a conceptual hole in the reader's model.
  • Unexpectedly useless content tries to teach something the reader already has a good model of.

I'd love to have even a bad heuristic for this problem (for cases that aren't totally obvious).