All of steph's Comments + Replies

steph*1

Can you give me an example of a page where the editor slows down substantially? I want to make sure I'm reproducing the correct thing.

steph*10

I think the saying is "clear beliefs weakly held", not "strong beliefs weakly held".

steph*50

I think this claim's title is too long to be used as a handle for the concept.

1alexei
Fixed.
steph*10

I need some explanatory text before I can vote on this.

1Eric Rogstad
Better?
steph*2

I think it is important. I now want to refine the claim.

steph*50

I don't understand the graph, can someone explain it to me?

1alexei
My take on it: let's say you tell the other person you are running 10 minutes late. If you end up arriving on time, then you get no social penalty for being late (blue line at t="on time"), but you get a social penalty for being off your stated mark (red U at t="on time"). If you show up 10 minutes late, as you said you would, you get some social penalty for being late (blue line at t="10 mins late"), but only a small amount of social penalty from the red U (at t="10 mins late"), because you actually showed up when you said you would (i.e. 10 minutes later). The reason it's a small amount and not zero is that you are still 10 minutes late relative to the original time.

If you said you would be 9 minutes late, the U would be closer to the x-axis; at 5 minutes, it would presumably be half as high; and if you said you were running on time and showed up on time, it would be tangent to the x-axis (and t="on time" and t="reported ETA" would be the same).

One thing that's a bit misleading is that earlier in the post Paul says: Error is the blue line. Noise is the red U, but he still counts that as social penalty. I guess it just doesn't factor into "signaling", meaning the other person might still be somewhat upset, but they won't take delay from noise as evidence about you.
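A minimal sketch of the two curves as described above; the function forms and coefficients are assumptions for illustration, not anything given in the post:

    # Hypothetical model of the graph: "error" is the blue line (penalty for
    # actually being late), "noise" is the red U (penalty for missing your
    # stated ETA), whose floor rises with how late you said you'd be.

    def error_penalty(actual_delay, slope=1.0):
        # Blue line: grows with how late you actually are.
        return slope * max(actual_delay, 0)

    def noise_penalty(actual_delay, reported_delay, curvature=0.05, floor=0.1):
        # Red U: minimized when you arrive exactly when you said you would;
        # the minimum is nonzero unless you reported (and arrived) on time.
        return curvature * (actual_delay - reported_delay) ** 2 + floor * reported_delay

    def social_penalty(actual_delay, reported_delay):
        return error_penalty(actual_delay) + noise_penalty(actual_delay, reported_delay)

    print(social_penalty(10, 10))  # late, but on your stated mark: small noise term
    print(social_penalty(0, 10))   # on time, but off your stated mark: noise term only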
steph*1

Paul's recent post argues in favor of this position: https://arbital.com/p/6mt/

2alexei
https://arbital.com/p/6mt/ => If we can’t lie to others, we will lie to ourselves
steph*10

I'd update toward this position if I saw a good reason to expect you've gotten it more right than others.

steph*10

In particular, I would disagree with the claim if "our community" means "all the people who are fans of the sequences", and I would agree if it means "Silicon Valley".

steph*10

Also I think we want a basic argument instead of a bunch of links to related claims.

steph*20

It's not operationalized enough for me to vote.

steph*80

I think it's important for claims to be very clear, and that this one isn't clear enough.

steph*20

This claim is making me want a "wrong question" button.

1steph
Also I think we want a basic argument instead of a bunch of links to related claims.
2steph
It's not operationalized enough for me to vote.
1steph
In particular, I would disagree with the claim if "our community" means "all the people who are fans of the sequences", and I would agree if it means "Silicon Valley".
8steph
I think it's important for claims to be very clear, and that this one isn't clear enough.
1alexei
To quote Anna: the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc. I don't know of a short phrase to describe that.
steph*60

What if it was tagged with claims? Would that give you what you're wanting from a summary? I feel much more able to tag a post with claims than I am able to write a summary of the post.

steph*6

Hike through Glen Canyon

steph*3

Go to the beach and make a fire

steph*7

Go to Yosemite and see the Milky Way

steph*10

Is this paragraph needed? I find myself wanting to skip past it.

steph*1

Also, Nate suggests removing the trivial inconvenience of having to highlight text before you can react to it.

steph*2

Also: Nate wants a "whoa, I get it" button for author hedons. Maybe it should be the same class as the comment/object/typo reaction.

1Eric Rogstad
Possibly with a lightbulb icon!
steph*1

My notes from this meeting: https://workflowy.com/#/93473ece4410

steph*1

Eliezer Yudkowsky Could you review this proposal for author hedons? Are there things missing?

2Eliezer Yudkowsky
The updates would be snowed under. Like, the updates bar's bottom could have an "achievements since..." or "likes since..." but we don't want to obscure the important updates under a flood of 'X liked your comment!'
steph*1

I had to read this sentence several times to parse it correctly. Consider s/you need to write on/before you can write on/

steph*1

Seems like we want to make it possible to have pages that are not part of an explanation?

1Eric Rogstad
Can you give an example of the kind of page you have in mind?
steph*1

Thanks, sounds good.

One more question: are we planning to use the EDIT score and KARMA score for purposes other than privileges? If not, I'd like to call those things something like PRIVILEGES to prevent confusion.

steph*1

Eliezer Yudkowsky Two questions:

  1. What is the function of the KARMA score?
  2. Do you mean that the user sees no aggregate "Reputation" score anywhere? That they only see lists of good deeds on their and others' profiles?
steph*1

Eliezer Yudkowsky Does this look complete? What numerical values should be assigned to each?

2Eliezer Yudkowsky
We at some point might want to calculate a TRUSTWORTHINESS score for weighting things like prediction bars or a GOODREAD score for the probability that a previously unrated comment is good, but I'm happy with naming things what they're called, meaning that PRIVILEGES and EDIT PRIVILEGES are fine for now.
1steph
Thanks, sounds good. One more question: are we planning to use the EDIT score and KARMA score for purposes other than privileges? If not, I'd like to call those things something like PRIVILEGES to prevent confusion.
2Eliezer Yudkowsky
Well, we might eventually have different thresholds for different things, but right now I'm visualizing a threshold for page-editing privileges and a threshold for other privileges, e.g. your comments becoming visible by default without author approval. We should not make visible the quantities used in the calculation of internal privilege thresholds. But we should show people a separately-computed tally of their good deeds, such as xp gains from people viewing their pages, or the number of times their pages have been liked.
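A rough sketch of that split; the score names and thresholds below are hypothetical, not the actual Arbital implementation:

    # Hypothetical thresholds; internal scores are only compared against
    # thresholds and never displayed, while the good-deeds tally is computed
    # separately and is what the user actually sees.
    EDIT_THRESHOLD = 100
    OTHER_PRIVILEGES_THRESHOLD = 20

    def privileges(internal_edit_score, internal_karma_score):
        return {
            "can_edit_pages": internal_edit_score >= EDIT_THRESHOLD,
            "comments_visible_without_approval": internal_karma_score >= OTHER_PRIVILEGES_THRESHOLD,
        }

    def visible_good_deeds(xp_from_page_views, page_likes):
        # Shown on the profile; not used in privilege calculations.
        return {"xp_from_page_views": xp_from_page_views, "page_likes": page_likes}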
1steph
Eliezer Yudkowsky Two questions: 1. What is the function of the KARMA score? 2. Do you mean that the user sees no aggregate "Reputation" score anywhere? That they only see lists of good deeds on their and others' profiles?
2Eliezer Yudkowsky
Answered with my suggested weights by editing the page.
steph*1

I found this diagram confusing. Is the claim that all of these things are of equal value / took equal time? Why is EAG all the way on the right?

steph*50

Are UnforeseenMaximums distinct from EdgeInstantiation problems? Seems like they are EdgeInstantiation problems in which the utility of the edge solution is much higher than the utility of the solutions that the programmers had in mind.