Recent site changes have generated more unhappiness than I expected. This post is a brief note to share resources that will make it easier for concerned site users to track what's happening and what we intend.
- First, know that we're listening. We'll make further site changes next week that will likely include some reversions.
- The official site issue tracker remains unchanged, but for the next week or so we'll work from this public Google Doc (simply because it's lighter weight). Nothing in that document is a promise, just evidence of our current thinking. We'll strike out items on that list as we deliver them to our (private) staging server, and roll them out to the live site soon after.
- I've reached out to a small handful of SIAI and LessWrong heavyweights to help keep me balanced as we make these changes. My comment feed should make it clear that I'm trying to act with calm rationality, but I'm obviously invested in the work we've done to date, and asking for some external help seems prudent.
- I'll track discussion on this post.
Some reflections:
- On what we did wrong
- We didn't anticipate the unhappiness our changes would cause.
- We didn't make it clear that we were listening for feedback, and that changes were not final.
- On what many of you did right
- Calmly and politely voiced your concerns about some of our changes.
- Where you liked some of our changes, said so where we could see it. Thank you.
- On what many of you could have done differently, that would have increased average happiness (particularly mine)
- Calmly and politely voiced your concerns about some of our changes.
- Checked the very common assumption that the recent changes were final and would not be discussed. (Please note that I've already admitted fault above in not making this clearer.)
… but try to be polite.
Thank you for the detailed response.
That modus operandi seems to be a trend in recent social web development (e.g. Facebook), but I don't think it's one that endears developers to users. Prior communication is almost always a better option than simply making changes unannounced, especially in cases like this one where user advice and suggestions were explicitly requested.
Also, I suspect the release of new features at the same time as new visuals contributed to users' inference that what's been done so far constituted "everything." If a new feature pops up by itself, it's just a new feature. The inclusion of the graphic redesign made it seem like this was the entire redesign, period.
Thanks for the response. Before making a few specific replies, let me further explain why I went through the posted suggestions in that manner. Really, it was to make sure that I wasn't just upset because my own suggestions didn't appear to get much attention; I wanted to remove my own bias by looking at the contribution of the user base at large. As it turned out, my suggestions were repeated several times, and related suggestions were also repeated several times, as we both noted. On to specific notes:
After your responses to the suggestion list, my revised count is: 13 Y, 29 N (7 planned), 2 ?. (This is after removing the 4 repeats you identified explicitly.) That obviously looks much better than it did before.
I think the principle illustrated here is that, without explicit information and description, action:[considering suggestions carefully and sorting them into "implement immediately," "implement in the future," and "too costly to implement"] looks a lot like action:[cherry-picking a few suggestions and just ignoring the rest]. It's very hard to tell how much effort was put into analyzing the suggestions if that effort isn't somehow conveyed to us.
Agreed. As you and others have pointed out, a post like this one would have helped a lot. I have learned.
For what it's worth, I did what you did, and we worked from a Freemind map very similar to your list, organised/weighted by upvotes, our estimate of effort requ...