
I've now added password reset capability to GreaterWrong.

I do this because there's no way to request posts and comments sorted together chronologically with GraphQL. However, if you click the posts or comments tab, the pagination will work correctly for any number of pages.
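The workaround can be sketched client-side: page through posts and comments as two separate date-sorted streams and merge them lazily. This is only an illustration with made-up in-memory data, not the actual GreaterWrong code; the real site would fill each stream from its own GraphQL query.

```python
import heapq

# Two feeds, each already sorted newest-first by a numeric timestamp.
# (Hypothetical data; in practice each list would be a page fetched
# from a separate GraphQL query for posts and for comments.)
posts = [(17100, "post A"), (17050, "post B"), (16900, "post C")]
comments = [(17080, "comment X"), (17000, "comment Y")]

# heapq.merge lazily merges any number of sorted iterables, so each feed
# keeps its own pagination while the reader sees one chronological stream.
merged = list(heapq.merge(posts, comments,
                          key=lambda item: item[0], reverse=True))
```

Because the merge is lazy, a real implementation could fetch further pages of either feed only as the reader scrolls.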

The two sites are based on quite different philosophies of web development, so it would be far from straightforward to do some of the things I've done within the existing LW 2.0 code. I've had fun creating GreaterWrong, and I don't mind putting effort into it as long as LW 2.0 seems like a viable community. I don't think it's necessarily bad to have two sites that do the same thing, if some people prefer one and other people prefer the other. (I agree with Error's comment.)

No, I don't have any special access to the database. If you log in to GreaterWrong, your password is briefly stored in my server's memory, only as needed to forward it to LW 2.0 and receive an authentication token back. In the future I'd like to eliminate even that, but it will require some additional complexity and changes on the LW 2.0 side.
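The pass-through idea can be sketched as follows. This is a generic illustration, not the GreaterWrong implementation: `proxy_login` and `authenticate_upstream` are hypothetical names, and the upstream call is a stand-in for the real LW 2.0 login request.

```python
import secrets

def authenticate_upstream(username, password):
    # Stand-in for the real network call to the upstream site;
    # returns an opaque session token.
    return secrets.token_hex(16)

def proxy_login(username, password_bytes):
    """Forward the credentials once and keep only the returned token.

    The password is held in a mutable bytearray so it can be zeroed
    as soon as the upstream call returns.
    """
    try:
        token = authenticate_upstream(username, bytes(password_bytes))
    finally:
        # Best-effort scrubbing: zero the buffer so the password doesn't
        # linger. (Copies made by the runtime may still exist briefly.)
        for i in range(len(password_bytes)):
            password_bytes[i] = 0
    return token
```

Only the token is retained after the call; the caller's buffer is zeroed whether or not the upstream request succeeds.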

Do you have plans to implement a list of posts by user (without comments), a list of drafts, and an inbox? These are the only things I go to LW 2.0 for; most of my time is now spent on GW.


Yes, definitely.

It would be nice to have more than just a single page of 'new' content, since as is, it can even be hard to check out all recent posts from the past few days [...] more of a user's posting and commenting history

Done :)

Hi, I'm the one who created Greater Wrong. I'm intending to announce it more widely once it doesn't have so many conspicuously missing features, but it's something I'm working on in my spare time so progress is somewhat gradual. You can, however, already log in and post comments. You can use your existing LW 2.0 username/password or create a new one. Let me know if you have any problems.

I always assumed it was by selling prediction securities for less than they will ultimately pay out.
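A worked example of that pricing, with illustrative numbers not taken from any actual market: a security that will eventually pay out $1.00 is sold now at a discount, so the seller raises cash up front and the buyer's profit at resolution is the discount.

```python
# Illustrative numbers only. A security that pays out $1.00 at
# resolution is sold today for $0.90.
payout = 1.00
sale_price = 0.90

# The buyer's profit, if the security pays out, is the discount...
buyer_profit = payout - sale_price        # $0.10 per security

# ...which as a return on the purchase price is about 11.1%.
buyer_return = buyer_profit / sale_price
```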

Of course, revealing a hash after the fact proves nothing, even if it's verifiably timestamped. Another possible trick is to send a different prediction to different groups of people so that at least one group will see your prediction come true. I don't know of an easy way around that if the groups don't communicate.
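A hash-based prediction commitment can be sketched with the standard commit-reveal pattern (the prediction text and names here are illustrative). What makes the scheme binding is that the hash is published *before* the event; a hash produced afterward proves nothing, and the random nonce stops observers from guessing the preimage of a short prediction.

```python
import hashlib
import secrets

# Commit phase: publish only the hash, before the event.
prediction = b"Team A wins the final on 2024-06-01"
nonce = secrets.token_bytes(16)  # prevents brute-forcing short preimages
commitment = hashlib.sha256(nonce + prediction).hexdigest()

# Reveal phase: after the event, publish nonce and prediction so that
# anyone can recompute the hash and compare it to the commitment.
def verify(commitment, nonce, prediction):
    return hashlib.sha256(nonce + prediction).hexdigest() == commitment
```

Note that this only pins down a single prediction; it does nothing against the multiple-audience trick, where different commitments are shown to different non-communicating groups.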

Unless you have a model that exactly describes how a given message was generated, its Shannon entropy is not known but only estimated, typically based on the current state of the art in compression algorithms. So unless I've misunderstood, this seems like a circular argument.
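The estimation point can be illustrated with two common estimates that generally disagree, neither of which is the "true" source entropy: an order-0 estimate from symbol frequencies, and an upper bound from a general-purpose compressor. The message is illustrative and only the Python standard library is used.

```python
import math
import zlib
from collections import Counter

message = b"abracadabra abracadabra abracadabra"
n = len(message)

# Order-0 estimate: entropy in bits per byte from symbol frequencies.
# Ignores all inter-symbol structure, so it overstates the rate of a
# repetitive message like this one.
counts = Counter(message)
entropy_bpb = -sum(c / n * math.log2(c / n) for c in counts.values())

# Compressor-based estimate: compressed size in bits per input byte.
# An upper bound on the rate achievable, including format overhead.
compressed = zlib.compress(message, 9)
rate_bpb = 8 * len(compressed) / n
```

A better compressor would lower `rate_bpb` without the message changing at all, which is exactly why "entropy" measured this way tracks the state of the art rather than a fixed property of the text.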
