Hey guys,
I'm a lurker, but I'm a regular member of the Denver LW meetup crew, trying to get our scheduled meetups on the main map. There's a Karma limit for that sort of post, and the mod I talked to sent me to you here for help. Would you please give me internet points to make this possible? You'd make all of my transhumanist and EA dreams come true. You know, except the main ones.
Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.
Downvoting is temporarily disabled! I'm very excited about this change because in the last few weeks I've seen some good conversations deleted by someone exploiting a sockpuppet glitch. Besides, I have always preferred commenting to downvoting.
Check out the Double Crux post in Main!
Double Crux is one of the recent CFAR methods that seems like it could spread easily and isn't too deeply reliant on other things that CFAR teaches. (Basically, it's about what leads to conversations where people can actually change their minds, and a recipe for doing so.)
There's a new post in Main! I missed it completely, because on login I head straight to Discussion... if you are like me, just be aware.
The reason I visit LW is that it satisfies a need for community. I'm glad to see the recent efforts at revitalisation, since a large part of the value a single conversational locus generates for me is the social support it provides. This site has been inactive for a long time - and yet, to my puzzlement, I still found myself checking it regularly, despite not learning anything. I discovered that it's because I just wanted to keep in touch with what's going on in rationalist circles, and hang out a bit. I see myself as an aspiring rationalist, and that's a hard th...
I had to translate an article about testing the shelf life of a fourth-generation viral diagnosticum, and it seemed rather fishy to me (though I'm no chemist). The authors used the "accelerated aging" method: they heated the diagnosticum to various temperatures for various periods of time, and then tested its "functional parameters". The rationale is that a 10 °C increase in temperature roughly doubles the reaction rate. They used the results to project the shelf life at 4 °C.
As far as I can tell, they did not check test kits th...
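For anyone who wants to sanity-check the arithmetic, here is a minimal sketch of the Q10 shelf-life projection the authors seem to be using. The function name and the example temperatures and durations are my own invention for illustration, not from the article:

```python
# "Accelerated aging" projection via the Q10 rule of thumb: each 10 C above
# the intended storage temperature is assumed to multiply reaction rates by
# q10 (here 2, matching the article's rationale). Time spent at the elevated
# temperature therefore "counts" for acceleration_factor times as much time
# at the storage temperature.

def projected_shelf_life_days(days_at_test_temp, test_temp_c, storage_temp_c, q10=2.0):
    """Equivalent days at storage_temp_c for days_at_test_temp at test_temp_c."""
    acceleration_factor = q10 ** ((test_temp_c - storage_temp_c) / 10.0)
    return days_at_test_temp * acceleration_factor

# Invented example: a kit that still passes its functional tests after
# 30 days at 37 C would be projected to last about 295 days at 4 C.
print(projected_shelf_life_days(30, 37, 4))  # 30 * 2**3.3 ≈ 295.5
```

Note that the whole projection rests on the assumption that the degradation reactions follow the same (roughly Arrhenius) kinetics across the entire temperature range.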
If you could pick one music track that, if turned into a music video, could most exemplify the emotions resulting from LW-style rationality, what would that song be?
The bottom left corner of Questionable Content number 3362 (http://questionablecontent.net/view.php?comic=3362). That is all.
Everyone's afraid that robots will steal manual labor. But the components for robots stealing entrepreneurs' jobs are already floating around: DAOs, machine learning for copywriting, and a "maximize profit" objective.
MIRI publishes a lot of research on 'neat' systems like first order logic reasoners, and not on 'scruffy' systems like neural networks. I heard Eliezer Yudkowsky allude to the idea that this is for convenience or budgetary reasons, and that they will do more research on neural networks (etc) in the future.
Does anyone have any more information about what MIRI thinks and intends to research about 'scruffy' AI systems?
I'm having an un-rational moment, and despite knowing that, it's still affecting my behaviour.
Earlier today, my newsfeed included the datum discussed here, of Trump having a phone call with the President of Taiwan; and the item discussed here, about Trump talking about 'shutting down' the Internet. And later, while listening to my music playlist of the Merry Wives of Windsor, one of the tunes that popped up was "Green Fields of France", one version of which can be heard here. And I started wondering whether I was prepared for politics to go in an...
I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.
[pollid:1171]
Let's say that I have a belief running like this: "a DAO that controls the manufacturing output of robots to fund a UBI would be the solution to the robots-stealing-jobs problem".
What would be the best move for me to influence someone into believing / trying this?
Taking a degree in economics? Joining some kind of foundation? Shouting from the top of a cardboard box in front of the Coliseum?
What else?
Richard Wong, head of engineering at Coursera, said in an interview on lifehacker.com:
I used to be a PC-only person, back during my days at Microsoft, but now I’m pretty much Apple only. It has some of the best development tools for engineers.
It beats me, though. I thought PCs were good for gaming and development, but what are these conclusively superior development tools for engineers? I'm confused.
Would I be able to tap the LW academic network to get a copy of this paper?
Extreme gratitude in advance.
(Explosions in the Sky music.) It's very important: as a rationalist, your job is to understand the machine that you are. What you are, how you choose your actions, how to see through the conditioning and the extreme obstacles that are limiting your growth and the growth of humanity - all of this is very important. So study neuroscience!
A reminder that rationality is a slave to our emotions, and how in line our emotions are with rationality dictates how rational our actions are - for example, from one moment to the next you can become vegan. The disconnect between emotions an...
Okay, I finished reading the book, and then I also looked at the wiki. So...
A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere at the corner of the internet, debate their hobby, and try to improve themselves if they desire to. But if somehow the word "rationality" becomes popular, all the crackpots and scammers will notice it, and will start producing their own versions -- and since they won't care about actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there... except that instead of "rationality", his applause light is "logic". Same difference.
Instead of nitpicking a hundred small details, I'll try to get right to what I perceive as the fundamental difference between LW and "logic nation":
According to LW, rationality is hard. It's hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use to...
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "