Hey guys,
I'm a lurker, but I'm a regular member of the Denver LW meetup crew, trying to get our scheduled meetups on the main map. There's a Karma limit for that sort of post, and the mod I talked to sent me to you here for help. Would you please give me internet points to make this possible? You'd make all of my transhumanist and EA dreams come true. You know, except the main ones.
Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.
Downvoting is temporarily disabled! I'm very excited about this change because in the last few weeks I've seen some good conversations deleted by someone exploiting a sockpuppet glitch. Besides, I have always preferred commenting to downvoting.
Check out the Double Crux post in Main!
Double Crux is one of the recent CFAR methods that seems like it could spread easily and isn't too deeply reliant on other things that CFAR teaches. (Basically, it's about what leads to conversations where people can actually change their minds, and a recipe for doing so.)
There's a new post in Main! I missed it completely, because on login I head straight to Discussion... if you are like me, just be aware.
The reason I visit LW is it satisfies a need for community. I'm glad to see the recent efforts at revitalisation, as a large part of the value for me generated by a single conversational locus is the social support it provides. This site has been inactive for a long time - and yet to my puzzlement I still found myself checking it regularly, despite not learning anything. I discovered that it's because I just wanted to keep in touch with what's going on in rationalist circles, and hang out a bit. I see myself as an aspiring rationalist, and that's a hard th...
I had to translate an article about testing the shelf life of a fourth-generation viral diagnostic kit, and it seemed rather fishy to me (but I'm no chemist). The authors used the "accelerated aging" method: they heated the diagnostic kit for various periods of time at various temperatures, and then tested its "functional parameters". The rationale is that a 10 °C increase in temperature roughly doubles the rate of a reaction. They used the results to project the shelf life at 4 °C.
As far as I can tell, they did not check test kits th...
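For anyone curious what the projection arithmetic looks like, here is a minimal sketch of the Q10 = 2 rule of thumb the authors seem to rely on. The temperatures and durations are made-up illustrative numbers, not from the article:

```python
# Sketch of the "accelerated aging" projection, assuming the Q10 = 2
# rule of thumb: reaction rate doubles for every 10 °C rise.

def acceleration_factor(test_temp_c, storage_temp_c, q10=2.0):
    """Factor by which aging is sped up at the elevated test temperature."""
    return q10 ** ((test_temp_c - storage_temp_c) / 10.0)

def projected_shelf_life_days(days_survived_at_test, test_temp_c, storage_temp_c=4.0):
    """Shelf life at storage temperature implied by an accelerated test."""
    return days_survived_at_test * acceleration_factor(test_temp_c, storage_temp_c)

# e.g. a hypothetical kit that still passed after 30 days at 37 °C:
factor = acceleration_factor(37.0, 4.0)           # 2 ** 3.3, roughly 9.85
shelf_life = projected_shelf_life_days(30, 37.0)  # roughly 295 days at 4 °C
```

Note that the whole projection stands or falls with the Q10 assumption, which is exactly what testing only heated kits cannot validate.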
If you could pick one music track that, if turned into a music video, could most exemplify the emotions resulting from LW-style rationality, what would that song be?
The bottom left corner of Questionable Content number 3362 (http://questionablecontent.net/view.php?comic=3362). That is all.
Everyone's afraid that robots will steal manual labor. But the components for robots stealing entrepreneurs' jobs are already floating around: DAOs, machine learning for copywriting, a profit-maximization objective.
MIRI publishes a lot of research on 'neat' systems like first order logic reasoners, and not on 'scruffy' systems like neural networks. I heard Eliezer Yudkowsky allude to the idea that this is for convenience or budgetary reasons, and that they will do more research on neural networks (etc) in the future.
Does anyone have any more information about what MIRI thinks and intends to research about 'scruffy' AI systems?
I'm having an un-rational moment, and despite knowing that, it's still affecting my behaviour.
Earlier today, my newsfeed included the datum discussed here, of Trump having a phone call with the President of Taiwan; and the item discussed here, about Trump talking about 'shutting down' the Internet. And later, while listening to my music playlist of the Merry Wives of Windsor, one of the tunes that popped up was "Green Fields of France", one version of which can be heard here. And I started wondering whether I was prepared for politics to go in an...
I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.
[pollid:1171]
I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.
[pollid:1170]
Let's say that I have a belief running like this: "a DAO that controls the manufacturing output of robots to fund a UBI would be the solution to the robots-stealing-jobs problem".
What would be the best move for me to influence someone into believing / trying this?
Take a degree in economics? Join some kind of foundation? Shout from the top of a cardboard box in front of the Coliseum?
What else?
Richard Wong, head of engineering at Coursera, declared in an interview on lifehacker.com:
I used to be a PC-only person, back during my days at Microsoft, but now I’m pretty much Apple only. It has some of the best development tools for engineers.
It beats me, though. I thought PCs were good for gaming and development, but which are these conclusively superior development tools for engineers? I'm confused.
Would I be able to tap the LW academic network to get a copy of this paper?
Extreme gratitude in advance.
Explosions in the Sky music. As a rationalist, it's very important that you understand the machine that you are: what you are, how you choose your actions, and how to see through the conditioning and the extreme obstacles that limit your growth and the growth of humanity. So study neuroscience!
A reminder that rationality is a slave to our emotions, and how closely our emotions align with rationality dictates how rational our actions are; for example, from one moment to the next you can become vegan. The disconnect between emotions an...
Okay, I finished reading the book, and then I also looked at the wiki. So...
A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere in a corner of the internet, debate their hobby, and try to improve themselves if they desire to. But if the word "rationality" somehow becomes popular, all the crackpots and scammers will notice it and start producing their own versions -- and since they won't care about actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there... except that instead of "rationality", his applause light is "logic". Same difference.
Instead of nitpicking a hundred small details, I'll try to get right to what I perceive as the fundamental difference between LW and "logic nation":
According to LW, rationality is hard. It's hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use to...
This is Athene:
I tried to understand the world by seeing everything as information instead since it then becomes a lot easier to find a logical answer to how we came to existence and why the logical patterns around us emerge. There are two scenario's that sound more logical for the average person, one is that there has always been nothing and the other that there has always been infinite chaos. Keep in mind, this is simplified because always makes us think about time and time came only to existence with the big bang. The issue people have though is how something could emerge from nothing without the intervention of a creator. On the other hand, if we assume there was always infinite chaos and we can find a falsifiable explanation to how our consistent reality could emerge from it we would have a much easier time to set our inner conflict at ease.
To get back to how I approach everything as information, let's represent this infinite chaos as 1's and 0's. How could our reality emerge from this and how would logic be able to bring about all this beauty and consistency. There is already mathematical models of how chaos brings about order but in this specific case we can also derive certain mathematical conclusions from infinity. For example 0 would appear around half the time and 1 as well. Same, if you take the combination 01 it would appear 25% of the time while the combination 10, 11 and 00 would do so to. What you already can see is that the longer the binary number is the less frequent it appears within infinity.
To understand the next step you need some basic understanding about the concept of compression algorithm. To illustrate, if you have a fully black background in paint and save it as a .bmp it will be a much larger file then when you save it as a .jpg. The reason for this is because the .jpg uses a compression algorithm that allows you to show the same black picture on the screen but requires a lot smaller binary number. If this black picture would be our consciousness instead and it would emerge from infinite chaos, it would naturally be the one that is most compressed since it is what is most likely to happen. This is one explanation for how everything around us seems to follow specific patterns as these are merely the compression algorithms that are brought about due to the probabilities within infinite chaos.
If this line of thinking would be true it would also have other consequences. The number 1 and a billion 0's for example would be smaller then a shorter binary number that would contain more information. This approach would also bring about a different kind of math that isn't based on Euclidean or non-Euclidean geometry. Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.
This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
I hope you understand why I am not impressed with Athene's version.
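For what it's worth, the one checkable claim in the quote, that regular data compresses to a much shorter description than random data, is true and easy to demonstrate; it's also the actual substance behind Solomonoff induction's preference for short programs. A quick sketch using Python's zlib (the specific sizes are whatever zlib happens to produce, not anything Athene specifies):

```python
import random
import zlib

# A highly regular string versus an incompressible random one, same length.
regular = b"0" * 10_000
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10_000))

print(len(zlib.compress(regular)))  # a few dozen bytes
print(len(zlib.compress(noisy)))    # close to 10,000 bytes
```

Of course, nothing in that observation gets you from "short descriptions are favored" to consciousness emerging from infinite chaos, which is rather the point.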
Well, he probably hasn't read anything. He did apply for an LW meet-up but was rejected, as he would have had to stay for the full number of days. Before this clicking-religion thing, they did reach out about their group on here, I think, and on the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth I think he mentioned they're all just intellectually masturbating.
Having to stay somewhere for a few days doesn't sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
(Uhm, this is probably not the case, but asking anyway to make sure -- "they did reach out regarding their group on here I think" does not refer to this, right? Because that's the only recent attempt to reach out here that I remember.)
Regarding rationality.org and so forth I think he mentioned they're all just intellectually masturbating.
Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic... now it's popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than "intellectual masturbation".
Athene has an impressive personal track record. I admit that part. But the whole thing about "clicking" is a separate claim. (Steve Jobs was an impressive person; that doesn't prove his beliefs in reincarnation are correct.)
By the way, what do you think about the website: https://www.asimpleclick.org/ ?
Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they "clicked" (most posts seem the same, and so do all replies, it's a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?
This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction I hope you understand why I am not impressed with Athene's version.
I understand, that article looks interesting.
Having to stay somewhere for a few days doesn't sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
I think it was an event like you linked.
...(Uhm, this is probably not the case, but asking anyway to make sure -- "they did reach out regarding their group on here I think" does not ref
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "