Hey guys,
I'm a lurker, but I'm a regular member of the Denver LW meetup crew, trying to get our scheduled meetups on the main map. There's a Karma limit for that sort of post, and the mod I talked to sent me to you here for help. Would you please give me internet points to make this possible? You'd make all of my transhumanist and EA dreams come true. You know, except the main ones.
Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.
Downvoting is temporarily disabled! I'm very excited about this change because in the last few weeks I've seen some good conversations deleted by someone exploiting a sockpuppet glitch. Besides, I have always preferred commenting to downvoting.
Check out the Double Crux post in Main!
Double Crux is one of the recent CFAR methods that seems like it could spread easily and isn't too deeply reliant on other things that CFAR teaches. (Basically, it's about what leads to conversations where people can actually change their minds, and a recipe for doing so.)
There's a new post in Main! I missed it completely, because on login I head straight to Discussion... if you are like me, just be aware.
The reason I visit LW is it satisfies a need for community. I'm glad to see the recent efforts at revitalisation, as a large part of the value for me generated by a single conversational locus is the social support it provides. This site has been inactive for a long time - and yet to my puzzlement I still found myself checking it regularly, despite not learning anything. I discovered that it's because I just wanted to keep in touch with what's going on in rationalist circles, and hang out a bit. I see myself as an aspiring rationalist, and that's a hard th...
I had to translate an article about testing the shelf life of a fourth-generation viral diagnosticum, and it seemed rather fishy to me (but I'm no chemist). The authors used the "accelerated aging" method: they heated the diagnosticum to various temperatures for various periods of time, and then tested the "functional parameters". The rationale is that a 10 °C increase in temperature doubles the rate of the reaction. They used the results to project the shelf life at 4 °C.
As far as I can tell, they did not check test kits th...
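For what it's worth, the arithmetic behind the Q10 = 2 rule they seem to be using is simple to sketch. The temperatures and durations below are hypothetical, just to show the shape of the projection:

```python
# Accelerated-aging projection using the Q10 rule of thumb:
# each 10 °C increase is assumed to double the degradation rate.

def acceleration_factor(t_test_c: float, t_storage_c: float, q10: float = 2.0) -> float:
    """Rate multiplier for aging at t_test_c relative to t_storage_c."""
    return q10 ** ((t_test_c - t_storage_c) / 10.0)

def projected_shelf_life_days(days_survived_at_test: float,
                              t_test_c: float,
                              t_storage_c: float = 4.0) -> float:
    """Shelf life at storage temperature implied by a passed accelerated test."""
    return days_survived_at_test * acceleration_factor(t_test_c, t_storage_c)

# Hypothetical: a kit still passing after 30 days at 37 °C projects to
# 30 * 2**(33/10), i.e. roughly 295 days at 4 °C.
print(round(projected_shelf_life_days(30, 37.0), 1))
```

Which is exactly why the method is suspicious if untested assumptions hide in it: the whole projection hangs on the Q10 = 2 exponent holding across the temperature range.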
If you could pick one music track that, if turned into a music video, could most exemplify the emotions resulting from LW-style rationality, what would that song be?
The bottom left corner of Questionable Content number 3362 (http://questionablecontent.net/view.php?comic=3362). That is all.
Everyone's afraid that robots will steal manual labor. But the components for robots stealing entrepreneurs' jobs are already floating around: DAOs, machine learning for copywriting, profit maximization.
MIRI publishes a lot of research on 'neat' systems like first order logic reasoners, and not on 'scruffy' systems like neural networks. I heard Eliezer Yudkowsky allude to the idea that this is for convenience or budgetary reasons, and that they will do more research on neural networks (etc) in the future.
Does anyone have any more information about what MIRI thinks and intends to research about 'scruffy' AI systems?
I'm having an un-rational moment, and despite knowing that, it's still affecting my behaviour.
Earlier today, my newsfeed included the datum discussed here, of Trump having a phone call with the President of Taiwan; and the item discussed here, about Trump talking about 'shutting down' the Internet. And later, while listening to my music playlist of the Merry Wives of Windsor, one of the tunes that popped up was "Green Fields of France", one version of which can be heard here. And I started wondering whether I was prepared for politics to go in an...
I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.
[pollid:1171]
I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.
[pollid:1170]
Let's say that I have a belief like this: "a DAO that controls the manufacturing output of robots to produce a UBI would be the solution to the robots-stealing-jobs problem".
What would be the best move for me to get someone to believe in / try this?
Get a degree in economics? Join some kind of foundation? Shout from the top of a cardboard box in front of the Coliseum?
What else?
Richard Wong, head of engineering at Coursera, declared in an interview on lifehacker.com:
I used to be a PC-only person, back during my days at Microsoft, but now I’m pretty much Apple only. It has some of the best development tools for engineers.
It beats me, though. I thought PCs were good for gaming and development, but which development tools are conclusively superior for engineers? I'm confused.
Would I be able to tap the LW academic network to get a copy of this paper?
Extreme gratitude in advance.
Explosions in the Sky music. As a rationalist, your job is to understand the machine that you are. Why you exist, how you choose your actions, seeing through the conditioning and the extreme obstacles that limit your growth and the growth of humanity: all of this is very important. So study neuroscience!
A reminder that rationality is a slave to our emotions, and how well our emotions line up with rationality dictates how rational our actions are; for example, from one moment to the next you can become vegan. The disconnect between emotions an...
Okay, I finished reading the book, and then I also looked at the wiki. So...
A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere at the corner of the internet, debate their hobby, and try to improve themselves if they desire to. But if somehow the word "rationality" becomes popular, all the crackpots and scammers will notice it, and will start producing their own versions -- and if they don't care about actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there... except that instead of "rationality", his applause light is "logic". Same difference.
Instead of nitpicking a hundred small details, I'll try to get right to what I perceive as the fundamental difference between LW and "logic nation":
According to LW, rationality is hard. It's hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use to...
This is exactly the part I called "his bastardized version of Tegmark Multiverse + Solomonoff Induction" in my previous comment. He introduces a few complicated concepts without going into details; it's all just "this could", "this would", "emerges from this". To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.
Falsifiable mathematically, I mean: a theory of everything which includes the theory itself. But sure, it allows someone who is going to write a paper anyway to pick up the torch.
For example: "Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition." -- Okay, might. How exactly? Uhm, who cares, right? What's important is that I said "quantum", "entanglement" and "superposition". It shows I am smart. Not like I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe. When I try going through individual statements in the text, too many of them contain some kind of weasel word. Statements that something "can be this", "could be this", "emerges from this", or "is one of the reasons" are hard to disprove. Statements saying "I have been wondering about this", "I will define this", "this makes me look at the world differently" can be true descriptions of the author's mental state; I have no way to verify that; but it's irrelevant for the topic itself. -- There are too many statements like this in the text. Probably not a coincidence. I really don't want to play this verbal game, because it is an exercise in rhetoric, not rationality.
They are just ramblings; no real inquiry has been made to investigate these 'bathtub theories', as Elon Musk puts it. But it is an easy way to explain certain things. Too much weight shouldn't be put on it, but it would be interesting to see papers, so whoever is going to publish anyway: do this.
Statements like "If logic is your core value you automatically try to understand everything logically" are deep in motte-and-bailey territory. Yeah, people who value logic are probably more likely to try using it.
On the other hand, human brains are quite good at valuing one thing and automatically doing another.
That's not correct; according to this theory, doing another thing still arises out of what you value emotionally.
I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have "breakthrough insights" every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don't happen.
It does: religious experience, enlightenment (wikipedia.org/wiki/Enlightenment_(spiritual)), mystical experience, nondualism.
We don't know if it's permanent; so far the data only covers around one month to five weeks, with the exception of Athene himself (the creator of the experiment). But enlightenment, religious experiences, etc., do last for a while.
So, he knows about LW and related material, but he doesn't bother to make a reference, and instead tells it as if he made up everything himself. Nice. Well, that probably explains my feeling that "some parts are pure manipulation, but some parts feel really LessWrong-ish". The LessWrong-ish parts are probably just... taken from Less Wrong.
Well, he probably hasn't read anything. He did apply for an LW meetup but was rejected, since he would have had to stay for the full number of days. Before this clicking-religion thing, they did reach out regarding their group here, I think, and on the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth, I think he mentioned they're all just intellectually masturbating.
By the way, what do you think about the website: https://www.asimpleclick.org/# ?
This is Athene:
...I tried to understand the world by seeing everything as information instead since it then becomes a lot easier to find a logical answer to how we came to existence and why the logical patterns around us emerge. There are two scenario's that sound more logical for the average person, one is that there has always been nothing and the other that there has always been infinite chaos. Keep in mind, this is simplified because always makes us think about time and time came only to existence with the big bang. The issue people have though is how s
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "