I've finally figured out why Eliezer was popular. He isn't the best writer, or the smartest writer, or the best writer for smart people, but he's the best writer for people who identify with being smart. This opportunity still seems open today, despite tons of rational fiction being written, because its authors are more focused on showing how smart they are than on playing to readers' self-identification, as Eliezer did.
It feels like you could do the same trick for people who identify with being kind, or brave, or loving, or individualist, or belo...
Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?
Two things have been bugging me about LessWrong and its connection to other rationality diaspora/tangential places.
1) Criticism on LW is upvoted a lot, leading to major visibility. This happens even when the criticism is quite vitriolic, as in Duncan's Dragon Army Barracks post. Currently there are only upvotes for comments, and there aren't multiple reactions, like on Facebook, Vanilla Forums, or other places. So there's no clear way to say something like "you bring up good points, but also, your tone is probably going to make other peo...
I'm thinking about starting an AI risk meetup every other Tuesday in London. Anyone interested? Also, if you could signal-boost to other Londoners you know, that would be good.
I have a question about AI safety. I'm sorry in advance if it's too obvious; I just couldn't find an answer on the internet or in my head.
The way AI has bad consequences is through its drive to maximize (it destroys the world in order to produce paperclips more efficiently). Suppose you instead designed AIs to: 1) find a function/algorithm within an error range of the goal; 2) stop once that method is found; 3) do 1) and 2) while minimizing the amount of resources they use and/or their effect on the outside world.
If the above could be incorporated as a convention into any AI designed, would that mitigate the risk of AI going "rogue"?
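For concreteness, here is a minimal Python sketch of the satisficer described in 1)–3). This is only an illustration, not an established safety mechanism: satisficing_search, error_tolerance, and budget are hypothetical names of my own, and the toy objective stands in for a real goal.

```python
import random

# Hypothetical sketch of the satisficer idea above: search for a candidate
# within an error tolerance of the goal, stop as soon as one is found, and
# hard-cap the resources (here, iterations) spent searching.

def satisficing_search(loss, sample_candidate, error_tolerance, budget):
    """Return the first candidate whose loss is within error_tolerance,
    or None once the resource budget is exhausted."""
    for _ in range(budget):                 # 3) bounded resource use
        candidate = sample_candidate()      # 1) look for a good-enough method
        if loss(candidate) <= error_tolerance:
            return candidate                # 2) stop once that method is found
    return None                             # give up rather than escalate effort

# Toy objective: find x with x^2 close to 2.
print(satisficing_search(
    loss=lambda x: abs(x * x - 2),
    sample_candidate=lambda: random.uniform(0, 2),
    error_tolerance=0.01,
    budget=10_000,
))
```

Whether a capable optimizer would actually respect such a budget, rather than route around it, is exactly the kind of question the convention would need to answer.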
A highly recommended review of James Scott's Seeing Like a State, which, not coincidentally, has also been reviewed by Yvain.
Sample:
...I think this is helpful to understand why certain aesthetic ideals emerged. Many people maybe started on the more-empirical side, but then noticed that all of the research started looking the same. I’ve called this “quantification”. It probably looked geometric, “simple” (think Occam’s razor), etc. Much like you’d imagine scientific papers to look today. When confronted with a situation where they didn’t have data, but still
Finally read the review, and I am happy I did. Made me think about a few things...
Legibility has its costs. For example, I had to use Jira for tracking my time in many software companies, and one task is always noticeably missing, despite requiring significant time and attention from all team members: using Jira itself. How much time and attention does it require, in addition to doing the work, to make notes about what exactly you did and when; to decide whether it should be tracked as a separate issue and what meta-data to assign to it; to figure out who needs to approve it and communicate why they should; to explain the technical details of why the map drawn by management doesn't quite match the territory; to explain that you are doing a "low-priority" task X because it is a prerequisite to a "high-priority" task Y, and then explain the same thing to yet another manager who noticed that you are logging time on low-priority tasks despite having high-priority tasks in the queue and decided to take initiative; to negotiate whether you should log the time in your company's Jira, your company's customer's Jira, or both; and, in extreme cases, whether it is okay to use English...
I find myself at a potentially critical crossroads at the moment, one that could affect my ability to become a productive researcher for friendly AI in the future. I'll do my best to summarize the situation.
I had very strong mental capabilities 7 years ago, but a series of unfortunate health-related problems, including a nearly life-threatening infection, led to me developing a case of myalgic encephalomyelitis (chronic fatigue syndrome). This disease is characterized by extreme fatigue that usually worsens with physical or mental exertion, and is not signific...
I'd like to ask a question about the Sleeping Beauty problem for someone who thinks that 1/2 is an acceptable answer.
Suppose the coin isn't flipped until after the interview on Monday, and Beauty is asked the probability that the coin has landed or will land heads. Does this change the problem, even though Beauty is woken up on Monday regardless? It seems to me obviously equivalent, but perhaps other people disagree?
If you accept that these problems are equivalent, then you know that P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a ...
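To make the per-awakening counting concrete, here is a minimal Monte Carlo sketch (a hypothetical illustration with variable names of my own) that estimates P(Heads | Monday) by tallying awakenings under the standard setup:

```python
import random

# Standard Sleeping Beauty setup: heads -> one awakening (Monday);
# tails -> two awakenings (Monday and Tuesday). We record one entry
# per awakening, then condition on "it is Monday".

trials = 100_000
awakenings = []  # one (coin, day) entry per awakening

for _ in range(trials):
    coin = random.choice(["heads", "tails"])
    awakenings.append((coin, "monday"))
    if coin == "tails":
        awakenings.append((coin, "tuesday"))

mondays = [coin for coin, day in awakenings if day == "monday"]
print("P(Heads | Monday) ~", mondays.count("heads") / len(mondays))  # ~ 0.5
```

Counted this way, heads-Monday and tails-Monday awakenings occur equally often, matching the claim that P(Heads | Monday) = P(Tails | Monday) = 1/2.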
It seems (understandably) that to get people to take your ideas about intelligence seriously, there are incentives to actually build AI and show it doing things.
Then people will try to make it safe.
Can we do better at spotting ideas about intelligence that differ from current AI, and at engaging with those ideas before they are instantiated?
Has there been, will there be, or could there ever be a condition where transforming atoms is cheaper than transforming bits? Or is it a universal law that emulation is always developed before nanotechnology?
Is this true for anyone: "If you offered me X right now, I'd accept the offer, but if you first offered to let me precommit against taking X, I'd accept that offer and escape the other one"? For which values of X? Do you think most people have some value of X that would make them agree?
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "