I've finally figured out why Eliezer was popular. He isn't the best writer, or the smartest writer, or the best writer for smart people, but he's the best writer for people who identify with being smart. This opportunity still seems open today, despite tons of rational fiction being written, because its authors are more focused on showing how smart they are, instead of playing on the self-identification of readers as Eliezer did.
It feels like you could do the same trick for people who identify with being kind, or brave, or loving, or individualist, or belo...
Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?
Two things have been bugging me about LessWrong and its connection to other rationality diaspora/tangential places.
1) Criticism on LW is upvoted a lot, leading to major visibility. This happens even when the criticism is quite vitriolic, as in Duncan's Dragon Army Barracks post. Currently there are only upvotes for comments, not multiple reactions like on Facebook, Vanilla Forums, or other places. So there's no clear way to say something like "you bring up good points, but also, your tone is probably going to make other peo...
I'm thinking about starting an AI risk meetup every other Tuesday in London. Anyone interested? Also, if you could signal-boost to other Londoners you know, that would be good.
I have a question about AI safety. I'm sorry in advance if it's too obvious; I just couldn't find an answer on the internet or in my head.
The way AI has bad consequences is through its drive to maximize (it destroys the world in order to produce paperclips more efficiently). Suppose you instead designed AIs to: 1) find a function/algorithm within an error range of the goal, 2) stop once that method is found, and 3) do 1) and 2) while minimizing the amount of resources they use and/or their effect on the outside world.
If the above could be incorporated as a convention into any AI designed, would that mitigate the risk of AI going "rogue"?
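The three conditions above describe what is sometimes called a "satisficer" rather than a maximizer. A minimal toy sketch of the idea (all names and numbers here are illustrative, not a real AI-safety API): search for a candidate, stop as soon as one falls within the error tolerance of the goal, and cap total resource use with a hard budget.

```python
import random

def satisficing_search(score, tolerance, budget):
    """Toy 'satisficer': random search that stops as soon as a
    candidate scores within `tolerance` of the ideal score (1.0),
    and is additionally capped by a hard resource `budget`."""
    best, best_score = None, float("-inf")
    steps = 0
    for steps in range(1, budget + 1):       # (3) bounded resource use
        x = random.uniform(0.0, 1.0)
        s = score(x)
        if s > best_score:
            best, best_score = x, s
        if best_score >= 1.0 - tolerance:     # (1) within error range of goal
            break                             # (2) stop once that is found
    return best, best_score, steps

# Example: the "goal" is x = 0.5; accept anything within 0.05 of a perfect score.
random.seed(0)
x, s, used = satisficing_search(lambda x: 1.0 - abs(x - 0.5), 0.05, 10_000)
```

The sketch shows the mechanics, but it also hints at the standard objection: "minimizing effect on the outside world" is itself an objective that has to be specified correctly, which is where much of the difficulty reappears.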
A highly recommended review of James Scott's Seeing Like a State, which, not coincidentally, has also been reviewed by Yvain.
Sample:
...I think this is helpful to understand why certain aesthetic ideals emerged. Many people maybe started on the more-empirical side, but then noticed that all of the research started looking the same. I’ve called this “quantification”. It probably looked geometric, “simple” (think Occam’s razor), etc. Much like you’d imagine scientific papers to look today. When confronted with a situation where they didn’t have data, but still
Finally read the review, and I am happy I did. Made me think about a few things...
Legibility has its costs. For example, I had to use Jira for time tracking at many software companies, and one task is always conspicuously missing, despite requiring significant time and attention from all team members: using Jira itself. How much time and attention does it take, on top of doing the work, to make notes about what exactly you did and when; whether it should be tracked as a separate issue; what metadata to assign to that issue; who needs to approve it; communicating why they should approve it; explaining the technical details of why the map drawn by management doesn't quite match the territory; explaining that you are doing a "low-priority" task X because it is a prerequisite for a "high-priority" task Y; then explaining the same thing to yet another manager who noticed you logging time on low-priority tasks despite having high-priority tasks in the queue and decided to take initiative; negotiating whether you should log the time in your company's Jira, your company's customer's Jira, or both; in extreme cases, whether it is okay to use English...
I'd like to ask a question about the Sleeping Beauty problem of someone who thinks that 1/2 is an acceptable answer.
Suppose the coin isn't flipped until after the interview on Monday, and Beauty is asked the probability that the coin has landed or will land heads. Does this change the problem, even though Beauty is woken up on Monday regardless? It seems to me obviously equivalent, but perhaps other people disagree?
If you accept that these problems are equivalent, then you know that P(Heads | Monday) = P(Tails | Monday) = 1/2, since if it's Monday then a ...
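The disagreement between halfers and thirders ultimately comes down to what you condition on: the coin flip itself, or the awakening event. A minimal Monte Carlo sketch (all variable names illustrative) makes the two counting choices explicit; heads produces one awakening (Monday only), tails produces two (Monday and Tuesday):

```python
import random

random.seed(1)
monday_heads = monday_tails = 0
awakenings_heads = awakenings_total = 0

for _ in range(100_000):
    heads = random.random() < 0.5
    if heads:
        monday_heads += 1        # Monday awakening, coin heads
        awakenings_heads += 1
        awakenings_total += 1    # one awakening total
    else:
        monday_tails += 1        # Monday awakening, coin tails
        awakenings_total += 2    # Monday and Tuesday awakenings

# Conditioning on the day: P(Heads | Monday) comes out near 1/2.
p_heads_given_monday = monday_heads / (monday_heads + monday_tails)

# Counting per awakening instead: the heads fraction comes out near 1/3.
p_heads_per_awakening = awakenings_heads / awakenings_total
```

The simulation doesn't settle which conditioning is the "right" one for Beauty's credence, which is exactly the contested philosophical question, but it confirms the arithmetic on each side.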
It seems (understandably) that to get people to take your ideas about intelligence seriously, there are incentives to actually build AI and show it doing things.
Then people will try to make it safe.
Can we do better at spotting ideas about intelligence that might be different compared to current AI and engaging with those ideas before they are instantiated?
Has there been / will there be in the future / could there be a condition where transforming atoms is cheaper than transforming bits? Or it's a universal law that emulation is always developed before nanotechnology?
Is this true for anyone: "If you offered me X right now, I'd accept the offer, but if you first offered me to precommit against taking X, I'd accept that offer and escape the other one"? For which values of X? Do you think most people have some value of X that would make them agree?
I find myself in a potentially critical crossroads at the moment, one that could affect my ability to become a productive researcher for friendly AI in the future. I'll do my best to summarize the situation.
I had very strong mental capabilities 7 years ago, but a series of unfortunate health-related problems, including a nearly life-threatening infection, led to me developing a case of myalgic encephalomyelitis (chronic fatigue syndrome). This disease is characterized by extreme fatigue that usually worsens with physical or mental exertion and is not significantly improved by rest. There are numerous other symptoms common to ME; I luckily escaped a great many of them. However, I developed the concentration and memory problems common to ME to a very large degree.
I had somewhat bad ME until a few years ago, when, in conjunction with a mind/body specialist, I was able to put it into partial remission. I am now able to do physically demanding activities without fatigue, but I still have severe cognitive constraints; my intelligence now seems almost as sharp as it ever was, despite deficits in mental energy, concentration, and memory (especially working memory). However, efficacious mental throughput relies heavily on these attributes that support intelligence, and I am hardly useful at all as things stand. Therefore my primary concern these past few years has been to resolve my medical issues to a large enough degree to enable real productivity.
I am still in this state despite putting all of my effort towards remedying it. I have stuck to safer treatments (like bacteriotherapy or sublingual methylcobalamin) in order to avoid worsening my condition (although I have had some repercussions even from following this philosophy). I am wondering, though, whether I can reasonably expect to get better using this methodology. It could be that I need to take more extreme risks, because I am doing no good as I am and time continues to tick away. Looking at the big picture with a properly pessimistic outlook gives me the impression that friendly AI research does not have a lot of time to spare as it is.
There is a doctor, recommended by a large number of people on an ME forum I frequent, who has exceptionally aggressive treatment protocols. His name is Dr. Kenny de Meirleir, and while I have misgivings about some of the stuff I've read about him, I've pretty much given up on trying to find someone who is both good and doesn't have a long wait list. I've gotten on the wait list of one local practitioner, but I do not have much confidence in them. Dr. de Meirleir wasn't too difficult to get an appointment with because he travels to the USA for a few days every couple of months, and these appointments are not widely known about.
However, even the cost of the initial tests and evaluation could be an unrecoverable failure for me if they don't pan out like I hope. It will cost thousands of dollars to pay for travel to the States, a hotel, the consultation, and the comprehensive tests he is likely to run, even considering how much of the lab testing my own country will probably cover. At least then I could finally confirm a lot of unknowns about my health, such as whether infectious agents are still affecting me. Despite all the testing I've gone through over the years, he runs a lot of tests I haven't gotten yet.
It really depends on the results of the tests, but I'm reading plenty of anecdotal reports suggesting a high likelihood that he will put me on multiple antibiotics. Plenty of people whose stories I have read have reported worsening conditions and relapses of ME due to antibiotics, and I know from my research that ME treatments in general often carry these risks.
I have always had few symptoms, which might indicate that more of my physiology is working the way it should compared to the average ME patient. My condition is also already in partial remission, and I am still under 30, so I consider my odds of major recovery better than the low rates of total remission this disease is usually predicted to have.
The question, then, is: as rationalists, what path do you think I should take here? If I go to the appointment next weekend, I lose a large chunk of my limited capital but gain knowledge and possibilities for treatment. If I then proceed with the kind of treatment he often prescribes, I probably lose most or all of my remaining money on something that might stand the best chance of making me functional again, but that could also do nothing or make me irrecoverably worse (or anything between those extremes). This is not money I can recover easily; work is still difficult, and it could take me a long time to save again, considering normal essential expenses. If I choose to do nothing, cancel the appointment, and continue on my safe but so far ineffective path, then I keep the status quo and avoid risking my health. But then I waste precious time, either waiting for one of my less risky solutions to work or waiting for the unlikely possibility of researchers developing a cure anytime soon. The years it will take to finish developing and expanding my skills and knowledge after recovery have to be factored in as well; I cannot just jump into FAI research right away. There are no doubt other options and variables I cannot see at the moment, but I haven't found them yet.
Due to the aforementioned cognitive constraints, I know that the ideas and research I have developed about my condition are probably riddled with biases, errors, and gaps in knowledge. If anyone can offer suggestions or comments about this situation, it would be appreciated. It's safe to assume that the personal outcomes of this choice only matter insofar as they increase or decrease the probability of my being useful to friendly AI development in the future. Even if I only partially recover further and can contribute in other ways (like financially), I'll consider that worth the effort.
I might not get the chance to answer responses in a timely manner because of how much strain writing causes me (and if I decide not to cancel the appointment, I will have to prepare for travel this coming weekend). However, reading and thinking both cost me less energy, so know that any responses posted will be considered as carefully as I can manage, and they will give me more perspective to help decide what to do in this situation.
I'm going to take a wild guess and suggest that your attitude towards FAI research and your experience of CFS are actually related. I have no idea if this is a standard theory, but in some ways CFS sounds like depression minus the emotion, and depression is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support in their attempt to pursue it, but who continue to regard it as supremely important.
The point being that when something is that important, it's easy to devalue certain aspects of your...
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "