A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.
Hope this is appropriate for here.
I had an epiphany related to akrasia today, though it may apply generally to any problem where you are stuck. For the longest time I thought to myself: "I know what I actually need to do, I just need to sit down and start working, and once I've started it's much easier to keep going." I was thinking about this today and had an imaginary conversation where I said: "I know what I need to do, I just don't know what I need to do, so I can do what I need to do." (I hope that makes sense.) And then it hit me: I have no fucking clue what I actually need to do. It's like I've been trying to bail water out of a sinking ship with buckets, instead of fixing the hole in the hull.
Reminds me in hindsight of the "definition of insanity": "The definition of insanity is doing the same thing over and over and expecting different results."
I think I believed that I lacked the necessary innate willpower to overcome my inner demons, rather than lacking a skill I could acquire.
Deep Learning is the latest thing in AI. I predict that it will be exactly as successful at achieving AGI as all previous latest things. By which I mean that in 10 years it will be just another chapter in the latest edition of Russell and Norvig.
Revisited The Analects of Confucius. It's not hard to see why there's a stereotype of Confucius as a Deep Wisdom dispenser. Example:
The Master said, "It is Man who is capable of broadening the Way. It is not the Way that is capable of broadening Man."
I read a bit of the background information, and it turns out the book was compiled by Confucius' students after his death. That got me thinking that maybe it wasn't designed to be passively read. I wouldn't put forth a collection of sayings as a standalone philosophical work, but maybe I'd use it as a teaching aid. Perhaps one could periodically present students with a saying of Confucius and ask them to think about it and discuss what the Master meant.
I've noticed this sort of thing in other works as well. Take the Dhammapada: in a similar vein, it's a collection of sayings of the Buddha, compiled by his followers. There are commentaries giving background and context. I'm now getting the impression that it was designed to be just one part of a neophyte's education. There's a lot that one would get from teachers and more senior students, and then there are the sayings of the Master designed to stimulate thought and reflection...
Do people who take modafinil also drink coffee (on the same day)? Is that something to avoid, or does it not matter?
I went to the dermatologist today about some sort of cyst on my ear. He said it was nothing, and that the options are to remove it surgically, to use some sort of cream to remove it over time, or to do nothing.
I asked about the benefits of removing it. He said that they'd be able to biopsy it and be 100% sure that it's nothing. I asked "as opposed to... how confident are you now?" He said 99.5 or 99.95% sure.
It seems clear to me that the costs in money, time, and pain are easily worth the 5/1000(0) chance of detecting something dangerous earlier and correspondingly reducing the chance that I die. Like, really really really really really clear to me. Death is really bad. I'm horrified that doctors (and others) don't see this. He was very ready to just send me home with his diagnosis of "it's nothing". I'm trying to argue against myself and account for biases and all that, but given the badness of death, I still feel extremely strongly that surgery plus biopsy is the clear choice. Is there something I'm missing?
Also, the idea of Prediction Book for Doctors occurred to me. There could be a nice UI with graphs and stuff to help doctors keep track of the predictions they've made. Maybe it could evolve into a resource that helps doctors make predictions by providing medical info and perhaps sprinkling in a little bit of AI or something. I don't really know though, the idea is extremely raw at this point. Thoughts?
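The idea is raw, but here's a minimal sketch of the core bookkeeping such a tool might do, in Python. All of the names, the data layout, and the bucketing scheme here are my own invention for illustration, not a spec for any real product:

```python
from collections import defaultdict

# Hypothetical sketch of a "PredictionBook for Doctors" core: log predictions
# with stated probabilities, then check calibration once outcomes are known.

predictions = []  # (stated_probability, outcome) pairs; outcome is 1 or 0

def record(probability, outcome):
    """Log a prediction (e.g. 'this cyst is benign: 99.5%') and its eventual outcome."""
    predictions.append((probability, outcome))

def brier_score(preds):
    """Mean squared error between stated probabilities and outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in preds) / len(preds)

def calibration_table(preds):
    """Group predictions into 10% buckets; compare stated vs. observed frequency."""
    buckets = defaultdict(list)
    for p, o in preds:
        buckets[min(int(p * 10), 9) / 10].append(o)
    return {b: sum(os) / len(os) for b, os in sorted(buckets.items())}

# A doctor who says "99.5% benign" should be wrong about 1 time in 200.
record(0.995, 1)  # predicted benign with 99.5%; biopsy confirmed benign
record(0.995, 1)
record(0.60, 0)   # predicted 60% benign; turned out otherwise
print(brier_score(predictions))       # ~0.12
print(calibration_table(predictions))
```

The graphs-and-AI parts would come much later; even this bare log-and-score loop would let a doctor see whether their "99.5% sure" track record actually holds up at 99.5%.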
1) Surgery is dangerous. Even innocuous surgeries can have complications, such as infections, that can kill. There are also complications that aren't factored into the obvious math: for example, ever since I got two of my wisdom teeth out, my jaw regularly tightens up and cracks if I open my mouth wide, something that never happened beforehand. I wasn't warned about this and didn't consider it when deciding to get the surgery.
2) If it's something dangerous, you're very likely to find out anyway before it becomes serious. E.g., if it's a tumor, it's going to keep growing, and you can come back a month later and get it out then with little problem.
3) Even if it's not nothing, it might be something else that's unlikely to kill you. Thus the 5/1000 chance of death you're imagining is actually a 5/1000 chance of it being not nothing.
The point is that your comparison of "if surgery, definitely fine" vs. "if no surgery, 5/1000 chance of death" is ignoring a lot of information. You're acting like your doctor is being unreasonable when in fact they're probably correct.
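To see how much these three points can matter, here's a toy back-of-the-envelope version in Python. Every number below is invented purely for illustration; the point is the structure of the calculation, not the values:

```python
# Toy expected-value comparison for the cyst decision. All probabilities
# below are made up for illustration, not medical estimates.

p_not_nothing = 0.005                # doctor's "99.5% sure it's nothing"
p_serious_given_not_nothing = 0.1    # point 3: "not nothing" is usually still not deadly
p_missed_later = 0.1                 # point 2: a growing problem usually gets caught anyway
p_death_if_serious_and_missed = 0.5  # assumed lethality of a missed serious problem
p_death_from_surgery = 0.0001        # point 1: even minor surgery carries some risk

# Death risk if you skip the biopsy: it must be not-nothing AND serious
# AND missed later AND fatal.
risk_no_surgery = (p_not_nothing * p_serious_given_not_nothing
                   * p_missed_later * p_death_if_serious_and_missed)

# Death risk if you opt for surgery: the procedure itself.
risk_surgery = p_death_from_surgery

print(f"skip biopsy: {risk_no_surgery:.6f}")  # 0.000025
print(f"get surgery: {risk_surgery:.6f}")     # 0.000100
```

With these (invented) numbers, the naive "5/1000 chance of death" shrinks by a factor of 200, and the surgery itself becomes the riskier option; plug in your own estimates and see which way it goes.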
You're probably misreading your doctor.
When he said "99.5 or 99.95%" I rather doubt he meant to give the precise odds. I think that what he meant was "There is a non-zero probability that the cyst will turn out to be an issue, but it is so small I consider it insignificant and so should you". Trying to base some calculations on the 0.5% (or 0.05%) chance is not useful because it's not a "real" probability, just a figurative expression.
Inspired by terrible, terrible Facebook political arguments I've observed, I started making a list of heuristic "best practices" for constructing a good argument. My key assumptions are that (1) it's unreasonable to expect most people to acquire a good understanding of skepticism, logic, statistics, or what the LW crowd thinks of as using words rightly, and (2) lists of fallacies to watch out for aren't actually much help in constructing a good argument.
One heuristic captured my imagination, as it seems to encapsulate most of the other heuristics...
Can anyone think of a decision which might come up in ordinary life where Bayesian analysis and frequentist analysis would produce different recommendations?
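Here's one deliberately crude toy case, in Python, showing how the two habits can split on an everyday bet when data is scarce. The scenario and numbers are invented, and the "frequentist" here is just a raw observed frequency; a careful frequentist would report a confidence interval rather than bet on the point estimate:

```python
# Invented scenario: a gadget has worked 3 times out of 3. You're offered a
# bet that wins $1 if it works again and loses $9 if it fails, so the bet is
# only worth taking if P(works) > 0.9.

successes, trials = 3, 3

freq_p = successes / trials               # 1.0: raw observed frequency
bayes_p = (successes + 1) / (trials + 2)  # 0.8: uniform prior (rule of succession)

def expected_value(p, win=1, lose=9):
    return p * win - (1 - p) * lose

print(f"frequency estimate: p={freq_p:.2f}, EV={expected_value(freq_p):+.2f}")   # +1.00: take it
print(f"bayesian estimate:  p={bayes_p:.2f}, EV={expected_value(bayes_p):+.2f}")  # -1.00: decline
```

The disagreement comes from the small sample: the raw frequency says "certain", while the uniform prior keeps some weight on failure.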
According to new research, philosophers are apparently about as vulnerable as the general population to certain cognitive biases involved in making moral decisions. In particular, they are just as susceptible to the order of presentation affecting how moral or immoral they rate various situations. See a summary of the research here. The actual research is unfortunately behind a paywall.
A paper "Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection" (Eric Schwitzgebel and Fiery Cushman) is available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Stability-150423.pdf
An interactive Twitch stream of a neural network hallucinating. Or: Twitch plays a large-scale deep neural net.
EDIT: Fixed link.
What do you all think of "General Semantics"? Is it worth e.g. trying to read "Science and Sanity"? Are there insights / benefits there that can't be found in "Rationality: AI to Zombies"?
There is a paper out, the abstract of which says:
...Second, respondents significantly underestimated the proportion of [group X] among their colleagues. Third, [members of group X] fear negative consequences of revealing their ... beliefs to their colleagues. Finally, they are right to do so: In decisions ranging from paper reviews to hiring, many ... said that they would discriminate against openly [group X] colleagues. The more [group anti-X] respondents were, the more they said they would discriminate.
Before you go look at the link, any guesses as to what the [group X] is? X-/
I don't know Sam Altman, so maybe this criticism is wrong, but consider the quote: "If you join a company, my general advice is to join a company on a breakout trajectory. There are usually a handful of these at a time, and they are usually identifiable to a smart young person." Absent any guidance on how to identify breakout-trajectory companies, this advice seems unhelpful. It feels like: "Didn't work for you? You must not have been a smart young person, or you would have picked the right company."
Paired with the paragraph below on not letting salary be a factor, I am left with the suspicion that Sam runs what he believes to be a company with a 'breakout trajectory' and pays noncompetitive salaries.
Now to find a way to test that suspicion.
Seeking writing advice: tropes vs. writer's block?
I've started writing bits and pieces for S.I. again, but not nearly at the rate I was writing before my hiatus.
I'm beginning to wonder if I should cheat a bit: deliberately leave some of the details I'm having trouble getting myself to write about vague, and explain it away with some memory problems of Bunny-the-narrator for that period. Goodness knows there are plenty of ways Bunny's brain has been fiddled with so far, so it's not without precedent; and if it gets me over the hump and into full-scale writing...
I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I'd ask...)
IDSIA / University of Lugano in Switzerland is where e.g. Schmidhuber is. His research is quite neural network-focused, but also AGI-focused. Also Shane Legg (now at DeepMind, one of the hottest AGI-ish companies around) graduated from Lugano with a PhD thesis on machine superintelligence.
"AGI but not machine learning or neuroscience or anything with neural nets" sounds a little odd to me, since the things you listed under the "not" seem like the components you'll need to understand if you want to ever build an AGI. (Though maybe you meant that you don't want to do research focusing only on neuroscience or ML without an AGI component?)
Zoubin Ghahramani / Carl Rasmussen (Cambridge)
Michael Osborne / Yee Whye Teh (Oxford)
A little while back, someone asked me 'Why don't you pray for goal X?' and I said that there were theological difficulties with that and since we were about to go into the cinema, it was hardly the place for a proper theological discussion.
But that got me thinking, if there weren't any theological problems with praying for things, would I do it? Well, maybe. The problem being that there's a whole host of deities, with many requiring different approaches.
For example, if I learnt that the God of the Old Testament was right, I would probably change my set of ...
Does anyone know of any programs for improving confidence in social situations and social skills that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading through books on social skills (e.g., How to Win Friends and Influence People) provides a lot of tips that would be useful to implement in real life, but they don't seem to stick without actually practicing them. The traditional advice to find a situation in your own life that you would already be involved in hasn't worked well for me because it is mis...
I've mostly been here for the Sequences and interesting rationality discussion, and I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.
I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".
These people seem confident dee...
Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.
Can anyone help me understand the downvote blitz for my comments on http://lesswrong.com/lw/mdy/my_recent_thoughts_on_consciousness/ ?
I understand that I'm arguing for an unpopular set of views, but should that warrant some kind of punishment? Was I too strident? Grating? Illucid? How could I have gone about defending the same set of views without inspiring such an extreme backlash?
The downvotes wouldn't normally concern me too much, but I received so many that my karma for the last 30 days has dropped to 30% positive from 90%. I'd like to avoid this happening again when the same topic is under discussion.
If someone on LW mentions taking part in seriously illegal activities (in all jurisdictions), am I morally obliged to contact the police/site admin? I don't think the person in question is going to hurt anyone directly.
Speaking of which, who is the site mod? Vladmeir someone?
EDIT: I think I misunderstood and the situation isn't bad enough to need reporting to anyone. He was only worrying about whether he wanted to do certain things, rather than actually doing them.
NancyLebovitz is the newest moderator at present... and I believe the only really active one, at least in day-to-day operations. Viliam_Bur was previously in that role, but he backed off in January due to other time commitments.
There is a moderator list here
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.