How does this compare to the other recent compilation of Yvain's articles?
This one is better organised, and I either like the posts I see or discover gems; that wasn't true of the last one.
- Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
- Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
- Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...). Just eat entire plates of the stuff often.
- Write a script that takes a screenshot and webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the external drive, along with some of your DNA and possibly brain scans, somewhere it'll stay safe for a couple hundred years or longer. This is a pretty long shot, but there's a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics since it costs so much less.
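The capture step in that last bullet can be sketched roughly as follows. This is a minimal sketch, not the commenter's actual script: it assumes the third-party packages `mss` (screenshots) and `opencv-python` (webcam), and the mount point `/mnt/external/horcrux` is a made-up placeholder for wherever your external drive lives.

```python
# Hypothetical sketch of the "screenshot + webcam every 30 seconds" idea.
# Assumes `pip install mss opencv-python`; the archive path is a placeholder.
import datetime
import pathlib
import time
from typing import Optional

INTERVAL_SECONDS = 30
ARCHIVE_DIR = pathlib.Path("/mnt/external/horcrux")  # external drive (assumed mount)

def timestamped_path(root: pathlib.Path, prefix: str, ext: str,
                     now: Optional[datetime.datetime] = None) -> pathlib.Path:
    """Build a collision-free filename like screen_2014-01-02T03-04-05.png."""
    now = now or datetime.datetime.now()
    return root / "{}_{}.{}".format(prefix, now.strftime("%Y-%m-%dT%H-%M-%S"), ext)

def capture_once(root: pathlib.Path) -> None:
    """Take one screenshot and one webcam frame, saving both under root."""
    import cv2  # opencv-python
    import mss  # cross-platform screenshot library
    with mss.mss() as grabber:
        grabber.shot(output=str(timestamped_path(root, "screen", "png")))
    cam = cv2.VideoCapture(0)       # default webcam
    ok, frame = cam.read()
    if ok:
        cv2.imwrite(str(timestamped_path(root, "webcam", "jpg")), frame)
    cam.release()

def capture_loop(root: pathlib.Path = ARCHIVE_DIR) -> None:
    """Run forever: capture, then sleep for the interval."""
    root.mkdir(parents=True, exist_ok=True)
    while True:
        capture_once(root)
        time.sleep(INTERVAL_SECONDS)
```

Calling `capture_loop()` would then run until interrupted; decades of 30-second snapshots adds up fast, so in practice you'd also want to downscale or deduplicate frames to keep the drive from filling.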
That last one sounds like a plan, on top of signing up for Cryonics. Tah.
If you have a human-level intelligence which can read super-fast, and you set it free on the internet, it will learn a lot very quickly. (p71)
But why would you have a human-level intelligence which could read super-fast, which hadn't already read most of the internet in the process of becoming an incrementally better stupid intelligence, learning how to read?
Similarly, if your new human-level AI project used very little hardware, then you could buy heaps more cheaply. But it seems somewhat surprising if you weren't already using a lot of hardware, if it is cheap and helpful, and can replace good software to some extent.
I think there was a third example along similar lines, but I forget it.
In general, these sources of low recalcitrance would be huge if you imagine AI appearing fully formed at human-level without having exploited any of them already. But it seems to me that probably getting to human-level intelligence will involve exploiting any source of improvement we get our hands on. I'd be surprised if these ones, which don't seem to require human-level intelligence to exploit, are still sitting untouched.
I don't think the point Bostrom is making hangs on this timeline of updates; the point is simply that, if you take an AGI to human level purely through improvements to qualitative intelligence, it will be superintelligent immediately. This point is important regardless of timeline: if you have an AGI that is low on quality intelligence but has these other resources, it may work to improve its quality intelligence. At the point that its quality is equivalent to a human's, it will be beyond a human in ability and competence.
Perhaps this is all an intuition pump to appreciate the implications of a general intelligence on a machine.
"Virtue of Silence" link is wrong.
Where, exactly? All I've noticed is that there's less interesting material to read, and I don't know where to go for more.
Okay, SSC. That's about it.
It is on the list of things-to-be-done, but not near the top. In maybe a month's time it might be feasible, and I'll pounce on it in anticipation.
Just in case, I will write down that I think this is a plausible candidate for solving a significant portion of your problems. That, and asking interesting people to Skype. :)
I would if I had a cam, buuut I don't anymore. Kinda screens off the Study Hall idea as well. Thanks anyways. :)
How hard would it be for you to get a cam?
I'm asking for advice. Here's my predicament:
I will soon fall over dead from social deprivation. I'm only exaggerating somewhat. I'm living in my hometown, where for unspecified reasons all previous contacts are lost to me. I am unlikely to be able to move for months, at least, and I live far from rationalist circles. I've decided to try out the study hall to fill the gap a little (yet to do this; time for bed). I will also probably try to forge new circles by going to town and searching for groups to join that are at least adjacent to my interests. This feels (flagging for overconfidence) unlikely to work here; it's a smallish town. Nice, but still, not academically active in a suitable fashion, as far as I've noticed.
There are specifics to group-finding in meatspace I am able to work out fine, so I don’t need help there. But that is the extent of my creativity. Am I missing something glaringly obvious? Please tell me I am.
EDIT: Issue solved! Thanks! :D
Ask rationalists for Skype conversations! I'll set one up with you if you wish!
Since utilons are a unit of caring, and Less Wrong (the website) has helped me immensely in making the transition from somewhat despondent college graduate to software engineer with an annual salary plus benefits, is there any way I can donate some dollars towards the site's upkeep?
Failing that, and as a more immediate measure, I extend my sincere thanks to everyone on Less Wrong, especially Eliezer Yudkowsky and his works HPMOR & An Intuitive Explanation of Bayes' Theorem for enlightening me.
On a more useful note, it appears that the Java applets at http://www.yudkowsky.net/rational/bayes are now blocked by the current version of the Oracle Java Runtime Environment for Windows.
Please take some status for doing this :-)
It's striking how much value there is in academia that I didn't notice, and that a base-level rational person would've noticed if they'd asked "what are the main blind spots of the rationality community, and how can I steelman the opposing positions?". Not a good sign about me, certainly.
Also, is that your actual email address?