Do you know what went wrong or what's the difference in making a working link post?
No, I don't. One possible explanation is that the time it worked, I used the dropdown to post the link directly to Discussion, rather than saving it to Drafts first.
This is also interesting: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity
Oh, this is much more complete, thanks.
Wow, it's surreal to hear Obama talking about Bostrom, foom, and biological x-risk.
I don't see a link. Was it lost like in my link post on a different subject? I still don't know how to post links correctly.
I just deleted the link post and made a better one: http://lesswrong.com/r/discussion/lw/o0i/barack_obamas_opinions_on_nearfuture_ai_fixed/
What?? Weird!
Maybe it was lost when i edited the draft.
The headline is misleading. I don't think there is an Apollo-style funding plan; I think Obama just thinks it would be a good idea.
Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer offers you a look into your soul" and I really liked it, but I don't have enough time to read every post on the blog.
This blog is so wordy and steeped in cultural references that I (unfamiliar with the context) find it genuinely challenging to figure out what the premise, thesis, or content of a post is. It reminds me of my experience discovering arcane 'neoreaction' blogs.
I would also contemplate the scenario that the human species turns out to be less impressive than it currently appears, and is actually a fairly typical example of a successful Earth species. Most achievements that distinguish humans from, e.g., plankton lie in the future (e.g., space industry), not the past or present.
This might sound strange. Arguments in favor of this perspective:
• Homo sapiens is not the largest species in terms of population or total biomass.
• Homo sapiens is not the only species to make tools, use agriculture, build buildings, or adapt to a variety of terrestrial habitats.
• Homo sapiens is not the first species to have a catastrophic impact on the atmosphere.
Arguments against this perspective:
• The human economy is currently doubling in scale every couple of decades.
• No species (probably) ever reached the edge of the atmosphere before Homo sapiens.
(To clarify, I think this question is far from settled. But I think the scenario in which Homo sapiens has a smaller impact than expected is more likely than the scenario in which historical gods are representations of unknown prosperous civilizations.)
As a side note, this might also be interesting purely from a utilitarian standpoint: if insect suffering matters, it would completely dwarf all human moral weight, since there are roughly 10^18 of them but only about 10^9 of us.
However, perhaps we don't care morally about animals which can't pass the mirror test, on the assumption that this means they have no self-image, and therefore no consciousness. They could feel pain and other stimuli, but there would be no internal observer to notice their own suffering.
If that's the case, animal welfare might still dominate over human welfare, but by a smaller margin. Doing what I described in the previous comment would let us estimate the value of future life in general, if we can determine to within an order of magnitude or so how much we value animals with various traits. This is critical for questions like whether terraforming Mars is net positive or net negative.
I actually drew up a spreadsheet to estimate this: https://docs.google.com/spreadsheets/d/1xnfsDuC0ddUxvKekGLJ5QA5nrXxzked7K-k6jqUm538/edit?usp=sharing
I agree with you about the numbers: if there were, say, 10^15 insects, their moral weight might be in question. But there are actually more like 10^18, which is huge even under very small per-insect weightings.
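The back-of-the-envelope arithmetic above can be sketched in a few lines. The population figures (~10^18 insects, ~10^9 humans) come from the comments; the per-insect weightings are purely illustrative assumptions, not claims about the right values:

```python
# Rough comparison of aggregate moral weight: insects vs. humans.
# Populations are the order-of-magnitude figures from the thread;
# the per-insect weights (relative to one human = 1) are assumptions.
HUMANS = 1e9
INSECTS = 1e18

for insect_weight in (1e-2, 1e-4, 1e-6):
    insect_total = INSECTS * insect_weight
    ratio = insect_total / HUMANS
    print(f"per-insect weight {insect_weight:g}: "
          f"insects outweigh humans by a factor of {ratio:.0e}")
```

Even at a per-insect weight of one millionth of a human, the insect total still exceeds the human total by roughly three orders of magnitude, which is why the 10^18 figure is decisive where 10^15 might not be.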
Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.
Yes, you would expect non-white, older, women who are less comfortable talking to computers to be better suited dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!
I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.
Both of those Ito remarks referenced supposedly widespread perspectives, but personally I have almost never encountered those perspectives before.