The author is overly concerned about whether a creature will be conscious at all and not enough concerned about whether it will have the kind of experiences that we care about.
My understanding is that if the creature is conscious at all, and it acts observably like a human with the kind of experience we care about, THEN it likely has the kind of experiences we care about.
Do you think it is likely that the creatures will NOT have the experiences we care about?
(just trying to make sure we're on the same page)
Your link is broken at the moment.
Sorry about that. The link should be fixed now.
That website is just an auto-generated snapshot from a system I use on my phone.
The way I use it is that it prompts me at various intervals to do one of two things:
evaluate my track record regarding a given trigger,
predict situations in which it might be relevant in the future and plan what I'll do then.
And yes, at least the way I use this, it is great at making me internalize things.
It is so great in fact, that I can't tell anyone about it, because they would laugh at me.
This includes you of course.
Let me just mention that most things I add to this system actually become fully, subconsciously internalized the moment I add them to the system.
As in, before the system prompts me about it even once.
If you don't believe me, well, I wouldn't believe myself either.
The only other report of this happening to other people from LW-sphere I've seen is here: http://agentyduck.blogspot.jp/2014/02/lobs-theorem-cured-my-social-anxiety.html
The difference is, I'm doing it with hundreds of things and it predictably works instantly in around 80% of cases.
Thank you, SquirrellinHell, for sharing your mind. I'm enjoying browsing through the trigger-action plans and trying them on :)
I think that using the term "effective altruist" causes a lot of problems with labeling (e.g. 'hardcore EA', 'softcore EA'). My thinking clarified when I began using only the term "effective altruism", and using it to stimulate asking, "how can I do the most good for each dollar of mine?"
http://effective-altruism.com/ea/9s/effective_altruism_is_a_question_not_an_ideology/
GiveWell's list of causes might give you some idea of causes considered to be important: http://www.givewell.org/labs/causes
80000hours has a good list of various causes for which talent can be useful at https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/
Hi ChristianKI, I was trying to find out from Pete what the winning would look like for the specific problems CFAR has in mind.
The causes in your links are very diverse, from biosecurity to AI risk. I'd assumed that CFAR focused on only a couple of the most pressing problems, but I haven't heard officially which problems CFAR most wants to solve.
Hi Pete, could you please give some examples of what you mean by "the world’s most important problems"?
I don't have money to give now, but perhaps I could just work on a problem directly.
I'm glad you liked the article.
Can you point me to a post on LW that is laid out in the style that you propose? This could give me a better vision of it.
Also, don't you think my techniques might sound a little kooky without context? I worry that, as openers, they might be more off-putting than inviting.
Here's an article that has an abstract in the first paragraph (although it'd be nice if it were called out as such), and a table of contents.
http://lesswrong.com/lw/md2/the_brain_as_a_universal_learning_machine/
I love these techniques and can't wait to try them out. Would you consider putting in an abstract with the 4 techniques? You could even throw in the one-sentence summaries from ScottL so that other readers can quickly get the gist before delving in further.
Cryonics is being deeply confused with suspended animation in this thread. Cryonics has nothing to do with cellular viability. It's only about preserving the wiring and physical structure of the brain by any means necessary. In current cryonics, all cells are totally and completely dead long before the procedure is finished. But we also have electron micrographs showing very good structural preservation of these dead cells. The cryonics revival technology will need to manipulate trillions of atoms inside of each of billions of cells. No low tech is going to be able to revive them.
Thank you for clarifying this point.
FYI I was referring only to "Cryonics" when I said cryo in the parent comment, not to "suspended animation".
Thanks for the post, I really liked the article overall. Nice general summary of the ideas. I agree with torekp. I also think that the term consciousness is too broad. Wanting to have a theory of consciousness is like wanting to have a "theory of disease". The overall term is too general and "consciousness" can mean many different things. This dilutes the conversation. We need to sharpen our semantic markers and not rely on intuitive or prescientific ideas. Terms that do not "carve nature well at its joints" will lead our inquiry astray from the beginning.
When talking about consciousness one can mean for example:
-vigilance/wakefulness
-attention: focusing mental resources on specific information
-primary consciousness: having any form of subjective experience
-conscious access: how the attended information reaches awareness and becomes reportable to others
-phenomenal awareness/qualia
-sense of self/I
Neuroscience is needed to determine whether our concepts are accurate (enough) in the first place. It may be that the "easy problem" is hard, and the "hard problem" seems hard only because it engages ill-posed intuitions.
I agree re: consciousness being too broad a term.
I use the term in the sense of "having an experience that isn't directly observable to others" but as you noted, people use it to mean LOTS of different other things. Thanks for articulating that thought.