
Eitan_Zohar comments on Open Thread, May 25 - May 31, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 25 May 2015 12:00AM




Comment author: Eitan_Zohar 29 May 2015 11:59:11AM 0 points

What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.

Also, another thought: it may take an AI to solve philosophy and the nature of the universe, but the answer may not be far beyond the capacity of the human brain to understand.

I appreciate the long response.

Comment author: Pentashagon 30 May 2015 01:45:12AM 0 points

What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.

A hive mind could quickly lose a lot of old human values if the mind continues past the death of individual bodies. Values like privacy and self-reliance would be difficult to maintain, and things we take for granted, like being able to surprise friends with gifts or have interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it were formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...