Comment author: Jiro 30 April 2013 01:23:40AM *  1 point [-]

The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important).

I don't know about that. Probably the most important question that can be asked in politics is "how can we produce a perfect society in every which way according to the following list of criteria...."

The trick, of course, is that for most people, the "most important" questions are defined by more than just what the impact of the answer would be when we get one. Likelihood of finding an answer, feasibility of being able to implement an answer, ability to implement it using partial steps, and similar real-world considerations are also part of what makes a question the "most important". Based on those real-world criteria, the questions that you call privileged actually score pretty high on the importance scale. If enough people vote for gay marriage or gun control, we can have it tomorrow (maybe not literally tomorrow, since the system takes time, but still fairly soon). It may be harder to get, for instance, life extension tomorrow.

With the worst privileged questions I frequently find that the answer is "nothing,"

What? "Vote for a politician who I feel has a chance of stopping/expediting (depending on my conclusion) gay marriage, gun control, and such" isn't "something"? Even just discussing a subject and affecting public opinion (to the extent that one person out of millions can do so at all) is "something".

Comment author: seanwelsh77 01 May 2013 08:52:46AM 1 point [-]

Probably the most important question that can be asked in politics is "how can we produce a perfect society in every which way according to the following list of criteria...."

The kind of questions pols actually think about (I used to work for one...):

  1. How do I get re-elected?
  2. Which event/announcement relating to the party platform (the list of 'improve society' criteria that the party has approved) will get airtime and make me look good and my opponent in the next race look bad?
  3. Within the current budget what money can I win for my electorate through the normal processes?
  4. Who can I help within the limits of my power and influence and the laws and budget as they are?
  5. What changes to the current party platform (the list of criteria) do we need to make to achieve 1?

Different pols are more or less diligent about these points.

So long as the people can SACK pols (i.e. vote them out), democratic politics seems to work tolerably well...

Comment author: seanwelsh77 01 May 2013 08:41:49AM 1 point [-]

Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.

Journalists are not paid to print the truth. They are paid to sell newspapers. (This correlates to your "most views" idea.)

However, people buy newspapers (and consume other forms of media). People choose to read celebrity gossip and trivia rather than constructive solutions for world peace (and other things you might think 'important'). I think it's intellectually lazy to blame the media. They produce for their audience.

Also, there are diverse media with diverse views of what is 'important'. And a lot of people don't want answers to questions. They don't want solutions to problems. They want to be entertained. They want to be amused.

Is this so terrible?

Comment author: Will_Newsome 01 May 2013 02:58:32AM *  6 points [-]

(This is not a good characterization of Leibniz's actual conceptual system, for what it's worth; the arguments that this is the "best of all possible worlds" are quite technical and come from the sort of intuitions that would later inspire algorithmic information theory; certainly neither blind optimism nor psychologically contingent enthusiasm about life's bounties were motivating the arguments. Crucially, "best" or similar, unlike "awesome", is potentially philosophically simple (in the sense of algorithmic information theory), which is necessary for Leibniz's arguments to go through. (This comment is directed more at the general readership than the author of the comment I'm replying to.))

Comment author: seanwelsh77 01 May 2013 04:21:25AM -2 points [-]

My recollection of Leibniz's view is dim, but the essence of it is that the perfection of the world is a consequence of the perfection of God. It would reflect poorly on the Omnipotence, Omniscience, Benevolence & Supreme Awesomeness &c of the Deity and Designer if he bashed out some second-rate, less than perfectly good (or indeed merely averagely awesome) world. For the benefit of the general readership, the book to read on this is Candide by Voltaire. You will never see rationalists in quite the same way again... :-)

Link to Candide

In response to Morality is Awesome
Comment author: seanwelsh77 01 May 2013 12:11:47AM 0 points [-]

According to Leibniz, this is the most awesome of all possible worlds.

Comment author: Juno_Watt 30 April 2013 10:00:38PM *  2 points [-]

Let's not get started on the medical profession's bias towards health... maybe it's just their job to teach reason... have you ever met someone who couldn't do emotional/system-I decision-making right out of the box?

Comment author: seanwelsh77 30 April 2013 10:16:20PM -2 points [-]

In my experience homo sapiens does not come 'out of a box.' Are you a MacBook Pro? :-)

But seriously, I have seen some interestingly flawed 'decision-making systems' in Psych Wards. And I think Reason (whatever it is taught to be) matters. Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don't think Reason alone (however you construe it) is up to the job of friendly AI.

Of course, bringing Emotion into ethics has issues. Who is to say whose Emotions are 'valid' or 'correct'?

Comment author: shminux 25 April 2013 05:30:09AM *  1 point [-]

Welcome!

I can't honestly say that I identify as a rationalist. I think the Academy puts far too much faith in their technological marvel of 'Reason.'

Not sure why you link rationality with "Academy" (academia?). Consider scanning through the Sequences to learn what is generally considered rationality on this forum and how Eliezer Yudkowsky treats metaethics. Whether you agree with him or not, you are likely to find a lot of insights into machine (and human) ethics, maybe even some that are helpful in your research.

Comment author: seanwelsh77 30 April 2013 09:25:23PM -2 points [-]

Not sure why you link rationality with "Academy" (academia?).

Pirsig calls the Academy "the Church of Reason" in Zen and the Art of Motorcycle Maintenance. I think there is much evidence to suggest academia has been strongly biased to 'Reason' for most of its recorded history. It is only very recently that research is highlighting the role of Emotion in decision making.

Comment author: MugaSofer 25 April 2013 12:21:02PM -2 points [-]

I think the Academy puts far too much faith in their technological marvel of 'Reason.'

I don't think I'm parsing this correctly. Could you expand on it a bit?

I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.

Well, you'll find plenty of agreement here, for certain definitions of "unsolved".

Comment author: seanwelsh77 30 April 2013 09:21:01PM 0 points [-]

I don't think I'm parsing this correctly. Could you expand on it a bit?

You need the Sith parser :-)

I guess the point I am making is that Reason alone is not enough, and a lot of what we call Reason is technology derived from the effect on brains of being able to write. There is some interesting research on how cognition and reasoning differ between literate and preliterate people. I think Emotion plays a critical role in decision making. I am not going out to bat for Faith except in the Taras Bulba sense: "I put my faith in my sword and my sword in the Pole!" (The Poles were the enemy of the Cossack Taras Bulba in the old Yul Brynner flick I am quoting from.)

Comment author: seanwelsh77 25 April 2013 04:06:05AM 3 points [-]

Hi Less Wrong,

My name is Sean Welsh. I am a graduate student at the University of Canterbury in Christchurch NZ. I was most recently a Solution Architect working on software development projects for telcos. I have decided to take a year off to do a Master's. My topic is Ethical Algorithms: Modelling Moral Decisions in Software. I am particularly interested in questions of machine ethics & robot ethics (obviously).

I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.

I can't honestly say that I identify as a rationalist. I think the Academy puts far too much faith in their technological marvel of 'Reason.' However, I have a healthy and robustly expressed disregard for all forms of bullshit - be they theist or atheist.

As Confucius said: Shall I teach you the meaning of knowledge? If you know a thing, to know that you know it. And if you do not know, to know that you do not know. THAT is the meaning of knowledge.

Apart from working in software development, I have also been an English teacher, a taxi driver, a tourism industry operator, an online travel agent, and a media adviser to a Federal politician (i.e. a spin doctor).

I don't mind a bit of biff - but generally regard it as unproductive.
