Comment author: fizolof 08 March 2015 04:12:41PM 1 point [-]

I think ultimately, we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother. What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

Comment author: Emile 08 March 2015 08:01:21PM 2 points [-]

What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

If that were the real reason you treat your brother better than one kid in Africa, then you would be willing to sacrifice a good relationship with your brother in exchange for saving two good brother-relationships between poor kids in Africa.

I agree you could evaluate impersonally how much good the institution of the family (and other similar things, like marriages, promises, friendship, nation-states, etc.) creates, and thus how "good" our natural inclinations to help our family are (on the plus side: it sustains the family, an efficient form of organization and child-rearing; on the down side: it can cause nepotism). But we humans aren't moved by that kind of abstract consideration nearly as much as we are by a desire to care for our family.

In response to comment by [deleted] on Impartial ethics and personal decisions
Comment author: Vaniver 08 March 2015 03:17:45PM *  3 points [-]

I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea.

Suppose I get my weights from outside of me, and you get your weights from outside of you. Then it's possible that we could coordinate and get them from the same source, and then agree and cooperate.

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

Comment author: Emile 08 March 2015 07:55:30PM 3 points [-]

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

In practice people with different values manage to coordinate perfectly fine via trade; I agree an external source of morality would be sufficient for cooperation, but it's not necessary (also having all humans really take an external source as the real basis for all their choices would require some pretty heavy rewriting of human nature).

In response to Plane crashes
Comment author: imuli 08 March 2015 05:52:12PM 10 points [-]

Your question is: after an airliner accident, how often do any of the next n flights following the same route also have an accident?

Guessing (2/3 confidence) lower than the base rate.

In response to comment by imuli on Plane crashes
Comment author: Emile 08 March 2015 07:51:45PM 2 points [-]

Yeah, that was my thought too - after an accident, everyone is more careful and diligent, because there will be a search for someone to blame, and that's really not a good time to be asleep at the wheel, whatever your level of responsibility.

Comment author: [deleted] 03 March 2015 08:21:58AM 1 point [-]

I totally know I suck at editing. Or writing. Yes, my posts are dumps of internal dialogue, a "save as" on my brain state. Can you recommend an e-book or something that would teach me this? At some level this is the issue with the Internet: everybody can publish, but most of us don't have access to a professional editor. I wonder if I could find an editor on Fiverr. It would totally be worth $5 per article to me.

Pop-psych, well, the issue is: 1. people suffer, and 2. there are basically NO ideas kicking around as to why. Any beginning is better than none. Even if the only result is someone disproving the whole thing in a rigorous way, that gets us a step closer to some kind of solution.

I see my role here as a non-scientific shaman healer trying to treat diseases with random herbs. It may work, out of pure luck, but even if not, you have to start your medicine somewhere; a real doctor executing a professional takedown of the shaman could accidentally solve the problem.

Comment author: Emile 03 March 2015 09:55:21AM 0 points [-]

Maybe practice editing more? If you suck at it, rewriting/editing your posts will only make you better at it. It might be a bit of work, it might take a bit of time, but it's worth spending ten minutes of your time to save thirty seconds for each of a hundred readers (and, more importantly, to save all the time wasted by commenters who misunderstood part of what you said and the ensuing back-and-forth).

(I personally don't have much time to spend reading long preachy walls of text telling me about my supposed self-hatred; I didn't downvote your post, but I skipped to the discussion because the post itself wasn't very engaging and seemed to get things wrong fairly quickly.)

In response to Ask me anything.
Comment author: Emile 16 February 2015 05:53:55PM 5 points [-]

I suspect most people here find this post very confusing as they don't know who you are (I don't recognize your username), and it's not really clear what you're getting at or why we would want to ask you anything.

Comment author: Emile 12 February 2015 06:00:04PM 5 points [-]

A bit of a nitpick (which could explain some of the reception you're getting here): I don't think the term "Pragmatarianism" is a good description for your proposal, it's just an unrelated name that sounds good. Might as well say 'I'm calling this proposal "Sensible Tax Policy"' or 'My idea, called "Reasonablism", is that...', etc.

A more modest and descriptive name would probably be better received, especially in places that dislike marketing.

Comment author: Dr_Manhattan 09 February 2015 07:18:42PM 2 points [-]

And how is it going?

Comment author: Emile 10 February 2015 08:49:07AM 4 points [-]

Okay, though we're still far from a true robot butler. I don't know if we're ten years away though, especially if you're tolerant in what you expect a butler to be able to do (welcome guests, take their names, point them in the right direction, answer basic questions? We can already do it. Go up a flight of stairs? Not yet.)

Comment author: Daniel_Burfoot 10 February 2015 12:28:25AM 2 points [-]

Cool project! Do you think those robots are going to be a big commercial success?

Comment author: Emile 10 February 2015 08:45:10AM 3 points [-]

There are already quite a few of them deployed in stores in Japan, interacting with customers, so for now it's going okay :)

Comment author: is4junk 09 February 2015 01:17:55AM 9 points [-]

Robotics will get scary very soon. Quoted from the link:

The conference was open to civilians, but explicitly closed to the press. One attendee described it as an eye-opener. The officials played videos of low-cost drones firing semi-automatic weapons, revealed that Syrian rebels are importing consumer-grade drones to launch attacks, and flashed photos from an exercise that pitted $5,000 worth of drones against a convoy of armored vehicles. (The drones won.) But the most striking visual aid was on an exhibit table outside the auditorium, where a buffet of low-cost drones had been converted into simulated flying bombs. One quadcopter, strapped to 3 pounds of inert explosive, was a DJI Phantom 2, a newer version of the very drone that would land at the White House the next week.

Comment author: Emile 09 February 2015 01:23:15PM 6 points [-]

It's debatable how much a "remote-controlled helicopter with a camera" should fall under "robotics"; progress in that area seems pretty orthogonal to issues like manipulation and autonomy.

(Though on the other hand, modern drones do more than "just" remote control: good drones have a feedback loop so that they correct their own position.)
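(To make the point concrete, here's a minimal, purely illustrative sketch of the kind of feedback loop such a flight controller runs: a proportional-derivative controller nudging a one-dimensional position toward a setpoint. All names and gains are hypothetical, not taken from any real autopilot.)

```python
# Hypothetical PD position-hold loop, 1-D, semi-implicit Euler integration.
# "Remote control" would just set the motors directly; the feedback loop
# instead measures the position error each tick and corrects for it.

def pd_step(position, velocity, setpoint, kp=2.0, kd=1.5, dt=0.05):
    """One control tick: return the updated (position, velocity)."""
    error = setpoint - position
    # Push proportionally to the error, damped by the current velocity.
    accel = kp * error - kd * velocity
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def hold_position(setpoint=1.0, steps=400):
    """Run the loop from rest at the origin; converges toward the setpoint."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        pos, vel = pd_step(pos, vel, setpoint)
    return pos
```

With these (made-up) gains the simulated drone settles close to the setpoint after a few hundred ticks, which is the self-correcting behavior the comment alludes to.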

Comment author: Daniel_Burfoot 09 February 2015 03:50:07AM 9 points [-]

I have anti-predictions:

  • We won't have robot butlers or maids in the next ten years.
  • Academic CV researchers will write a lot of papers, but there won't be any big commercial successes based on dramatic improvements in CV science. This is a subtle point: there may be big CV successes, but they will be based on figuring out ways to use CV-like technology that avoid grappling with the real hardness of the problem. For example, the main big current uses of CV are in industrial applications where you can precisely control things like lighting, clutter, camera position, and so on.
  • Assistant and intent-based technology will continue to be annoying and not very useful.
  • Similar to CV, robotics will work okay when you can control precisely the nature of the task and environment. We won't have, for example, robot construction workers.
Comment author: Emile 09 February 2015 11:12:51AM 5 points [-]

We won't have robot butlers or maids in the next ten years.

(for what it's worth, I work on this robot for a living)
